| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | content (string, 335-154k chars) |
|---|---|---|---|
GoogleCloudPlatform/asl-ml-immersion | notebooks/image_models/labs/4_tpu_training.ipynb | apache-2.0 |
import os
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
os.environ["BUCKET"] = BUCKET
"""
Explanation: Transfer Learning on TPUs
In the <a href="3_tf_hub_transfer_learning.ipynb">previous notebook</a>, we learned how to do transfer learning with TensorFlow Hub. In this notebook, we're going to kick up our training speed with TPUs.
Learning Objectives
Know how to set up a TPU strategy for training
Know how to use a TensorFlow Hub Module when training on a TPU
Know how to create and specify a TPU for training
First things first. Configure the parameters below to match your own Google Cloud project details.
End of explanation
"""
%%writefile tpu_models/trainer/task.py
"""TPU trainer command line interface"""
import argparse
import sys
import tensorflow as tf
from . import model, util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
"--epochs", help="The number of epochs to train", type=int, default=5
)
parser.add_argument(
"--steps_per_epoch",
help="The number of steps per epoch to train",
type=int,
default=500,
)
parser.add_argument(
"--train_path",
help="The path to the training data",
type=str,
default="gs://cloud-ml-data/img/flower_photos/train_set.csv",
)
parser.add_argument(
"--eval_path",
help="The path to the evaluation data",
type=str,
default="gs://cloud-ml-data/img/flower_photos/eval_set.csv",
)
parser.add_argument(
"--tpu_address",
help="The path to the TPUs we will use in training",
type=str,
required=True,
)
parser.add_argument(
"--hub_path",
help="The path to TF Hub module to use in GCS",
type=str,
required=True,
)
parser.add_argument(
"--job-dir",
help="Directory where to save the given model",
type=str,
required=True,
)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = # TODO: Your code goes here
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = # TODO: Your code goes here
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model,
args.epochs,
args.steps_per_epoch,
train_data,
eval_data,
args.job_dir,
)
return model_history
if __name__ == "__main__":
main()
"""
Explanation: Packaging the Model
In order to train on a TPU, we'll need to set up a Python module for training. The skeleton for this has already been built out in tpu_models with the data processing functions from the previous lab copied into <a href="tpu_models/trainer/util.py">util.py</a>.
Similarly, the model building and training functions are pulled into <a href="tpu_models/trainer/model.py">model.py</a>. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new task.py file.
We've added five command-line arguments which are standard for cloud training of a TensorFlow model: epochs, steps_per_epoch, train_path, eval_path, and job-dir. There are two new arguments for TPU training: tpu_address and hub_path.
tpu_address is going to be our TPU name as it appears in Compute Engine Instances. We specify this name when we create the TPU with the gcloud command further below.
hub_path is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.
The other big difference is some code to deploy our model on a TPU. To begin, we'll set up a TPU Cluster Resolver, which will help TensorFlow communicate with the hardware to set up workers for training (more on TensorFlow Cluster Resolvers). Once the resolver connects to and initializes the TPU system, our TensorFlow graphs can be initialized within a TPU distribution strategy, allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.
TODO: Complete the code below to set up the resolver and define the TPU training strategy (a hedged sketch of one possible completion follows this explanation).
End of explanation
"""
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
"""
Explanation: The TPU server
Before we can start training with this code, we need a way to pull in MobileNet. When working with TPUs in the cloud, the TPU does not have access to the VM's local file directory, since the TPU worker acts as a server. Because of this, all data used by our model must be hosted on an outside storage system such as Google Cloud Storage. This makes caching our dataset especially critical in order to speed up training time (a short caching sketch follows this explanation).
To access MobileNet with these restrictions, we can download a compressed saved version of the model by using the wget command. Adding ?tf-hub-format=compressed at the end of our module handle gives us a download URL.
End of explanation
"""
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
"""
Explanation: This model is still compressed, so let's uncompress it with the tar command below and place it in our tpu_models directory.
End of explanation
"""
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
"""
Explanation: Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using gsutil cp to copy everything.
End of explanation
"""
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
"""
Explanation: Spinning up a TPU
Time to wake up a TPU! Open the Google Cloud Shell and copy the gcloud compute command below. Say 'Yes' to the prompts to spin up the TPU.
gcloud compute tpus execution-groups create \
--name=my-tpu \
--zone=us-central1-b \
--tf-version=2.3.2 \
--machine-type=n1-standard-1 \
--accelerator-type=v3-8
It will take about five minutes to wake up. Then, it should automatically SSH into the TPU; alternatively, the Compute Engine interface can be used to SSH in. You'll know you're running on a TPU when the command line starts with your-username@your-tpu-name.
This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the . at the end, as it tells gsutil to copy data into the current directory.
End of explanation
"""
%%bash
export TPU_NAME=my-tpu
echo "export TPU_NAME="$TPU_NAME
echo "python3 -m tpu_models.trainer.task \
# TODO: Your code goes here \
# TODO: Your code goes here \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
"""
Explanation: Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the TensorFlow graph has been built out.
TODO: Complete the code below by adding flags for tpu_address and the hub_path. Have another look at task.py to see how these flags are used. The tpu_address denotes the TPU you created above and hub_path should denote the location of the TFHub module. (Note that the training code requires a TPU_NAME environment variable, set in the first two lines below -- you may reuse it in your code.)
End of explanation
"""
|
AllenDowney/ThinkStats2 | solutions/chap11soln.ipynb | gpl-3.0 |
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import numpy as np
import pandas as pd
import thinkstats2
import thinkplot
"""
Explanation: Chapter 11
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import first
live, firsts, others = first.MakeFrames()
"""
Explanation: Multiple regression
Let's load up the NSFG data again.
End of explanation
"""
import statsmodels.formula.api as smf
formula = 'totalwgt_lb ~ agepreg'
model = smf.ols(formula, data=live)
results = model.fit()
results.summary()
"""
Explanation: Here's birth weight as a function of mother's age (which we saw in the previous chapter).
End of explanation
"""
inter = results.params['Intercept']
slope = results.params['agepreg']
inter, slope
"""
Explanation: We can extract the parameters.
End of explanation
"""
slope_pvalue = results.pvalues['agepreg']
slope_pvalue
"""
Explanation: And the p-value of the slope estimate.
End of explanation
"""
results.rsquared
"""
Explanation: And the coefficient of determination.
End of explanation
"""
diff_weight = firsts.totalwgt_lb.mean() - others.totalwgt_lb.mean()
diff_weight
"""
Explanation: The difference in birth weight between first babies and others.
End of explanation
"""
diff_age = firsts.agepreg.mean() - others.agepreg.mean()
diff_age
"""
Explanation: The difference in age between mothers of first babies and others.
End of explanation
"""
slope * diff_age
"""
Explanation: The age difference plausibly explains about half of the difference in weight.
End of explanation
"""
live['isfirst'] = live.birthord == 1
formula = 'totalwgt_lb ~ isfirst'
results = smf.ols(formula, data=live).fit()
results.summary()
"""
Explanation: Running a single regression with a categorical variable, isfirst:
End of explanation
"""
formula = 'totalwgt_lb ~ isfirst + agepreg'
results = smf.ols(formula, data=live).fit()
results.summary()
"""
Explanation: Now finally running a multiple regression:
End of explanation
"""
live['agepreg2'] = live.agepreg**2
formula = 'totalwgt_lb ~ isfirst + agepreg + agepreg2'
results = smf.ols(formula, data=live).fit()
results.summary()
"""
Explanation: As expected, when we control for mother's age, the apparent difference due to isfirst is cut in half.
If we add age squared, we can control for a quadratic relationship between age and weight.
End of explanation
"""
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz")
import nsfg
live = live[live.prglngth>30]
resp = nsfg.ReadFemResp()
resp.index = resp.caseid
join = live.join(resp, on='caseid', rsuffix='_r')
join.shape
"""
Explanation: When we do that, the apparent effect of isfirst gets even smaller, and is no longer statistically significant.
These results suggest that the apparent difference in weight between first babies and others might be explained by difference in mothers' ages, at least in part.
Data Mining
We can use join to combine variables from the pregnancy and respondent tables.
End of explanation
"""
import patsy
def GoMining(df):
"""Searches for variables that predict birth weight.
df: DataFrame of pregnancy records
returns: list of (rsquared, variable name) pairs
"""
variables = []
for name in df.columns:
try:
if df[name].var() < 1e-7:
continue
formula = 'totalwgt_lb ~ agepreg + ' + name
model = smf.ols(formula, data=df)
if model.nobs < len(df)/2:
continue
results = model.fit()
except (ValueError, TypeError, patsy.PatsyError) as e:
continue
variables.append((results.rsquared, name))
return variables
variables = GoMining(join)
variables
"""
Explanation: And we can search for variables with explanatory power.
Because we don't clean most of the variables, we are probably missing some good ones.
End of explanation
"""
import re
def ReadVariables():
"""Reads Stata dictionary files for NSFG data.
returns: DataFrame that maps variables names to descriptions
"""
vars1 = thinkstats2.ReadStataDct('2002FemPreg.dct').variables
vars2 = thinkstats2.ReadStataDct('2002FemResp.dct').variables
all_vars = vars1.append(vars2)
all_vars.index = all_vars.name
return all_vars
def MiningReport(variables, n=30):
"""Prints variables with the highest R^2.
t: list of (R^2, variable name) pairs
n: number of pairs to print
"""
all_vars = ReadVariables()
variables.sort(reverse=True)
for r2, name in variables[:n]:
key = re.sub('_r$', '', name)
try:
desc = all_vars.loc[key].desc
if isinstance(desc, pd.Series):
desc = desc[0]
print(name, r2, desc)
except (KeyError, IndexError):
print(name, r2)
"""
Explanation: The following functions report the variables with the highest values of $R^2$.
End of explanation
"""
MiningReport(variables)
"""
Explanation: Some of the variables that do well are not useful for prediction because they are not known ahead of time.
End of explanation
"""
formula = ('totalwgt_lb ~ agepreg + C(race) + babysex==1 + '
'nbrnaliv>1 + paydu==1 + totincr')
results = smf.ols(formula, data=join).fit()
results.summary()
"""
Explanation: Combining the variables that seem to have the most explanatory power.
End of explanation
"""
y = np.array([0, 1, 0, 1])
x1 = np.array([0, 0, 0, 1])
x2 = np.array([0, 1, 1, 1])
"""
Explanation: Logistic regression
Example: suppose we are trying to predict y using explanatory variables x1 and x2.
End of explanation
"""
beta = [-1.5, 2.8, 1.1]
"""
Explanation: According to the logit model the log odds for the $i$th element of $y$ is
$\log o = \beta_0 + \beta_1 x_1 + \beta_2 x_2 $
So let's start with an arbitrary guess about the elements of $\beta$:
End of explanation
"""
log_o = beta[0] + beta[1] * x1 + beta[2] * x2
log_o
"""
Explanation: Plugging in the model, we get log odds.
End of explanation
"""
o = np.exp(log_o)
o
"""
Explanation: Which we can convert to odds.
End of explanation
"""
p = o / (o+1)
p
"""
Explanation: And then convert to probabilities.
End of explanation
"""
likes = np.where(y, p, 1-p)
likes
"""
Explanation: The likelihoods of the actual outcomes are $p$ where $y$ is 1 and $1-p$ where $y$ is 0.
End of explanation
"""
like = np.prod(likes)
like
"""
Explanation: The likelihood of $y$ given $\beta$ is the product of likes:
End of explanation
"""
import first
live, firsts, others = first.MakeFrames()
live = live[live.prglngth>30]
live['boy'] = (live.babysex==1).astype(int)
"""
Explanation: Logistic regression works by searching for the values in $\beta$ that maximize like.
Here's an example using variables in the NSFG respondent file to predict whether a baby will be a boy or a girl.
End of explanation
"""
model = smf.logit('boy ~ agepreg', data=live)
results = model.fit()
results.summary()
"""
Explanation: The mother's age seems to have a small effect.
End of explanation
"""
formula = 'boy ~ agepreg + hpagelb + birthord + C(race)'
model = smf.logit(formula, data=live)
results = model.fit()
results.summary()
"""
Explanation: Here are the variables that seemed most promising.
End of explanation
"""
endog = pd.DataFrame(model.endog, columns=[model.endog_names])
exog = pd.DataFrame(model.exog, columns=model.exog_names)
"""
Explanation: To make a prediction, we have to extract the exogenous and endogenous variables.
End of explanation
"""
actual = endog['boy']
baseline = actual.mean()
baseline
"""
Explanation: The baseline prediction strategy is to guess "boy". In that case, we're right almost 51% of the time.
End of explanation
"""
predict = (results.predict() >= 0.5)
true_pos = predict * actual
true_neg = (1 - predict) * (1 - actual)
sum(true_pos), sum(true_neg)
"""
Explanation: If we use the previous model, we can compute the number of predictions we get right.
End of explanation
"""
acc = (sum(true_pos) + sum(true_neg)) / len(actual)
acc
"""
Explanation: And the accuracy, which is slightly higher than the baseline.
End of explanation
"""
columns = ['agepreg', 'hpagelb', 'birthord', 'race']
new = pd.DataFrame([[35, 39, 3, 2]], columns=columns)
y = results.predict(new)
y
"""
Explanation: To make a prediction for an individual, we have to get their information into a DataFrame.
End of explanation
"""
import first
live, firsts, others = first.MakeFrames()
live = live[live.prglngth>30]
# Solution
# The following are the only variables I found that have a statistically significant effect on pregnancy length.
import statsmodels.formula.api as smf
model = smf.ols('prglngth ~ birthord==1 + race==2 + nbrnaliv>1', data=live)
results = model.fit()
results.summary()
"""
Explanation: This person has a 51% chance of having a boy (according to the model).
Exercises
Exercise: Suppose one of your co-workers is expecting a baby and you are participating in an office pool to predict the date of birth. Assuming that bets are placed during the 30th week of pregnancy, what variables could you use to make the best prediction? You should limit yourself to variables that are known before the birth, and likely to be available to the people in the pool.
End of explanation
"""
# Solution
def GoMining(df):
"""Searches for variables that predict birth weight.
df: DataFrame of pregnancy records
returns: list of (rsquared, variable name) pairs
"""
df['boy'] = (df.babysex==1).astype(int)
variables = []
for name in df.columns:
try:
if df[name].var() < 1e-7:
continue
formula='boy ~ agepreg + ' + name
model = smf.logit(formula, data=df)
nobs = len(model.endog)
if nobs < len(df)/2:
continue
results = model.fit()
except:
continue
variables.append((results.prsquared, name))
return variables
variables = GoMining(join)
# Solution
#Here are the 30 variables that yield the highest pseudo-R^2 values.
MiningReport(variables)
# Solution
# Eliminating variables that are not known during pregnancy and
# others that are fishy for various reasons, here's the best model I could find:
formula = 'boy ~ agepreg + fmarout5==5 + infever==1'
model = smf.logit(formula, data=join)
results = model.fit()
results.summary()
"""
Explanation: Exercise: The Trivers-Willard hypothesis suggests that for many mammals the sex ratio depends on “maternal condition”; that is, factors like the mother’s age, size, health, and social status. See https://en.wikipedia.org/wiki/Trivers-Willard_hypothesis
Some studies have shown this effect among humans, but results are mixed. In this chapter we tested some variables related to these factors, but didn’t find any with a statistically significant effect on sex ratio.
As an exercise, use a data mining approach to test the other variables in the pregnancy and respondent files. Can you find any factors with a substantial effect?
End of explanation
"""
# Solution
# I used a nonlinear model of age.
join.numbabes.replace([97], np.nan, inplace=True)
join['age2'] = join.age_r**2
# Solution
formula = 'numbabes ~ age_r + age2 + age3 + C(race) + totincr + educat'
formula = 'numbabes ~ age_r + age2 + C(race) + totincr + educat'
model = smf.poisson(formula, data=join)
results = model.fit()
results.summary()
"""
Explanation: Exercise: If the quantity you want to predict is a count, you can use Poisson regression, which is implemented in StatsModels with a function called poisson. It works the same way as ols and logit. As an exercise, let’s use it to predict how many children a woman has born; in the NSFG dataset, this variable is called numbabes.
Suppose you meet a woman who is 35 years old, black, and a college graduate whose annual household income exceeds $75,000. How many children would you predict she has born?
End of explanation
"""
# Solution
columns = ['age_r', 'age2', 'age3', 'race', 'totincr', 'educat']
new = pd.DataFrame([[35, 35**2, 35**3, 1, 14, 16]], columns=columns)
results.predict(new)
"""
Explanation: Now we can predict the number of children for a woman who is 35 years old, black, and a college
graduate whose annual household income exceeds $75,000
End of explanation
"""
# Solution
# Here's the best model I could find.
formula='rmarital ~ age_r + age2 + C(race) + totincr + educat'
model = smf.mnlogit(formula, data=join)
results = model.fit()
results.summary()
"""
Explanation: Exercise: If the quantity you want to predict is categorical, you can use multinomial logistic regression, which is implemented in StatsModels with a function called mnlogit. As an exercise, let’s use it to guess whether a woman is married, cohabitating, widowed, divorced, separated, or never married; in the NSFG dataset, marital status is encoded in a variable called rmarital.
Suppose you meet a woman who is 25 years old, white, and a high school graduate whose annual household income is about $45,000. What is the probability that she is married, cohabitating, etc?
End of explanation
"""
# Solution
# This person has a 75% chance of being currently married,
# a 13% chance of being "not married but living with opposite
# sex partner", etc.
columns = ['age_r', 'age2', 'race', 'totincr', 'educat']
new = pd.DataFrame([[25, 25**2, 2, 11, 12]], columns=columns)
results.predict(new)
"""
Explanation: Make a prediction for a woman who is 25 years old, white, and a high
school graduate whose annual household income is about $45,000.
End of explanation
"""
|
analysiscenter/dataset | examples/experiments/zeroing_of_weights/zeroing_of_weights.ipynb | apache-2.0 |
import sys
import copy
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from tqdm import tqdm_notebook as tqn
%matplotlib inline
sys.path.append('../../..')
from simple_model import ConvModel
from batchflow.opensets import MNIST
from batchflow import V, B
plt.style.use('seaborn-poster')
plt.style.use('ggplot')
"""
Explanation: How many weights can we zero?
Let's talk about the weights of neural networks.
This notebook was inspired by the article: Song Han et al., "Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding".
The main idea of the paper is that most of the weights in the network are not needed. It's time to check this statement.
1. Zeroing near-zero weights.
First we will train the network and keep its weights.
After that, we gradually get rid of the near-zero weights and see how the quality changes.
End of explanation
"""
mnist = MNIST()
"""
Explanation: About model:
2 convolution layers:
first layer:
kernel = 7x7x1, num_filters = 16 => 784 weights
second layer:
kernel = 5x5x16, num_filters = 32 => 12800 weights
2 dense layers:
first layer: num_filters = 256, num_inputs = 128 => 32768 weights
second layer: num_filters = 10, num_inputs = 256 => 2560 weights
Number of weights: 48912
As is traditional, we use the MNIST dataset
End of explanation
"""
train_pipeline = (
mnist.train.p
.init_variable('loss', init_on_each_run=list)
.init_model('dynamic',
ConvModel,
'conv',
config={'inputs': dict(images={'shape': (28, 28, 1)},
labels={'classes': (10),
'transform': 'ohe',
'name': 'targets'}),
'loss': 'se',
'optimizer': 'Adam',
'input_block/inputs': 'images',
'head/units': [256, 10],
'output': dict(ops=['labels', 'accuracy'])})
.train_model('conv',
feed_dict={'images': B('images'),
'labels': B('labels')})
)
test_pipeline = (
mnist.test.p
.import_model('conv', train_pipeline)
.init_variable('predict', init_on_each_run=list)
.predict_model('conv',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('predict'),
mode='a')
)
"""
Explanation: Create a pipeline to train and test our model
End of explanation
"""
MAX_ITER = 600
for curr_iter in tqn(range(1, MAX_ITER + 1)):
train_pipeline.next_batch(100, n_epochs=None, shuffle=True)
test_pipeline.next_batch(100, n_epochs=None, shuffle=True)
"""
Explanation: After that, make several iterations of the training process
End of explanation
"""
plt.xlabel('Iteration', fontsize=16)
plt.ylabel('Accuracy', fontsize=16)
acc = test_pipeline.get_variable('predict')
plt.plot(acc)
"""
Explanation: And explore the accuracy graph
End of explanation
"""
sess = train_pipeline.get_model_by_name('conv').session
graph = sess.graph
def apply(weights, biases):
""" Loading weights and biases in the model
Parameters
----------
weights : np.array
weights from model
biases : np.array
biases from model
"""
assign = []
for num_layer in range(0, 7, 2):
assign.append(tf.assign(graph.get_collection('trainable_variables')[num_layer], weights[num_layer//2]))
for num_layer in range(1, 8, 2):
assign.append(tf.assign(graph.get_collection('trainable_variables')[num_layer], biases[num_layer//2]))
sess.run(assign)
"""
Explanation: The get_model_by_name function returns the model. In our case, we fetch the TensorFlow session from it, which allows us to get the model's weights.
End of explanation
"""
weights, biases = [], []
variables = graph.get_collection('trainable_variables')
weights.append(sess.run(variables[::2]))
biases.append(sess.run(variables[1::2]))
weights = np.array(weights[0])
biases = np.array(biases[0])
weights_global = copy.deepcopy(weights)
biases_global = copy.deepcopy(biases)
percentage = []
accuracy = []
for const in tqn(np.linspace(1e-2, 9e-2)):
zeros_on_layer = []
for i in range(len(weights)):
weight_ind = np.where(np.abs(weights[i]) < const)
zeros_on_layer.append(len(weight_ind[0]) / np.array(weights[i].shape).prod())
weights[i][weight_ind] = 0
biases[i][np.where(np.abs(biases[i]) < const)] = 0
percentage.append(zeros_on_layer)
apply(weights, biases)
test_pipeline.next_batch(100, shuffle=True)
accuracy.append(acc[-1])
"""
Explanation: Next, let's zero weights by slowly moving the threshold from 1e-2 to 9e-2 and see how this affects the quality
End of explanation
"""
a = np.linspace(1e-2,9e-2)
_, ax = plt.subplots(2, 1, sharex=True, figsize=(14, 17))
ax = ax.reshape(-1)
ax[0].set_xlabel('Threshold', fontsize=14)
ax[0].set_ylabel('Percentage of zero weights', fontsize=14)
ax[0].plot(a, np.array(percentage)[:,0], label='Percentage of weights at zero on first conv layer', c='b')
ax[0].plot(a, np.array(percentage)[:,1], label='Percentage of weights at zero on second conv layer', c='y')
ax[0].plot(a, np.array(percentage)[:,2], label='Percentage of weights at zero on first dense layer', c='g')
ax[0].plot(a, np.array(percentage)[:,3], label='Percentage of weights at zero on second dense layer', c='m')
ax[0].axvline(x=0.03, ymax=0.27, color='r')
ax[0].axhline(y=0.3, xmax=0.27, color='r')
ax[0].plot(0.03, 0.3, 'ro')
ax[0].text(0.0016, 0.3, '0.3', fontsize=16, color='r')
ax[0].legend()
ax[1].set_ylabel('Accuracy', fontsize=14)
ax[1].set_xlabel('Threshold', fontsize=14)
ax[1].plot(a, accuracy, label='Total accuracy', c='g')
ax[1].axvline(x=0.03, ymax=0.91, color='r')
ax[1].axhline(y=0.94, xmax=0.27, color='r')
ax[1].plot(0.03, 0.94, 'ro')
ax[1].text(0.0016, 0.89, '0.9', fontsize=16, color='r')
ax[1].legend()
"""
Explanation: You can see two plots below. The first plot shows how the number of zeroed weights depends on the threshold, while the second one shows how the model's accuracy depends on the threshold
End of explanation
"""
weights = copy.deepcopy(weights_global)
biases = copy.deepcopy(biases_global)
def clear():
""" Function to load initial values """
weights = copy.deepcopy(weights_global)
biases = copy.deepcopy(biases_global)
apply(weights, biases)
"""
Explanation: According to the graphs, you can zero about 30 percent of the weights without losing quality.
2. Replacing the weights on each layer with cluster centers.
The next step is to split the weights of each layer into multiple clusters using the k-means algorithm and replace them with the cluster centers.
If the quality doesn't change, this method allows storing the weights more compactly and, therefore, the network will take up less space.
Now we save the parameters from the already trained network and then replace each weight with the value of the nearest cluster center. Don't forget to save the original parameters, so the network does not have to be retrained several times.
End of explanation
"""
accuracy = []
clusters = np.hstack((np.linspace(30, 4, 15, dtype=np.int32), \
np.linspace(100, 4, 15, dtype=np.int32), \
np.linspace(500, 4, 15, dtype=np.int32), \
np.linspace(50, 4, 15, dtype=np.int32))).reshape(4,-1).T
uniq = (sum([len(np.unique(i)) for i in weights]) + sum([len(np.unique(i)) for i in biases]))
for cluster in tqn(zip(clusters,np.array([2, 2, 2, 2]*15).reshape(15, 4))):
weights_clust, bias_clust = cluster
for i in range(4):
kmeans = KMeans(weights_clust[i]).fit(weights[i].reshape(-1, 1).astype(np.float64))
shape = weights[i].shape
weights[i] = kmeans.cluster_centers_[kmeans.predict(weights[i].reshape(-1, 1))].reshape(shape)
kmeans = KMeans(bias_clust[i]).fit(biases[i].reshape(-1, 1))
shape = biases[i].shape
biases[i] = kmeans.cluster_centers_[kmeans.predict(biases[i].reshape(-1, 1))].reshape(shape)
apply(weights, biases)
test_pipeline.next_batch(100)
accuracy.append(acc[-1])
clear()
"""
Explanation: We gradually reduce the number of clusters from a fairly large number, for which the quality will hardly change, down to a very small one, at which the quality drops noticeably. The initial number of clusters differs per layer, because each layer has a different number of weights: the more weights, the more clusters. You can see this in the clusters variable.
End of explanation
"""
plt.xlabel('Clustering step', fontsize=16)
plt.ylabel('Accuracy', fontsize=16)
plt.plot(accuracy)
optimal_index = np.array([i for i in range(len(accuracy)-1) if accuracy[i] - accuracy[i+1] > 0.05][0])
print("Optimal set of clusters is: {} from {} number of clusters".format(clusters[optimal_index], optimal_index))
"""
Explanation: Now we can choose the optimal number of clusters using the quality graph
End of explanation
"""
|
IS-ENES-Data/submission_forms | dkrz_forms/Templates/CMIP6_submission_form.ipynb | apache-2.0 |
# Evaluate this cell to identify your form
from dkrz_forms import form_widgets, form_handler, checks
form_infos = form_widgets.show_selection()
# Evaluate this cell to generate your personal form instance
form_info = form_infos[form_widgets.FORMS.value]
sf = form_handler.init_form(form_info)
form = sf.sub.entity_out.report
"""
Explanation: DKRZ CMIP6 data submission form for ESGF publication
Overview
You want to store and publish CMIP6 data at DKRZ via ESGF? This form will provide some background information and guide you through the process.
<br> To organize the data ingest we need some specific information with respect to the CMIP6 data collection you want to publish (e.g. concerning data structure, content and quality). The form has to be filled in before the ESGF data ingest and publication process can be started.
<br> In case you have questions, please contact esgf-publication@dkrz.de
Preconditions for your data submission
You need to be aware of a set of technical requirements which have to be addressed before CMIP6 data submission to DKRZ and ESGF data publication are possible. They are collected at the official WCRP CMIP Phase 6 (CMIP6) site in the Guide to CMIP6 Participation. In the following a short summary of key prerequisites is given:
Your institution as well as your model has to be registered on the WCRP-CMIP github site
Contact and citation information has to be registered in the citation GUI (see the documentation of the GUI)
Your data conforms to the CMIP6 specifications for file names, directory structures and CMIP6 Data Reference Syntax (DRS)
Directory structure:
<pre><code>
<mip_era>/<activity_id>/<institution_id>/<source_id>/
<experiment_id>/<member_id>/<table_id>/<variable_id>/<grid_label>/<version>
</code>
</pre>
File naming convention:
<pre><code>
<variable_id>_<table_id>_<source_id>_<experiment_id>_<member_id>_<grid_label>[_<time_range>].nc
</code>
</pre>
(A small illustrative sketch of composing such a file name follows after this overview.)
Please make sure your data is quality checked before submission to a data center. Two tools for checking are recommended:
CMOR/PREPARE checker (minimal check):
github: https://github.com/PCMDI/cmor
documentation: https://cmor.llnl.gov/mydoc_cmip6_validator/
DKRZ_QA checker (includes CMOR/PREPARE checker optionally):
github: https://github.com/IS-ENES-Data/QA-DKRZ
documentation: http://qa-dkrz.readthedocs.io/en/latest/
Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate.<br>
Evaluation of cells is done by selecting the cell and pressing the keys "Shift" + "Enter".
Please evaluate the following cell to identify your form (associate your name and email to this form).
Attention: the name selected must match the name at the top of this page!
End of explanation
"""
form.submission_type = "init" # example: sf.submission_type = "initial_version"
"""
Explanation: Step 1: provide generic data submission related information
Type of submission
please specify the type of this data submission:
- "initial_version" for first submission of data
- "new _version" for a re-submission of previousliy submitted data
- "retract" for the request to retract previously submitted data
End of explanation
"""
form.cmor = '..' ## options: 'CMOR', 'CDO-CMOR', etc.
form.cmor_compliance_checks = '..' ## please name the tool you used to check your files with respect to CMIP6 compliance
## 'PREPARE' for the CMOR PREPARE checker and "DKRZ" for the DKRZ tool.
"""
Explanation: CMOR compliance
please provide information on the software and tools you used to make sure your data is CMIP6 CMOR3 compliant
End of explanation
"""
form.es_doc = " .. " # 'yes' related esdoc model information is/will be available, 'no' otherwise
form.errata = " .. " # 'yes' if errata information was provided based on the CMIP6 errata mechanism
# fill the following info only in case this form refers to new versions of already published ESGF data
form.errata_id = ".." # the errata id provided by the CMIP6 errata mechanism
form.errata_comment = "..." # any additional information on the reason of this new version, not yet provided
"""
Explanation: Documentation availability
please provide information with respect to availability of es-doc model documentation
in case this form addresses a new version replacing older versions: provide info on the availability of errata information, especially errata information provided via the CMIP6 errata web frontend
End of explanation
"""
form.uniqueness_of_tracking_id = "..." # example: form.uniqueness_of_tracking_id = "yes"
"""
Explanation: Uniqueness of tracking_id and creation_date
Do all your files have unique tracking_ids assigned in the structure required by CMIP6?
In case any of your files is replacing a file already published, it must not have the same tracking_id nor the same creation_date as the file it replaces.
Did you make sure that this is true?
Reply 'yes'; otherwise adapt your files, as no ESGF publication is possible.
End of explanation
"""
form.data_dir_1 = " ... "
# uncomment for additional entries ...
# form.data_dir_2 = " ... "
# form.data_dir_3 = " ... "
# ...
"""
Explanation: Generic content characterization based on CMIP6 directory structure
Please name the respective directory names characterizing your submission:
- all files within the specified directory pattern are subject to ESGF publication
CMIP6 directory structure:
<pre><code>
<CMIP6>/<activity_id>/<institution_id>/<source_id>/
<experiment_id>/<member_id>/<table_id>/<variable_id>/
<grid_label>/<version> </code> </pre>
e.g.
form.data_dir_1 = '/CMIP6/CMIP/MPI-M/MPIESM-1-2-HR/
piControl/r1i2p1f1//3hr//* '
addresses all 3hr data in the specified experiment/member
End of explanation
"""
form.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
form.grid = ".."
"""
Explanation: Provide specific additional information for this submission
variables, grid, calendar, ...
example file name
.. what do we need ..?
End of explanation
"""
form.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
"""
Explanation: Exclude variable list
In each CMIP6 file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
End of explanation
"""
form.terms_of_use = "..." # has to be "ok"
"""
Explanation: CMIP6 terms of use
please explicitly note that you are ok with the CMIP6 terms of use
End of explanation
"""
form.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
form.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... "
"""
Explanation: Step 2: provide information on the data handover mechanism
please provide the following information (and other information needed for data transport and data publication)
End of explanation
"""
form.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
"""
Explanation: Example file name
Please provide an example file name of a file in your data collection;
this name will be used to derive the others.
End of explanation
"""
# simple consistency check report for your submission form - not completed
report = checks.check_report(sf,"sub")
checks.display_report(report)
"""
Explanation: Step 3: Check your submission before submission
End of explanation
"""
form_handler.save_form(sf,"any comment you want") # add a comment
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
"""
Explanation: Step 4: Save and review your form
your form will be stored (the form name consists of your last name plus your keyword)
End of explanation
"""
#form_handler.email_form_info(sf)
form_handler.form_submission(sf)
"""
Explanation: Step 5: officially submit your form
the form will be submitted to the DKRZ team for processing
you also receive a confirmation email with a reference to your online form for future modifications
End of explanation
"""
|
RoebideBruijn/datascience-intensive-course | project/notebooks/project-milestone-report.ipynb | mit |
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
sns.set_style('white')
"""
Explanation: Milestone report
Instruction
You have proposed a project, collected a data set, cleaned up the data and explored it with descriptive and inferential statistics techniques. Now’s the time to take stock of what you’ve learned. The project milestone is an opportunity for you to practice your data story skills. Your milestone will be reached when you produce an early draft of your final Capstone report. This is a slightly longer (3-5 page) draft that should have the following:
An introduction to the problem: What is the problem? Who is the Client? (Feel free to reuse points 1-2 from your proposal document)
A deeper dive into the data set:
What important fields and information does the data set have?
What are its limitations i.e. what are some questions that you cannot answer with this data set?
What kind of cleaning and wrangling did you need to do?
Are there other datasets you can find, use and combine with, to answer the questions that matter?
Any preliminary exploration you’ve performed and your initial findings. Test the hypotheses one at a time. Often, the data story emerges as a result of a sequence of testing hypothesis e.g. You first tested if X was true, and because it wasn't, you tried Y, which turned out to be true.
Based on these findings, what approach are you going to take? How has your approach changed from what you initially proposed, if applicable?
Add your code and milestone report to the github repository. As before, once your mentor has approved your milestone document, please share the github repository URL on the community and ask the community for feedback.
While we require only one milestone report, we encourage you and your mentor to plan multiple milestones, especially for more complex projects.
End of explanation
"""
loans = pd.read_csv('../data/loan.csv')
print(loans.shape)
closed_status = ['Fully Paid', 'Charged Off',
'Does not meet the credit policy. Status:Fully Paid',
'Does not meet the credit policy. Status:Charged Off']
closed_loans = loans[loans['loan_status'].isin(closed_status)]
print(closed_loans.shape)
sns.countplot(loans['loan_status'], color='turquoise')
plt.xticks(rotation=90)
plt.show()
sns.countplot(closed_loans['loan_status'], color='turquoise')
plt.xticks(rotation=90)
plt.show()
"""
Explanation: Introduction
Crowdfunding has become a new and exciting way to raise capital and to invest. Lending club has jumped into the trend by offering loans with fixed interest rates and terms that the public can choose to invest in. Lending club screens the loans that are applied for, and only 10% gets approved and is subsequently offered to the public. By investing a small proportion in many different loans, investors can diversify their portfolio and in this way keep the default risk to a minimum (which is estimated by lending club to be 4%). For their services lending club asks a fee of 1%. For investors this is an interesting way to get profit on their investment, since it supposedly gives more stable returns than the stock market and higher interest rates than a savings account. The profits depend on the interest rate and the default rate. Therefore it is interesting to see whether certain characteristics of the loan or the borrower give a bigger chance of default, and whether loans with higher interest rates have a bigger chance to default.
For this project the lending club loans dataset from Kaggle is used (https://www.kaggle.com/wendykan/lending-club-loan-data). Their file contains complete loans data for loans issued between 2007 and 2015. The client is the investor who wants to get the most profit on his portfolio of loans and wants to know whether investing with lending club is profitable. The problem is that some of the loans will not be fully paid; therefore the interest rate is not the only interesting characteristic of the loan. We will therefore investigate which characteristics of the loans have an effect on the chance that a loan gets 'charged off'.
Data set
loan status
The complete dataset consists of 887,379 loans with 74 features. We select only the loans that went to full term, because we don't know whether the loans that are still ongoing will end in 'charged off' or 'fully paid'. Most loans are current loans, but there are four categories of loans that went to full term: 'Fully Paid', 'Charged Off', 'Does not meet the credit policy. Status:Fully Paid', 'Does not meet the credit policy. Status:Charged Off'. When selecting only those categories, 255,720 of the loans are left, of which most are 'fully paid'.
End of explanation
"""
nr_charged_off = (len(closed_loans[closed_loans['loan_status']=='Charged Off']) +
len(closed_loans[closed_loans['loan_status']=='Does not meet the credit policy. Status:Charged Off']))
round(nr_charged_off / len(closed_loans) * 100)
"""
Explanation: percentage charged off
The first question is what the percentage of 'charged off' loans actually is, so our investors know the risk. Lending club claims it's around 4%. But in the loans that went to full term we see that the percentage is a shocking 18%. So hopefully lending club's selection of the loans will become better in the future in order to get this risk down. This is a question that is left for the future, when the current loans have gone to full term.
End of explanation
"""
nr_nulls = closed_loans.isnull().apply(sum, 0)
nr_nulls = nr_nulls[nr_nulls != 0]
print(nr_nulls.sort_values(ascending=False) / 255720)
print('nr of features having more than 5% missing values:', sum(nr_nulls.sort_values(ascending=False) / 255720 > 0.05))
"""
Explanation: features
There are 74 features in this dataset. They are displayed below. Some have to do with the loan (32) and some with the borrower asking for the loan (39). A few are about loans that were applied for by more than one borrower, namely 'annual_inc_joint', 'dti_joint' and 'verification_status_joint'. But in the loans that went to full term there is only one loan that is not an individual loan, hence these features are not interesting in this case. Also a lot of features have missing values. If we concentrate only on features that have less than 5% missing values, we are left with only 48 features.
Loan
- id: loan
- loan_amnt: in 1,914 cases the loan amount is bigger than the funded amount
- funded_amnt
- funded_amnt_inv
- term: 36 or 60 months
- int_rate: interest rates
- installment: height monthly pay
- grade: A-G, A low risk, G high risk
- sub_grade
- issue_d: month-year loan was funded
- loan_status
- pymnt_plan: n/y
- url
- desc: description provided by borrower
- purpose: 'credit_card', 'car', 'small_business', 'other', 'wedding', 'debt_consolidation', 'home_improvement', 'major_purchase', 'medical', 'moving', 'vacation', 'house', 'renewable_energy','educational'
- title: provided by borrower
- initial_list_status: w/f (what is this?)
- out_prncp: outstanding prinicipal --> still >0 in fully paid?!
- out_prncp_inv
- total_pymnt
- total_pymnt_inv
- total_rec_prncp
- total_rec_int: total recieved interest
- total_rec_late_fee
- recoveries: post charged off gross recovery
- collection_recovery_fee: post charged off collection fee
- last_pymnt_d
- last_pymnt_amnt
- next_pymnt_d
- collections_12_mths_ex_med: almost all 0
- policy_code: 1 publicly available, 2 not
- application_type (only 1 JOINT, rest INDIVIDUAL)
Borrower
- emp_title
- emp_length: 0-10 (10 stands for >=10)
- home_ownership: 'RENT', 'OWN', 'MORTGAGE', 'OTHER', 'NONE', 'ANY'
- member_id: person
- annual_inc (stated by borrower)
- verification_status: 'Verified', 'Source Verified', 'Not Verified' (income verified by LC?)
- zip_code
- addr_state
- dti: debt to income (without mortgage)
- delinq_2yrs: The number of 30+ days past-due incidences of delinquency in the borrower's credit file for the past 2 years
- mths_since_last_delinq
- mths_since_last_record
- pub_rec
- earliest_cr_line
- inq_last_6mths
- open_acc (nr of open credit lines)
- total_acc (nr of total credit lines in credit file)
- revol_bal
- last_credit_pull_d
- mths_since_last_major_derog: Months since most recent 90-day or worse rating
- acc_now_delinq: The number of accounts on which the borrower is now delinquent.
- tot_coll_amt: Total collection amounts ever owed
- tot_cur_bal: Total current balance of all accounts
- open_acc_6m: Number of open trades in last 6 months
- open_il_6m: Number of currently active installment trades
- open_il_12m: Number of installment accounts opened in past 12 months
- open_il_24m
- mths_since_rcnt_il: Months since most recent installment accounts opened
- total_bal_il: Total current balance of all installment accounts
- il_util: Ratio of total current balance to high credit/credit limit on all install acct
- open_rv_12m: Number of revolving trades opened in past 12 months
- open_rv_24m
- max_bal_bc: Maximum current balance owed on all revolving accounts
- all_util: Balance to credit limit on all trades
- total_rev_hi_lim: Total revolving high credit/credit limit
- inq_fi: Number of personal finance inquiries
- total_cu_tl: Number of finance trades
- inq_last_12m: Number of credit inquiries in past 12 months
Two borrowers (only in 1 case)
- annual_inc_joint
- dti_joint
- verification_status_joint
End of explanation
"""
paid_status = ['Fully Paid', 'Does not meet the credit policy. Status:Fully Paid']
closed_loans['charged_off'] = [False if loan in paid_status else True for loan in closed_loans['loan_status']]
sns.distplot(closed_loans['funded_amnt'], kde=False, bins=50)
plt.show()
sns.countplot(closed_loans['term'], color='turquoise')
plt.show()
purpose_paid = closed_loans.groupby(['purpose', 'charged_off'])['id'].count()
sns.barplot(data=pd.DataFrame(purpose_paid).reset_index(), x='purpose', y='id', hue='charged_off')
plt.xticks(rotation=90)
plt.show()
sns.boxplot(data=closed_loans, x='charged_off', y='dti')
plt.show()
home_paid = closed_loans.groupby(['home_ownership', 'charged_off'])['id'].count()
sns.barplot(data=pd.DataFrame(home_paid).reset_index(), x='home_ownership', y='id', hue='charged_off')
plt.xticks(rotation=90)
plt.show()
from scipy.stats import ttest_ind
print(ttest_ind(closed_loans[closed_loans['charged_off']==True]['dti'], closed_loans[closed_loans['charged_off']==False]['dti']))
print((closed_loans[closed_loans['charged_off']==True]['dti']).mean())
print((closed_loans[closed_loans['charged_off']==False]['dti']).mean())
print(closed_loans.groupby(['home_ownership', 'charged_off'])['id'].count()[1:3])
print(closed_loans.groupby(['home_ownership', 'charged_off'])['id'].count()[7:11])
print('mortgage:', 20226/(105874+20226))
print('own:', 4074/(18098+4074))
print('rent:', 21663/(85557+21663))
"""
Explanation: limitations
To answer the questions about the 'charged off' status and whether investing with lending club is profitable, we use only the loans that went to full term. The terms the loans run are 3 or 5 years, and the latest loan information is from 2015. Hence the most recent loans we can look at are already from 2012 and the rest are even older. It might be that lending club has changed its protocols, and the results found on this dataset might therefore not apply anymore to new loans. Also, 1/3 of the features have so many missing values that they can't be used for analysis. There is one feature, 'initial_list_status', where they do not explain what it means (values w/f), hence it cannot be used for interpretation. Some of the features are unique for different loans, like 'desc', 'url', 'id', 'title', and are therefore not interesting for our analysis. There might be other features about a borrower that have an influence on the 'charged off' rate, for instance 'gender', 'age', 'nr-of-kids', 'nr-of-pets', 'marital status', 'political preference'. But we will not be able to investigate this, since we are restricted to features that lending club collected. Also, some features might have been registered better for newer loans than for older loans, or in a different way (because protocols changed), and this might influence our results.
cleaning and wrangling
First comes the selection of only loans that went to full term and of only the features without too many missing values. In a later stage, we want to use features for prediction that are selected based on their ability to lead to insights for new investors. Since we work with sklearn, non-numerical features will have to be transformed into numerical features. Dates can be transformed into timestamps, and categorical features will be transformed as well as possible into numerical values. Ordering is important for most algorithms, hence it's important to preserve an order in the categorical features when transforming them into numerical ones. Scaling/normalizing is also important for some algorithms, and we have to keep in mind that we must apply the exact same transformation to the test set as we did to the training set. Lastly, missing values, infinity and minus infinity values are not possible during prediction, so they also need to be transformed (a minimal illustrative sketch follows at the end of this section).
other datasets
The American government has a lot of other datasets available that can be used in combination with this dataset. For instance, both zipcode and state information is available. Hence we might add a feature that describes the political preference of the state the person lives in. Secondly, we might transform the state feature into 'north/west/south/east'. Also, we might use the average income for a certain zipcode or state as an extra feature, or the average age.
## Explorations
features of the loans
We will look at a few interesting features to see what the loan characteristics look like. The funded amount turns out to be between 0 and 35,000. Hence it is more like an amount to buy a car than to buy a house. Lending club therefore competes with credit cards and consumer credits. The loans are either 3 or 5 years in length. Furthermore, the purpose of the loan could have something to do with the chance whether someone will pay the loan back. If it's for debt consolidation, someone has more loans and will therefore probably be more likely to get into trouble. As it turns out, almost all loans are for debt consolidation or credit card debt, which is practically the same thing. Hence it does not look like the most interesting feature to base your choice of investment on. Moreover, debt-to-income of course also seems a very interesting feature. But the difference between loans that were fully paid and loans that were charged off is only 16% versus 18% debt-to-income. Nevertheless, this difference is significant with a T-test. Lastly, people with a mortgage do seem to pay off their loans more often than people who rent. The order is mortgage (16% charged off), own (18% charged off) and rent (20% charged off).
End of explanation
"""
grade_paid = closed_loans.groupby(['grade', 'charged_off'])['id'].count()
risk_grades = dict.fromkeys(closed_loans['grade'].unique())
for g in risk_grades.keys():
risk_grades[g] = grade_paid.loc[(g, True)] / (grade_paid.loc[(g, False)] + grade_paid.loc[(g, True)])
risk_grades = pd.DataFrame(risk_grades, index=['proportion_unpaid_loans'])
sns.stripplot(data=risk_grades, color='darkgray', size=15)
closed_loans['grade'] = closed_loans['grade'].astype('category', ordered=True)
sns.boxplot(data=closed_loans, x='grade', y='int_rate', color='turquoise')
"""
Explanation: grade
Lending club has made its own risk assessment of the loans and gives them categories, namely A-F, including subcategories like A1 etc. As we can see below, the proportion of loans that get charged off increases nicely with the risk category (grade). Even in the highest risk category, more than half of the loans still get fully paid. To compensate for the higher risk, investors in these higher risk loans get higher interest rates, although the relation is not completely linear.
End of explanation
"""
closed_loans['profit'] = (closed_loans['total_rec_int'] + closed_loans['total_rec_prncp'] + closed_loans['collections_12_mths_ex_med']
+ closed_loans['total_rec_late_fee'] + closed_loans['recoveries'] - closed_loans['funded_amnt']
- closed_loans['collection_recovery_fee'])
profits = closed_loans.groupby('grade')['profit'].sum()
sns.barplot(data=profits.reset_index(), x='grade', y='profit', color='gray')
plt.show()
profits = closed_loans.groupby('charged_off')['profit'].sum()
sns.barplot(data=profits.reset_index(), x='charged_off', y='profit')
plt.show()
profits = closed_loans.groupby(['grade', 'charged_off'])['profit'].sum()
sns.barplot(data=profits.reset_index(), x='profit', y='grade', hue='charged_off', orient='h')
plt.show()
"""
Explanation: To answer the question whether it's profitable to invest in the higher risk categories, one could calculate the charged-off percentage and the average interest rate. But then you don't take into account that some loans default very quickly while others default right before the end, and this difference matters a lot for how much profit or loss one makes on a loan. Hence it's important to know how much money came back in total per loan, minus the money one put in, to see whether it turned out to be profitable in the end. Therefore 'total_rec_int', 'total_rec_prncp', 'total_rec_late_fee', 'recoveries' and 'collections_12_mths_ex_med' are all counted as income from the loan, while 'funded_amnt' is seen as what was put into the loan at the start and 'collection_recovery_fee' is what was paid to the person who collected the money that was recovered after the loan was charged off. This leads to the conclusion that if one had invested in all loans of a given category, only the A-C categories were profitable, and that the higher interest rates of the riskier categories did not compensate for the loss of money due to the charging off of loans.
End of explanation
"""
|
harmsm/pythonic-science | chapters/00_inductive-python/04_loops.ipynb | unlicense |
for i in range(10):
print(i)
for i in range(15):
print(i)
"""
Explanation: Loops
Loops let you execute code over and over.
Basic Loops
Predict
What will this code do?
End of explanation
"""
for i in range(11):
print(i)
"""
Explanation: Summarize
What does the range function do?
Modify
Change the cell below so it prints all $i$ between 0 and 20.
End of explanation
"""
for i in range(10):
if i > 5:
print(i)
"""
Explanation: Implement
Write a loop that calculates the sum (one possible solution is sketched below):
$$ 1 + 2 + 3 + ... + 1001$$
Loops with conditionals
Predict
What will this code do?
End of explanation
"""
for i in range(10):
if i > 5:
print(i)
"""
Explanation: Modify
Change the code below so it goes from 0 to 20 and prints all i less than 8.
End of explanation
"""
for i in range(10):
if i > 5:
break
print(i)
"""
Explanation: Predict
What will this code do?
End of explanation
"""
for i in range(10):
print("HERE")
if i > 5:
continue
print(i)
"""
Explanation: Summarize
What does the break keyword do?
Predict
What will this code do?
End of explanation
"""
x = 0
for i in range(1,100001):
x = x + i
if x > 30000:
break
"""
Explanation: Summarize
What does the continue keyword do?
Implement
Write a program that starts calculating the sum:
$$1 + 2 + 3 + ... 100,000$$
but stops if the sum is greater than 30,000.
End of explanation
"""
x = 1
while x < 10:
print(x)
x = x + 1
"""
Explanation: While loops
Predict
What will this code do?
End of explanation
"""
x = 1000
while x > 100:
print(x)
x = x - 10
"""
Explanation: Predict
What will this code do?
End of explanation
"""
x = 0
while x < 20:
print(x*x)
x = x + 1
"""
Explanation: Summarize
How does while work?
Modify
Change the following code so it will print all values of $x^2$ for $x$ between 5 and 11 (one possible answer is sketched below).
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/hadgem3-gc31-hm/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hm', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NERC
Source ID: HADGEM3-GC31-HM
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:26
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
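# Illustrative example only (not from the original notebook); the author below is
# hypothetical. Replace the name and email with the real document authors, e.g.:
# DOC.set_author("Jane Doe", "jane.doe@example.org")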
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe what the snow free albedo calculations depend on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, specify what the snow albedo is a function of
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
HeyIamJames/bikeshare
|
BikeShareStep3.ipynb
|
mit
|
station_counts = usage.groupby('station_start')['station_start'].count()
station_rentals_per_day = DataFrame()
station_rentals_per_day['rentals'] = station_counts.values / 366.0
station_rentals_per_day['station'] = station_counts.index
station_rentals_per_day.head()
"""
Explanation: To start with, we'll need to compute the number of rentals per station per day. Use pandas to do that.
End of explanation
"""
s = stations[['station']]
u = pd.concat([usage['station_start']], axis=1, keys=['station'])
counts = u['station'].value_counts()
c = DataFrame(counts.index, columns=['station'])
c['counts'] = counts.values
c['counts'] = c['counts'].apply(lambda x: x / 366)
m = pd.merge(s, c, on='station')
stations_data = stations.merge(m, on='station')
df = DataFrame(stations_data.index, columns=['station'])
df['avg_rentals'] = m[['counts']]
df['station'] = m[['station']]
stations_vals = pd.merge(left=df, right=stations, on='station')
x = stations_vals[list(stations_vals.columns.values[8:])]
y = stations_vals[list(stations_vals.columns.values[1:2])]
linear_regression = linear_model.LinearRegression()
linear_regression.fit(x, y)
"""
Explanation: a. Our stations data has a huge number of quantitative attributes: fast_food, parking, restaurant, etc... Some of them are encoded as 0 or 1 (for absence or presence), others represent counts. To start with, run a simple linear regression where the input (x) variables are all the various station attributes and the output (y) variable is the average number of rentals per day.
End of explanation
"""
plt.scatter(linear_regression.predict(x), y)
plt.xlabel('predicted values')
plt.ylabel('actual values')
plt.show()
"""
Explanation: b. Plot the predicted values (model.predict(x)) against the actual values and see how they compare.
End of explanation
"""
linear_regression.coef_
"""
Explanation: c. In this case, there are 129 input variables and only 185 rows which means we're very likely to overfit. Look at the model coefficients and see if anything jumps out as odd.
End of explanation
"""
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=42)
lin_regr = linear_model.LinearRegression()
lin_regr.fit(x_train, y_train)
plt.scatter(lin_regr.predict(x_test), y_test)
plt.xlabel('predicted values')
plt.ylabel('actual values')
plt.show()
plt.scatter(y_test, lin_regr.predict(x_test) )
"""
Explanation: Coefficients with outlier values suggest the model would be inaccurate at predicting.
d. Go back and split the data into a training set and a test set. Train the model on the training set and evaluate it on the test set. How does it do?
End of explanation
"""
model = Lasso(alpha=.1)
model.fit(x_train, y_train)
np.round(model.coef_, 1)
model = Lasso(alpha=.5)
model.fit(x_train, y_train)
np.round(model.coef_, 1)
"""
Explanation: Too many outliers; the predictions would be inaccurate.
a. Since we have so many variables, this is a good candidate for regularization. In particular, since we'd like to eliminate a lot of them, lasso seems like a good candidate. Build a lasso model on your training data for various values of alpha. Which variables survive?
End of explanation
"""
plt.scatter(model.predict(x_test), y_test) # evaluate the lasso fit from above, not the plain OLS fit
plt.xlabel('predicted values')
plt.ylabel('actual values')
plt.show()
"""
Explanation: b. How does this model perform on the test set?
End of explanation
"""
x = stations_vals[list(stations_vals.columns.values[111:112])]
y = stations_vals[list(stations_vals.columns.values[1:2])]
lin_regr = linear_model.LinearRegression()
lin_regr.fit(x, y)
plt.scatter(lin_regr.predict(x), y)
plt.xlabel('predicted value')
plt.ylabel('actual value')
plt.show()
"""
Explanation: Better.
No matter how high I make alpha, the coefficient on crossing ("number of nearby crosswalks") never goes away. Try a simple linear regression on just that variable.
End of explanation
"""
|
kwhanalytics/rdtools
|
docs/degradation_example.ipynb
|
mit
|
from datetime import timedelta
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pvlib
%matplotlib inline
#Update the style of plots
import matplotlib
matplotlib.rcParams.update({'font.size': 12,
'figure.figsize': [4.5, 3],
'lines.markeredgewidth': 0,
'lines.markersize': 2
})
import rdtools
"""
Explanation: Degradation example
This Jupyter notebook is intended to test the degradation analysis workflow. In addition, the notebook demonstrates the effects of changes in the workflow.
Degradation analysis of PV systems includes several steps:
1. <b>Standardize</b> data inputs
2. <b>Normalize</b> data using a performance metric
3. <b>Filter</b> data that creates bias
4. <b>Analyze</b> aggregated data to estimate the degradation rate
End of explanation
"""
file_name = '84-Site_12-BP-Solar.csv'
df = pd.read_csv(file_name)
df = df.rename(columns = {
'12 BP Solar - Active Power (kW)':'power',
'12 BP Solar - Wind Speed (m/s)': 'wind',
'12 BP Solar - Weather Temperature Celsius (\xc2\xb0C)': 'Tamb',
'12 BP Solar - Global Horizontal Radiation (W/m\xc2\xb2)': 'ghi',
'12 BP Solar - Diffuse Horizontal Radiation (W/m\xc2\xb2)': 'dhi'
})
df.index = pd.to_datetime(df.Timestamp)
df.index = df.index.tz_localize('Australia/North') # TZ is required for irradiance transposition
# Change power to watts
df['power'] = df.power * 1000.0
# There is some missing data, but we can infer the frequency from the first several data points
freq = pd.infer_freq(df.index[:10])
# And then set the frequency of the dataframe
df = df.resample(freq).asfreq()
# plot the AC power time series
fig, ax = plt.subplots()
ax.plot(df.index, df.power, 'o', alpha = 0.01)
ax.set_ylim(0,7000)
fig.autofmt_xdate()
ax.set_ylabel('AC Power (W)');
# Calculate energy yield in kWh
df['energy'] = df.power * pd.to_timedelta(df.power.index.freq).total_seconds()/(3600.0)
"""
Explanation: 1. <b>Standardize</b>
Please download the site data from Site 12, and unzip the csv file in the folder:
./rdtools/docs/
http://dkasolarcentre.com.au/historical-data/download
The following script loads the data, parses a pandas.DateTimeIndex, and renames the critical columns.
End of explanation
"""
# Metadata
lat = -23.762028
lon = 133.874886
azimuth = 0
tilt = 20
pdc = 5100.0 # DC rating in watts
meta = {"altitude":0,
"latitude": lat,
"longitude": lon,
"Name": "Alice Springs",
"State": "n/a",
"TZ": 8.5}
# calculate the POA irradiance
sky = pvlib.irradiance.isotropic(tilt, df.dhi)
sun = pvlib.solarposition.get_solarposition(df.index, lat, lon)
df['dni'] = (df.ghi - df.dhi)/np.cos(np.deg2rad(sun.zenith))
beam = pvlib.irradiance.beam_component(tilt, azimuth, sun.zenith, sun.azimuth, df.dni)
df['poa'] = beam + sky
# Calculate temperature
df_temp = pvlib.pvsystem.sapm_celltemp(df.poa, df.wind, df.Tamb, model = 'open_rack_cell_polymerback')
df['Tcell'] = df_temp.temp_cell
pv = pvlib.pvsystem.systemdef(meta, surface_tilt=tilt, surface_azimuth=azimuth,
albedo=0.2, modules_per_string=6, strings_per_inverter=5)
pvwatts_kws = {"poa_global" : df.poa,
"P_ref" : pdc,
"T_cell" :df.Tcell,
"G_ref" : 1000,
"T_ref": 25,
"gamma_pdc" : -0.005}
normalized, insolation = rdtools.normalize_with_pvwatts(df.energy, pvwatts_kws)
df['normalized'] = normalized
df['insolation'] = insolation
# Plot the normalized power time series
fig, ax = plt.subplots()
ax.plot(normalized.index, normalized, 'o', alpha = 0.05)
ax.set_ylim(0,7)
fig.autofmt_xdate()
ax.set_ylabel('Normalized power');
"""
Explanation: 2. <b>Normalize</b>
Data normalization typically requires some additional metadata about the PV system power time series. Metadata consists of site location information, module product details, PV circuit configuration, and other items.
End of explanation
"""
# Perform rudimentary filtering; more advanced filtering will be integrated
# into Rdtools in the future
filter_criteria = ((df['normalized']>0) & (df['normalized']<2) & (df.poa>200))
filtered = df[filter_criteria]
filtered = filtered[['insolation', 'normalized']]
# Plot the normalized and filtered power time series
fig, ax = plt.subplots()
ax.plot(normalized.index, normalized, 'o', alpha = 0.05)
ax.plot(filtered.index, filtered.normalized, 'o', alpha = 0.05)
ax.set_ylim(0,2.4)
fig.autofmt_xdate()
ax.set_ylabel('Normalized power');
"""
Explanation: 3. <b>Filter</b>
Data filtering is used to exclude data points that represent invalid data, create bias in the analysis, or introduce significant noise.
End of explanation
"""
daily = rdtools.aggregation_insol(filtered.normalized, filtered.insolation)
# Plot the normalized and filtered power time series along with the aggregation
fig, ax = plt.subplots()
ax.plot(filtered.index, filtered.normalized, 'o', alpha = 0.05)
ax.plot(daily.index, daily, 'o', alpha = 0.1)
ax.set_ylim(0,2.4)
fig.autofmt_xdate()
ax.set_ylabel('Normalized power');
"""
Explanation: 4. <b>Aggregate</b>
Data is aggregated with an irradiance weighted average. This can be useful, for example with daily aggregation, to reduce the impact of high-error data points in the morning and evening.
End of explanation
"""
ols_rd, ols_ci, ols_info = rdtools.degradation_ols(daily)
print '''The degradation rate calculated with ols is %0.2f %%/year
with a confidence interval of %0.2f to %0.2f %%/year
''' % (ols_rd, ols_ci[0], ols_ci[1])
yoy_rd, yoy_ci, yoy_info = rdtools.degradation_year_on_year(daily)
print '''The degradation rate calculated with year on year is %0.2f %%/year
with a confidence interval of %0.2f to %0.2f %%/year
''' % (yoy_rd, yoy_ci[0], yoy_ci[1])
# plot the regression through the normalized data
fig, ax = plt.subplots()
ax.plot(daily.index, daily, 'o', alpha = 0.1)
x_vals = np.array(ax.get_xlim())
y_vals = ols_info['intercept'] + ols_info['slope'] * (x_vals-min(x_vals)) / 365
ax.plot(x_vals, y_vals, '--k')
ax.set_ylim(0,1.4)
fig.autofmt_xdate()
ax.set_ylabel('Normalized power');
# Plot the year-on-year distribution
# Note that the uncertainty is from bootstrapping the median
# not the standard deviation of the plotted distribution
yoy_values = yoy_info['YoY_values']
plt.hist(yoy_values, alpha=0.5, label='YOY', bins=int(yoy_values.__len__()/20))
plt.axvline(x=yoy_rd, color='black', linestyle='dashed', linewidth=3)
#plt.legend(loc='upper right')
plt.title('Year-on-Year 15-minute Distribution')
plt.tight_layout(w_pad=1, h_pad=2.0)
plt.xlabel('Annual degradation (%)');
"""
Explanation: 5. <b>Degradation calculation</b>
Data is then analyzed to estimate the degradation rate representing the PV system behavior.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
dev/_downloads/da444a4db06576d438b46fdb32d045cd/topo_compare_conditions.ipynb
|
bsd-3-clause
|
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne.viz import plot_evoked_topo
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Compare evoked responses for different conditions
In this example, an Epochs object for visual and auditory responses is created.
Both conditions are then accessed by their respective names to create a sensor
layout plot of the related evoked responses.
End of explanation
"""
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up amplitude-peak rejection values for MEG channels
reject = dict(grad=4000e-13, mag=4e-12)
# Create epochs including different events
event_id = {'audio/left': 1, 'audio/right': 2,
'visual/left': 3, 'visual/right': 4}
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks='meg', baseline=(None, 0), reject=reject)
# Generate list of evoked objects from conditions names
evokeds = [epochs[name].average() for name in ('left', 'right')]
"""
Explanation: Set parameters
End of explanation
"""
colors = 'blue', 'red'
title = 'MNE sample data\nleft vs right (A/V combined)'
plot_evoked_topo(evokeds, color=colors, title=title, background_color='w')
plt.show()
"""
Explanation: Show topography for two different conditions
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cccma/cmip6/models/sandbox-2/ocnbgchem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CCCMA
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
JAmarel/Phys202
|
Optimization/OptimizationEx01.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
from scipy.optimize import minimize
"""
Explanation: Optimization Exercise 1
Imports
End of explanation
"""
def hat(x,a,b):
return -a*x**2 + b*x**4
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
"""
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
"""
a = 5.0
b = 1.0
x = np.linspace(-3,3,100);
plt.figure(figsize=(8,6))
plt.plot(x,hat(x,a,b));
plt.xlabel('x');
plt.ylabel('V(x)');
plt.title('Hat Potential');
plt.tick_params(axis='x',top='off',direction='out');
plt.tick_params(axis='y',right='off',direction='out');
assert True # leave this to grade the plot
"""
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
"""
#Finding the Left Minima
guess = (-2)
results = minimize(hat,guess,args=(a,b),method = 'SLSQP')
xL = results.x
print("Left Minima: x = " + str(xL[0]))
#Finding the Right Minima
guess = (2)
results = minimize(hat,guess,args=(a,b),method = 'SLSQP')
xR = results.x
print("Right Minima: x = " + str(xR[0]))
x = np.linspace(-3,3,100);
plt.figure(figsize=(8,6))
plt.plot(x,hat(x,a,b));
plt.xlabel('x');
plt.ylabel('V(x)');
plt.title('Hat Potential with Minimums');
plt.tick_params(axis='x',top='off',direction='out');
plt.tick_params(axis='y',right='off',direction='out');
plt.plot(xL, hat(xL,a,b), marker='o', linestyle='',color='red');
plt.plot(xR, hat(xR,a,b), marker='o', linestyle='',color='red');
assert True # leave this for grading the plot
"""
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
End of explanation
"""
|
huseinzol05/Deep-Learning-Tensorflow
|
Seq-to-Seq/Basic-Seq2Seq/basic sequence-to-sequence.ipynb
|
mit
|
with open('/home/huseinzol05/AI/chat/Assassination Classroom The Graduation (2016) BluRay 720p 900MB Ganool is.srt', 'r') as fopen:
text = fopen.read().split('\n')
text = filter(None, text)
# if the first character of a string is a digit, '\r' or '<', we remove that string
# but we must iterate from the back, otherwise Python will throw an exception because deleted indices no longer exist
for i in reversed(xrange(len(text))):
if text[i][0].isdigit() or text[i][0] == '\r' or text[i][0] == '<':
del text[i]
# replace all '\r' with empty character
text = [i.replace('\r', '') for i in text]
# strip spaces with empty character
text = [i.strip() for i in text]
import re
# remove non ascii from our string
text = [re.sub(r'[^\x00-\x7F]+','', i) for i in text]
text
"""
Explanation: Hi, welcome to a simple sequence-to-sequence model using a dynamic RNN with LSTM cells, based on the research paper Sequence to Sequence Learning with Neural Networks, Ilya Sutskever, Oriol Vinyals, Quoc V. Le, 2014.
The encoder is a shallow/deep recurrent network, and the same goes for the decoder network. Both need to be the same size.
This model works very well as a translator, a chatbot, and even for text compression.
The encoder side learns the input vectors and compresses the inputs into a vector called the thought vector, and the decoder uses this vector to turn that compressed representation back into useful information.
Say I have the sentence 'i like husein' and I want to translate it into bahasa, like this: 'saya suka husein'.
When I put 'i like husein' into the encoder, it is turned into thought vectors: 'i' may become 1, 'like' may become 100, 'husein' may become 1000. The decoder must then learn that if it receives 1 it should output 'saya', if 100 it should output 'suka', and so on. It learns to match each thought vector with the correct output.
This time I want to predict the next response: if I say 'hello, my name is husein', the model we train should know how to respond with 'ok bro' or something like that, depending on the dataset.
But we are using a bahasa subtitle file, from Assassination Classroom The Graduation (2016)! So what should the model predict if I put in 'Markas Pusat. Ini Maruhito'? It should predict 'Markas Pusat. Ini Maruhito' back. Simple as that.
Even lines (counting from zero) are the predictions, i.e. the targets; odd lines are the inputs.
I need to clean the dataset first and change it into an integer representation. But before converting to integers, we add a small bias for our model: the vocabulary will be sorted from highest frequency to lowest frequency.
End of explanation
"""
inputs = []; predict = []
# We split the dataset into input and predict. Even line is predict, else is input.
for i in xrange(len(text)):
if i % 2 == 0:
predict.append(text[i])
else:
inputs.append(text[i])
vocab_inputs = []; vocab_predict = []
# Then we tokenized each sentence in both dataset, turn into vocabulary.
for i in xrange(len(inputs)):
vocab_inputs += inputs[i].split(); vocab_predict += predict[i].split()
# Then we sorted our tokenized words from highest freq to lowest freq.
vocab_inputs = sorted(vocab_inputs,key = vocab_inputs.count,reverse = True)
vocab_predict = sorted(vocab_predict,key = vocab_predict.count,reverse = True)
d1 = dict((k,v) for v,k in enumerate(reversed(vocab_inputs)))
d2 = dict((k,v) for v,k in enumerate(reversed(vocab_predict)))
# Then we turned our sorted words into unique words, while maintaining the position of sorting.
vocab_inputs = ['PAD', 'EOS'] + sorted(d1, key = d1.get, reverse = True)
vocab_predict = ['PAD', 'EOS'] + sorted(d2, key = d2.get, reverse = True)
print 'vocab size for inputs: ' + str(len(vocab_inputs))
print 'vocab size for predict: ' + str(len(vocab_predict))
# Then turned into dictionary {'husein': 0, 'suka': 1.. n}
dict_inputs = dict(zip(vocab_inputs, [i for i in xrange(len(vocab_inputs))]))
dict_predict = dict(zip(vocab_predict, [i for i in xrange(len(vocab_predict))]))
split_inputs = []; split_predict = []
for i in xrange(len(inputs)):
split_inputs.append(inputs[i].split()); split_predict.append(predict[i].split())
greatestvalue_inputs = 0; greatestvalue_predict = 0
for i in xrange(len(split_inputs)):
if len(split_inputs[i]) > greatestvalue_inputs:
greatestvalue_inputs = len(split_inputs[i])
for i in xrange(len(split_predict)):
if len(split_predict[i]) > greatestvalue_predict:
greatestvalue_predict = len(split_predict[i])
# need to add one because our decoder need to include EOS
greatestvalue_predict += 1
print 'longest sentence in our input dataset: ' + str(greatestvalue_inputs)
print 'longest sentence in out predict dataset: ' + str(greatestvalue_predict)
"""
Explanation: Now we need a vocabulary and a dictionary for our inputs and predictions.
Steps (a small worked sketch follows this cell):
1- We split the dataset into inputs and predictions. Even lines are predictions, the rest are inputs.
2- We tokenize each sentence in both datasets to build the vocabulary, e.g.
['saya suka makan nasi goreng', 'makan mee udang']
[['saya', 'suka', .. n], ['makan', 'mee', 'udang']]
3- We sort the tokenized words from highest frequency to lowest frequency.
4- We reduce the sorted words to unique words while maintaining the sorted order.
5- We turn them into a dictionary, {'husein': 0, 'suka': 1, .. n}
End of explanation
"""
import numpy as np
import tensorflow as tf
import helpers
sess = tf.InteractiveSession()
encoder_inputs = tf.placeholder(shape = (None, None), dtype = tf.int32)
decoder_targets = tf.placeholder(shape = (None, None), dtype = tf.int32)
decoder_inputs = tf.placeholder(shape = (None, None), dtype = tf.int32)
encoder_embeddings = tf.Variable(tf.random_uniform([len(vocab_inputs), greatestvalue_predict]
, -1.0, 1.0), dtype = tf.float32)
decoder_embeddings = tf.Variable(tf.random_uniform([len(vocab_predict), greatestvalue_predict]
, -1.0, 1.0), dtype = tf.float32)
encoder_inputs_embedded = tf.nn.embedding_lookup(encoder_embeddings, encoder_inputs)
decoder_inputs_embedded = tf.nn.embedding_lookup(decoder_embeddings, decoder_inputs)
"""
Explanation: Now we import TensorFlow and design our dynamic RNN.
I'm still using TensorFlow 0.12 and will rely mostly on TensorFlow's built-in API to keep this short.
In 0.12, the LSTM API is tf.nn.rnn_cell.LSTMCell
In 1.X, the LSTM API is tf.contrib.rnn.LSTMCell
End of explanation
"""
# RNN size of greatestvalue_inputs
encoder_cell = tf.nn.rnn_cell.LSTMCell(greatestvalue_predict)
# 2 layers of RNN
encoder_rnn_cells = tf.nn.rnn_cell.MultiRNNCell([encoder_cell] * 2)
_, encoder_final_state = tf.nn.dynamic_rnn(encoder_rnn_cells, encoder_inputs_embedded,
dtype = tf.float32, time_major = True)
"""
Explanation: Encoder part
End of explanation
"""
# RNN size of greatestvalue_predict
decoder_cell = tf.nn.rnn_cell.LSTMCell(greatestvalue_predict)
# 2 layers of RNN
decoder_rnn_cells = tf.nn.rnn_cell.MultiRNNCell([decoder_cell] * 2)
# declare a scope for our decoder, later tensorflow will confuse
decoder_outputs, decoder_final_state = tf.nn.dynamic_rnn(decoder_rnn_cells, decoder_inputs_embedded,
initial_state = encoder_final_state,
dtype = tf.float32, time_major = True, scope ='decoder')
decoder_logits = tf.contrib.layers.linear(decoder_outputs, len(vocab_predict))
decoder_prediction = tf.argmax(decoder_logits, 2)
# this might very costly if you have very large vocab
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
labels = tf.one_hot(decoder_targets, depth = len(vocab_predict), dtype = tf.float32),
logits = decoder_logits)
loss = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
sess.run(tf.global_variables_initializer())
"""
Explanation: Decoder part
End of explanation
"""
batch_size = 50
epoch = 3000
LOSS = []
def feeding(inputs, labels):
inputs_int = []; predict_int = []
for i in xrange(len(inputs)):
single_input = []
single_predict = []
for x in xrange(len(labels[i])):
try:
single_input += [dict_inputs[inputs[i][x]]]
except:
single_input += [0]
for x in xrange(len(labels[i])):
single_predict += [dict_predict[labels[i][x]]]
inputs_int.append(single_input); predict_int.append(single_predict)
enc_input, _ = helpers.batch(inputs_int)
dec_target, _ = helpers.batch([(sequence) + [1] for sequence in predict_int])
dec_input, _ = helpers.batch([[1] + (sequence) for sequence in inputs_int])
return {encoder_inputs: enc_input, decoder_inputs: dec_input, decoder_targets: dec_target}
import time
for q in xrange(epoch):
total_loss = 0
lasttime = time.time()
for w in xrange(0, len(split_inputs) - batch_size, batch_size):
_, losses = sess.run([optimizer, loss],
feeding(split_inputs[w: w + batch_size], split_predict[w: w + batch_size]))
total_loss += losses
total_loss = total_loss / ((len(split_inputs) - batch_size) / (batch_size * 1.0))
LOSS.append(total_loss)
if (q + 1) % 100 == 0:
print 'epoch: ' + str(q + 1) + ', total loss: ' + str(total_loss) + ', s/epoch: ' + str(time.time() - lasttime)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
plt.plot([i for i in xrange(len(LOSS))], LOSS)
plt.title('loss vs epoch')
plt.show()
"""
Explanation: Now we create a function to generate the integer inputs and targets for our model. Let's recall the model from the paper:
if our encoder input is [5, 6, 7], the decoder targets must be [5, 6, 7, 1], right? (1 is the EOS token.)
The decoder input must be [1, 5, 6, 7]: the decoder is lagged by one step, receiving the previous token as input at the current step. A tiny sketch of this shifting is shown below.
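A tiny sketch of that one-step lag, assuming the EOS/GO id is 1 as in the feeding() function above:
```python
sequence = [5, 6, 7]
decoder_target = sequence + [1]   # [5, 6, 7, 1] -> what the decoder should predict
decoder_input = [1] + sequence    # [1, 5, 6, 7] -> what the decoder is fed, lagged by one step
```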
End of explanation
"""
def label_to_text(label):
string = ''
for i in xrange(len(label)):
if label[i] == 0 or label[i] == 1:
continue
string += vocab_predict[label[i]] + ' '
return string
for i in xrange(10):
rand = np.random.randint(len(split_inputs))
in_testing = feeding(split_inputs[rand: rand + 1], split_predict[rand: rand + 1])
predict = sess.run(decoder_prediction, in_testing)
print 'input: ' + str(in_testing[encoder_inputs].T)
print 'supposed label: ' + str(in_testing[decoder_targets].T)
print 'predict label:' + str(predict.T)
print 'predict text: ' + str(label_to_text(predict.T[0])) + '\n'
"""
Explanation: The loss curve looks as expected, so now let's test our model on 10 sentences from the input dataset.
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/hub/tutorials/tf2_semantic_approximate_nearest_neighbors.ipynb
|
apache-2.0
|
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install apache_beam
!pip install 'scikit_learn~=0.23.0' # For gaussian_random_matrix.
!pip install annoy
"""
Explanation: Semantic search with approximate nearest neighbors and text embeddings
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_semantic_approximate_nearest_neighbors"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_semantic_approximate_nearest_neighbors.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_semantic_approximate_nearest_neighbors.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/tf2_semantic_approximate_nearest_neighbors.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
<td><a href="https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub model</a></td>
</table>
This tutorial shows how to generate embeddings from input data with a TensorFlow Hub (TF-Hub) module and how to build an approximate nearest neighbors (ANN) index from the extracted embeddings. The index can then be used for real-time similarity matching and retrieval.
When dealing with a large corpus, it is not efficient to find the items most similar to a given query in real time by scanning the whole repository for exact matches. Using an approximate similarity matching algorithm lets us trade a small amount of accuracy in finding the exact nearest-neighbor matches for a large boost in speed.
In this tutorial, we run a real-time text search over a corpus of news headlines to find the headlines most similar to a query. Unlike keyword search, this captures the semantic similarity encoded in the text embeddings.
The steps of this tutorial are:
Download the sample data.
Generate embeddings for the data using a TF-Hub module.
Build an ANN index for the embeddings.
Use the index for similarity matching.
We use Apache Beam to generate the embeddings from the TF-Hub model, and Spotify's ANNOY library to build the approximate nearest neighbors index.
More models
For models that have the same architecture but were trained on different languages, refer to this collection. There you can also search all text embeddings currently hosted on tfhub.dev.
Setup
Install the required libraries.
End of explanation
"""
import os
import sys
import pickle
from collections import namedtuple
from datetime import datetime
import numpy as np
import apache_beam as beam
from apache_beam.transforms import util
import tensorflow as tf
import tensorflow_hub as hub
import annoy
from sklearn.random_projection import gaussian_random_matrix
print('TF version: {}'.format(tf.__version__))
print('TF-Hub version: {}'.format(hub.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
"""
Explanation: Import the required libraries.
End of explanation
"""
!wget 'https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true' -O raw.tsv
!wc -l raw.tsv
!head raw.tsv
"""
Explanation: 1. Download sample data
The A Million News Headlines dataset contains news headlines published over a period of 15 years, sourced from the reputable Australian Broadcasting Corporation (ABC). It provides a record of noteworthy global events from early 2003 to the end of 2017, with a more granular focus on Australia.
Format: tab-separated data with two columns: 1) publication date and 2) headline text. We are only interested in the headline text.
End of explanation
"""
!rm -r corpus
!mkdir corpus
with open('corpus/text.txt', 'w') as out_file:
with open('raw.tsv', 'r') as in_file:
for line in in_file:
headline = line.split('\t')[1].strip().strip('"')
out_file.write(headline+"\n")
!tail corpus/text.txt
"""
Explanation: For simplicity, we keep only the headline text and drop the publication date.
End of explanation
"""
embed_fn = None
def generate_embeddings(text, module_url, random_projection_matrix=None):
# Beam will run this function in different processes that need to
# import hub and load embed_fn (if not previously loaded)
global embed_fn
if embed_fn is None:
embed_fn = hub.load(module_url)
embedding = embed_fn(text).numpy()
if random_projection_matrix is not None:
embedding = embedding.dot(random_projection_matrix)
return text, embedding
"""
Explanation: 2. Generate embeddings for the data
In this tutorial, we use a Neural Network Language Model (NNLM) to generate embeddings for the headline data. The sentence embeddings can then easily be used to compute sentence-level semantic similarity (see the short sketch below). We run the embedding-generation process with Apache Beam.
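As a minimal illustration of sentence-level similarity (not part of the Beam pipeline; the two example sentences and the cosine helper are assumptions for demonstration only, while the NNLM module URL is the same one used throughout this tutorial):
```python
import numpy as np
import tensorflow_hub as hub

nnlm = hub.load('https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1')
vecs = nnlm(['interest rates rise again', 'central bank lifts rates']).numpy()

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs[0], vecs[1]))  # closer to 1.0 means more semantically similar
```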
Embedding extraction method
End of explanation
"""
def to_tf_example(entries):
examples = []
text_list, embedding_list = entries
for i in range(len(text_list)):
text = text_list[i]
embedding = embedding_list[i]
features = {
'text': tf.train.Feature(
bytes_list=tf.train.BytesList(value=[text.encode('utf-8')])),
'embedding': tf.train.Feature(
float_list=tf.train.FloatList(value=embedding.tolist()))
}
example = tf.train.Example(
features=tf.train.Features(
feature=features)).SerializeToString(deterministic=True)
examples.append(example)
return examples
"""
Explanation: Conversion to tf.Example records
End of explanation
"""
def run_hub2emb(args):
'''Runs the embedding generation pipeline'''
options = beam.options.pipeline_options.PipelineOptions(**args)
args = namedtuple("options", args.keys())(*args.values())
with beam.Pipeline(args.runner, options=options) as pipeline:
(
pipeline
| 'Read sentences from files' >> beam.io.ReadFromText(
file_pattern=args.data_dir)
| 'Batch elements' >> util.BatchElements(
min_batch_size=args.batch_size, max_batch_size=args.batch_size)
| 'Generate embeddings' >> beam.Map(
generate_embeddings, args.module_url, args.random_projection_matrix)
| 'Encode to tf example' >> beam.FlatMap(to_tf_example)
| 'Write to TFRecords files' >> beam.io.WriteToTFRecord(
file_path_prefix='{}/emb'.format(args.output_dir),
file_name_suffix='.tfrecords')
)
"""
Explanation: Beam pipeline
End of explanation
"""
def generate_random_projection_weights(original_dim, projected_dim):
random_projection_matrix = None
random_projection_matrix = gaussian_random_matrix(
n_components=projected_dim, n_features=original_dim).T
print("A Gaussian random weight matrix was creates with shape of {}".format(random_projection_matrix.shape))
print('Storing random projection matrix to disk...')
with open('random_projection_matrix', 'wb') as handle:
pickle.dump(random_projection_matrix,
handle, protocol=pickle.HIGHEST_PROTOCOL)
return random_projection_matrix
"""
Explanation: Generate a random projection weight matrix
Random projection is a simple yet powerful technique for reducing the dimensionality of a set of points living in Euclidean space. For the theoretical background, see the Johnson-Lindenstrauss lemma.
Reducing the dimensionality of the embeddings with random projection shortens the time needed to build and query the ANN index.
In this tutorial we use Gaussian random projection from the Scikit-learn library; a NumPy-only alternative is sketched below.
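If the pinned scikit-learn helper is unavailable, an equivalent matrix can be drawn directly with NumPy. This is a minimal sketch assuming the usual Gaussian random projection definition (entries drawn from N(0, 1/projected_dim)); it is not the notebook's own helper:
```python
import numpy as np

def numpy_gaussian_projection(original_dim, projected_dim, seed=0):
    # Shape (original_dim, projected_dim), so `embedding.dot(matrix)` maps
    # original_dim -> projected_dim, mirroring the .T of the sklearn matrix
    # used in generate_random_projection_weights.
    rng = np.random.RandomState(seed)
    return rng.normal(loc=0.0, scale=1.0 / np.sqrt(projected_dim),
                      size=(original_dim, projected_dim))
```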
End of explanation
"""
module_url = 'https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1' #@param {type:"string"}
projected_dim = 64 #@param {type:"number"}
"""
Explanation: Set the parameters
To build the index in the original embedding space without random projection, set the projected_dim parameter to None. Note that this slows down the indexing step for high-dimensional embeddings.
End of explanation
"""
import tempfile
output_dir = tempfile.mkdtemp()
original_dim = hub.load(module_url)(['']).shape[1]
random_projection_matrix = None
if projected_dim:
random_projection_matrix = generate_random_projection_weights(
original_dim, projected_dim)
args = {
'job_name': 'hub2emb-{}'.format(datetime.utcnow().strftime('%y%m%d-%H%M%S')),
'runner': 'DirectRunner',
'batch_size': 1024,
'data_dir': 'corpus/*.txt',
'output_dir': output_dir,
'module_url': module_url,
'random_projection_matrix': random_projection_matrix,
}
print("Pipeline args are set.")
args
print("Running pipeline...")
%time run_hub2emb(args)
print("Pipeline is done.")
!ls {output_dir}
"""
Explanation: Run the pipeline
End of explanation
"""
embed_file = os.path.join(output_dir, 'emb-00000-of-00001.tfrecords')
sample = 5
# Create a description of the features.
feature_description = {
'text': tf.io.FixedLenFeature([], tf.string),
'embedding': tf.io.FixedLenFeature([projected_dim], tf.float32)
}
def _parse_example(example):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example, feature_description)
dataset = tf.data.TFRecordDataset(embed_file)
for record in dataset.take(sample).map(_parse_example):
print("{}: {}".format(record['text'].numpy().decode('utf-8'), record['embedding'].numpy()[:10]))
"""
Explanation: Read a few of the generated embeddings.
End of explanation
"""
def build_index(embedding_files_pattern, index_filename, vector_length,
metric='angular', num_trees=100):
'''Builds an ANNOY index'''
annoy_index = annoy.AnnoyIndex(vector_length, metric=metric)
# Mapping between the item and its identifier in the index
mapping = {}
embed_files = tf.io.gfile.glob(embedding_files_pattern)
num_files = len(embed_files)
print('Found {} embedding file(s).'.format(num_files))
item_counter = 0
for i, embed_file in enumerate(embed_files):
print('Loading embeddings in file {} of {}...'.format(i+1, num_files))
dataset = tf.data.TFRecordDataset(embed_file)
for record in dataset.map(_parse_example):
text = record['text'].numpy().decode("utf-8")
embedding = record['embedding'].numpy()
mapping[item_counter] = text
annoy_index.add_item(item_counter, embedding)
item_counter += 1
if item_counter % 100000 == 0:
print('{} items loaded to the index'.format(item_counter))
print('A total of {} items added to the index'.format(item_counter))
print('Building the index with {} trees...'.format(num_trees))
annoy_index.build(n_trees=num_trees)
print('Index is successfully built.')
print('Saving index to disk...')
annoy_index.save(index_filename)
print('Index is saved to disk.')
print("Index file size: {} GB".format(
round(os.path.getsize(index_filename) / float(1024 ** 3), 2)))
annoy_index.unload()
print('Saving mapping to disk...')
with open(index_filename + '.mapping', 'wb') as handle:
pickle.dump(mapping, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('Mapping is saved to disk.')
print("Mapping file size: {} MB".format(
round(os.path.getsize(index_filename + '.mapping') / float(1024 ** 2), 2)))
embedding_files = "{}/emb-*.tfrecords".format(output_dir)
embedding_dimension = projected_dim
index_filename = "index"
!rm {index_filename}
!rm {index_filename}.mapping
%time build_index(embedding_files, index_filename, embedding_dimension)
!ls
"""
Explanation: 3. Build an ANN index for the embeddings
ANNOY (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings for searching for points in space that are close to a given query point. It also creates large, read-only, file-based data structures that are mapped into memory. It was built by Spotify and is used for music recommendations. If you are interested, you can try alternatives to ANNOY such as NGT or FAISS.
End of explanation
"""
index = annoy.AnnoyIndex(embedding_dimension)
index.load(index_filename, prefault=True)
print('Annoy index is loaded.')
with open(index_filename + '.mapping', 'rb') as handle:
mapping = pickle.load(handle)
print('Mapping file is loaded.')
"""
Explanation: 4. Use the index for similarity matching
We can now use the ANN index to find news headlines that are semantically close to an input query.
Load the index and the mapping file
End of explanation
"""
def find_similar_items(embedding, num_matches=5):
'''Finds similar items to a given embedding in the ANN index'''
ids = index.get_nns_by_vector(
embedding, num_matches, search_k=-1, include_distances=False)
items = [mapping[i] for i in ids]
return items
"""
Explanation: Similarity matching method
End of explanation
"""
# Load the TF-Hub module
print("Loading the TF-Hub module...")
%time embed_fn = hub.load(module_url)
print("TF-Hub module is loaded.")
random_projection_matrix = None
if os.path.exists('random_projection_matrix'):
print("Loading random projection matrix...")
with open('random_projection_matrix', 'rb') as handle:
random_projection_matrix = pickle.load(handle)
print('random projection matrix is loaded.')
def extract_embeddings(query):
'''Generates the embedding for the query'''
query_embedding = embed_fn([query])[0].numpy()
if random_projection_matrix is not None:
query_embedding = query_embedding.dot(random_projection_matrix)
return query_embedding
extract_embeddings("Hello Machine Learning!")[:10]
"""
Explanation: Extract the embedding for a given query
End of explanation
"""
#@title { run: "auto" }
query = "confronting global challenges" #@param {type:"string"}
print("Generating embedding for the query...")
%time query_embedding = extract_embeddings(query)
print("")
print("Finding relevant items in the index...")
%time items = find_similar_items(query_embedding, 10)
print("")
print("Results:")
print("=========")
for item in items:
print(item)
"""
Explanation: Enter a query to find the most similar items
End of explanation
"""
|
opesci/devito
|
examples/mpi/overview.ipynb
|
mit
|
import ipyparallel as ipp
c = ipp.Client(profile='mpi')
"""
Explanation: Prerequisites
This notebook contains examples which are expected to be run with exactly 4 MPI processes; not because they wouldn't work otherwise, but simply because it's what their description assumes. For this, you need to:
Install an MPI distribution on your system, such as OpenMPI, MPICH, or Intel MPI (if not already available).
Install some optional dependencies, including mpi4py and ipyparallel; from the root Devito directory, run
pip install -r requirements-optional.txt
Create an ipyparallel MPI profile, by running our simple setup script. From the root directory, run
./scripts/create_ipyparallel_mpi_profile.sh
Launch and connect to an ipyparallel cluster
We're finally ready to launch an ipyparallel cluster. Open a new terminal and run the following command
ipcluster start --profile=mpi -n 4
Once the engines have started successfully, we can connect to the cluster
End of explanation
"""
%%px --group-outputs=engine
from mpi4py import MPI
print(f"Hi, I'm rank %d." % MPI.COMM_WORLD.rank)
"""
Explanation: In this tutorial, to run commands in parallel over the engines, we will use the %px line magic.
End of explanation
"""
%%px
from devito import configuration
configuration['mpi'] = True
%%px
# Keep generated code as simple as possible
configuration['language'] = 'C'
# Fix platform so that this notebook can be tested by py.test --nbval
configuration['platform'] = 'knl7210'
"""
Explanation: Overview of MPI in Devito
Distributed-memory parallelism via MPI is designed so that users can "think sequentially" for as much as possible. The few things requested to the user are:
Like any other MPI program, run with mpirun -np X python ...
Some pre- and/or post-processing may be rank-specific (e.g., we may want to plot on a given MPI rank only), even though this might be hidden away in future Devito releases, once newer support APIs are provided.
Parallel I/O (if and when necessary) to populate the MPI-distributed datasets in input to a Devito Operator. If a shared file system is available, there are a few simple alternatives to pick from, such as NumPy's memory-mapped arrays; a hedged sketch is shown right after this list.
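A minimal sketch of that last point, assuming a shared file system and a pre-existing binary file whose global shape and dtype match the Grid (the file name and dtype here are illustrative assumptions, not part of this notebook):
```python
%%px
import numpy as np
from devito import Grid, Function

grid = Grid(shape=(4, 4))
f = Function(name='f', grid=grid)

# Every rank memory-maps the same file; assigning through Devito's global
# indexing should leave each rank holding only the slice it owns.
mm = np.memmap('initial_field.bin', dtype=np.float32, mode='r', shape=grid.shape)
f.data[:] = mm[:]
```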
To enable MPI, users have two options. Either export the environment variable DEVITO_MPI=1 or, programmatically:
End of explanation
"""
%%px
from devito import Grid, TimeFunction, Eq, Operator
grid = Grid(shape=(4, 4))
u = TimeFunction(name="u", grid=grid, space_order=2, time_order=0)
"""
Explanation: An Operator will then generate MPI code, including sends/receives for halo exchanges. Below, we introduce a running example through which we explain how domain decomposition as well as data access (read/write) and distribution work. Performance optimizations are discussed in a later section.
Let's start by creating a TimeFunction.
End of explanation
"""
%%px --group-outputs=engine
u.data
"""
Explanation: Domain decomposition is performed when creating a Grid. Users may supply their own domain decomposition, but this is not shown in this notebook. Devito exploits the MPI Cartesian topology abstraction to logically split the Grid over the available MPI processes. Since u is defined over a decomposed Grid, its data get distributed too.
End of explanation
"""
%%px
u.data[0, 1:-1, 1:-1] = 1.
%%px --group-outputs=engine
u.data
"""
Explanation: Globally, u consists of 4x4 points -- this is what users "see". But locally, as shown above, each rank has got a 2x2 subdomain. The key point is: for the user, the fact that u.data is distributed is completely abstracted away -- the perception is that of indexing into a classic NumPy array, regardless of whether MPI is enabled or not. All sort of NumPy indexing schemes (basic, slicing, etc.) are supported. For example, we can write into a slice-generated view of our data.
End of explanation
"""
%%px
op = Operator(Eq(u.forward, u + 1))
summary = op.apply(time_M=0)
"""
Explanation: The only limitation, currently, is that a data access cannot require a direct data exchange among two or more processes (e.g., the assignment u.data[0, 0] = u.data[3, 3] will raise an exception unless both entries belong to the same MPI rank).
We can finally write out a trivial Operator to try running something.
End of explanation
"""
%%px --group-outputs=engine
u.data
"""
Explanation: And we can now check again the (distributed) content of our u.data
End of explanation
"""
%%px --targets 0
print(op)
"""
Explanation: Everything as expected. We could also peek at the generated code, because we may be curious to see what sort of MPI calls Devito has generated...
End of explanation
"""
%%px --targets 0
op = Operator(Eq(u.forward, u.dx + 1))
print(op)
"""
Explanation: Hang on. There's nothing MPI-specific here! At least apart from the header file #include "mpi.h". What's going on? Well, it's simple. Devito was smart enough to realize that this trivial Operator doesn't even need any sort of halo exchange -- the Eq implements a pure "map computation" (i.e., fully parallel), so it can just let each MPI process do its job without ever synchronizing with halo exchanges. We might want try again with a proper stencil Eq.
End of explanation
"""
%%px --group-outputs=engine
u.data_with_halo
"""
Explanation: Uh-oh -- now the generated code looks more complicated than before, though it still is pretty much human-readable. We can spot the following routines:
haloupdate0 performs a blocking halo exchange, relying on three additional functions, gather0, sendrecv0, and scatter0;
gather0 copies the (generally non-contiguous) boundary data into a contiguous buffer;
sendrecv0 takes the buffered data and sends it to one or more neighboring processes; then it waits until all data from the neighboring processes is received;
scatter0 copies the received data into the proper array locations.
This is the simplest halo exchange scheme available in Devito. There are a few, and some of them apply aggressive optimizations, as shown later on.
Before looking at other scenarios and performance optimizations, there is one last thing it is worth discussing -- the data_with_halo view.
End of explanation
"""
%%px
u.data_with_halo[:] = 1.
%%px --group-outputs=engine
u.data_with_halo
"""
Explanation: This is again a global data view. The halo shown by data_with_halo is the "true" halo surrounding the physical domain, not the halo used for the MPI halo exchanges (often referred to as the "ghost region"). So it becomes trivial for a user to initialize the "true" halo region (which is typically read by a stencil Eq when an Operator iterates in proximity of the domain boundary).
End of explanation
"""
%%px
from devito import Function, SparseFunction
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
x, y = grid.dimensions
f = Function(name='f', grid=grid)
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
"""
Explanation: MPI and SparseFunction
A SparseFunction represents a sparse set of points which are generically unaligned with the Grid. A sparse point could be anywhere within a grid, and is therefore attached some coordinates. Given a sparse point, Devito looks at its coordinates and, based on the domain decomposition, logically assigns it to a given MPI process; this is purely logical ownership, as in Python-land, before running an Operator, the sparse point physically lives on the MPI rank which created it. Within op.apply, right before jumping to C-land, the sparse points are scattered to their logical owners; upon returning to Python-land, the sparse points are gathered back to their original location.
In the following example, we attempt injection of four sparse points into the neighboring grid points via linear interpolation.
End of explanation
"""
%%px
sf.data[:] = 5.
op = Operator(sf.inject(field=f, expr=sf))
summary = op.apply()
%%px --group-outputs=engine
f.data
"""
Explanation: Let:
* O be a grid point
* x be a halo point
* A, B, C, D be the sparse points
We show the global view, that is what the user "sees".
O --- O --- O --- O
| A | | |
O --- O --- O --- O
| | C | B |
O --- O --- O --- O
| | D | |
O --- O --- O --- O
And now the local view, that is what the MPI ranks own when jumping to C-land.
```
Rank 0 Rank 1
O --- O --- x x --- O --- O
| A | | | | |
O --- O --- x x --- O --- O
| | C | | C | B |
x --- x --- x x --- x --- x
Rank 2 Rank 3
x --- x --- x x --- x --- x
| | C | | C | B |
O --- O --- x x --- O --- O
| | D | | D | |
O --- O --- x x --- O --- O
```
We observe that the sparse points along the boundary of two or more MPI ranks are duplicated and thus redundantly computed over multiple processes. However, the contributions from these points to the neighboring halo points are naturally discarded, so the final result of the interpolation is as expected. Let's convince ourselves that this is the case. We assign a value of $5$ to each sparse point. Since we are using linear interpolation and all points are placed at the exact center of a grid quadrant, we expect that the contribution of each sparse point to a neighboring grid point will be $5 * 0.25 = 1.25$. Based on the global view above, we eventually expect f to look like as follows:
1.25 --- 1.25 --- 0.00 --- 0.00
| | | |
1.25 --- 2.50 --- 2.50 --- 1.25
| | | |
0.00 --- 2.50 --- 3.75 --- 1.25
| | | |
0.00 --- 1.25 --- 1.25 --- 0.00
Let's check this out.
End of explanation
"""
%%px
configuration['mpi'] = 'full'
"""
Explanation: Performance optimizations
The Devito compiler applies several optimizations before generating code.
Redundant halo exchanges are identified and removed. A halo exchange is redundant if a prior halo exchange carries out the same Function update and the data is not “dirty” yet.
Computation/communication overlap, with explicit prodding of the asynchronous progress engine to make sure that non-blocking communications execute in background during the compute part.
Halo exchanges could also be reshuffled to maximize the extension of the computation/communication overlap region.
To run with all these optimizations enabled, instead of DEVITO_MPI=1, users should set DEVITO_MPI=full, or, equivalently
End of explanation
"""
%%px
op = Operator(Eq(u.forward, u.dx + 1))
# Uncomment below to show code (it's quite verbose)
# print(op)
"""
Explanation: We could now peek at the generated code to see that things look different now.
End of explanation
"""
|
rajul/tvb-library
|
tvb/simulator/demos/Monitoring with transformations.ipynb
|
gpl-2.0
|
# Imports assumed by this snippet (the standard TVB demo imports)
import numpy
from tvb.simulator.lab import *          # simulator, models, connectivity, coupling, integrators
from tvb.simulator.monitors import Raw   # Raw monitor with pre_expr/post_expr support
sim = simulator.Simulator(
model=models.Generic2dOscillator(),
connectivity=connectivity.Connectivity(load_default=True),
coupling=coupling.Linear(),
integrator=integrators.EulerDeterministic(),
monitors=Raw(pre_expr='V;W;V**2;W-V', post_expr=';;sin(mon);exp(mon)'))
sim.configure()
ts, ys = [], []
for (t, y), in sim(simulation_length=250):
ts.append(t)
ys.append(y)
t = numpy.array(ts)
v, w, sv2, ewmv = numpy.array(ys).transpose((1, 0, 2, 3))
"""
Explanation: Monitoring with transformations
Very often it's useful to apply specific transformations to the state variables before applying the observation model of a monitor. Additionally, it can be useful to apply other transformations on the monitor's output.
The pre_expr and post_expr attributes of the Monitor classes allow for this.
End of explanation
"""
figure(figsize=(7, 5), dpi=600)
subplot(311)
plot(t, v[:, 0, 0], 'k')
plot(t, w[:, 0, 0], 'k')
ylabel('$V(t), W(t)$')
grid(True, axis='x')
xticks(xticks()[0], [])
subplot(312)
plot(t, sv2[:, 0, 0], 'k')
ylabel('$\\sin(G(V^2(t)))$')
grid(True, axis='x')
xticks(xticks()[0], [])
subplot(313)
plot(t, ewmv[:, 0, 0], 'k')
ylabel('$\\exp(G(W(t)-V(t)))$')
grid(True, axis='x')
xlabel('Time (ms)')
tight_layout()
"""
Explanation: Plotting the results demonstrates the effect of the transformations of the state variables through the monitor. Here, a Raw monitor was used to make the effects clear, but the pre- and post-expressions can be provided to any of the Monitors.
End of explanation
"""
|
caisq/tensorflow
|
tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
tf.enable_eager_execution()
"""
Explanation: Eager execution basics
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /><span>Run in Google Colab</span></a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /><span>View source on GitHub</span></a></td></table>
This is an introductory tutorial for using TensorFlow. It will cover:
Importing required packages
Creating and using Tensors
Using GPU acceleration
Datasets
Import TensorFlow
To get started, import the tensorflow module and enable eager execution.
Eager execution enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.
End of explanation
"""
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
"""
Explanation: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, Tensor objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv etc.) that consume and produce Tensors. These operations automatically convert native Python types. For example:
End of explanation
"""
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
"""
Explanation: Each Tensor has a shape and a datatype
End of explanation
"""
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
"""
Explanation: The most obvious differences between NumPy arrays and TensorFlow Tensors are:
Tensors can be backed by accelerator memory (like GPU, TPU).
Tensors are immutable.
NumPy Compatibility
Conversion between TensorFlow Tensors and NumPy ndarrays is quite simple as:
* TensorFlow operations automatically convert NumPy ndarrays to Tensors.
* NumPy operations automatically convert Tensors to NumPy ndarrays.
Tensors can be explicitly converted to NumPy ndarrays by invoking the .numpy() method on them.
These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.
End of explanation
"""
x = tf.random_uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
"""
Explanation: GPU acceleration
Many TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary). Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
End of explanation
"""
def time_matmul(x):
%timeit tf.matmul(x, x)
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random_uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
"""
Explanation: Device Names
The Tensor.device property provides a fully qualified string name of the device hosting the contents of the Tensor. This name encodes a bunch of details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of TensorFlow programs, but we'll skip that for now. The string will end with GPU:<N> if the tensor is placed on the N-th GPU on the host.
Explicit Device Placement
The term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the tf.device context manager. For example:
End of explanation
"""
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
"""
Explanation: Datasets
This section demonstrates the use of the tf.data.Dataset API to build pipelines to feed data to your model. It covers:
Creating a Dataset.
Iteration over a Dataset with eager execution enabled.
We recommend using the Datasets API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
If you're familiar with TensorFlow graphs, the API for constructing the Dataset object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler.
You can use Python iteration over the tf.data.Dataset object and do not need to explicitly create a tf.data.Iterator object.
As a result, the discussion on iterators in the TensorFlow Guide is not relevant when eager execution is enabled.
Create a source Dataset
Create a source dataset using one of the factory functions like Dataset.from_tensors, Dataset.from_tensor_slices or using objects that read from files like TextLineDataset or TFRecordDataset. See the TensorFlow Guide for more information.
End of explanation
"""
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
"""
Explanation: Apply transformations
Use the transformations functions like map, batch, shuffle etc. to apply transformations to the records of the dataset. See the API documentation for tf.data.Dataset for details.
End of explanation
"""
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
"""
Explanation: Iterate
When eager execution is enabled Dataset objects support iteration.
If you're familiar with the use of Datasets in TensorFlow graphs, note that there is no need for calls to Dataset.make_one_shot_iterator() or get_next() calls.
End of explanation
"""
|
ML4DS/ML4all
|
TM3.Topic_Models_with_MLlib/ExB3_TopicModels/.ipynb_checkpoints/TM_Exam2-checkpoint.ipynb
|
mit
|
#nltk.download()
mycorpus = nltk.corpus.reuters
"""
Explanation: Master Telefónica Big Data & Analytics
Evaluation Test for Topic 4:
Topic Modelling.
Date: 2016/04/10
To take this test you need the virtual machine updated with the most recent version of MLlib.
To update it, follow the steps indicated below:
Steps to update MLlib:
Log into the vm as root:
vagrant ssh
sudo bash
Go to /usr/local/bin
Download the latest version of spark from inside the vm with
wget http://www-eu.apache.org/dist/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz
Unpack it:
tar xvf spark-1.6.1-bin-hadoop2.6.tgz (and delete the tgz)
What follows is a patch, but it is enough to make things work:
Keep a copy of spark-1.3: mv spark-1.3.1-bin-hadoop2.6/ spark-1.3.1-bin-hadoop2.6_old
Create a link to spark-1.6: ln -s spark-1.6.1-bin-hadoop2.6/ spark-1.3.1-bin-hadoop2.6
Libraries
You can use this space to import all the libraries you need for the exam.
0. Acquiring a corpus.
Download the contents of the reuters corpus from nltk.
import nltk
nltk.download()
Select the reuters identifier.
End of explanation
"""
n_docs = 500000
filenames = mycorpus.fileids()
fn_train = [f for f in filenames if f[0:5]=='train']
corpus_text = [mycorpus.raw(f) for f in fn_train]
# Reduced dataset:
n_docs = min(n_docs, len(corpus_text))
corpus_text = [corpus_text[n] for n in range(n_docs)]
print 'Loaded {0} files'.format(len(corpus_text))
"""
Explanation: To avoid memory overload or processing-time problems, you can reduce the size of the corpus by modifying the value of the variable n_docs below.
End of explanation
"""
corpusRDD = sc.parallelize(corpus_text, 4)
print "\nRDD created with {0} elements".format(corpusRDD.count())
"""
Explanation: Next we load the data into an RDD
End of explanation
"""
# Compute RDD replacing tokens by token_ids
corpus_sparseRDD = corpus_wcRDD2.map(lambda x: [(invD[t[0]], t[1]) for t in x])
# Convert list of tuplas into Vectors.sparse object.
corpus_sparseRDD = corpus_sparseRDD.map(lambda x: Vectors.sparse(n_tokens, x))
corpus4lda = corpus_sparseRDD.zipWithIndex().map(lambda x: [x[1], x[0]]).cache()
"""
Explanation: 1. Exercises
Exercise 1: Data preprocessing.
Prepare the data to apply a topic modelling algorithm in pyspark. To do so, apply the following steps:
Tokenization: convert each text to utf-8 and transform the string into a list of tokens.
Homogenization: convert all words to lowercase and remove all non-alphanumeric tokens.
Cleaning: remove all stopwords using the English stopwords file available in NLTK.
Save the result in the variable corpus_tokensRDD
Exercise 2: Stemming
Apply a stemming procedure to the corpus using NLTK's SnowballStemmer. Save the result in corpus_stemRDD.
Exercise 3: Vectorization
At this point each document of the corpus is a list of tokens.
Compute a new RDD that contains, for each document, a list of tuples. The key of each tuple will be a token and its value the number of occurrences of that token in the document.
Print a sample of 20 tuples from one of the documents of the corpus.
Exercise 4: Computing the token dictionary
Starting from corpus_wcRDD, build a new dictionary with all the tokens of the corpus. The result will be a python dictionary named wcDict, whose keys are the tokens and whose values are the number of occurrences of each token in the whole corpus.
wcDict = {token1: value1, token2: value2, ...}
Print the number of occurrences of the token interpret
Exercise 5: Number of tokens.
Determine the total number of tokens in the dictionary. Print the result.
Exercise 6: Overly frequent terms:
Determine the 5 most frequent tokens in the corpus. Print the result.
Exercise 7: Number of documents containing the most frequent token.
Determine in what percentage of documents the most frequent token appears.
Exercise 8: Term filtering.
Remove the two most frequent terms from the corpus. Save the result in a new RDD named corpus_wcRDD2, with the same structure as corpus_wcRDD (that is, each document is a list of tuples).
Exercise 9: Token list and inverse dictionary.
Determine the list of tokens of the whole corpus, and build an inverse dictionary, invD, whose keys are the consecutive numbers from 0 to the total number of tokens, and whose values are each of the tokens, that is
invD = {0: token0, 1: token1, 2: token2, ...}
Exercise 10: LDA algorithm.
To apply the LDA algorithm, the tuples (token, value) of wcRDD must be replaced by tuples of the form (token_id, value), substituting each token with an integer identifier.
The following code takes care of completing this process:
End of explanation
"""
|
lekshmideepu/nest-simulator
|
doc/userdoc/model_details/aeif_models_implementation.ipynb
|
gpl-2.0
|
# Install assimulo package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install assimulo
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 6)
"""
Explanation: NEST implementation of the aeif models
Hans Ekkehard Plesser and Tanguy Fardet, 2016-09-09
This notebook provides a reference solution for the Adaptive Exponential Integrate and Fire
(AEIF) neuronal model and compares it with several numerical implementations using simpler solvers.
In particular this justifies the change of implementation in September 2016 to make the simulation
closer to the reference solution.
Position of the problem
Basics
The equations governing the evolution of the AEIF model are
$$\left\lbrace\begin{array}{rcl}
C_m\dot{V} &=& -g_L(V-E_L) + g_L \Delta_T e^{\frac{V-V_T}{\Delta_T}} + I_e + I_s(t) - w\\
\tau_w\dot{w} &=& a(V-E_L) - w
\end{array}\right.$$
when $V < V_{peak}$ (threshold/spike detection).
Once a spike occurs, we apply the reset conditions:
$$V = V_r \quad \text{and} \quad w = w + b$$
Divergence
In the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before threshold crossing, the argument of the exponential can become very large.
This can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\Delta_T$ is small.
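A quick numerical illustration of why this matters (the parameter values are chosen here purely for illustration): with $V_T = -50$ mV and $V$ close to a $V_{peak}$ of 0 mV, the exponential already reaches huge values, and for small $\Delta_T$ it overflows double precision.
```python
import numpy as np

V_T, V = -50.0, 0.0  # mV
for Delta_T in (2.0, 0.5, 0.1):
    print(Delta_T, np.exp((V - V_T) / Delta_T))
# 2.0 -> ~7.2e10, 0.5 -> ~2.7e43, 0.1 -> inf (float64 overflows above ~exp(709))
```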
Tested solutions
Old implementation (before September 2016)
The original solution was to bound the exponential argument to be smaller than 10 (an ad hoc value chosen to stay close to the original implementation in BRIAN).
As will be shown in the notebook, this solution does not converge to the reference LSODAR solution.
New implementation
The new implementation does not bound the argument of the exponential but bounds the potential itself, since according to the theoretical model $V$ should never get larger than $V_{peak}$.
We will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller.
Reference solution
The reference solution is implemented using the LSODAR solver which is described and compared in the following references:
http://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one)
http://www.sciencedirect.com/science/article/pii/S0377042712000684
http://www.radford.edu/~thompson/RP/rootfinding.pdf
https://computation.llnl.gov/casc/nsde/pubs/u88007.pdf
http://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf
http://www.sciencedirect.com/science/article/pii/0377042789903348
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf
https://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf
Technical details and requirements
Implementation of the functions
The old and new implementations are reproduced using Scipy and are called by the scipy_aeif function
The NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver.
The reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package.
Requirements
To run this notebook, you need:
numpy and scipy
assimulo
matplotlib
End of explanation
"""
def rhs_aeif_new(y, _, p):
'''
New implementation bounding V < V_peak
Parameters
----------
y : list
Vector containing the state variables [V, w]
_ : unused var
p : Params instance
Object containing the neuronal parameters.
Returns
-------
dv : double
Derivative of V
dw : double
Derivative of w
'''
v = min(y[0], p.Vpeak)
w = y[1]
Ispike = 0.
if p.DeltaT != 0.:
Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT)
dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm
dw = (p.a * (v-p.EL) - w) / p.tau_w
return dv, dw
def rhs_aeif_old(y, _, p):
'''
Old implementation bounding the argument of the
exponential function (e_arg < 10.).
Parameters
----------
y : list
Vector containing the state variables [V, w]
_ : unused var
p : Params instance
Object containing the neuronal parameters.
Returns
-------
dv : double
Derivative of V
dw : double
Derivative of w
'''
v = y[0]
w = y[1]
Ispike = 0.
if p.DeltaT != 0.:
e_arg = min((v-p.vT)/p.DeltaT, 10.)
Ispike = p.gL * p.DeltaT * np.exp(e_arg)
dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm
dw = (p.a * (v-p.EL) - w) / p.tau_w
return dv, dw
"""
Explanation: Scipy functions mimicking the NEST code
Right hand side functions
End of explanation
"""
def scipy_aeif(p, f, simtime, dt):
'''
Complete aeif model using scipy `odeint` solver.
Parameters
----------
p : Params instance
Object containing the neuronal parameters.
f : function
Right-hand side function (either `rhs_aeif_old`
or `rhs_aeif_new`)
simtime : double
Duration of the simulation (will run between
0 and tmax)
dt : double
Time increment.
Returns
-------
t : list
Times at which the neuronal state was evaluated.
y : list
State values associated to the times in `t`
s : list
Spike times.
vs : list
Values of `V` just before the spike.
ws : list
Values of `w` just before the spike
fos : list
List of dictionaries containing additional output
information from `odeint`
'''
t = np.arange(0, simtime, dt) # time axis
n = len(t)
y = np.zeros((n, 2)) # V, w
y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.)
y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.)
s = [] # spike times
vs = [] # membrane potential at spike before reset
ws = [] # w at spike before step
fos = [] # full output dict from odeint()
# imitate NEST: update time-step by time-step
for k in range(1, n):
# solve ODE from t_k-1 to t_k
d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True)
y[k, :] = d[1, :]
fos.append(fo)
# check for threshold crossing
if y[k, 0] >= p.Vpeak:
s.append(t[k])
vs.append(y[k, 0])
ws.append(y[k, 1])
y[k, 0] = p.Vreset # reset
y[k, 1] += p.b # step
return t, y, s, vs, ws, fos
"""
Explanation: Complete model
End of explanation
"""
from assimulo.solvers import LSODAR
from assimulo.problem import Explicit_Problem
class Extended_Problem(Explicit_Problem):
# need variables here for access
sw0 = [ False ]
ts_spikes = []
ws_spikes = []
Vs_spikes = []
def __init__(self, p):
self.p = p
self.y0 = [self.p.EL, 5.] # V, w
# reset variables
self.ts_spikes = []
self.ws_spikes = []
self.Vs_spikes = []
#The right-hand-side function (rhs)
def rhs(self, t, y, sw):
"""
This is the function we are trying to simulate (aeif model).
"""
V, w = y[0], y[1]
Ispike = 0.
if self.p.DeltaT != 0.:
Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT)
dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm
dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w
return np.array([dotV, dotW])
# Sets a name to our function
name = 'AEIF_nosyn'
# The event function
def state_events(self, t, y, sw):
"""
This is our function that keeps track of our events. When the sign
of any of the events has changed, we have an event.
"""
event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike
if event_0 < 0:
if not self.ts_spikes:
self.ts_spikes.append(t)
self.Vs_spikes.append(y[0])
self.ws_spikes.append(y[1])
elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01):
self.ts_spikes.append(t)
self.Vs_spikes.append(y[0])
self.ws_spikes.append(y[1])
return np.array([event_0])
#Responsible for handling the events.
def handle_event(self, solver, event_info):
"""
Event handling. This functions is called when Assimulo finds an event as
specified by the event functions.
"""
ev = event_info
event_info = event_info[0] # only look at the state events information.
if event_info[0] > 0:
solver.sw[0] = True
solver.y[0] = self.p.Vreset
solver.y[1] += self.p.b
else:
solver.sw[0] = False
def initialize(self, solver):
solver.h_sol=[]
solver.nq_sol=[]
def handle_result(self, solver, t, y):
Explicit_Problem.handle_result(self, solver, t, y)
# Extra output for algorithm analysis
if solver.report_continuously:
h, nq = solver.get_algorithm_data()
solver.h_sol.extend([h])
solver.nq_sol.extend([nq])
"""
Explanation: LSODAR reference solution
Setting assimulo class
End of explanation
"""
def reference_aeif(p, simtime):
'''
Reference aeif model using LSODAR.
Parameters
----------
p : Params instance
Object containing the neuronal parameters.
simtime : double
Duration of the simulation (will run between
0 and simtime).
Returns
-------
t : list
Times at which the neuronal state was evaluated.
y : list
State values associated to the times in `t`
s : list
Spike times.
vs : list
Values of `V` just before the spike.
ws : list
Values of `w` just before the spike
h : list
List of the minimal time increment at each step.
'''
#Create an instance of the problem
exp_mod = Extended_Problem(p) #Create the problem
exp_sim = LSODAR(exp_mod) #Create the solver
exp_sim.atol=1.e-8
exp_sim.report_continuously = True
exp_sim.store_event_points = True
exp_sim.verbosity = 30
#Simulate
t, y = exp_sim.simulate(simtime) #Simulate 10 seconds
return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol
"""
Explanation: LSODAR reference model
End of explanation
"""
# Regular spiking
aeif_param = {
'V_reset': -58.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 420.,
'g_L': 11.,
'tau_w': 300.,
'E_L': -70.,
'Delta_T': 2.,
'a': 3.,
'b': 0.,
'C_m': 200.,
'V_m': -70., #! must be equal to E_L
'w': 5., #! must be equal to 5.
'tau_syn_ex': 0.2
}
# Bursting
aeif_param2 = {
'V_reset': -46.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 500.0,
'g_L': 10.,
'tau_w': 120.,
'E_L': -58.,
'Delta_T': 2.,
'a': 2.,
'b': 100.,
'C_m': 200.,
'V_m': -58., #! must be equal to E_L
'w': 5., #! must be equal to 5.
}
# Close to chaos (use resolution < 0.005 and simtime = 200)
aeif_param3 = {
'V_reset': -48.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 160.,
'g_L': 12.,
'tau_w': 130.,
'E_L': -60.,
'Delta_T': 2.,
'a': -11.,
'b': 30.,
'C_m': 100.,
'V_m': -60., #! must be equal to E_L
'w': 5., #! must be equal to 5.
}
class Params:
'''
Class giving access to the neuronal
parameters.
'''
def __init__(self):
self.params = aeif_param
self.Vpeak = aeif_param["V_peak"]
self.Vreset = aeif_param["V_reset"]
self.gL = aeif_param["g_L"]
self.Cm = aeif_param["C_m"]
self.EL = aeif_param["E_L"]
self.DeltaT = aeif_param["Delta_T"]
self.tau_w = aeif_param["tau_w"]
self.a = aeif_param["a"]
self.b = aeif_param["b"]
self.vT = aeif_param["V_th"]
self.Ie = aeif_param["I_e"]
p = Params()
"""
Explanation: Set the parameters and simulate the models
Params (chose a dictionary)
End of explanation
"""
# Parameters of the simulation
simtime = 100.
resolution = 0.01
t_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resolution)
t_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resolution)
t_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime)
"""
Explanation: Simulate the 3 implementations
End of explanation
"""
fig, ax = plt.subplots()
ax2 = ax.twinx()
# Plot the potentials
ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.")
ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old")
ax.plot(t_new, y_new[:,0], linestyle="--", label="V new")
# Plot the adaptation variables
ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.")
ax2.plot(t_old, y_old[:,1], linestyle="-.", c="m", label="w old")
ax2.plot(t_new, y_new[:,1], linestyle="--", c="y", label="w new")
# Show
ax.set_xlim([0., simtime])
ax.set_ylim([-65., 40.])
ax.set_xlabel("Time (ms)")
ax.set_ylabel("V (mV)")
ax2.set_ylim([-20., 20.])
ax2.set_ylabel("w (pA)")
ax.legend(loc=6)
ax2.legend(loc=2)
plt.show()
"""
Explanation: Plot the results
Zoom out
End of explanation
"""
fig, ax = plt.subplots()
ax2 = ax.twinx()
# Plot the potentials
ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.")
ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old")
ax.plot(t_new, y_new[:,0], linestyle="--", label="V new")
# Plot the adaptation variables
ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.")
ax2.plot(t_old, y_old[:,1], linestyle="-.", c="y", label="w old")
ax2.plot(t_new, y_new[:,1], linestyle="--", c="m", label="w new")
ax.set_xlim([90., 92.])
ax.set_ylim([-65., 40.])
ax.set_xlabel("Time (ms)")
ax.set_ylabel("V (mV)")
ax2.set_ylim([17.5, 18.5])
ax2.set_ylabel("w (pA)")
ax.legend(loc=5)
ax2.legend(loc=2)
plt.show()
"""
Explanation: Zoom in
End of explanation
"""
print("spike times:\n-----------")
print("ref", np.around(s_ref, 3)) # ref lsodar
print("old", np.around(s_old, 3))
print("new", np.around(s_new, 3))
print("\nV at spike time:\n---------------")
print("ref", np.around(vs_ref, 3)) # ref lsodar
print("old", np.around(vs_old, 3))
print("new", np.around(vs_new, 3))
print("\nw at spike time:\n---------------")
print("ref", np.around(ws_ref, 3)) # ref lsodar
print("old", np.around(ws_old, 3))
print("new", np.around(ws_new, 3))
"""
Explanation: Compare properties at spike times
End of explanation
"""
plt.semilogy(t_ref, h_ref, label='Reference')
plt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old')
plt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New')
plt.legend(loc=6)
plt.show();
"""
Explanation: Size of minimal integration timestep
End of explanation
"""
plt.plot(t_ref, y_ref[:,0], label="V ref.")
resolutions = (0.1, 0.01, 0.001)
di_res = {}
for resolution in resolutions:
t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resolution)
t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resolution)
di_res[resolution] = (t_old, y_old, t_new, y_new)
plt.plot(t_old, y_old[:,0], linestyle=":", label="V old, r={}".format(resolution))
plt.plot(t_new, y_new[:,0], linestyle="--", linewidth=1.5, label="V new, r={}".format(resolution))
plt.xlim(0., simtime)
plt.xlabel("Time (ms)")
plt.ylabel("V (mV)")
plt.legend(loc=2)
plt.show();
"""
Explanation: Convergence towards LSODAR reference with step size
Zoom out
End of explanation
"""
plt.plot(t_ref, y_ref[:,0], label="V ref.")
for resolution in resolutions:
t_old, y_old = di_res[resolution][:2]
t_new, y_new = di_res[resolution][2:]
plt.plot(t_old, y_old[:,0], linestyle="--", label="V old, r={}".format(resolution))
plt.plot(t_new, y_new[:,0], linestyle="-.", linewidth=2., label="V new, r={}".format(resolution))
plt.xlim(90., 92.)
plt.ylim([-62., 2.])
plt.xlabel("Time (ms)")
plt.ylabel("V (mV)")
plt.legend(loc=2)
plt.show();
"""
Explanation: Zoom in
End of explanation
"""
|
leriomaggio/python-in-a-notebook
|
07 More Functions.ipynb
|
mit
|
def thank_you(name):
# This function prints a two-line personalized thank you message.
print("\nYou are doing good work, %s!" % name)
print("Thank you very much for your efforts on this project.")
thank_you('Adriana')
thank_you('Billy')
thank_you('Caroline')
"""
Explanation: More Functions
Earlier we learned the most bare-boned versions of functions. In this section we will learn more general concepts about functions, such as how to use functions to return values, and how to pass different kinds of data structures between functions.
<a name="top"></a>Contents
Default argument values
Exercises
Positional arguments
Exercises
Keyword arguments
Mixing positional and keyword arguments
Exercises
Accepting an arbitrary number of arguments
Accepting a sequence of arbitrary length
Accepting an arbitrary number of keyword arguments
<a name='default_values'></a>Default argument values
When we first introduced functions, we started with this example:
End of explanation
"""
def thank_you(name):
# This function prints a two-line personalized thank you message.
print("\nYou are doing good work, %s!" % name)
print("Thank you very much for your efforts on this project.")
thank_you('Billy')
thank_you('Caroline')
thank_you()
"""
Explanation: This function works fine, but it fails if you don't pass in a value:
End of explanation
"""
def thank_you(name='everyone'):
# This function prints a two-line personalized thank you message.
# If no name is passed in, it prints a general thank you message
# to everyone.
print("\nYou are doing good work, %s!" % name)
print("Thank you very much for your efforts on this project.")
thank_you('Billy')
thank_you('Caroline')
thank_you()
"""
Explanation: That makes sense; the function needs to have a name in order to do its work, so without a name it is stuck.
If you want your function to do something by default, even if no information is passed to it, you can do so by giving your arguments default values. You do this by specifying the default values when you define the function:
End of explanation
"""
# Ex 8.1 : Games
# put your code here
# Ex 8.2 : Favorite Movie
# put your code here
"""
Explanation: This is particularly useful when you have a number of arguments in your function, and some of those arguments almost always have the same value. This allows people who use the function to only specify the values that are unique to their use of the function.
top
<a name='exercises_default_values'></a>Exercises
Games
Write a function that accepts the name of a game and prints a statement such as, "I like playing chess!"
Give the argument a default value, such as chess.
Call your function at least three times. Make sure at least one of the calls includes an argument, and at least one call includes no arguments.
Favorite Movie
Write a function that accepts the name of a movie, and prints a statement such as, "My favorite movie is The Princess Bride."
Give the argument a default value, such as The Princess Bride.
Call your function at least three times. Make sure at least one of the calls includes an argument, and at least one call includes no arguments.
End of explanation
"""
def describe_person(first_name, last_name, age):
# This function takes in a person's first and last name,
# and their age.
# It then prints this information out in a simple format.
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
print("Age: %d\n" % age)
describe_person('brian', 'kernighan', 71)
describe_person('ken', 'thompson', 70)
describe_person('adele', 'goldberg', 68)
"""
Explanation: top
<a name="positional_arguments"></a>Positional Arguments
Much of what you will have to learn about using functions involves how to pass values from your calling statement to the function itself. The example we just looked at is pretty simple, in that the function only needed one argument in order to do its work. Let's take a look at a function that requires several arguments to do its work.
Let's make a simple function that takes in three arguments: a person's first name, last name, and age. It then prints out everything it knows about the person.
Here is a simple implementation of this function:
End of explanation
"""
def describe_person(first_name, last_name, age):
# This function takes in a person's first and last name,
# and their age.
# It then prints this information out in a simple format.
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
print("Age: %d\n" % age)
describe_person(71, 'brian', 'kernighan')
describe_person(70, 'ken', 'thompson')
describe_person(68, 'adele', 'goldberg')
"""
Explanation: The arguments in this function are first_name, last_name, and age. These are called positional arguments because Python knows which value to assign to each by the order in which you give the function values. In the calling line
describe_person('brian', 'kernighan', 71)
we send the values brian, kernighan, and 71 to the function. Python matches the first value brian with the first argument first_name. It matches the second value kernighan with the second argument last_name. Finally it matches the third value 71 with the third argument age.
This is pretty straightforward, but it means we have to make sure to get the arguments in the right order.
If we mess up the order, we get nonsense results or an error:
End of explanation
"""
# Ex 8.3 : Favorite Colors
# put your code here
# Ex 8.4 : Phones
# put your code here
"""
Explanation: This fails because Python tries to match the value 71 with the argument first_name, the value brian with the argument last_name, and the value kernighan with the argument age. Then when it tries to print the value first_name.title(), it realizes it can't use the title() method on an integer.
top
<a name='exercises_positional_arguments'></a>Exercises
Favorite Colors
Write a function that takes two arguments, a person's name and their favorite color. The function should print out a statement such as "Hillary's favorite color is blue."
Call your function three times, with a different person and color each time.
Phones
Write a function that takes two arguments, a brand of phone and a model name. The function should print out a phrase such as "iPhone 6 Plus".
Call your function three times, with a different combination of brand and model each time.
End of explanation
"""
def describe_person(first_name, last_name, age):
# This function takes in a person's first and last name,
# and their age.
# It then prints this information out in a simple format.
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
print("Age: %d\n" % age)
describe_person(age=71, first_name='brian', last_name='kernighan')
describe_person(age=70, first_name='ken', last_name='thompson')
describe_person(age=68, first_name='adele', last_name='goldberg')
"""
Explanation: top
<a name='keyword_arguments'></a>Keyword arguments
Python allows us to use a syntax called keyword arguments. In this case, we can give the arguments in any order when we call the function, as long as we use the name of the arguments in our calling statement. Here is how the previous code can be made to work using keyword arguments:
End of explanation
"""
def describe_person(first_name, last_name, age, favorite_language):
# This function takes in a person's first and last name,
# their age, and their favorite language.
# It then prints this information out in a simple format.
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
print("Age: %d" % age)
print("Favorite language: %s\n" % favorite_language)
describe_person('brian', 'kernighan', 71, 'C')
describe_person('ken', 'thompson', 70, 'Go')
describe_person('adele', 'goldberg', 68, 'Smalltalk')
"""
Explanation: This works, because Python does not have to match values to arguments by position. It matches the value 71 with the argument age, because the value 71 is clearly marked to go with that argument. This syntax is a little more typing, but it makes for very readable code.
<a name='positional_and_keyword'></a>Mixing positional and keyword arguments
It can make good sense sometimes to mix positional and keyword arguments. In our previous example, we can expect this function to always take in a first name and a last name. Before we start mixing positional and keyword arguments, let's add another piece of information to our description of a person. Let's also go back to using just positional arguments for a moment:
End of explanation
"""
def describe_person(first_name, last_name, age=None, favorite_language=None, died=None):
"""
This function takes in a person's first and last name, their age,
and their favorite language.
It then prints this information out in a simple format.
"""
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
# Optional information:
if age:
print("Age: %d" % age)
if favorite_language:
print("Favorite language: %s" % favorite_language)
if died:
print("Died: %d" % died)
# Blank line at end.
print("\n")
describe_person('brian', 'kernighan', favorite_language='C')
describe_person('adele', 'goldberg', age=68, favorite_language='Smalltalk')
describe_person('dennis', 'ritchie', favorite_language='C', died=2011)
describe_person('guido', 'van rossum', favorite_language='Python')
"""
Explanation: We can expect anyone who uses this function to supply a first name and a last name, in that order. But now we are starting to include some information that might not apply to everyone. We can address this by keeping positional arguments for the first name and last name, but expect keyword arguments for everything else. We can show this works by adding a few more people, and having different information about each person:
End of explanation
"""
# Ex 8.5 : Sports Team
# put your code here
# Ex 8.6 : World Languages
# put your code here
"""
Explanation: Everyone needs a first and last name, but everything else is optional. This code takes advantage of the Python keyword None, which acts as an empty value for a variable. This way, the user is free to supply any of the 'extra' values they care to. Any arguments that don't receive a value are not displayed. Python matches these extra values by name, rather than by position. This is a very common and useful way to define functions.
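One small caveat (not shown in the function above): a bare if age: test treats every "falsy" value the same way, so an age of 0 would be silently skipped just like None. If a value of 0 could ever be meaningful, a slightly safer check inside the function is an explicit comparison:
if age is not None:
    print("Age: %d" % age)
This is only a defensive variation on the same idea; the version above works fine for the examples in this section.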
top
<a name='exercises_keyword_arguments'></a>Exercises
Sports Teams
Write a function that takes in two arguments, the name of a city and the name of a sports team from that city.
Call your function three times, using a mix of positional and keyword arguments.
World Languages
Write a function that takes in two arguments, the name of a country and a major language spoken there.
Call your function three times, using a mix of positional and keyword arguments.
End of explanation
"""
def adder(num_1, num_2):
# This function adds two numbers together, and prints the sum.
sum = num_1 + num_2
print("The sum of your numbers is %d." % sum)
# Let's add some numbers.
adder(1, 2)
adder(-1, 2)
adder(1, -2)
"""
Explanation: top
<a name='arbitrary_arguments'></a>Accepting an arbitrary number of arguments
We have now seen that using keyword arguments can allow for much more flexible calling statements.
This benefits you in your own programs, because you can write one function that can handle many different situations you might encounter.
This benefits you if other programmers use your programs, because your functions can apply to a wide range of situations.
This benefits you when you use other programmers' functions, because their functions can apply to many situations you will care about.
There is another issue that we can address, though. Let's consider a function that takes two numbers in, and prints out the sum of the two numbers:
End of explanation
"""
def adder(num_1, num_2):
# This function adds two numbers together, and prints the sum.
sum = num_1 + num_2
print("The sum of your numbers is %d." % sum)
# Let's add some numbers.
adder(1, 2, 3)
"""
Explanation: This function appears to work well. But what if we pass it three numbers, which is a perfectly reasonable thing to do mathematically?
End of explanation
"""
def example_function(arg_1, arg_2, *arg_3):
# Let's look at the argument values.
print('\narg_1:', arg_1)
print('arg_2:', arg_2)
print('arg_3:', arg_3)
example_function(1, 2)
example_function(1, 2, 3)
example_function(1, 2, 3, 4)
example_function(1, 2, 3, 4, 5)
"""
Explanation: This function fails, because no matter what mix of positional and keyword arguments we use, the function is only written to accept two arguments. In fact, a function written in this way will only work with exactly two arguments.
<a name='arbitrary_sequence'></a>Accepting a sequence of arbitrary length
Python gives us a syntax for letting a function accept an arbitrary number of arguments. If we place an argument at the end of the list of arguments, with an asterisk in front of it, that argument will collect any remaining values from the calling statement into a tuple. Here is an example demonstrating how this works:
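As a related aside (not used in the cell above), the asterisk also works in the other direction when calling a function: putting * in front of a list or tuple in the calling statement spreads its items out as separate arguments. For example, with the example_function defined above:
my_values = [1, 2, 3, 4]
example_function(*my_values)    # equivalent to example_function(1, 2, 3, 4)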
End of explanation
"""
def example_function(arg_1, arg_2, *arg_3):
# Let's look at the argument values.
print('\narg_1:', arg_1)
print('arg_2:', arg_2)
for value in arg_3:
print('arg_3 value:', value)
example_function(1, 2)
example_function(1, 2, 3)
example_function(1, 2, 3, 4)
example_function(1, 2, 3, 4, 5)
"""
Explanation: You can use a for loop to process these other arguments:
End of explanation
"""
def adder(*nums):
"""This function adds the given numbers together and prints the sum."""
s = 0
for num in nums:
s = s + num
# Print the results.
print("The sum of your numbers is %d." % s)
# Let's add some numbers.
adder(1, 2, 3)
def adder(*nums):
"""This function adds the given numbers together and prints the sum."""
# Print the results.
print("The sum of your numbers is %d." % sum(nums))
# Let's add some numbers.
adder(1, 2, 3)
"""
Explanation: We can now rewrite the adder() function to accept two or more arguments, and print the sum of those numbers:
End of explanation
"""
def adder(num_1, num_2, *nums):
# This function adds the given numbers together,
# and prints the sum.
# Start by adding the first two numbers, which
# will always be present.
sum = num_1 + num_2
# Then add any other numbers that were sent.
for num in nums:
sum = sum + num
# Print the results.
print("The sum of your numbers is %d." % sum)
# Let's add some numbers.
adder(1, 2)
adder(1, 2, 3)
adder(1, 2, 3, 4)
adder(1, 2, 3, 4, 5)
"""
Explanation: In this new version, Python does the following:
stores the first value in the calling statement in the argument num_1;
stores the second value in the calling statement in the argument num_2;
stores all other values in the calling statement as a tuple in the argument nums.
We can then "unpack" these values, using a for loop. We can demonstrate how flexible this function is by calling it a number of times, with a different number of arguments each time.
End of explanation
"""
import sys
f = open('./test.txt', 'w')
FILE_OUT = f #sys.stdout
def example_function(*args, **kwargs):
print(*args, sep='++', end=' ', file=FILE_OUT)
for k, v in kwargs.items():
print(k, ': ', v, end=' ', file=FILE_OUT)
example_function(1, 2, 4, 5)
example_function(1, 3, value=1, name=5)
example_function(store='ff', quote='Do. Or do not. There is no try.')
f.close()
def example_function(arg_1, arg_2, **kwargs):
# Let's look at the argument values.
print('\narg_1:', arg_1)
print('arg_2:', arg_2)
print('arg_3:', kwargs)
example_function('a', 'b')
example_function('a', 'b', value_3='c')
example_function('a', 'b', value_3='c', value_4='d')
example_function('a', 'b', value_3='c', value_4='d', value_5='e')
"""
Explanation: top
<a name='arbitrary_keyword_arguments'></a>Accepting an arbitrary number of keyword arguments
Python also provides a syntax for accepting an arbitrary number of keyword arguments. The syntax looks like this:
End of explanation
"""
def example_function(arg_1, arg_2, **kwargs):
# Let's look at the argument values.
print('\narg_1:', arg_1)
print('arg_2:', arg_2)
for key, value in kwargs.items():
print('arg_3 value:', value)
example_function('a', 'b')
example_function('a', 'b', value_3='c')
example_function('a', 'b', value_3='c', value_4='d')
example_function('a', 'b', value_3='c', value_4='d', value_5='e')
def example_function(**kwargs):
print(type(kwargs))
for key, value in kwargs.items():
print('{}:{}'.format(key, value))
example_function(first=1, second=2, third=3)
example_function(first=1, second=2, third=3, fourth=4)
example_function(name='Valerio', surname='Maggio')
"""
Explanation: The third argument has two asterisks in front of it, which tells Python to collect all remaining key-value arguments in the calling statement. This argument is commonly named kwargs. We see in the output that these key-values are stored in a dictionary. We can loop through this dictionary to work with all of the values that are passed into the function:
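A common pattern this makes possible is forwarding keyword arguments from one function straight into another. A small self-contained sketch (the function names here are made up for illustration):
def print_pairs(**kwargs):
    # Print every key-value pair that was passed in.
    for key, value in kwargs.items():
        print('{}: {}'.format(key, value))

def report(title, **kwargs):
    # Collect any keyword arguments and hand them to print_pairs unchanged.
    print(title)
    print_pairs(**kwargs)

report('Team roster', captain='Ada', keeper='Grace')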
End of explanation
"""
def describe_person(first_name, last_name, age=None, favorite_language=None, died=None):
# This function takes in a person's first and last name,
# their age, and their favorite language.
# It then prints this information out in a simple format.
# Required information:
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
# Optional information:
if age:
print("Age: %d" % age)
if favorite_language:
print("Favorite language: %s" % favorite_language)
if died:
print("Died: %d" % died)
# Blank line at end.
print("\n")
describe_person('brian', 'kernighan', favorite_language='C')
describe_person('ken', 'thompson', age=70)
describe_person('adele', 'goldberg', age=68, favorite_language='Smalltalk')
describe_person('dennis', 'ritchie', favorite_language='C', died=2011)
describe_person('guido', 'van rossum', favorite_language='Python')
"""
Explanation: Earlier we created a function that let us describe a person, and we had three things we could describe about a person. We could include their age, their favorite language, and the date they passed away. But that was the only information we could include, because it was the only information that the function was prepared to handle:
End of explanation
"""
def describe_person(first_name, last_name, **kwargs):
# This function takes in a person's first and last name,
# and then an arbitrary number of keyword arguments.
# Required information:
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
# Optional information:
for key in kwargs:
print("%s: %s" % (key.title(), kwargs[key]))
# Blank line at end.
print("\n")
describe_person('brian', 'kernighan', favorite_language='C')
describe_person('ken', 'thompson', age=70)
describe_person('adele', 'goldberg', age=68, favorite_language='Smalltalk')
describe_person('dennis', 'ritchie', favorite_language='C', died=2011)
describe_person('guido', 'van rossum', favorite_language='Python')
"""
Explanation: We can make this function much more flexible by accepting any number of keyword arguments. Here is what the function looks like, using the syntax for accepting as many keyword arguments as the caller wants to provide:
End of explanation
"""
def describe_person(first_name, last_name, **kwargs):
# This function takes in a person's first and last name,
# and then an arbitrary number of keyword arguments.
# Required information:
print("First name: %s" % first_name.title())
print("Last name: %s" % last_name.title())
# Optional information:
for key in kwargs:
print("%s: %s" % (key.title().replace('_', ' '), kwargs[key]))
# Blank line at end.
print("\n")
describe_person('brian', 'kernighan', favorite_language='C', famous_book='The C Programming Language')
describe_person('ken', 'thompson', age=70, alma_mater='UC Berkeley')
describe_person('adele', 'goldberg', age=68, favorite_language='Smalltalk')
describe_person('dennis', 'ritchie', favorite_language='C', died=2011, famous_book='The C Programming Language')
describe_person('guido', 'van rossum', favorite_language='Python', company='Dropbox')
"""
Explanation: This is pretty neat. We get the same output, and we don't have to include a bunch of if tests to see what kind of information was passed into the function. We always require a first name and a last name, but beyond that the caller is free to provide any keyword-value pair to describe a person. Let's show that any kind of information can be provided to this function. We also clean up the output by replacing any underscores in the keys with a space.
End of explanation
"""
|
kuo77122/deep-learning-nd
|
Lesson15-TFLearn/Sentiment Analysis with TFLearn.ipynb
|
mit
|
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
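If that conversion step were needed, it could look something like this (a sketch, assuming the raw review text lives in column 0 of the DataFrame, as it does here):
reviews[0] = reviews[0].str.lower()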
End of explanation
"""
from collections import Counter
total_counts = Counter()# bag of words here
for idx, row in reviews.iterrows():
total_counts.update(row[0].lower().replace(",", " ").replace(".", " ").split(" "))
print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = {}## create the word-to-index dictionary here
for i, word in enumerate(vocab):
word2idx[word] = i
"""
Explanation: The last word in our vocabulary shows up 30 times in 25000 reviews. I think it's fair to say this is a tiny proportion. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
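For reference, the loop in the cell above is equivalent to the one-line dictionary comprehension mentioned here:
word2idx = {word: i for i, word in enumerate(vocab)}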
End of explanation
"""
# Slow....
def text_to_vector(text):
words_vector = np.zeros(len(vocab))
words = text.lower().replace(",", " ").replace(".", " ").split(" ")
keys = list(word2idx)
for key in words:
if key in keys:
words_vector[word2idx[key]] += 1
return words_vector
# Mat's Fast solution
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.replace(",", " ").replace(".", " ").split(" "):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] = 1
return np.array(word_vector)
"""
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
"""
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
"""
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
"""
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
"""
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
"""
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9  # 90% of the records go to training; the remaining 10% become the test set
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainX.shape[1]
trainY
"""
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
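As a quick illustration of what to_categorical does, here is a toy call (separate from the real data; the exact dtype of the output may differ):
from tflearn.data_utils import to_categorical
to_categorical([0, 1, 1, 0], 2)
# expected: a one-hot array along the lines of [[1, 0], [0, 1], [0, 1], [1, 0]]
This two-column target format is what the softmax output layer expects.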
End of explanation
"""
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, trainX.shape[1]])
net = tflearn.fully_connected(net, 200 , activation='ReLU')
net = tflearn.fully_connected(net, 25 , activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated; in this example, categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
|
pycroscopy/pycroscopy
|
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
|
mit
|
from __future__ import division, print_function, absolute_import, unicode_literals
import os
import numpy
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import Image
path = os.getcwd()
fig1 = path + '/Fig1.jpg'
Image(filename=fig1)
"""
Explanation: Introduction to dynamic AFM simulations
Content under Creative Commons Attribution license CC-BY 4.0 version,
Enrique A. López-Guerra.
Purpose of the notebook: show an application of numerical methods to simulate the dynamics of a probe in atomic force microscopy.
Requirements to take the best advantage of this notebook: knowing the fundamentals of Harmonic Oscillators in classical mechanics and Fundamentals of Vibrations.
Introduction
Since the atomic force microscope (AFM) was invented in 1986 it has become one of the main tools to study matter at the micro and nanoscale. This powerful tool is so versatile that it can be used to study a wide variety of materials, ranging from stiff inorganic surfaces to soft biological samples.
In its early stages the AFM was used in permanent contact with the sample (the probe is dragged over the sample during the whole operation), which brought about important drawbacks, such as rapid probe wear and often sample damage, but these obstacles have been overcome with the development of dynamic techniques.
In this Jupyter notebook, we will focus on the operation of the probe in dynamic mode.
End of explanation
"""
fig2 = path + '/Fig2DHO.jpg'
Image(filename=fig2)
"""
Explanation: Figure 1. Schematics of the setup of an atomic force microscope (Adapted from reference 6)
In AFM the interacting probe is in general a rectangular cantilever (please check the image above that shows the AFM setup where you will be able to see the probe!).
Probably the most used dynamic technique in AFM is the Tapping Mode. In this method the probe taps a surface in intermittent contact fashion. The purpose of tapping the probe over the surface instead of dragging it is to reduce frictional forces that may cause damage of soft samples and wear of the tip. Besides with the tapping mode we can get more information about the sample! HOW???
In Tapping Mode AFM the cantilever is shaken to oscillate up and down at a specific frequency (most of the time shaken at its natural frequency). Then the deflection of the tip is measured at that frequency to get information about the sample. Besides acquiring the topography of the sample, the phase lag between the excitation and the response of the cantilever can be related to compositional material properties!
In other words one can simultaneously get information about how the surface looks and also get compositional mapping of the surface! THAT SOUNDS POWERFUL!!!
End of explanation
"""
k = 10.
fo = 45000
wo = 2.0*numpy.pi*fo
Q = 25.
period = 1./fo
m = k/(wo**2)
Ao = 60.e-9
Fd = k*Ao/Q
spp = 28. # time steps per period
dt = period/spp #Intentionally chosen to be quite big
#you can decrease dt by increasing the number of steps per period
simultime = 100.*period
N = int(simultime/dt)
#Analytical solution
time_an = numpy.linspace(0,simultime,N) #time array for the analytical solution
z_an = numpy.zeros(N) #position array for the analytical solution
#Driving force amplitude this gives us 60nm of amp response (A_target*k/Q)
Fo_an = 24.0e-9
A_an = Fo_an*Q/k #when driven at resonance A is simply Fo*Q/k
phi = numpy.pi/2 #when driven at resonance the phase is pi/2
z_an[:] = A_an*numpy.cos(wo*time_an[:] - phi) #this gets the analytical solution
#slicing the array to include only steady state (only the last 10 periods)
z_an_steady = z_an[int(90.*period/dt):]
time_an_steady = time_an[int(90.*period/dt):]
plt.title('Plot 1 Analytical Steady State Solution of Eq 1', fontsize=20)
plt.xlabel('time, ms', fontsize=18)
plt.ylabel('z_Analytical, nm', fontsize=18)
plt.plot(time_an_steady*1e3, z_an_steady*1e9, 'b--')
"""
Explanation: Figure 2. Schematics of a damped harmonic oscillator without tip-sample interactions
Analytical Solution
The motion of the probe can be derived using Euler-Bernoulli's equation. However, that equation has partial derivatives (it depends on time and space) because it deals with finding the position of each point of the beam at a given time, which can make the problem too expensive computationally for our purposes. In our case, we have the advantage that we are only concerned about the position of the tip (which is the only part of the probe that will interact with the sample). As a consequence, many researchers in AFM have successfully used a simple point-mass model approximation [see ref. 2] like the one in figure 2 (with, of course, the addition of tip-sample forces! We will see more about this later).
First we will study the system of figure 2 AS IS (without addition of tip-sample force term), WHY? Because we want to get an analytical solution to get a reference of how our integration schemes are working, and the addition of tip sample forces to our equation will prevent the acquisition of straightforward analytical solutions :(
Then, the equation of motion of the damped harmonic oscillator of figure 2, which is DRIVEN COSINUSOIDALLY (remember that we are exciting our probe during the scanning process) is:
$$\begin{equation}
m \frac{d^2z}{dt^2} = - k z - \frac{m\omega_0}{Q}\frac{dz}{dt} + F_0\cos(\omega t)
\end{equation}$$
where k is the stiffness of the cantilever, z is the vertical position of the tip with respect to the cantilever base position, Q is the quality factor (which is related to the damping of the system), $F_0$ is the driving force amplitude, $\omega_0$ is the resonance frequency of the oscillator, and $\omega$ is the frequency of the oscillating force.
The analytical solution of the above ODE is composed of a transient term and a steady state term. We are only interested in the steady state part because during the scanning process it is assumed that the probe has achieved that state.
The steady state solution is given by:
$$\begin{equation}
A\cos (\omega t - \phi)
\end{equation}$$
where A is the steady state amplitude of the oscillation response, which depends on the cantilever parameters and the driving parameters, as can be seen in the following relation:
$$\begin{equation}
A = \frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+(\frac{\omega\omega_0}{Q})^2}}
\end{equation}$$
and $\phi$ is given by:
$$\begin{equation}
\phi = \arctan \big( \frac{\omega\omega_0/Q}{\omega_0^2 - \omega^2} \big)
\end{equation}$$
Let's first name the variables that we are going to use. Because we are dealing with a damped harmonic oscillator model we have to include variables such as: spring stiffness, resonance frequency, quality factor (related to damping coefficient), target oscillation amplitude, etc.
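As a small aside, the amplitude and phase relations above can also be written directly as Python functions. This is only a self-contained sketch (the function and parameter names here are chosen for illustration; the values this notebook actually uses are set in the parameters cell):
import numpy

def steady_state_amplitude(Fo, m, wo, w, Q):
    # Amplitude A of the steady-state response (the relation for A above)
    return (Fo/m) / numpy.sqrt((wo**2 - w**2)**2 + (w*wo/Q)**2)

def steady_state_phase(wo, w, Q):
    # Phase lag phi; arctan2 keeps the angle in the right quadrant
    # (it returns pi/2 exactly at resonance, w = wo)
    return numpy.arctan2(w*wo/Q, wo**2 - w**2)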
End of explanation
"""
t= numpy.linspace(0,simultime,N) #time grid for Euler method
#Initializing variables for Euler
vdot_E = numpy.zeros(N)
v_E = numpy.zeros(N)
z_E = numpy.zeros(N)
#Initial conditions
z_E[0]= 0.0
v_E[0]=0.0
for i in range (N-1):
vdot_E[i] =( ( -k*z_E[i] - (m*wo/Q)*(v_E[i]) +\
Fd*numpy.cos(wo*t[i]) ) / m) #Equation 7
v_E[i+1] = v_E[i] + dt*vdot_E[i] #Based on equation 5
z_E[i+1] = z_E[i] + v_E[i]*dt #Equation 5
plt.title('Plot 2 Eulers approximation of Equation1', fontsize=20);
plt.plot(t*1e3,z_E*1e9);
plt.xlabel('time, s', fontsize=18);
plt.ylabel('z_Euler, nm', fontsize=18);
"""
Explanation: Approximating through Euler's method
If we perform a Taylor series expansion of $z_{n+1}$ around $z_{n}$ we get:
$$z_{n+1} = z_{n} + \Delta t\frac{dz}{dt}\big|_n + {\mathcal O}(\Delta t^2)$$
The Euler formula neglects terms in the order of two or higher, ending up as:
$$\begin{equation}
z_{n+1} = z_{n} + \Delta t\frac{dz}{dt}\big|_n
\end{equation}$$
It can be easily seen that the truncation error of the Euler algorithm is in the order of ${\mathcal O}(\Delta t^2)$.
This is a second order ODE, but we can convert it to a system of two coupled 1st order differential equations. To do it we will define $\frac{dz}{dt} = v$. Then equation (1) will be decomposed as:
$$\begin{equation}
\frac{dz}{dt} = v
\end{equation}$$
$$\begin{equation}
\frac{dv}{dt} = \frac{1}{m}\left(-kz-\frac{m\omega_0}{Q}v+F_0\cos(\omega t)\right)
\end{equation}$$
These coupled equations will be used during Euler's approximation and also during our integration with the Runge Kutta 4 method.
End of explanation
"""
time_V = numpy.linspace(0,simultime,N)
#Initializing variables for Verlet
zdoubledot_V = numpy.zeros(N)
zdot_V = numpy.zeros(N)
z_V = numpy.zeros(N)
#Initial conditions Verlet. Look how we use Euler for the first step approximation!
z_V[0] = 0.0
zdot_V[0] = 0.0
zdoubledot_V[0] = ( ( -k*z_V[0] - (m*wo/Q)*zdot_V[0] +\
Fd*numpy.cos(wo*t[0]) ) ) / m
zdot_V[1] = zdot_V[0] + zdoubledot_V[0]*dt
z_V[1] = z_V[0] + zdot_V[0]*dt
zdoubledot_V[1] = ( ( -k*z_V[1] - (m*wo/Q)*zdot_V[1] +\
Fd*numpy.cos(wo*t[1]) ) ) / m
#VERLET ALGORITHM
for i in range(2,N):
z_V[i] = 2*z_V[i-1] - z_V[i-2] + zdoubledot_V[i-1]*dt**2 #Eq 10
zdot_V[i] = (z_V[i]-z_V[i-2])/(2.0*dt) #Eq 11
zdoubledot_V[i] = ( ( -k*z_V[i] - (m*wo/Q)*zdot_V[i] +\
Fd*numpy.cos(wo*t[i]) ) ) / m #from eq 1
plt.title('Plot 3 Verlet approximation of Equation1', fontsize=20);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_Verlet, nm', fontsize=18);
plt.plot(time_V*1e3, z_V*1e9, 'g-');
plt.ylim(-65,65);
"""
Explanation: This looks totally unphysical! We were expecting to have a steady state oscillation of 60 nm and we got a huge oscillation that keeps growing. Can it be due to the scheme? The timestep that we have chosen is quite big with respect to the oscillation period. We have intentionally set it to ONLY 28 time steps per period (That could be the reason why the scheme can't capture the physics of the problem). That's quite discouraging. However the timestep is quite big and it really gets better as you decrease the time step. Try it! Reduce the time step and see how the numerical solution acquires an amplitude of 60 nm as the analytical one. At this point we can't state anything about accuracy before doing an analysis of error (we will make this soon). But first, let's try to analyze if another more efficient scheme can capture the physics of our damped harmonic oscillator even with this large time step.
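For instance, a quick way to try this is to shrink the time step and re-run the Euler cell above (a sketch; 280 steps per period is just an illustrative choice):
spp = 280.               # ten times more steps per period than before
dt = period/spp
N = int(simultime/dt)
# ...then re-run the Euler loop above with these finer arrays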
Let's try to get more accurate... Verlet Algorithm
This is a very popular algorithm widely used in molecular dynamics simulations. Its popularity has been related to its high stability when compared to the simple Euler method; it is also very simple to implement and, as we will see soon, quite accurate! Verlet integration can be seen as using the central difference approximation to the second derivative. Consider the Taylor expansions of $z_{n+1}$ and $z_{n-1}$ around $z_n$:
$$\begin{equation}
z_{n+1} = z_n + \Delta t \frac{dz}{dt}\big|_n + \frac{\Delta t^2}{2} \frac{d^2 z}{d t^2}\big|_n + \frac{\Delta t^3}{6} \frac{d^3 z}{d t^3}\big|_n + {\mathcal O}(\Delta t^4)
\end{equation}$$
$$\begin{equation}
z_{n-1} = z_n - \Delta t \frac{dz}{dt}\big|_n + \frac{\Delta t^2}{2} \frac{d^2 z}{dt^2}\big|_n - \frac{\Delta t^3}{6}
\frac{d^3 z}{d t^3}\big|_n + {\mathcal O}(\Delta t^4)
\end{equation}$$
Adding up these two expansions and solving for $z_{n+1}$ we get:
$$z_{n+1}= 2z_{n} - z_{n-1} + \frac{d^2 z}{d t^2} \Delta t^2\big|_n + {\mathcal O}(\Delta t^4) $$
Verlet algorithm neglects terms on the order of 4 or higher, ending up with:
$$\begin{equation}
z_{n+1}= 2z_{n} - z_{n-1} + \frac{d^2 z}{d t^2} \Delta t^2\big|_n
\end{equation}$$
This looks nice; it seems that the straightforward calculation of the second derivative will give us good results. BUT have you seen that we also need the value of the first derivative (velocity) to put it into the equation of motion that we are integrating (see equation 1). YES, that's a main drawback of this scheme and therefore it's mainly used in applications where the equation to be integrated doesn't have first derivative. But don't panic we will see what can we do...
What about subtracting equations 8 and 9 and then solving for $\frac{dz}{dt}\big|_n$:
$$
\frac{dz}{dt}\big|_n = \frac{z_{n+1} - z_{n-1}}{2\Delta t} + {\mathcal O}(\Delta t^2)
$$
If we neglect terms on the order of 2 or higher we can calculate velocity:
$$\begin{equation}
\frac{dz}{dt}\big|_n = \frac{z_{n+1} - z_{n-1}}{2\Delta t}
\end{equation}$$
This way of calculating velocity is pretty common in Verlet integration in applications where velocity is not explicit in the equation of motion. However for our purposes of solving equation 1 (where first derivative is explicitly present) it seems that we will lose accuracy because of the velocity, we will discuss more about this soon after...
Have you noticed that we need a value $z_{n-1}$? Does it sound familiar? YES! This is not a self-starting method. As a result we will have to overcome the issue by setting the initial conditions of the first step using Euler approximation. This is a bit annoying, but a couple of extra lines of code won't kill you :)
End of explanation
"""
#Slicing the full response vector to get the steady state response
z_steady_V = z_V[int(90*period/dt):]
time_steady_V = time_V[int(90*period/dt):]
plt.title('Plot 3 Verlet approx. of steady state sol. of Eq 1', fontsize=20);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_Verlet, nm', fontsize=18);
plt.plot(time_steady_V*1e3, z_steady_V*1e9, 'g-');
plt.ylim(-65,65);
plt.show();
"""
Explanation: It WAS ABLE to capture the physics! Even with the big time step that we use with Euler scheme!
As you can see, and as we previously discussed the harmonic response is composed of a transient and a steady part. We are only concerned about the steady-state, since it is assumed that the probe achieves steady state motion during the imaging process. Therefore, we are going to slice our array in order to show only the last 10 oscillations, and we will see if it resembles the analytical solution.
End of explanation
"""
#Definition of v, z, vectors
vdot_RK4 = numpy.zeros(N)
v_RK4 = numpy.zeros(N)
z_RK4 = numpy.zeros(N)
k1v_RK4 = numpy.zeros(N)
k2v_RK4 = numpy.zeros(N)
k3v_RK4 = numpy.zeros(N)
k4v_RK4 = numpy.zeros(N)
k1z_RK4 = numpy.zeros(N)
k2z_RK4 = numpy.zeros(N)
k3z_RK4 = numpy.zeros(N)
k4z_RK4 = numpy.zeros(N)
#calculation of velocities RK4
#INITIAL CONDITIONS
v_RK4[0] = 0
z_RK4[0] = 0
for i in range (1,N):
#RK4
k1z_RK4[i] = v_RK4[i-1] #k1 Equation 14
k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \
Fd*numpy.cos(wo*t[i-1]) ) ) / m ) #m1 Equation 15
k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt) #k2 Equation 16
k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\
(v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m2 Eq 17
k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt) #k3, Equation 18
k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\
(v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m3, Eq 19
k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt) #k4, Equation 20
k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\
(v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt)) ) ) / m )#m4, Eq 21
#Calculation of velocity, Equation 23
v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\
2.*k3v_RK4[i] + k4v_RK4[i] )
#calculation of position, Equation 22
z_RK4 [i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\
2.*k3z_RK4[i] + k4z_RK4[i] )
#slicing array to get steady state
z_steady_RK4 = z_RK4[int(90.*period/dt):]
time_steady_RK4 = t[int(90.*period/dt):]
plt.title('Plot 3 RK4 approx. of steady state sol. of Eq 1', fontsize=20);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_RK4, nm', fontsize=18);
plt.plot(time_steady_RK4 *1e3, z_steady_RK4*1e9, 'r-');
plt.ylim(-65,65);
plt.show();
"""
Explanation: Let's use now one of the most popular schemes... The Runge Kutta 4!
The Runge Kutta 4 (RK4) method is very popular for the solution of ODEs. This method is designed to solve 1st order differential equations. We have converted our 2nd order ODE to a system of two coupled 1st order ODEs when we implemented the Euler scheme (equations 5 and 6). And we will have to use these equations for the RK4 algorithm.
In order to clearly see the RK4 implementation we are going to put equations 5 and 6 in the following form:
$$\begin{equation}
\frac{dz}{dt}=v \Rightarrow f1(t,z,v)
\end{equation}$$
$$\begin{equation}
\frac{dv}{dt} = \frac{1}{m}\left(-kz-\frac{m\omega_0}{Q}v+F_0\cos(\omega t)\right) \Rightarrow f2(t,z,v)
\end{equation}$$
It can be clearly seen that we have two coupled equations f1 and f2 and both depend in t, z, and v.
The RK4 equations for our special case where we have two coupled equations, are the following:
$$\begin{equation}
k_1 = f1(t_i, z_i, v_i)
\end{equation}$$
$$\begin{equation}
m_1 = f2(t_i, z_i, v_i)
\end{equation}$$
$$\begin{equation}
k_2 = f1(t_i +1/2\Delta t, z_i + 1/2k_1\Delta t, v_i + 1/2m_1\Delta t)
\end{equation}$$
$$\begin{equation}
m_2 = f2(t_i +1/2\Delta t, z_i + 1/2k_1\Delta t, v_i + 1/2m_1\Delta t)
\end{equation}$$
$$\begin{equation}
k_3 = f1(t_i +1/2\Delta t, z_i + 1/2k_2\Delta t, v_i + 1/2m_2\Delta t)
\end{equation}$$
$$\begin{equation}
m_3 = f2(t_i +1/2\Delta t, z_i + 1/2k_2\Delta t, v_i + 1/2m_2\Delta t)
\end{equation}$$
$$\begin{equation}
k_4 = f1(t_i + \Delta t, z_i + k_3\Delta t, v_i + m_3\Delta t)
\end{equation}$$
$$\begin{equation}
m_4 = f2(t_i + \Delta t, z_i + k_3\Delta t, v_i + m_3\Delta t)
\end{equation}$$
$$\begin{equation}
z_{n+1} = z_n + \Delta t/6(k_1+2k_2+2k_3+k_4)
\end{equation}$$
$$\begin{equation}
v_{n+1} = v_n + \Delta t/6(m_1+2m_2+2m_3+m_4)
\end{equation}$$
Please notice how k values and m values are used sequentially, since it is crucial in the implementation of the method!
End of explanation
"""
plt.title('Plot 4 Schemes comparison with analytical sol.', fontsize=20);
plt.plot(time_an_steady*1e3, z_an_steady*1e9, 'b--' );
plt.plot(time_steady_V*1e3, z_steady_V*1e9, 'g-' );
plt.plot(time_steady_RK4*1e3, z_steady_RK4*1e9, 'r-');
plt.xlim(2.0, 2.06);
plt.legend(['Analytical solution', 'Verlet method', 'Runge Kutta 4']);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_position, nm', fontsize=18);
"""
Explanation: Error Analysis
Let's plot together our solutions using the different schemes along with our analytical reference.
End of explanation
"""
fig3 = path + '/Fig3FDcurve.jpg'
Image(filename=fig3)
"""
Explanation: It was pointless to include Euler in the last plot because it was not following the physics at all for this given time step. REMEMBER that Euler can give fair approximations, but you MUST decrease the time step in this particular case if you want to see the sinusoidal trajectory!
It seems our different schemes are giving different quality in approximating the solution. However, it's hard to conclude anything strong based on these qualitative observations. In order to state something stronger we have to perform further error analysis. We will do this at the end of the notebook, after the references, and will choose the L1 norm for this purpose (you can find more information about the L1 norm elsewhere).
As we can see, Runge Kutta 4 converges faster than Verlet for the range of time steps studied, and the difference between the two is nearly one order of magnitude. One additional advantage of Runge Kutta 4 is that the method is very stable: even with big time steps (e.g. 10 time steps per period) it is able to capture the physics of the oscillation, something Verlet is not so good at.
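As a preview of how that comparison can be quantified, a grid-scaled L1 norm of the difference with the analytical reference can be computed along these lines (a sketch, assuming the numerical and analytical arrays share the same time grid, as they do above):
def l1_error(z_num, z_exact, dt):
    # Grid-scaled L1 norm of the difference between two solutions
    return dt * numpy.sum(numpy.abs(z_num - z_exact))

error_RK4 = l1_error(z_steady_RK4, z_an_steady, dt)
error_V = l1_error(z_steady_V, z_an_steady, dt)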
Let's add a sample and oscillate our probe over it
It is very common in the field of probe microscopy to model the tip sample interactions through DMT contact mechanics.
DMT stands for Derjaguin, Muller and Toporov, the scientists who developed the model (see ref 1). This model uses Hertz contact mechanics (see ref 2) with the addition of long range tip-sample interactions. These long range tip-sample interactions are ascribed to intermolecular interactions between the atoms of the tip and the upper atoms of the surface, and include mainly the contribution of van der Waals forces and Pauli repulsion from electronic clouds when the atoms of the tip closely approach the atoms of the surface. Figure 3 displays a force vs distance curve (FD curve) showing how the forces between the tip and the sample behave with respect to the separation. It can be seen that at positive distances the tip starts "feeling" attraction towards the sample (from the contribution of van der Waals forces) where the slope of the curve is positive, and at some minimum distance ($a_0$) the tip starts experiencing repulsive interactions arising from electronic cloud repulsion (the area where the slope of the curve is negative and the forces are negative). At lower distances, an area known as the "contact area" arises, characterized by a negative slope and an emerging positive force.
End of explanation
"""
fig4 = path + '/Fig4Hertzspring.jpg'
Image(filename= fig4)
"""
Explanation: Figure 3. Force vs Distance profile depicting tip-sample interactions in AFM (Adapted from reference 6)
In Hertz contact mechanics, one central aspect is to consider that the contact area increases as the sphere is pressed against an elastic surface, and this increase of the contact area "modulates" the effective stiffness of the sample. This concept is represented in figure 4 where the sample is depicted as comprised by a series of springs that are activated as the tip goes deeper into the sample. In other words, the deeper the sample goes, the larger the contact area and therefore more springs are activated (see more about this on reference 5).
End of explanation
"""
#DMT parameters (Hertz contact mechanics with long range van der Waals forces added)
a=0.2e-9 #intermolecular parameter
H=6.4e-20 #hamaker constant of sample
R=20e-9 #tip radius of the cantilever
Es=70e6 #elastic modulus of sample
Et=130e9 #elastic modulus of the tip
vt=0.3 #Poisson coefficient for tip
vs=0.3 #Poisson coefficient for sample
E_star= 1/((1-pow(vt,2))/Et+(1-pow(vs,2))/Es) #Effective Young Modulus
"""
Explanation: Figure 4. Conceptual representation of Hertz contact mechanics
This concept is represented mathematically by a non-linear spring whose elastic coefficient is a function of the contact area which at the same time depends on the sample indentation ( k(d) ).
$$F_{ts} = k(d)d$$
where
$$k(d) = 4/3E\sqrt{Rd}$$
with $\sqrt{Rd}$ being the contact radius when a sphere of radius R indents a half-space to depth d.
$E$ is the effective Young's modulus of the tip-sample interaction.
The long range attractive forces are derived using Hamaker's equation (see reference 4): $if$ $d > a_0$
$$F_{ts} = \frac{-HR}{6d^2}$$
where H is the Hamaker constant, R the tip radius and d the tip sample distance. $a_0$ is defined as the intermolecular distance and normally is chosen to be 0.2 nm.
In summary the equations that we will include in our code to take care of the tip sample interactions are the following:
$$\begin{equation}
Fts_{DMT} = \begin{cases} \frac{-HR}{6d^2} & d > a_0\\
\frac{-HR}{6a_0^2} + \frac{4}{3}E^*\sqrt{R}\,(a_0-d)^{3/2} & d \leq a_0 \end{cases}
\end{equation}$$
where the effective Young's modulus $E^*$ is defined by:
$$\begin{equation}
1/E^* = \frac{1-\nu_t^2}{E_t}+\frac{1-\nu_s^2}{E_s}
\end{equation}$$
where $E_t$ and $E_s$ are the tip and sample Young's modulus respectively. $\nu_t$ and $\nu_s$ are tip and sample Poisson ratios, respectively.
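Written as a small helper function, the piecewise force above could look like this (a sketch that mirrors the equation, reusing the parameter names R, H, E_star and a from the DMT parameters cell of this notebook):
def fts_dmt(d, R, H, E_star, a):
    # Long-range attractive regime: tip farther away than the intermolecular distance a0
    if d > a:
        return -H*R/(6.0*d**2)
    # Contact (repulsive) regime: adhesion term plus the Hertzian restoring force
    return -H*R/(6.0*a**2) + 4.0/3.0*E_star*numpy.sqrt(R)*(a - d)**1.5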
Enough theory, Let's make our code!
Now we will have to solve equation (1) but with the addition of tip-sample interactions which are described by equation (5). So we have a second order non-linear ODE which is no longer analytically straightforward:
$$\begin{equation}
m \frac{d^2z}{dt^2} = - k z - \frac{m\omega_0}{Q}\frac{dz}{dt} + F_0 \cos(\omega t) + Fts_{DMT}
\end{equation}$$
Therefore we have to use numerical methods to solve it. RK4 proved to be the most accurate of the methods reviewed in the previous section of the notebook when solving equation (1), and therefore it is going to be the chosen method to solve equation (6).
Now we have to declare all the variables related to the tip-sample forces. Since we are modeling our tip-sample forces using Hertz contact mechanics with addition of long range Van der Waals forces we have to define the Young's modulus of the tip and sample, the diameter of the tip of our probe, Poisson ratio, etc.
End of explanation
"""
#IMPORTANT distance where you place the probe above the sample
z_base = 40.e-9
spp = 280. # time steps per period
dt = period/spp
simultime = 100.*period
N = int(simultime/dt)
t = numpy.linspace(0,simultime,N)
#Initializing variables for RK4
v_RK4 = numpy.zeros(N)
z_RK4 = numpy.zeros(N)
k1v_RK4 = numpy.zeros(N)
k2v_RK4 = numpy.zeros(N)
k3v_RK4 = numpy.zeros(N)
k4v_RK4 = numpy.zeros(N)
k1z_RK4 = numpy.zeros(N)
k2z_RK4 = numpy.zeros(N)
k3z_RK4 = numpy.zeros(N)
k4z_RK4 = numpy.zeros(N)
TipPos = numpy.zeros(N)
Fts = numpy.zeros(N)
Fcos = numpy.zeros(N)
for i in range(1,N):
#RK4
k1z_RK4[i] = v_RK4[i-1] #k1 Equation 14
k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \
Fd*numpy.cos(wo*t[i-1]) +Fts[i-1]) ) / m ) #m1 Equation 15
k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt) #k2 Equation 16
k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\
(v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt/2.)) +Fts[i-1]) ) / m ) #m2 Eq 17
k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt) #k3, Equation 18
k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\
(v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt/2.)) +Fts[i-1]) ) / m ) #m3, Eq19
k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt) #k4, Equation 20
k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\
(v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt)) +Fts[i-1]) ) / m )#m4, Eq 21
#Calculation of velocity, Equation 23
v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\
2.*k3v_RK4[i] + k4v_RK4[i] )
#calculation of position, Equation 22
z_RK4 [i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\
2.*k3z_RK4[i] + k4z_RK4[i] )
TipPos[i] = z_base + z_RK4[i] #Adding base position to z position
#calculation of DMT force
if TipPos[i] > a: #this defines the attractive regime
Fts[i] = -H*R/(6*(TipPos[i])**2)
else: #this defines the repulsive regime
Fts[i] = -H*R/(6*a**2)+4./3*E_star*numpy.sqrt(R)*(a-TipPos[i])**1.5
Fcos[i] = Fd*numpy.cos(wo*t[i]) #Driving force (this will be helpful to plot the driving force)
#Slicing arrays to get steady state
TipPos_steady = TipPos[int(95*period/dt):]
t_steady = t[int(95*period/dt):]
Fcos_steady = Fcos[int(95*period/dt):]
Fts_steady = Fts[int(95*period/dt):]
plt.figure(1)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(t_steady*1e3,TipPos_steady*1e9, 'g-')
ax2.plot(t_steady*1e3, Fcos_steady*1e9, 'b-')
ax1.set_xlabel('Time (ms)')
ax1.set_ylabel('Tip position (nm)', color='g')
ax2.set_ylabel('Drive force (nN)', color='b')
plt.title('Plot 7 Tip response and driving force', fontsize = 20)
plt.figure(2)
plt.title('Plot 8 Force-Distance curve', fontsize=20)
plt.plot(TipPos*1e9, Fts*1e9, 'b--' )
plt.xlabel('Tip Position, nm', fontsize=18)
plt.ylabel('Force, nN', fontsize=18)
plt.xlim(-20, 30)
"""
Explanation: Now let's declare the timestep, the simulation time and let's oscillate our probe!
End of explanation
"""
print('This cell takes a while to compute')
"""ERROR ANALYSIS EULER, VERLET AND RK4"""
# time-increment array
dt_values = numpy.array([8.0e-7, 2.0e-7, 0.5e-7, 1e-8, 0.1e-8])
# array that will contain solution of each grid
z_values_E = numpy.zeros_like(dt_values, dtype=numpy.ndarray)
z_values_V = numpy.zeros_like(dt_values, dtype=numpy.ndarray)
z_values_RK4 = numpy.zeros_like(dt_values, dtype=numpy.ndarray)
z_values_an = numpy.zeros_like(dt_values, dtype=numpy.ndarray)
for n, dt in enumerate(dt_values):
simultime = 100*period
timestep = dt
N = int(simultime/dt)
t = numpy.linspace(0.0, simultime, N)
#Initializing variables for Verlet
zdoubledot_V = numpy.zeros(N)
zdot_V = numpy.zeros(N)
z_V = numpy.zeros(N)
#Initializing variables for RK4
vdot_RK4 = numpy.zeros(N)
v_RK4 = numpy.zeros(N)
z_RK4 = numpy.zeros(N)
k1v_RK4 = numpy.zeros(N)
k2v_RK4 = numpy.zeros(N)
k3v_RK4 = numpy.zeros(N)
k4v_RK4 = numpy.zeros(N)
k1z_RK4 = numpy.zeros(N)
k2z_RK4 = numpy.zeros(N)
k3z_RK4 = numpy.zeros(N)
k4z_RK4 = numpy.zeros(N)
#Initial conditions Verlet (started with Euler approximation)
z_V[0] = 0.0
zdot_V[0] = 0.0
zdoubledot_V[0] = ( ( -k*z_V[0] - (m*wo/Q)*zdot_V[0] + \
Fd*numpy.cos(wo*t[0]) ) ) / m
zdot_V[1] = zdot_V[0] + zdoubledot_V[0]*timestep**2
z_V[1] = z_V[0] + zdot_V[0]*dt
zdoubledot_V[1] = ( ( -k*z_V[1] - (m*wo/Q)*zdot_V[1] + \
Fd*numpy.cos(wo*t[1]) ) ) / m
#Initial conditions Runge Kutta
v_RK4[1] = 0
z_RK4[1] = 0
#Initialization variables for Analytical solution
z_an = numpy.zeros(N)
# time loop
for i in range(2,N):
#Verlet
z_V[i] = 2*z_V[i-1] - z_V[i-2] + zdoubledot_V[i-1]*dt**2 #Eq 10
zdot_V[i] = (z_V[i]-z_V[i-2])/(2.0*dt) #Eq 11
zdoubledot_V[i] = ( ( -k*z_V[i] - (m*wo/Q)*zdot_V[i] +\
Fd*numpy.cos(wo*t[i]) ) ) / m #from eq 1
#RK4
k1z_RK4[i] = v_RK4[i-1] #k1 Equation 14
k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \
Fd*numpy.cos(wo*t[i-1]) ) ) / m ) #m1 Equation 15
k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt) #k2 Equation 16
k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\
(v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m2 Eq 17
k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt) #k3, Equation 18
k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\
(v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m3, Eq 19
k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt) #k4, Equation 20
k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\
(v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\
numpy.cos(wo*(t[i-1] + dt)) ) ) / m )#m4, Equation 21
#Calculation of velocity, Equation 23
v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\
2.*k3v_RK4[i] + k4v_RK4[i] )
#calculation of position, Equation 22
z_RK4 [i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\
2.*k3z_RK4[i] + k4z_RK4[i] )
#Analytical solution
A_an = Fo_an*Q/k #when driven at resonance A is simply Fo*Q/k
phi = numpy.pi/2 #when driven at resonance the phase is pi/2
z_an[i] = A_an*numpy.cos(wo*t[i] - phi) #Analytical solution eq. 1
#Slicing the full response vector to get the steady state response
z_steady_V = z_V[int(80*period/timestep):]
z_an_steady = z_an[int(80*period/timestep):]
z_steady_RK4 = z_RK4[int(80*period/timestep):]
time_steady = t[int(80*period/timestep):]
z_values_V[n] = z_steady_V.copy() # error for certain value of timestep
z_values_RK4[n] = z_steady_RK4.copy() #error for certain value of timestep
z_values_an[n] = z_an_steady.copy() #error for certain value of timestep
def get_error(z, z_exact, dt):
#Returns the error with respect to the analytical solution using L1 norm
return dt * numpy.sum(numpy.abs(z-z_exact))
#NOW CALCULATE THE ERROR FOR EACH RESPECTIVE DELTA T
error_values_V = numpy.zeros_like(dt_values)
error_values_RK4 = numpy.zeros_like(dt_values)
for i, dt in enumerate(dt_values):
### call the function get_error() ###
error_values_V[i] = get_error(z_values_V[i], z_values_an[i], dt)
error_values_RK4[i] = get_error(z_values_RK4[i], z_values_an[i], dt)
plt.figure(1)
plt.title('Plot 5 Error analysis Verlet based on L1 norm', fontsize=20)
plt.tick_params(axis='both', labelsize=14)
plt.grid(True) #turn on grid lines
plt.xlabel('$\Delta t$ Verlet', fontsize=16) #x label
plt.ylabel('Error Verlet', fontsize=16) #y label
plt.loglog(dt_values, error_values_V, 'go-') #log-log plot
plt.axis('equal') #make axes scale equally;
plt.figure(2)
plt.title('Plot 6 Error analysis RK4 based on L1 norm', fontsize=20)
plt.tick_params(axis='both', labelsize=14)
plt.grid(True) #turn on grid lines
plt.xlabel('$\Delta t$ RK4', fontsize=16) #x label
plt.ylabel('Error RK4', fontsize=16) #y label
plt.loglog(dt_values, error_values_RK4, 'co-') #log-log plot
plt.axis('equal') #make axes scale equally;
"""
Explanation: Check that we have two sinusoids. The green one (the output) is the response signal of the tip (the tip trajectory in time), while the blue one (the input) is the cosinusoidal driving force that we are using to excite the tip. When the tip is excited in free air (without tip-sample interactions) the phase lag between the output and the input is 90 degrees. You can test that with the previous code by simply placing the base high enough that the tip does not interact with the sample. However, in the above plot the phase lag is less than 90 degrees. Interestingly, the phase can give relative information about the material properties of the sample. There is a well-developed theory of this in tapping-mode AFM, called phase spectroscopy. If you are interested in this topic you can read reference 1.
Also look at the above plot and note that the response amplitude is no longer the 60 nm we initially set (in this case it is near 45 nm!). This means that the oscillation amplitude has been significantly reduced by the tip-sample interactions.
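Both observations can also be checked numerically from the steady-state arrays computed above (a rough sketch that assumes clean sinusoidal signals; the sign convention of the phase estimate is worth double-checking for your own parameters):
```python
# peak amplitude of the steady-state tip oscillation (about its mean deflection)
A_resp = (TipPos_steady.max() - TipPos_steady.min()) / 2.0
# phase lag estimated from the Fourier component at the drive frequency wo
ref = numpy.exp(-1j * wo * t_steady)
resp_c = numpy.sum((TipPos_steady - TipPos_steady.mean()) * ref)
drive_c = numpy.sum(Fcos_steady * ref)
phase_lag = (numpy.angle(drive_c) - numpy.angle(resp_c)) % (2.0 * numpy.pi)
print('Amplitude: %.1f nm, phase lag: %.1f deg' % (A_resp * 1e9, numpy.degrees(phase_lag)))
```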
Besides, with the data acquired we are able to plot a force-distance curve like the one shown in Figure 3. It shows the attractive and repulsive interactions of our probe with the surface.
We have arrived to the end of the notebook. I hope you have found it interesting and helpful!
REFERENCES
1. García, Ricardo, and Ruben Perez. "Dynamic atomic force microscopy methods." Surface Science Reports 47.6 (2002): 197-301.
2. Derjaguin, B. V., V. M. Muller, and Y. P. Toporov. Journal of Colloid and Interface Science 53 (1975): 314.
3. Hertz, H. R., 1882, Ueber die Beruehrung elastischer Koerper (On Contact Between Elastic Bodies), in Gesammelte Werke (Collected Works), Vol. 1, Leipzig, Germany, 1895.
4. Van Oss, Carel J., Manoj K. Chaudhury, and Robert J. Good. "Interfacial Lifshitz-van der Waals and polar interactions in macroscopic systems." Chemical Reviews 88.6 (1988): 927-941.
5. López-Guerra, Enrique A., and Santiago D. Solares. "Modeling viscoelasticity through spring–dashpot models in intermittent-contact atomic force microscopy." Beilstein Journal of Nanotechnology 5.1 (2014): 2149-2163.
6. López-Guerra, Enrique A., and Santiago D. Solares. "El microscopio de Fuerza Atómica: Metodos y Aplicaciones." Revista UVG 28 (2013): 14-23.
OPTIONAL: Further error analysis based on the L1 norm
End of explanation
"""
|
rflamary/POT
|
notebooks/plot_gromov_barycenter.ipynb
|
mit
|
# Author: Erwan Vautier <erwan.vautier@gmail.com>
# Nicolas Courty <ncourty@irisa.fr>
#
# License: MIT License
import numpy as np
import scipy as sp
import scipy.ndimage as spi
import matplotlib.pylab as pl
from sklearn import manifold
from sklearn.decomposition import PCA
import ot
"""
Explanation: Gromov-Wasserstein Barycenter example
This example is designed to show how to use the Gromov-Wasserstein distance
computation in POT.
End of explanation
"""
def smacof_mds(C, dim, max_iter=3000, eps=1e-9):
"""
Returns an interpolated point cloud following the dissimilarity matrix C
using SMACOF multidimensional scaling (MDS) in a target space of the
specified dimension
Parameters
----------
C : ndarray, shape (ns, ns)
dissimilarity matrix
dim : int
dimension of the targeted space
max_iter : int
Maximum number of iterations of the SMACOF algorithm for a single run
eps : float
relative tolerance w.r.t. stress to declare convergence
Returns
-------
npos : ndarray, shape (R, dim)
Embedded coordinates of the interpolated point cloud (defined with
one isometry)
"""
rng = np.random.RandomState(seed=3)
mds = manifold.MDS(
dim,
max_iter=max_iter,
eps=1e-9,
dissimilarity='precomputed',
n_init=1)
pos = mds.fit(C).embedding_
nmds = manifold.MDS(
2,
max_iter=max_iter,
eps=1e-9,
dissimilarity="precomputed",
random_state=rng,
n_init=1)
npos = nmds.fit_transform(C, init=pos)
return npos
"""
Explanation: Smacof MDS
This function finds an embedding of points given a dissimilarity matrix,
which here will be the output of the barycenter algorithm.
End of explanation
"""
def im2mat(I):
"""Converts and image to matrix (one pixel per line)"""
return I.reshape((I.shape[0] * I.shape[1], I.shape[2]))
square = spi.imread('../data/square.png').astype(np.float64)[:, :, 2] / 256
cross = spi.imread('../data/cross.png').astype(np.float64)[:, :, 2] / 256
triangle = spi.imread('../data/triangle.png').astype(np.float64)[:, :, 2] / 256
star = spi.imread('../data/star.png').astype(np.float64)[:, :, 2] / 256
shapes = [square, cross, triangle, star]
S = 4
xs = [[] for i in range(S)]
for nb in range(4):
for i in range(8):
for j in range(8):
if shapes[nb][i, j] < 0.95:
xs[nb].append([j, 8 - i])
xs = np.array([np.array(xs[0]), np.array(xs[1]),
np.array(xs[2]), np.array(xs[3])])
"""
Explanation: Data preparation
The four distributions are constructed from 4 simple images
End of explanation
"""
ns = [len(xs[s]) for s in range(S)]
n_samples = 30
"""Compute all distances matrices for the four shapes"""
Cs = [sp.spatial.distance.cdist(xs[s], xs[s]) for s in range(S)]
Cs = [cs / cs.max() for cs in Cs]
ps = [ot.unif(ns[s]) for s in range(S)]
p = ot.unif(n_samples)
lambdast = [[float(i) / 3, float(3 - i) / 3] for i in [1, 2]]
Ct01 = [0 for i in range(2)]
for i in range(2):
Ct01[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[0], Cs[1]],
[ps[0], ps[1]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
Ct02 = [0 for i in range(2)]
for i in range(2):
Ct02[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[0], Cs[2]],
[ps[0], ps[2]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
Ct13 = [0 for i in range(2)]
for i in range(2):
Ct13[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[1], Cs[3]],
[ps[1], ps[3]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
Ct23 = [0 for i in range(2)]
for i in range(2):
Ct23[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[2], Cs[3]],
[ps[2], ps[3]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
"""
Explanation: Barycenter computation
End of explanation
"""
clf = PCA(n_components=2)
npos = [0, 0, 0, 0]
npos = [smacof_mds(Cs[s], 2) for s in range(S)]
npost01 = [0, 0]
npost01 = [smacof_mds(Ct01[s], 2) for s in range(2)]
npost01 = [clf.fit_transform(npost01[s]) for s in range(2)]
npost02 = [0, 0]
npost02 = [smacof_mds(Ct02[s], 2) for s in range(2)]
npost02 = [clf.fit_transform(npost02[s]) for s in range(2)]
npost13 = [0, 0]
npost13 = [smacof_mds(Ct13[s], 2) for s in range(2)]
npost13 = [clf.fit_transform(npost13[s]) for s in range(2)]
npost23 = [0, 0]
npost23 = [smacof_mds(Ct23[s], 2) for s in range(2)]
npost23 = [clf.fit_transform(npost23[s]) for s in range(2)]
fig = pl.figure(figsize=(10, 10))
ax1 = pl.subplot2grid((4, 4), (0, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax1.scatter(npos[0][:, 0], npos[0][:, 1], color='r')
ax2 = pl.subplot2grid((4, 4), (0, 1))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax2.scatter(npost01[1][:, 0], npost01[1][:, 1], color='b')
ax3 = pl.subplot2grid((4, 4), (0, 2))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax3.scatter(npost01[0][:, 0], npost01[0][:, 1], color='b')
ax4 = pl.subplot2grid((4, 4), (0, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax4.scatter(npos[1][:, 0], npos[1][:, 1], color='r')
ax5 = pl.subplot2grid((4, 4), (1, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax5.scatter(npost02[1][:, 0], npost02[1][:, 1], color='b')
ax6 = pl.subplot2grid((4, 4), (1, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax6.scatter(npost13[1][:, 0], npost13[1][:, 1], color='b')
ax7 = pl.subplot2grid((4, 4), (2, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax7.scatter(npost02[0][:, 0], npost02[0][:, 1], color='b')
ax8 = pl.subplot2grid((4, 4), (2, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax8.scatter(npost13[0][:, 0], npost13[0][:, 1], color='b')
ax9 = pl.subplot2grid((4, 4), (3, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax9.scatter(npos[2][:, 0], npos[2][:, 1], color='r')
ax10 = pl.subplot2grid((4, 4), (3, 1))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax10.scatter(npost23[1][:, 0], npost23[1][:, 1], color='b')
ax11 = pl.subplot2grid((4, 4), (3, 2))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax11.scatter(npost23[0][:, 0], npost23[0][:, 1], color='b')
ax12 = pl.subplot2grid((4, 4), (3, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax12.scatter(npos[3][:, 0], npos[3][:, 1], color='r')
"""
Explanation: Visualization
The PCA helps in getting consistency between the rotations
End of explanation
"""
|
nfaggian/notebooks
|
Introductions/.ipynb_checkpoints/Tutorial - 1.0 - Introduction-checkpoint.ipynb
|
apache-2.0
|
print("Hello World")
"""
Explanation: Welcome to the interactive Python session.
This notebook series is intended as a very high-level introduction to Python and its tools; for a more comprehensive set of material, refer to the following reference:
https://wiki.python.org/moin/BeginnersGuide/Programmers
Also, consider working through the notes provided by Robert Johansson:
https://github.com/jrjohansson/scientific-python-lectures
Q. Print hello world to the screen:
End of explanation
"""
name = "Nathan Faggian"
name
yourname = name
id(name), id(yourname)
"""
Explanation: Q. Assign a value to a variable (references, most things are mutable)
End of explanation
"""
title = "Dr " + name
title
"""
Explanation: Q. "Mutate" a string (strings are immutable, so concatenation actually builds a new string).
End of explanation
"""
container = ("Nathan", 182)
container
"""
Explanation: Some core data types: Tuples, Lists, Dictionaries and Sets.
Q. How do you form a container for information? Tuples.
End of explanation
"""
container = set(["nathan", 182])
container
182 in container
"""
Explanation: Sets are also appropriate if you don't want order to be preserved.
End of explanation
"""
table = [("Nathan", 182), ("Michael", 175), ("Sam", 190)]
table
sorted(table)
sorted(table, key=lambda x:x[1], reverse=True)
"""
Explanation: Q. Sorting tabular datasets. Lists of Tuples and the sorted built-in.
End of explanation
"""
heights = {"Nathan": 182, "Michael": 175, "Sam": 190}
heights
heights["Michael"]
"""
Explanation: Q. How do you describe a relationship or a mapping of data? You can use lists of tuples but dictionaries are better.
End of explanation
"""
import operator
sorted(heights.items(), key=operator.itemgetter(1), reverse=True)
"""
Explanation: Q. Who is the tallest person?
End of explanation
"""
year = 2015
year > 2020
(year > 2010) and (year < 2020)
(2010 < year < 2020)
"""
Explanation: Basic boolean expressions, selection and iteration.
End of explanation
"""
def is_leap_year(year):
"""
Returns if the year is a leap year.
"""
if year % 100 == 0:
return year % 400 == 0
else:
return year % 4 == 0
"""
Explanation: Basic example of selection:
Python
if (condition):
operation
else:
operation
Q. Is the year 2000 a, gregorian calendar, leap year?
Leap years:
The year is evenly divisible by 4
If the year can be evenly divided by 100, it is NOT a leap year, unless the year is also evenly divisible by 400.
End of explanation
"""
years = [2010, 2011, 2016, 2020]
for year in years:
if is_leap_year(year):
print("{} is a leap year!".format(year))
"""
Explanation: Basic example of iteration:
Python
for data in iterable:
operation
End of explanation
"""
def greet(first_name, last_name):
"""Returns a string"""
return "Hello {} {}.".format(first_name, last_name)
greet("Nathan","Faggian")
"""
Explanation: Functions and Classes
Q. Write a greeting function that takes two arguments and returns a string.
End of explanation
"""
class Wallet(object):
"""
A Wallet contains integers of dollars and cents.
"""
def __init__(self, dollars, cents):
self.dollars = dollars
self.cents = cents
def __repr__(self):
return "Wallet({},{})".format(self.dollars, self.cents)
def __del__(self):
pass
pouch = Wallet(1000,0)
print("A wallet: {}, with {} dollars and {} cents.".format(pouch,
pouch.dollars,
pouch.cents))
"""
Explanation: Q. Write a basic class that encapsulates some data.
End of explanation
"""
import this
"""
Explanation: Tip: Python allows you to use both functional and imperative styles and lets the user determine which approach is best.
What makes Python different from other languages?
End of explanation
"""
data = [1, 2, 3, 4]
sum(data)
"""
Explanation: Q. Form an array of values and sum then together using the sum built-in.
End of explanation
"""
sum([x for x in data if x > 2])
"""
Explanation: For simple things the R language looks similar, especially math.
R
data <- c(1,2,3,4)
sum(data)
Q. Sum an array of values only if the values are greater than 2.
End of explanation
"""
total = 0
for x in data:
if x > 2:
total += x
total
"""
Explanation: For loops are also fine but more verbose:
End of explanation
"""
[x for x in data if x > 2]
"""
Explanation: Influences from functional programming make code easier to read and understand - list comprehensions.
End of explanation
"""
{k:v for k,v in [("a", 2), ("b",4)] if v > 2}
"""
Explanation: There are also dictionary comprehensions.
End of explanation
"""
with open('test.file', 'w') as file_handle:
file_handle.write("Hello")
"""
Explanation: Context managers can take care of opening and closing files.
End of explanation
"""
def fib():
a, b = 0, 1
while True:
yield b
a, b = b, a + b
fib_generator = fib()
value = next(fib_generator)
value
"""
Explanation: Q. An infinite fibonacci sequence. A. Using generators.
End of explanation
"""
|
M-R-Houghton/euroscipy_2015
|
scikit_image/lectures/example_pano.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, transform
from skimage.color import rgb2gray
from skdemo import imshow_all
ic = io.ImageCollection('../images/pano/DFM_*')
"""
Explanation: Note: This example has been significantly expanded and enhanced. The new, recommended version is located here. We retain this version intact as it was the exact example used in the scikit-image paper.
Panorama stitching
End of explanation
"""
imshow_all(ic[0], ic[1])
"""
Explanation: The ImageCollection class provides an easy way of
loading and representing multiple images. Images are not
read from disk until accessed.
End of explanation
"""
image0 = rgb2gray(ic[0][:, 500:500+1987, :])
image1 = rgb2gray(ic[1][:, 500:500+1987, :])
image0 = transform.rescale(image0, 0.25)
image1 = transform.rescale(image1, 0.25)
imshow_all(image0, image1)
"""
Explanation: Credit: Photographs taken in Petra, Jordan by François Malan<br/>
License: CC-BY
End of explanation
"""
from skimage.feature import ORB, match_descriptors
orb = ORB(n_keypoints=1000, fast_threshold=0.05)
orb.detect_and_extract(image0)
keypoints1 = orb.keypoints
descriptors1 = orb.descriptors
orb.detect_and_extract(image1)
keypoints2 = orb.keypoints
descriptors2 = orb.descriptors
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
from skimage.feature import plot_matches
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
plot_matches(ax, image0, image1, keypoints1, keypoints2, matches12)
ax.axis('off');
"""
Explanation: For this demo, we estimate a projective transformation
that relates the two images. Since the outer
parts of these photographs do not conform well to such
a model, we select only the central parts. To
further speed up the demonstration, images are downscaled
to 25% of their original size.
1. Feature detection and matching
"Oriented FAST and rotated BRIEF" features are detected in both images:
End of explanation
"""
from skimage.transform import ProjectiveTransform
from skimage.measure import ransac
from skimage.feature import plot_matches
# Select keypoints from the source (image to be registered)
# and target (reference image)
src = keypoints2[matches12[:, 1]][:, ::-1]
dst = keypoints1[matches12[:, 0]][:, ::-1]
model_robust, inliers = ransac((src, dst), ProjectiveTransform,
min_samples=4, residual_threshold=2)
fig, ax = plt.subplots(1, 1, figsize=(15, 15))
plot_matches(ax, image0, image1, keypoints1, keypoints2, matches12[inliers])
ax.axis('off');
"""
Explanation: Each feature yields a binary descriptor; those are used to find
the putative matches shown. Many false matches are observed.
2. Transform estimation
To filter matches, we apply RANdom SAMple Consensus (RANSAC),
a common method of rejecting outliers. This iterative process
estimates transformation models based on
randomly chosen subsets of matches, finally selecting the
model which corresponds best with the majority of matches.
End of explanation
"""
from skimage.transform import SimilarityTransform
r, c = image1.shape[:2]
# Note that transformations take coordinates in (x, y) format,
# not (row, column), in order to be consistent with most literature
corners = np.array([[0, 0],
[0, r],
[c, 0],
[c, r]])
# Warp the image corners to their new positions
warped_corners = model_robust(corners)
# Find the extents of both the reference image and the warped
# target image
all_corners = np.vstack((warped_corners, corners))
corner_min = np.min(all_corners, axis=0)
corner_max = np.max(all_corners, axis=0)
output_shape = (corner_max - corner_min)
output_shape = np.ceil(output_shape[::-1])
"""
Explanation: Note how most of the false matches have now been rejected.
3. Warping
Next, we want to produce the panorama itself. The first
step is to find the shape of the output image by considering
the extents of all warped images.
End of explanation
"""
from skimage.color import gray2rgb
from skimage.exposure import rescale_intensity
from skimage.transform import warp
offset = SimilarityTransform(translation=-corner_min)
image0_ = warp(image0, offset.inverse,
output_shape=output_shape, cval=-1)
image1_ = warp(image1, (model_robust + offset).inverse,
output_shape=output_shape, cval=-1)
"""
Explanation: Warp the images according to the estimated transformation model.
Values outside the input images are set to -1 to distinguish the
"background".
A shift is added to make sure that both images are visible in their
entirety. Note that warp takes the inverse mapping
as an input.
End of explanation
"""
def add_alpha(image, background=-1):
"""Add an alpha layer to the image.
The alpha layer is set to 1 for foreground and 0 for background.
"""
return np.dstack((gray2rgb(image), (image != background)))
image0_alpha = add_alpha(image0_)
image1_alpha = add_alpha(image1_)
merged = (image0_alpha + image1_alpha)
alpha = merged[..., 3]
# The summed alpha layers give us an indication of how many
# images were combined to make up each pixel. Divide by the
# number of images to get an average.
merged /= np.maximum(alpha, 1)[..., np.newaxis]
merged = merged[..., :3]
imshow_all(image0_alpha, image1_alpha, merged)
"""
Explanation: An alpha channel is now added to the warped images
before they are merged together:
End of explanation
"""
plt.imsave('/tmp/frame0.tif', image0_alpha)
plt.imsave('/tmp/frame1.tif', image1_alpha)
%%bash
enblend /tmp/frame*.tif -o /tmp/pano.tif
pano = io.imread('/tmp/pano.tif')
plt.figure(figsize=(10, 10))
plt.imshow(pano)
plt.axis('off');
"""
Explanation: Note that, while the columns are well aligned, the color
intensity is not well matched between images.
4. Blending
To blend images smoothly we make use of the open source package
Enblend, which in turn employs multi-resolution splines and
Laplacian pyramids [1, 2].
[1] P. Burt and E. Adelson. "A Multiresolution Spline With Application to Image Mosaics". ACM Transactions on Graphics, Vol. 2, No. 4, October 1983. Pg. 217-236.
[2] P. Burt and E. Adelson. "The Laplacian Pyramid as a Compact Image Code". IEEE Transactions on Communications, April 1983.
End of explanation
"""
%reload_ext load_style
%load_style ../themes/tutorial.css
"""
Explanation: <div style="height: 400px;"></div>
End of explanation
"""
|
statsmodels/statsmodels.github.io
|
v0.13.0/examples/notebooks/generated/statespace_structural_harvey_jaeger.ipynb
|
bsd-3-clause
|
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from IPython.display import display, Latex
"""
Explanation: Detrending, Stylized Facts and the Business Cycle
In an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as "structural time series models") to derive stylized facts of the business cycle.
Their paper begins:
"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step
in macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic
properties of the data and (2) present meaningful information."
In particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.
statsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.
End of explanation
"""
# Datasets
from pandas_datareader.data import DataReader
# Get the raw data
start = '1948-01'
end = '2008-01'
us_gnp = DataReader('GNPC96', 'fred', start=start, end=end)
us_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)
us_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS').mean()
recessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS').last().values[:,0]
# Construct the dataframe
dta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)
dta.columns = ['US GNP','US Prices','US monetary base']
dta.index.freq = dta.index.inferred_freq
dates = dta.index._mpl_repr()
"""
Explanation: Unobserved Components
The unobserved components model available in statsmodels can be written as:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{\gamma_{t}}_{\text{seasonal}} + \underbrace{c_{t}}_{\text{cycle}} + \sum_{j=1}^k \underbrace{\beta_j x_{jt}}_{\text{explanatory}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
see Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation.
Trend
The trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.
$$
\begin{align}
\underbrace{\mu_{t+1}}_{\text{level}} & = \mu_t + \nu_t + \eta_{t+1} \qquad & \eta_{t+1} \sim N(0, \sigma_\eta^2) \\
\underbrace{\nu_{t+1}}_{\text{trend}} & = \nu_t + \zeta_{t+1} \qquad & \zeta_{t+1} \sim N(0, \sigma_\zeta^2)
\end{align}
$$
where the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.
For both elements (level and trend), we can consider models in which:
The element is included vs excluded (if the trend is included, there must also be a level included).
The element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)
The only additional parameters to be estimated via MLE are the variances of any included stochastic components.
This leads to the following specifications:
| | Level | Trend | Stochastic Level | Stochastic Trend |
|----------------------------------------------------------------------|-------|-------|------------------|------------------|
| Constant | ✓ | | | |
| Local Level <br /> (random walk) | ✓ | | ✓ | |
| Deterministic trend | ✓ | ✓ | | |
| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |
| Local linear trend | ✓ | ✓ | ✓ | ✓ |
| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |
Seasonal
The seasonal component is written as:
<span>$$
\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \qquad \omega_t \sim N(0, \sigma_\omega^2)
$$</span>
The periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.
The variants of this model are:
The periodicity s
Whether or not to make the seasonal effects stochastic.
If the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).
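For instance, a quarterly stochastic seasonal on top of a local level could be specified as follows (an illustrative sketch on synthetic data; the series analyzed in this notebook are already seasonally adjusted, so no seasonal term is used below):
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# toy quarterly series: a level of 10 plus a repeating seasonal pattern plus noise
y_toy = 10 + np.tile([1.0, -0.5, 0.3, -0.8], 25) + rng.normal(scale=0.1, size=100)
seasonal_mod = sm.tsa.UnobservedComponents(
    y_toy, level='local level', seasonal=4, stochastic_seasonal=True)
seasonal_res = seasonal_mod.fit(disp=False)
print(seasonal_res.summary())
```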
Cycle
The cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between "1.5 and 12 years" (see Durbin and Koopman).
The cycle is written as:
<span>$$
\begin{align}
c_{t+1} & = c_t \cos \lambda_c + c_t^* \sin \lambda_c + \tilde \omega_t \qquad & \tilde \omega_t \sim N(0, \sigma_{\tilde \omega}^2) \\
c_{t+1}^* & = -c_t \sin \lambda_c + c_t^* \cos \lambda_c + \tilde \omega_t^* & \tilde \omega_t^* \sim N(0, \sigma_{\tilde \omega}^2)
\end{align}
$$</span>
The parameter $\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cyclical effect is stochastic, then there is one more parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).
Irregular
The irregular component is assumed to be a white noise error term. Its variance is a parameter to be estimated by MLE; i.e.
$$
\varepsilon_t \sim N(0, \sigma_\varepsilon^2)
$$
In some cases, we may want to generalize the irregular component to allow for autoregressive effects:
$$
\varepsilon_t = \rho(L) \varepsilon_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2)
$$
In this case, the autoregressive parameters would also be estimated via MLE.
Regression effects
We may want to allow for explanatory variables by including additional terms
<span>$$
\sum_{j=1}^k \beta_j x_{jt}
$$</span>
or for intervention effects by including
<span>$$
\begin{align}
\delta w_t \qquad \text{where} \qquad w_t & = 0, \qquad t < \tau, \\
& = 1, \qquad t \ge \tau
\end{align}
$$</span>
These additional parameters could be estimated via MLE or by including them as components of the state space formulation.
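For example, regression effects can be passed in through the exog argument, with the $\beta_j$ estimated alongside the variance parameters (again an illustrative sketch on synthetic data, not part of the original analysis):
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = pd.DataFrame({'x1': rng.normal(size=200)})            # hypothetical regressor
y_toy = 0.5 * X['x1'] + np.cumsum(rng.normal(size=200))   # random-walk level + regression effect
reg_mod = sm.tsa.UnobservedComponents(y_toy, 'local level', exog=X)
reg_res = reg_mod.fit(disp=False)
print(reg_res.summary())  # the coefficient on x1 appears alongside the variance estimates
```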
Data
Following Harvey and Jaeger, we will consider the following time series:
US real GNP, "output", (GNPC96)
US GNP implicit price deflator, "prices", (GNPDEF)
US monetary base, "money", (AMBSL)
The time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.
All data series considered here are taken from Federal Reserve Economic Data (FRED). Conveniently, the Python library Pandas has the ability to download data from FRED directly.
End of explanation
"""
# Plot the data
ax = dta.plot(figsize=(13,3))
ylim = ax.get_ylim()
ax.xaxis.grid()
ax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);
"""
Explanation: To get a sense of these three variables over the timeframe, we can plot them:
End of explanation
"""
# Model specifications
# Unrestricted model, using string specification
unrestricted_model = {
'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Unrestricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# local linear trend model with a stochastic damped cycle:
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
# The restricted model forces a smooth trend
restricted_model = {
'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Restricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# smooth trend model with a stochastic damped cycle. Notice
# that the difference from the local linear trend model is that
# `stochastic_level=False` here.
# restricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
"""
Explanation: Model
Since the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{c_{t}}_{\text{cycle}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
The irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:
Local linear trend (the "unrestricted" model)
Smooth trend (the "restricted" model, since we are forcing $\sigma_\eta = 0$)
Below, we construct kwargs dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify the components directly, as in the table above. The other way is to use string names which map to various specifications.
End of explanation
"""
# Output
output_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)
output_res = output_mod.fit(method='powell', disp=False)
# Prices
prices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)
prices_res = prices_mod.fit(method='powell', disp=False)
prices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)
prices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)
# Money
money_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **unrestricted_model)
money_res = money_mod.fit(method='powell', disp=False)
money_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)
money_restricted_res = money_restricted_mod.fit(method='powell', disp=False)
"""
Explanation: We now fit the following models:
Output, unrestricted model
Prices, unrestricted model
Prices, restricted model
Money, unrestricted model
Money, restricted model
End of explanation
"""
print(output_res.summary())
"""
Explanation: Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.
End of explanation
"""
fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));
"""
Explanation: For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.
The plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.
End of explanation
"""
# Create Table I
table_i = np.zeros((5,6))
start = dta.index[0]
end = dta.index[-1]
time_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)
models = [
('US GNP', time_range, 'None'),
('US Prices', time_range, 'None'),
('US Prices', time_range, r'$\sigma_\eta^2 = 0$'),
('US monetary base', time_range, 'None'),
('US monetary base', time_range, r'$\sigma_\eta^2 = 0$'),
]
index = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])
parameter_symbols = [
r'$\sigma_\zeta^2$', r'$\sigma_\eta^2$', r'$\sigma_\kappa^2$', r'$\rho$',
r'$2 \pi / \lambda_c$', r'$\sigma_\varepsilon^2$',
]
i = 0
for res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):
if res.model.stochastic_level:
(sigma_irregular, sigma_level, sigma_trend,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
else:
(sigma_irregular, sigma_level,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
sigma_trend = np.nan
period_cycle = 2 * np.pi / frequency_cycle
table_i[i, :] = [
sigma_level*1e7, sigma_trend*1e7,
sigma_cycle*1e7, damping_cycle, period_cycle,
sigma_irregular*1e7
]
i += 1
pd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')
table_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)
table_i
"""
Explanation: Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but different in the particulars from, the values from their table.
End of explanation
"""
|
opencobra/cobrapy
|
documentation_builder/gapfilling.ipynb
|
gpl-2.0
|
from cobra.io import load_model
from cobra.flux_analysis import gapfill
model = load_model("iYS1720")
"""
Explanation: Gapfilling
Model gap filling is the task of figuring out which reactions have to be added to a model to make it feasible. Several such algorithms have been reported, e.g. Kumar et al. 2009 and Reed et al. 2006. Cobrapy has a gap filling implementation that is very similar to that of Reed et al., where we use a mixed-integer linear program to figure out the smallest number of reactions that need to be added from a user-defined collection of reactions (i.e. a universal model). Briefly, the problem that we try to solve is
Minimize: $$\sum_i c_i * z_i$$
subject to
$$Sv = 0$$
$$v^\star \geq t$$
$$l_i\leq v_i \leq u_i$$
$$v_i = 0 \textrm{ if } z_i = 0$$
Where $l_i$ and $u_i$ are lower and upper bounds for reaction $i$, $z_i$ is an indicator variable that is zero if the reaction is not used and one otherwise, $c_i$ is a user-defined cost associated with using the $i$th reaction, $v^\star$ is the flux of the objective and $t$ is a lower bound for that objective. To demonstrate, let's take a model and remove some essential reactions from it.
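A note on the cost vector $c$: in cobrapy these per-reaction costs can be supplied through the penalties argument of gapfill. The keyword and the dictionary keys below are stated from memory of the cobrapy API, so treat them as an assumption and check help(gapfill) for your installed version. A minimal sketch:
```python
# Hedged sketch only -- `model` and `universal` are built in the cells below,
# and the `penalties` keyword/keys are assumptions to verify locally.
solution = gapfill(
    model,
    universal,
    penalties={"universal": 1, "exchange": 100, "demand": 1},
)
```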
End of explanation
"""
import cobra

universal = cobra.Model("universal_reactions")
for i in [i.id for i in model.metabolites.f6p_c.reactions]:
reaction = model.reactions.get_by_id(i)
universal.add_reaction(reaction.copy())
model.remove_reactions([reaction])
"""
Explanation: In this model D-Fructose-6-phosphate is an essential metabolite. We will remove all the reactions using it, and add them to a separate model.
End of explanation
"""
model.optimize().objective_value
"""
Explanation: Now, because of these gaps, the model won't grow.
End of explanation
"""
solution = gapfill(model, universal, demand_reactions=False)
for reaction in solution[0]:
print(reaction.id)
"""
Explanation: We can use the model's original objective, growth, to figure out which of the removed reactions are required for the model to be feasible again. This is very similar to making the 'no-growth but growth (NGG)' predictions of Kumar et al. 2009.
End of explanation
"""
result = gapfill(model, universal, demand_reactions=False, iterations=4)
for i, entries in enumerate(result):
print("---- Run %d ----" % (i + 1))
for e in entries:
print(e.id)
"""
Explanation: We can obtain multiple possible reaction sets by having the algorithm go through multiple iterations.
End of explanation
"""
with model:
model.objective = model.add_boundary(model.metabolites.f6p_c, type='demand')
solution = gapfill(model, universal)
for reaction in solution[0]:
print(reaction.id)
"""
Explanation: We can also instead of using the original objective, specify a given metabolite that we want the model to be able to produce.
End of explanation
"""
|
therealAJ/python-sandbox
|
data-science/learning/ud2/Part 1 Exercise Solutions/Ecommerce Purchases Exercise .ipynb
|
gpl-3.0
|
import pandas as pd
ecom = pd.read_csv('Ecommerce Purchases')
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>
Ecommerce Purchases Exercise
In this Exercise you will be given some Fake Data about some purchases done through Amazon! Just go ahead and follow the directions and try your best to answer the questions and complete the tasks. Feel free to reference the solutions. Most of the tasks can be solved in different ways. For the most part, the questions get progressively harder.
Please excuse anything that doesn't make "Real-World" sense in the dataframe, all the data is fake and made-up.
Also note that all of these questions can be answered with one line of code.
Import pandas and read in the Ecommerce Purchases csv file and set it to a DataFrame called ecom.
End of explanation
"""
ecom.head()
"""
Explanation: Check the head of the DataFrame.
End of explanation
"""
ecom.info()
"""
Explanation: How many rows and columns are there?
End of explanation
"""
ecom['Purchase Price'].mean()
"""
Explanation: What is the average Purchase Price?
End of explanation
"""
ecom['Purchase Price'].max()
ecom['Purchase Price'].min()
"""
Explanation: What were the highest and lowest purchase prices?
End of explanation
"""
ecom[ecom['Language'] == 'en'].count()
"""
Explanation: How many people have English 'en' as their Language of choice on the website?
End of explanation
"""
ecom[ecom['Job'] =='Lawyer'].info()
"""
Explanation: How many people have the job title of "Lawyer" ?
End of explanation
"""
ecom['AM or PM'].value_counts()
"""
Explanation: How many people made the purchase during the AM and how many people made the purchase during PM ?
(Hint: Check out value_counts() )
End of explanation
"""
ecom['Job'].value_counts().head(5)
"""
Explanation: What are the 5 most common Job Titles?
End of explanation
"""
ecom[ecom['Lot']=='90 WT']['Purchase Price']
"""
Explanation: Someone made a purchase that came from Lot: "90 WT" , what was the Purchase Price for this transaction?
End of explanation
"""
ecom[ecom['Credit Card']==4926535242672853]['Email']
"""
Explanation: What is the email of the person with the following Credit Card Number: 4926535242672853
End of explanation
"""
ecom[(ecom['Purchase Price']>95) & (ecom['CC Provider']=='American Express')].count()
"""
Explanation: How many people have American Express as their Credit Card Provider and made a purchase above $95 ?
End of explanation
"""
def cc_split(card_year):
splited = card_year.split('/')
if(splited[1] == '25'):
return True
else:
return False
sum(ecom['CC Exp Date'].apply(cc_split))
"""
Explanation: Hard: How many people have a credit card that expires in 2025?
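An alternative, more compact way to get the same count (same column, just an inline lambda instead of the named helper above):
```python
sum(ecom['CC Exp Date'].apply(lambda exp: exp.split('/')[1] == '25'))
```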
End of explanation
"""
def email_split(email):
email_arr = email.split('@')
return email_arr[1]
ecom['Email'].apply(email_split).value_counts().head(5)
"""
Explanation: Hard: What are the top 5 most popular email providers/hosts (e.g. gmail.com, yahoo.com, etc...)
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_evoked_delayed_ssp.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Create evoked objects in delayed SSP mode
This script shows how to apply SSP projectors delayed, that is,
at the evoked stage. This is particularly useful to support decisions
related to the trade-off between denoising and preserving signal.
We first will extract Epochs and create evoked objects
with the required settings for delayed SSP application.
Then we will explore the impact of the particular SSP projectors
on the evoked data.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = io.Raw(raw_fname, preload=True)
raw.filter(1, 40, method='iir')
events = mne.read_events(event_fname)
# pick magnetometer channels
picks = mne.pick_types(raw.info, meg='mag', stim=False, eog=True,
include=[], exclude='bads')
# If we suspend SSP projection at the epochs stage we might reject
# more epochs than necessary. To deal with this we set proj to `delayed`
# while passing reject parameters. Each epoch will then be projected before
# performing peak-to-peak amplitude rejection. If it survives the rejection
# procedure the unprojected raw epoch will be employed instead.
# As a consequence, the point in time at which the projection is applied will
# not have impact on the final results.
# We will make use of this function to prepare for interactively selecting
# projections at the evoked stage.
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=dict(mag=4e-12),
proj='delayed')
evoked = epochs.average() # average epochs and get an Evoked dataset.
"""
Explanation: Set parameters
End of explanation
"""
# Here we expose the details of how to apply SSPs reversibly
title = 'Incremental SSP application'
# let's first move the proj list to another location
projs, evoked.info['projs'] = evoked.info['projs'], []
fig, axes = plt.subplots(2, 2) # create 4 subplots for our four vectors
# As the bulk of projectors was extracted from the same source, we can simply
# iterate over our collection of projs and add them step by step to see how
# the signals change as a function of the SSPs applied. As this operation
# can't be undone we will operate on copies of the original evoked object to
# keep things reversible.
for proj, ax in zip(projs, axes.flatten()):
evoked.add_proj(proj) # add projection vectors loop by loop.
evoked.copy().apply_proj().plot(axes=ax) # apply on a copy of evoked
ax.set_title('+ %s' % proj['desc']) # extract description.
plt.suptitle(title)
mne.viz.tight_layout()
# We also could have easily visualized the impact of single projection vectors
# by deleting the vector directly after visualizing the changes.
# E.g. had we appended the following line to our loop:
# `evoked.del_proj(-1)`
# Often, it is desirable to interactively explore data. To make this more
# convenient we can make use of the 'interactive' option. This will open a
# check box that allows us to reversibly select projection vectors. Any
# modification of the selection will immediately cause the figure to update.
evoked.plot(proj='interactive')
# Hint: the same works with evoked.plot_topomap
"""
Explanation: Interactively select / deselect the SSP projection vectors
End of explanation
"""
|
bayesimpact/bob-emploi
|
data_analysis/notebooks/datasets/rome/update_from_v339_to_v341.ipynb
|
gpl-3.0
|
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '339'
NEW_VERSION = '341'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
"""
Explanation: Author: Pascal, pascal@bayesimpact.org
Date: 2019-10-23
ROME update from v339 to v341
In October 2019 a new version of the ROME was released. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires having the two versions of the ROME in your data/rome/csv folder, which happens only just before we switch to v341. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
"""
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
"""
Explanation: First let's check if there are new or deleted files (only matching by file names).
End of explanation
"""
list(deleted_files)
"""
Explanation: Let's find this deleted file:
End of explanation
"""
en_tete_regroupement = pd.read_csv(list(deleted_files)[0])
display(en_tete_regroupement.head())
print(f'{len(en_tete_regroupement)} rows')
"""
Explanation: OK, not too bad: this is a file we've never used. We'll still check its content to make sure:
End of explanation
"""
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
def read_csv(filename):
try:
return pd.read_csv(filename)
except pd.errors.ParserError:
display(f'While parsing: {filename}')
raise
rome_data = [VersionedDataset(
basename=path.basename(f),
old=read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
"""
Explanation: It looks like a header for some kinds of activities. Not that big a deal.
Now let's set up a dataset that, for each table, links both the old and the new file together.
End of explanation
"""
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
"""
Explanation: Let's make sure the structure hasn't changed:
End of explanation
"""
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
print(f'New columns: {set(jobs.old.columns) - set(jobs.new.columns)}')
print(f'Old columns: {set(jobs.new.columns) - set(jobs.old.columns)}')
"""
Explanation: OK, let's check what's new in there:
End of explanation
"""
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(
diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(
-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(
same_row_count_files, len(rome_data)))
"""
Explanation: Ouch, it seems they have decided to rename one column. Lucky us we never used it.
Now let's see for each file if there are more or less rows.
End of explanation
"""
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
"""
Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
End of explanation
"""
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
"""
Explanation: Alright, so the only change seems to be 1 new job added. Let's take a look (only showing interesting fields):
End of explanation
"""
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
"""
Explanation: That's indeed a new job related to digitalization of the construction industry.
OK, let's check at the changes in items:
End of explanation
"""
items.new[items.new.code_ogr.isin(new_items)].head()
"""
Explanation: As anticipated it is a very minor change (hard to see it visually): 17 new items have been created. Let's have a look at them.
End of explanation
"""
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)]
new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)]
old = old_links_on_stable_items[['code_rome', 'code_ogr']]
new = new_links_on_stable_items[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
"""
Explanation: The new ones seem legit to me.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
End of explanation
"""
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
display(links_merged[links_merged._diff == 'removed'].dropna().head(5))
links_merged[links_merged._diff == 'added'].dropna().head(5)
"""
Explanation: So in addition to the added items, there are a few fixes. Let's have a look at them:
End of explanation
"""
|
walkon302/CDIPS_Recommender
|
notebook_versions/Recommendor_Method_Nathans_v1.ipynb
|
apache-2.0
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline
"""
Explanation: Recommendation Method 1: Most similar items to user's previous views
Algorithm
Offline:
1. For each item, calculate features on trained neural network $ f_j $
2. For each user, look up the previous views and average their features together: $ f_i = \frac{1}{|V_i|}\sum_j f_j I(i,j) $, where $I(i,j)$ indicates whether user $i$ viewed item $j$ and $|V_i|$ is the number of views
3. Store the features of the 'typical' item viewed by this user.
4. Calculate similarity of all items to user's 'typical item', store as a recommend list
Online:
1. User comes to website
2. Recommend the top 20 items from his recommend list.
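The two offline steps boil down to an average feature vector per user plus a cosine-similarity ranking over candidate items. A compact sketch with illustrative names (the full implementation below additionally restricts candidates to the bought item's category and computes ranks instead of a top-N list):
```python
import numpy as np

def cosine_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(view_feature_list, item_features_by_spu, top_n=20):
    """view_feature_list: 1-D feature arrays of the user's previous views.
    item_features_by_spu: dict mapping spu_id -> 1-D feature array of a candidate item."""
    typical_item = np.mean(view_feature_list, axis=0)             # offline steps 2-3
    scores = {spu: cosine_sim(typical_item, feats)                # offline step 4
              for spu, feats in item_features_by_spu.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]   # online step 2
```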
End of explanation
"""
# load smaller user behavior dataset
user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views.pkl')
user_profile.head()
# load item features (indexed by spu)
spu_fea = pd.read_pickle("../data_nn_features/spu_fea.pkl") #takes forever to load
len(user_profile)
users = user_profile.user_id.unique()
len(users)
len(user_profile.buy_sn.unique())
# sample 100 users
users_sample = np.random.choice(users,size=100)
user_profile_sample = user_profile.loc[user_profile.user_id.isin(users_sample),]
len(user_profile_sample)
users_sample
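# A minimal, self-contained sketch of the recommendation idea described in the Explanation
# above (illustrative only: array shapes and names are assumptions, not the notebook's data):
# average the feature vectors of a user's previously viewed items, then rank candidate
# items by cosine similarity to that average and keep the top k.
def recommend_top_k(viewed_features, candidate_features, k=20):
    """viewed_features: (n_views, n_features); candidate_features: (n_candidates, n_features)."""
    typical = viewed_features.mean(axis=0)
    sims = candidate_features @ typical
    sims = sims / (np.linalg.norm(candidate_features, axis=1) * np.linalg.norm(typical) + 1e-12)
    return np.argsort(-sims)[:k]  # indices of the k most similar candidates
# e.g. recommend_top_k(np.random.rand(5, 128), np.random.rand(100, 128), k=20)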
# make a function for each user??
user_buy_dict = {}
average_viewed_features_dict = {}
# loop through users
for user_id in users_sample:
# get his trajectory
trajectory = user_profile_sample.loc[user_profile_sample.user_id==user_id,]
    # save the bought item id (spu)
user_buy_dict[user_id] = trajectory.buy_spu.as_matrix()[0]
# save buy category
# remove buy item
trajectory = trajectory.loc[trajectory.view_spu!=user_buy_dict[user_id]]
n_features = len(spu_fea.features.as_matrix()[0])
n_views = len(trajectory)
# get previous views
features_items = np.empty((n_features,n_views))
for vi,view_spu in enumerate(trajectory.view_spu):
# load features for image
if view_spu in spu_fea.spu_id.values:
features_items[:,vi] = spu_fea.loc[spu_fea.spu_id==view_spu,'features'].as_matrix()[0] # return a 1-D np array
        else:
            # this shouldn't happen: every item is expected to have precomputed features
            # (a fallback of filling with np.ones(n_features) would be unreachable after the raise)
            raise ValueError('all items should have features')
# average features
average_viewed_features_dict[user_id] = np.mean(features_items,axis=1)
#average_viewed_features_dict
def dot(K, L):
if len(K) != len(L): return 0
return sum(i[0]*i[1] for i in zip(K, L))
def similarity(item_1, item_2):
return dot(item_1, item_2) / np.sqrt(dot(item_1, item_1)*dot(item_2, item_2))
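# Note: similarity() above is cosine similarity computed in pure Python; a vectorized
# numpy equivalent (just a sketch, not used by the rest of this notebook) would be:
def similarity_np(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))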
# for each user
user_buy_ranks = np.empty(len(users_sample))
no_ranks = np.empty(len(users_sample))
for ui,user_id in enumerate(users_sample):
print(ui)
# load average trajectory
average_features = average_viewed_features_dict[user_id]
# get bought item
buy_spu = user_buy_dict[user_id]
    # find the buy item's category
buy_sn = user_profile_sample.loc[user_profile_sample['buy_spu']==buy_spu,'buy_sn'].as_matrix()[0] # should assert they are all the same
# find all other items in the category
spus_in_category_b = user_profile.loc[user_profile.buy_sn==buy_sn,'buy_spu'].unique()
spus_in_category_v = user_profile.loc[user_profile.view_sn==buy_sn,'view_spu'].unique()
spus_in_category = list(spus_in_category_b)+list(spus_in_category_v)
assert buy_spu in spus_in_category
# does it make sense to pre-calculate this matrix of similarities (average user similarity for each bought item) #
# calculate similarity with all candidate in buy items
item_sim_in_category = pd.DataFrame(data = spus_in_category,columns=['spu'])
for spu in spus_in_category:
# load features for image
features_other = spu_fea.loc[spu_fea.spu_id==spu,'features'].as_matrix()[0] # return a 1-D np array
item_sim_in_category.loc[item_sim_in_category['spu']==spu,'similarity']= similarity(average_features,features_other)
item_sim_in_category['rank']=item_sim_in_category['similarity'].rank()
user_buy_ranks[ui]=item_sim_in_category.loc[item_sim_in_category.spu==buy_spu,'rank'].as_matrix()[0]
no_ranks[ui]=item_sim_in_category['rank'].max()
item_sim_in_category.sort_values(by='rank')
user_buy_ranks[ui]
item_sim_in_category['rank'].max()
item_sim_in_category['rank'].unique()
# plt.subplot(1,3,1)
# plt.scatter(np.arange(len(users_sample)),user_buy_ranks)
# plt.subplot(1,3,2)
# plt.scatter(np.arange(len(users_sample)),no_ranks)
plt.subplot(1,1,1)
plt.scatter(np.arange(len(users_sample)),user_buy_ranks/no_ranks)
sns.despine()
plt.axhline(y=0.5,label='chance',c='k',linestyle='--')
plt.axhline(y=np.mean(user_buy_ranks/no_ranks),label='mean')
plt.legend()
plt.xlabel('user (chosen randomly)')
plt.ylabel('ratio: buy rank / items in buy category')
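# A small sketch of the score computed above (assumed helper name, illustrative only):
# the bought item's similarity rank divided by the number of candidate items in its
# category, so values near 1 mean the bought item ranked among the most similar
# candidates and ~0.5 is roughly chance level.
def normalized_buy_rank(similarities, buy_index):
    """similarities: 1-D array-like of similarity scores for all candidates in the buy category."""
    ranks = pd.Series(similarities).rank()  # ascending: the most similar candidate gets the highest rank
    return float(ranks.iloc[buy_index] / ranks.max())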
"""
Explanation: FUNCTIONS HERE
Evaluation
How well does Nathan's algorithm work?
get a small set of trajectories. Maybe 100 people.
separate out the buy items.
remove the items that are on the same day.
calculate typical images for day 1.
make recommend list.
calculate score.
End of explanation
"""
%%bash
jupyter nbconvert --to slides Recommendor_Method_Nathans.ipynb && mv Recommendor_Method_Nathans.slides.html ../notebook_slides/Recommendor_Method_Nathans_v1.slides.html
jupyter nbconvert --to html Recommendor_Method_Nathans.ipynb && mv Recommendor_Method_Nathans.html ../notebook_htmls/Recommendor_Method_Nathans_v1.html
cp Recommendor_Method_Nathans.ipynb ../notebook_versions/Recommendor_Method_Nathans_v1.ipynb
"""
Explanation: Generate Random Recommender
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/csir-csiro/cmip6/models/vresm-1-0/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: VRESM-1-0
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
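# Illustrative example only (hypothetical name and e-mail address, not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")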
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
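# Illustrative example only (a value taken from the valid choices above, not the actual model setting):
# DOC.set_value("water")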
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
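# Illustrative example only (not the actual model setting):
# DOC.set_value(True)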
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
ad960009/dist-keras
|
examples/mnist_analysis.ipynb
|
gpl-3.0
|
!(date +%d\ %B\ %G)
"""
Explanation: MNIST Analysis with Distributed Keras
Joeri Hermans (Technical Student, IT-DB-SAS, CERN)
Department of Knowledge Engineering
Maastricht University, The Netherlands
End of explanation
"""
%matplotlib inline
import numpy as np
from keras.optimizers import *
from keras.models import Sequential
from keras.layers.core import *
from keras.layers.convolutional import *
from pyspark import SparkContext
from pyspark import SparkConf
from matplotlib import pyplot as plt
from pyspark import StorageLevel
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import OneHotEncoder
from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.feature import StringIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from distkeras.trainers import *
from distkeras.predictors import *
from distkeras.transformers import *
from distkeras.evaluators import *
from distkeras.utils import *
"""
Explanation: In this notebook we will show you how to process the MNIST dataset using Distributed Keras. As in the workflow notebook, we will guide you through the complete machine learning pipeline.
Preparation
To get started, we first load all the required imports. Please make sure you have installed dist-keras and seaborn. Furthermore, we assume that you have access to an installation which provides Apache Spark.
Before you start this notebook, please make sure you ran the "MNIST preprocessing" notebook first, since we will be evaluating a manually "enlarged" dataset.
End of explanation
"""
# Modify these variables according to your needs.
application_name = "Distributed Keras MNIST Analysis"
using_spark_2 = False
local = False
path = "mnist.parquet"
if local:
# Tell master to use local resources.
master = "local[*]"
num_processes = 3
num_executors = 1
else:
# Tell master to use YARN.
master = "yarn-client"
num_executors = 30
num_processes = 1
# This variable is derived from the number of cores and executors, and will be used to assign the number of model trainers.
num_workers = num_executors * num_processes
print("Number of desired executors: " + `num_executors`)
print("Number of desired processes / executor: " + `num_processes`)
print("Total number of workers: " + `num_workers`)
conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_processes`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.locality.wait", "0")
conf.set("spark.executor.memory", "5g")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
# Check if the user is running Spark 2.0 +
if using_spark_2:
    # SparkSession only exists on Spark 2.0+, so import it inside this branch.
    from pyspark.sql import SparkSession
sc = SparkSession.builder.config(conf=conf) \
.appName(application_name) \
.getOrCreate()
else:
# Create the Spark context.
sc = SparkContext(conf=conf)
# Add the missing imports
from pyspark import SQLContext
sqlContext = SQLContext(sc)
# Check if we are using Spark 2.0
if using_spark_2:
reader = sc
else:
reader = sqlContext
# Read the training and test set.
training_set = reader.read.parquet('data/mnist_train_big.parquet') \
.select("features_normalized_dense", "label_encoded", "label")
test_set = reader.read.parquet('data/mnist_test_preprocessed.parquet') \
.select("features_normalized_dense", "label_encoded", "label")
# Print the schema of the dataset.
training_set.printSchema()
"""
Explanation: In the following cell, adapt the parameters to fit your personal requirements.
End of explanation
"""
mlp = Sequential()
mlp.add(Dense(1000, input_shape=(784,)))
mlp.add(Activation('relu'))
mlp.add(Dropout(0.2))
mlp.add(Dense(200))
mlp.add(Activation('relu'))
mlp.add(Dropout(0.2))
mlp.add(Dense(10))
mlp.add(Activation('softmax'))
mlp.summary()
optimizer_mlp = 'adam'
loss_mlp = 'categorical_crossentropy'
"""
Explanation: Model Development
Multilayer Perceptron
End of explanation
"""
training_set = training_set.repartition(num_workers)
test_set = test_set.repartition(num_workers)
training_set.cache()
test_set.cache()
print("Number of training instances: " + str(training_set.count()))
print("Number of testing instances: " + str(test_set.count()))
"""
Explanation: Training
Prepare the training and test set for evaluation and training.
End of explanation
"""
def evaluate_accuracy(model, test_set, features="features_normalized_dense"):
evaluator = AccuracyEvaluator(prediction_col="prediction_index", label_col="label")
predictor = ModelPredictor(keras_model=model, features_col=features)
transformer = LabelIndexTransformer(output_dim=10)
test_set = test_set.select(features, "label")
test_set = predictor.predict(test_set)
test_set = transformer.transform(test_set)
score = evaluator.evaluate(test_set)
return score
"""
Explanation: Evaluation
We define a utility function which will compute the accuracy for us.
End of explanation
"""
trainer = ADAG(keras_model=mlp, worker_optimizer=optimizer_mlp, loss=loss_mlp, num_workers=num_workers,
batch_size=4, communication_window=5, num_epoch=1,
features_col="features_normalized_dense", label_col="label_encoded")
# Modify the default parallelism factor.
trained_model = trainer.train(training_set)
# View the weights of the trained model.
trained_model.get_weights()
print("Training time: " + str(trainer.get_training_time()))
print("Accuracy: " + str(evaluate_accuracy(trained_model, test_set)))
"""
Explanation: ADAG
End of explanation
"""
|
myuuuuun/NumericalCalculation
|
chapter2/Chapter2.ipynb
|
mit
|
#!/usr/bin/python
#-*- encoding: utf-8 -*-
"""
Copyright (c) 2015 @myuuuuun
https://github.com/myuuuuun/NumericalCalculation
This software is released under the MIT License.
"""
%matplotlib inline
from __future__ import division, print_function
import math
import numpy as np
import functools
import sys
import types
import matplotlib.pyplot as plt
EPSIRON = 1.0e-8
"""
Given the coefficient matrix [a_0, a_1, ..., a_n], build and return the degree-n polynomial
a_0 + a_1 * x + ... + a_n * x^n (i.e. this returns a function).
"""
def make_polynomial(a_matrix):
def __func__(x):
f = 0
for n, a_i in enumerate(a_matrix):
f += a_i * pow(x, n)
return f
return __func__
"""
Plot the interpolation curve and overlay the originally given points on top of it.
INPUT:
points: list of the given points [[x_0, f_0], [x_1, f_1], ..., [x_n, f_n]]
x_list: the range/density of x values over which the approximation curve is drawn
f_list: the f values corresponding to the x values above
"""
def points_on_func(points, x_list, f_list, **kwargs):
title = kwargs.get('title', "Given Points and Interpolation Curve")
xlim = kwargs.get('xlim', False)
ylim = kwargs.get('ylim', False)
fig, ax = plt.subplots()
plt.title(title)
plt.plot(x_list, f_list, color='b', linewidth=1, label="Interpolation Curve")
points_x = [point[0] for point in points]
points_y = [point[1] for point in points]
plt.plot(points_x, points_y, 'o', color='r', label="Given Points")
plt.xlabel("x")
plt.ylabel("f")
if xlim:
ax.set_xlim(xlim)
if ylim:
ax.set_ylim(ylim)
plt.legend()
plt.show()
"""
Explanation: Chapter 2: Function Approximation (Interpolation)
We implement the algorithms presented in Chapter 2 of the textbook.
Import the various libraries and define general-purpose functions used later.
End of explanation
"""
def lagrange(points):
# 次元数
dim = len(points) - 1
# matrix Xをもとめる(ヴァンデルモンドの行列式)
x_matrix = np.array([[pow(point[0], j) for j in range(dim + 1)] for point in points])
# matrix Fをもとめる
f_matrix = np.array([point[1] for point in points])
# 線形方程式 X * A = F を解く
a_matrix = np.linalg.solve(x_matrix, f_matrix)
return a_matrix
# lagrange()で求めた補間多項式と、元の点列をプロットしてみる
# 与えられた点列のリスト
points = [[1, 1], [2, 2], [3, 1], [4, 1], [5, 3]]
# ラグランジュの補間多項式の係数行列を求める
a_matrix = lagrange(points)
# 係数行列を多項式に変換
func_lagrange = make_polynomial(a_matrix)
# 0から8まで、0.1刻みでxとfの値のセットを求める
x_list = np.arange(0, 8, 0.1)
f_list = func_lagrange(x_list)
# プロットする
points_on_func(points, x_list, f_list)
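# (Added sanity check, not part of the original notebook.) With n+1 points, a degree-n
# polynomial fit is exact, so numpy.polyfit should reproduce the same coefficients as the
# Vandermonde solve above. polyfit returns coefficients in descending powers of x, hence
# the reversal before comparing with a_matrix.
px = [p[0] for p in points]
pf = [p[1] for p in points]
print(np.allclose(a_matrix, np.polyfit(px, pf, len(points) - 1)[::-1]))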
"""
Explanation: Implementation of equation (2.5)
Takes n+1 points, solves the (Vandermonde) linear system to obtain the interpolating polynomial, and returns the coefficient matrix [a_0, a_1, ..., a_n] of the degree-n interpolating polynomial.
INPUT
points: n+1 points [[x_0, f_0], [x_1, f_1], ..., [x_n, f_n]]
OUTPUT
the coefficient matrix [a_0, a_1, ..., a_n] of the degree-n interpolating polynomial
End of explanation
"""
def lagrange2(points, x_list=np.arange(-5, 5, 0.1)):
dim = len(points) - 1
f_list = []
for x in x_list:
L = 0
for i in range(dim + 1):
Li = 1
for j in range(dim + 1):
if j != i:
Li *= (x - points[j][0]) / (points[i][0] - points[j][0])
Li *= points[i][1]
L += Li
f_list.append(L)
return f_list
points = [[1, 1], [2, 2], [3, 1], [4, 1], [5, 3]]
f_list2 = lagrange2(points, np.arange(0, 8, 0.1))
points_on_func(points, np.arange(0, 8, 0.1), f_list2)
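# (Added sanity check, not part of the original notebook.) Equations (2.5) and (2.7)
# describe the same interpolating polynomial, so the two implementations should agree at
# every evaluation point.
xs = np.arange(0, 8, 0.1)
print(np.allclose(make_polynomial(lagrange(points))(xs), lagrange2(points, xs)))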
"""
Explanation: Implementation of equation (2.7)
Computes the Lagrange interpolating polynomial from the rearranged form of the interpolation formula, without computing any matrix inverse.
Note that this time, instead of returning the coefficient matrix of the interpolating polynomial, it generates and returns the list of interpolated values for a concrete list of x values.
INPUT
points: the given list of points
x_list: the list of x values at which interpolated values are wanted
OUTPUT
f_list: the list of interpolated values corresponding to each element of x_list
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
"""
#@test {"skip": true}
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg python-opengl
!pip install pyglet
!pip install 'imageio==2.4.0'
!pip install 'xvfbwrapper==0.2.9'
!pip install tf-agents[reverb]
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import io
import matplotlib
import matplotlib.pyplot as plt
import os
import shutil
import tempfile
import tensorflow as tf
import zipfile
import IPython
try:
from google.colab import files
except ImportError:
files = None
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import policy_saver
from tf_agents.policies import py_tf_eager_policy
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tempdir = os.getenv("TEST_TMPDIR", tempfile.gettempdir())
#@test {"skip": true}
# Set up a virtual display for rendering OpenAI gym environments.
import xvfbwrapper
xvfbwrapper.Xvfb(1400, 900, 24).start()
"""
Explanation: CheckpointerとPolicySaver
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/10_checkpointer_policysaver_tutorial"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.org で表示</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
はじめに
tf_agents.utils.common.Checkpointerは、ローカルストレージとの間でトレーニングの状態、ポリシーの状態、およびreplay_bufferの状態を保存/読み込むユーティリティです。
tf_agents.policies.policy_saver.PolicySaverは、ポリシーのみを保存/読み込むツールであり、Checkpointerよりも軽量です。PolicySaverを使用すると、ポリシーを作成したコードに関する知識がなくてもモデルをデプロイできます。
このチュートリアルでは、DQNを使用してモデルをトレーニングし、次にCheckpointerとPolicySaverを使用して、状態とモデルをインタラクティブな方法で保存および読み込む方法を紹介します。PolicySaverでは、TF2.0の新しいsaved_modelツールとフォーマットを使用することに注意してください。
セットアップ
以下の依存関係をインストールしていない場合は、実行します。
End of explanation
"""
env_name = "CartPole-v1"
collect_steps_per_iteration = 100
replay_buffer_capacity = 100000
fc_layer_params = (100,)
batch_size = 64
learning_rate = 1e-3
log_interval = 5
num_eval_episodes = 10
eval_interval = 1000
"""
Explanation: DQNエージェント
前のColabと同じように、DQNエージェントを設定します。 このColabでは、詳細は主な部分ではないので、デフォルトでは非表示になっていますが、「コードを表示」をクリックすると詳細を表示できます。
ハイパーパラメーター
End of explanation
"""
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
"""
Explanation: 環境
End of explanation
"""
#@title
q_net = q_network.QNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
q_network=q_net,
optimizer=optimizer,
td_errors_loss_fn=common.element_wise_squared_loss,
train_step_counter=global_step)
agent.initialize()
"""
Explanation: エージェント
End of explanation
"""
#@title
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
collect_driver = dynamic_step_driver.DynamicStepDriver(
train_env,
agent.collect_policy,
observers=[replay_buffer.add_batch],
num_steps=collect_steps_per_iteration)
# Initial data collection
collect_driver.run()
# Dataset generates trajectories with shape [BxTx...] where
# T = n_step_update + 1.
dataset = replay_buffer.as_dataset(
num_parallel_calls=3, sample_batch_size=batch_size,
num_steps=2).prefetch(3)
iterator = iter(dataset)
"""
Explanation: データ収集
End of explanation
"""
#@title
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
def train_one_iteration():
# Collect a few steps using collect_policy and save to the replay buffer.
collect_driver.run()
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience)
iteration = agent.train_step_counter.numpy()
print ('iteration: {0} loss: {1}'.format(iteration, train_loss.loss))
"""
Explanation: エージェントのトレーニング
End of explanation
"""
#@title
def embed_gif(gif_buffer):
"""Embeds a gif file in the notebook."""
tag = '<img src="data:image/gif;base64,{0}"/>'.format(base64.b64encode(gif_buffer).decode())
return IPython.display.HTML(tag)
def run_episodes_and_create_video(policy, eval_tf_env, eval_py_env):
num_episodes = 3
frames = []
for _ in range(num_episodes):
time_step = eval_tf_env.reset()
frames.append(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_tf_env.step(action_step.action)
frames.append(eval_py_env.render())
gif_file = io.BytesIO()
imageio.mimsave(gif_file, frames, format='gif', fps=60)
IPython.display.display(embed_gif(gif_file.getvalue()))
"""
Explanation: ビデオ生成
End of explanation
"""
print ('global_step:')
print (global_step)
run_episodes_and_create_video(agent.policy, eval_env, eval_py_env)
"""
Explanation: ビデオ生成
ビデオを生成して、ポリシーのパフォーマンスを確認します。
End of explanation
"""
checkpoint_dir = os.path.join(tempdir, 'checkpoint')
train_checkpointer = common.Checkpointer(
ckpt_dir=checkpoint_dir,
max_to_keep=1,
agent=agent,
policy=agent.policy,
replay_buffer=replay_buffer,
global_step=global_step
)
"""
Explanation: チェックポインタとPolicySaverのセットアップ
CheckpointerとPolicySaverを使用する準備ができました。
Checkpointer
End of explanation
"""
policy_dir = os.path.join(tempdir, 'policy')
tf_policy_saver = policy_saver.PolicySaver(agent.policy)
"""
Explanation: Policy Saver
End of explanation
"""
#@test {"skip": true}
print('Training one iteration....')
train_one_iteration()
"""
Explanation: 1回のイテレーションのトレーニング
End of explanation
"""
train_checkpointer.save(global_step)
"""
Explanation: チェックポイントに保存
End of explanation
"""
train_checkpointer.initialize_or_restore()
global_step = tf.compat.v1.train.get_global_step()
"""
Explanation: チェックポイントに復元
チェックポイントに復元するためには、チェックポイントが作成されたときと同じ方法でオブジェクト全体を再作成する必要があります。
End of explanation
"""
tf_policy_saver.save(policy_dir)
"""
Explanation: また、ポリシーを保存して指定する場所にエクスポートします。
End of explanation
"""
saved_policy = tf.saved_model.load(policy_dir)
run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)
"""
Explanation: ポリシーの作成に使用されたエージェントまたはネットワークについての知識がなくても、ポリシーを読み込めるので、ポリシーのデプロイが非常に簡単になります。
保存されたポリシーを読み込み、それがどのように機能するかを確認します。
End of explanation
"""
#@title Create zip file and upload zip file (double-click to see the code)
def create_zip_file(dirname, base_filename):
return shutil.make_archive(base_filename, 'zip', dirname)
def upload_and_unzip_file_to(dirname):
if files is None:
return
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
shutil.rmtree(dirname)
zip_files = zipfile.ZipFile(io.BytesIO(uploaded[fn]), 'r')
zip_files.extractall(dirname)
zip_files.close()
"""
Explanation: エクスポートとインポート
以下は、後でトレーニングを続行し、再度トレーニングすることなくモデルをデプロイできるように、Checkpointer とポリシーディレクトリをエクスポート/インポートするのに役立ちます。
「1回のイテレーションのトレーニング」に戻り、後で違いを理解できるように、さらに数回トレーニングします。 結果が少し改善し始めたら、以下に進みます。
End of explanation
"""
train_checkpointer.save(global_step)
checkpoint_zip_filename = create_zip_file(checkpoint_dir, os.path.join(tempdir, 'exported_cp'))
"""
Explanation: チェックポイントディレクトリからzipファイルを作成します。
End of explanation
"""
#@test {"skip": true}
if files is not None:
files.download(checkpoint_zip_filename) # try again if this fails: https://github.com/googlecolab/colabtools/issues/469
"""
Explanation: zipファイルをダウンロードします。
End of explanation
"""
#@test {"skip": true}
upload_and_unzip_file_to(checkpoint_dir)
train_checkpointer.initialize_or_restore()
global_step = tf.compat.v1.train.get_global_step()
"""
Explanation: 10〜15回ほどトレーニングした後、チェックポイントのzipファイルをダウンロードし、[ランタイム]> [再起動してすべて実行]に移動してトレーニングをリセットし、このセルに戻ります。ダウンロードしたzipファイルをアップロードして、トレーニングを続けます。
End of explanation
"""
tf_policy_saver.save(policy_dir)
policy_zip_filename = create_zip_file(policy_dir, os.path.join(tempdir, 'exported_policy'))
#@test {"skip": true}
if files is not None:
files.download(policy_zip_filename) # try again if this fails: https://github.com/googlecolab/colabtools/issues/469
"""
Explanation: チェックポイントディレクトリをアップロードしたら、「1回のイテレーションのトレーニング」に戻ってトレーニングを続けるか、「ビデオ生成」に戻って読み込まれたポリシーのパフォーマンスを確認します。
または、ポリシー(モデル)を保存して復元することもできます。Checkpointerとは異なり、トレーニングを続けることはできませんが、モデルをデプロイすることはできます。ダウンロードしたファイルはCheckpointerのファイルよりも大幅に小さいことに注意してください。
End of explanation
"""
#@test {"skip": true}
upload_and_unzip_file_to(policy_dir)
saved_policy = tf.saved_model.load(policy_dir)
run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)
"""
Explanation: ダウンロードしたポリシーディレクトリ(exported_policy.zip)をアップロードし、保存したポリシーの動作を確認します。
End of explanation
"""
eager_py_policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(
policy_dir, eval_py_env.time_step_spec(), eval_py_env.action_spec())
# Note that we're passing eval_py_env not eval_env.
run_episodes_and_create_video(eager_py_policy, eval_py_env, eval_py_env)
"""
Explanation: SavedModelPyTFEagerPolicy
TFポリシーを使用しない場合は、py_tf_eager_policy.SavedModelPyTFEagerPolicyを使用して、Python envでsaved_modelを直接使用することもできます。
これは、eagerモードが有効になっている場合にのみ機能することに注意してください。
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_saved_model(policy_dir, signature_keys=["action"])
tflite_policy = converter.convert()
with open(os.path.join(tempdir, 'policy.tflite'), 'wb') as f:
f.write(tflite_policy)
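# (Added illustration.) The converted TFLite policy is typically much smaller than the
# SavedModel directory; its size on disk can be checked directly:
print(os.path.getsize(os.path.join(tempdir, 'policy.tflite')), 'bytes')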
"""
Explanation: ポリシーを TFLite に変換する
詳細については、「TensorFlow Lite 推論」をご覧ください。
End of explanation
"""
import numpy as np
interpreter = tf.lite.Interpreter(os.path.join(tempdir, 'policy.tflite'))
policy_runner = interpreter.get_signature_runner()
print(policy_runner._inputs)
policy_runner(**{
'0/discount':tf.constant(0.0),
'0/observation':tf.zeros([1,4]),
'0/reward':tf.constant(0.0),
'0/step_type':tf.constant(0)})
"""
Explanation: TFLite モデルで推論を実行する
詳細については、「TensorFlow Lite 推論」をご覧ください。
End of explanation
"""
|
xlbaojun/Note-jupyter
|
05其他/pandas文档-zh-master/检索 ,查询数据.ipynb
|
gpl-2.0
|
import numpy as np
import pandas as pd
"""
Explanation: 检索,查询数据
这一节学习如何检索pandas数据。
End of explanation
"""
dates = pd.date_range('1/1/2000', periods=8)
dates
df = pd.DataFrame(np.random.randn(8,4), index=dates, columns=list('ABCD'))
df
panel = pd.Panel({'one':df, 'two':df-df.mean()})
panel
"""
Explanation: Python和Numpy的索引操作符[]和属性操作符‘.’能够快速检索pandas数据。
然而,这两种方式的效率在pandas中可能不是最优的,我们推荐使用专门优化过的pandas数据检索方法。而这些方法则是本节要介绍的。
多种索引方式
pandas支持三种不同的索引方式:
* .loc 基于label进行索引,当然也可以和boolean数组一起使用。‘.loc’接受的输入:
* 一个单独的label,比如5、'a',注意,这里的5是index值,而不是整形下标
* label列表或label数组,比如['a', 'b', 'c']
* .iloc 是基本的基于整数位置(从0到axis的length-1)的,当然也可以和一个boolean数组一起使用。当提供检索的index越界时会有IndexError错误,注意切片索引(slice index)允许越界。
* .ix 支持基于label和整数位置混合的数据获取方式。默认是基本label的. .ix是最常用的方式,它支持所有.loc和.iloc的输入。如果提供的是纯label或纯整数索引,我们建议使用.loc或 .iloc。
以 .loc为例看一下使用方式:
对象类型 | Indexers
Series | s.loc[indexer]
DataFrame | df.loc[row_indexer, column_indexer]
Panel | p.loc[item_indexer, major_indexer, minor_indexer]
最基本的索引和选择
最基本的选择数据方式就是使用[]操作符进行索引,
对象类型 | Selection | 返回值类型
Series | series[label],这里的label是index名 | 常数
DataFrame| frame[colname],使用列名 | Series对象,相应的colname那一列
Panel | panel[itemname] | DataFrame对象,相应的itemname那一个
下面用示例展示一下
End of explanation
"""
s = df['A'] #使用列名
s#返回的是 Series
"""
Explanation: 我们使用最基本的[]操作符
End of explanation
"""
s[dates[5]] #使用index名
panel['two']
"""
Explanation: Series使用index索引
End of explanation
"""
df
df[['B', 'A']] = df[['A', 'B']]
df
"""
Explanation: 也可以给[]传递一个column name组成的的list,形如df[[col1,col2]], 如果给出的某个列名不存在,会报错
End of explanation
"""
sa = pd.Series([1,2,3],index=list('abc'))
dfa = df.copy()
sa
sa.b #直接把index作为属性
dfa
dfa.A
panel.one
sa
sa.a = 5
sa
sa
dfa.A=list(range(len(dfa.index))) # ok if A already exists
dfa
dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
dfa
"""
Explanation: 通过属性访问 把column作为DataFrame对象的属性
可以直接把Series的index、DataFrame中的column、Panel中的item作为这些对象的属性使用,然后直接访问相应的index、column、item
End of explanation
"""
s
s[:5]
s[::2]
s[::-1]
"""
Explanation: 注意:使用属性和[] 有一点区别:
如果要新建一个column,只能使用[]
毕竟属性的含义就是现在存在的!不存在的列名当然不是属性了
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if you try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a new column.
使用属性要注意的:
* 如果一个已经存在的函数和列名相同,则不存在相应的属性哦
* 总而言之,属性的适用范围要比[]小
切片范围 Slicing ranges
可以使用 [] 还有.iloc切片,这里先介绍使用[]
对于Series来说,使用[]进行切片就像ndarray一样,
End of explanation
"""
s2 = s.copy()
s2[:5]=0 #赋值
s2
"""
Explanation: []不但可以检索,也可以赋值
End of explanation
"""
df[:3]
df[::-1]
"""
Explanation: 对于DataFrame对象来说,[]操作符按照行进行切片,非常有用。
End of explanation
"""
df1 = pd.DataFrame(np.random.rand(5,4), columns=list('ABCD'), index=pd.date_range('20160101',periods=5))
df1
df1.loc[2:3]
"""
Explanation: 使用Label进行检索
警告:
.loc要求检索时输入必须严格遵守index的类型,一旦输入类型不对,将会引起TypeError。
End of explanation
"""
df1.loc['20160102':'20160104']
"""
Explanation: 输入string进行检索没问题
End of explanation
"""
s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
s1
s1.loc['c':]
s1.loc['b']
"""
Explanation: 细心地你一定发现了,index='20160104'那一行也被检索出来了,没错,loc检索时范围是闭集合[start,end].
整型可以作为label检索,这是没问题的,不过要记住此时整型表示的是label而不是index中的下标!
.loc操作是检索时的基本操作,以下输入格式都是合法的:
* 一个label,比如:5、'a'. 记住这里的5表示的是index中的一个label而不是index中的一个下标。
* label组成的列表或者数组比如['a','b','c']
* 切片,比如'a':'f'.注意loc中切片范围是闭集合!
* 布尔数组
End of explanation
"""
s1.loc['c':]=0
s1
"""
Explanation: loc同样支持赋值操作
End of explanation
"""
df1 = pd.DataFrame(np.random.randn(6,4), index=list('abcdef'),columns=list('ABCD'))
df1
df1.loc[['a','b','c','d'],:]
df1.loc[['a','b','c','d']] #可以省略 ':'
"""
Explanation: 再来看看DataFramed的例子
End of explanation
"""
df1.loc['d':,'A':'C'] #注意是闭集合
df1.loc['a']
"""
Explanation: 使用切片检索
End of explanation
"""
df1.loc['a']>0
df1.loc[:,df1.loc['a']>0]
"""
Explanation: 使用布尔数组检索
End of explanation
"""
df1.loc['a','A']
df1.get_value('a','A')
"""
Explanation: 得到DataFrame中的某一个值, 等同于df1.get_value('a','A')
End of explanation
"""
s1 = pd.Series(np.random.randn(5),index=list(range(0,10,2)))
s1
s1.iloc[:3] #注意检索是半闭半开区间
s1.iloc[3]
"""
Explanation: 根据下标进行检索 Selection By Position
pandas提供了一系列的方法实现基于整型的检索。语义和python、numpy切片几乎一样。下标同样都是从0开始,并且进行的是半闭半开的区间检索[start,end)。如果输入 非整型label当做下标进行检索会引起IndexError。
.iloc的合法输入包括:
* 一个整数,比如5
* 整数组成的列表或者数组,比如[4,3,0]
* 整型表示的切片,比如1:7
* 布尔数组
看一下Series使用iloc检索的示例:
End of explanation
"""
s1.iloc[:3]=0
s1
"""
Explanation: iloc同样也可以进行赋值
End of explanation
"""
df1 = pd.DataFrame(np.random.randn(6,4),index=list(range(0,12,2)), columns=list(range(0,8,2)))
df1
df1.iloc[:3]
"""
Explanation: DataFrame的示例:
End of explanation
"""
df1.iloc[1:5,2:4]
df1.iloc[[1,3,5],[1,2]]
df1.iloc[1:3,:]
df1.iloc[:,1:3]
df1.iloc[1,1]#只检索一个元素
"""
Explanation: 进行行和列的检索
End of explanation
"""
df1.iloc[1]
df1.iloc[1:2]
"""
Explanation: 注意下面两个例子的区别:
End of explanation
"""
x = list('abcdef')
x
x[4:10] #这里x的长度是6
x[8:10]
s = pd.Series(x)
s
s.iloc[4:10]
s.iloc[8:10]
df1 = pd.DataFrame(np.random.randn(5,2), columns=list('AB'))
df1
df1.iloc[:,2:3]
df1.iloc[:,1:3]
df1.iloc[4:6]
"""
Explanation: 如果切片检索时输入的范围越界,没关系,只要pandas版本>=v0.14.0, 就能如同Python/Numpy那样正确处理。
注意:仅限于 切片检索
End of explanation
"""
df1.iloc[[4,5,6]]
"""
Explanation: 上面说到,这种优雅处理越界的能力仅限于输入全是切片,如果输入是越界的 列表或者整数,则会引起IndexError
End of explanation
"""
df1.iloc[:,4]
"""
Explanation: 输入有切片,有整数,如果越界同样不能处理
End of explanation
"""
s = pd.Series([0,1,2,3,4,5])
s
s.sample()
s.sample(n=6)
s.sample(3) #直接输入整数即可
"""
Explanation: 选择随机样本 Selecting Random Samples
使用sample()方法能够从行或者列中进行随机选择,适用对象包括Series、DataFrame和Panel。sample()方法默认对行进行随机选择,输入可以是整数或者小数。
End of explanation
"""
s.sample(frac=0.5)
s.sample(0.5) #必须输入frac=0.5
s.sample(frac=0.8) #6*0.8=4.8
s.sample(frac=0.7)# 6*0.7=4.2
"""
Explanation: 也可以输入小数,则会随机选择N*frac个样本, 结果进行四舍五入
End of explanation
"""
s
s.sample(n=6,replace=False)
s.sample(6,replace=True)
"""
Explanation: sample()默认进行的无放回抽样,可以利用replace=True参数进行可放回抽样。
End of explanation
"""
s = pd.Series([0,1,2,3,4,5])
s
example_weights=[0,0,0.2,0.2,0.2,0.4]
s.sample(n=3,weights=example_weights)
example_weights2 = [0.5, 0, 0, 0, 0, 0]
s.sample(n=1, weights=example_weights2)
s.sample(n=2, weights=example_weights2) #n>1 会报错,
"""
Explanation: 默认情况下,每一行/列都被等可能的采样,如果你想为每一行赋予一个被抽样选择的权重,可以利用weights参数实现。
注意:如果weights中各概率相加和不等于1,pandas会先对weights进行归一化,强制转为概率和为1!
End of explanation
"""
s
s.sample(7) #7不行
s.sample(7,replace=True)
"""
Explanation: 注意:由于sample默认进行的是无放回抽样,所以输入必须n<=行数,除非进行可放回抽样。
End of explanation
"""
df2 = pd.DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
df2
df2.sample(n=3,weights='weight_column')
"""
Explanation: 如果是对DataFrame对象进行有权重采样,一个简单 的方法是新增一列用于表示每一行的权重
End of explanation
"""
df3 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df3
df3.sample(1,axis=1)
"""
Explanation: 对列进行采样, axis=1
End of explanation
"""
df4 = pd.DataFrame({'col1':[1,2,3], 'clo2':[2,3,4]})
df4
"""
Explanation: 我们也可以使用random_state参数 为sample内部的随机数生成器提供种子数。
End of explanation
"""
df4.sample(n=2, random_state=2)
df4.sample(n=2,random_state=2)
df4.sample(n=2,random_state=3)
"""
Explanation: 注意下面两个示例,输出是相同的,因为使用了相同的种子数
End of explanation
"""
se = pd.Series([1,2,3])
se
se[5]=5
se
"""
Explanation: 使用赋值的方式扩充对象 Setting With Enlargement
用.loc/.ix/[]对不存在的键值进行赋值时,将会导致在对象中添加新的元素,它的键即为赋值时不存在的键。
对于Series来说,这是一种有效的添加操作。
End of explanation
"""
dfi = pd.DataFrame(np.arange(6).reshape(3,2),columns=['A','B'])
dfi
dfi.loc[:,'C']=dfi.loc[:,'A'] #对列进行扩充
dfi
dfi.loc[3]=5 #对行进行扩充
dfi
"""
Explanation: DataFrame可以在行或者列上扩充数据
End of explanation
"""
s.iat[5]
df.at[dates[5],'A']
df.iat[3,0]
"""
Explanation: 标量值的快速获取和赋值
如果仅仅想获取一个元素,使用[]未免太繁重了。pandas提供了快速获取一个元素的方法:at和iat. 适用于Series、DataFrame和Panel。
如果loc方法,at方法的合法输入是label,iat的合法输入是整型。
End of explanation
"""
df.at[dates[-1]+1,0]=7
df
"""
Explanation: 也可以进行赋值操作
End of explanation
"""
s = pd.Series(range(-3, 4))
s
s[s>0]
s[(s<-1) | (s>0.5)]
s[~(s<0)]
"""
Explanation: 布尔检索 Boolean indexing
另一种常用的操作是使用布尔向量过滤数据。运算符有三个:|(or), &(and), ~(not)。
注意:运算符的操作数要在圆括号内。
使用布尔向量检索Series的操作方式和numpy ndarray一样。
End of explanation
"""
df[df['A'] > 0]
"""
Explanation: DataFrame示例:
End of explanation
"""
df2 = pd.DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
'c' : np.random.randn(7)})
df2
criterion = df2['a'].map(lambda x:x.startswith('t'))
df2[criterion]
df2[[x.startswith('t') for x in df2['a']]]
df2[criterion & (df2['b'] == 'x')]
"""
Explanation: 利用列表解析和map方法能够产生更加复杂的选择标准。
End of explanation
"""
df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
"""
Explanation: 结合loc、iloc等方法可以检索多个坐标下的数据.
End of explanation
"""
s = pd.Series(np.arange(5), index=np.arange(5)[::-1],dtype='int64')
s
s.isin([2,4,6])
s[s.isin([2,4,6])]
"""
Explanation: 使用isin方法检索 Indexing with isin
isin(is in)
对于Series对象来说,使用isin方法时传入一个列表,isin方法会返回一个布尔向量。布尔向量元素为1的前提是列表元素在Series对象中存在。看起来比较拗口,还是看例子吧:
End of explanation
"""
s[s.index.isin([2,4,6])]
s[[2,4,6]]
"""
Explanation: Index对象中也有isin方法.
End of explanation
"""
df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
'ids2':['a', 'n', 'c', 'n']})
df
values=['a', 'b', 1, 3]
df.isin(values)
"""
Explanation: DataFrame同样有isin方法,参数是数组或字典。二者的区别看例子吧:
End of explanation
"""
values = {'ids': ['a', 'b'], 'vals': [1, 3]}
df.isin(values)
"""
Explanation: 输入一个字典的情形:
End of explanation
"""
values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
row_mark = df.isin(values).all(1)
df[row_mark]
row_mark = df.isin(values).any(1)
df[row_mark]
"""
Explanation: 结合isin方法和any() all()可以对DataFrame进行快速查询。比如选择每一列都符合标准的行:
End of explanation
"""
s[s>0]
"""
Explanation: where()方法 The where() Method and Masking
使用布尔向量对Series对象查询时通常返回的是对象的子集。如果想要返回的shape和原对象相同,可以使用where方法。
使用布尔向量对DataFrame对象查询返回的shape和原对象相同,这是因为底层用的where方法实现。
End of explanation
"""
s.where(s>0)
df[df<0]
df.where(df<0)
"""
Explanation: 使用where方法
End of explanation
"""
df.where(df<0, 2)
df
df.where(df<0, df) #将df作为other的参数值
"""
Explanation: where方法还有一个可选的other参数,作用是替换返回结果中是False的值,并不会改变原对象。
End of explanation
"""
s2 = s.copy()
s2
s2[s2<0]=0
s2
"""
Explanation: 你可能想基于某种判断条件来赋值。一种直观的方法是:
End of explanation
"""
df = pd.DataFrame(np.random.randn(6,5), index=list('abcdef'), columns=list('ABCDE'))
df_orig = df.copy()
df_orig.where(df < 0, -df, inplace=True);
df_orig
"""
Explanation: 默认情况下,where方法并不会修改原始对象,它返回的是一个修改过的原始对象副本,如果你想直接修改原始对象,方法是将inplace参数设置为True
End of explanation
"""
df2 = df.copy()
df2[df2[1:4] >0]=3
df2
df2 = df.copy()
df2.where(df2>0, df2['A'], axis='index')
"""
Explanation: 对齐
where方法会将输入的布尔条件对齐,因此允许部分检索时的赋值。
End of explanation
"""
s.mask(s>=0)
df.mask(df >= 0)
"""
Explanation: mask
End of explanation
"""
n = 10
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df[(df.a<df.b) & (df.b<df.c)]
df.query('(a < b) & (b < c)') #
"""
Explanation: query()方法 The query() Method (Experimental)
DataFrame对象拥有query方法,允许使用表达式检索。
比如,检索列'b'的值介于列‘a’和‘c’之间的行。
注意: 需要安装numexptr。
End of explanation
"""
n = 10
colors = np.random.choice(['red', 'green'], size=n)
foods = np.random.choice(['eggs', 'ham'], size=n)
colors
foods
index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
df = pd.DataFrame(np.random.randn(n,2), index=index)
df
df.query('color == "red"')
"""
Explanation: MultiIndex query() 语法
对于DataFrame对象,可以使用MultiIndex,如同操作列名一样。
End of explanation
"""
df.index.names = [None, None]
df
df.query('ilevel_0 == "red"')
"""
Explanation: 如果index没有名字,可以给他们命名
End of explanation
"""
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df2 = pd.DataFrame(np.random.randn(n+2, 3), columns=df.columns)
df2
expr = '0.0 <= a <= c <= 0.5'
map(lambda frame: frame.query(expr), [df, df2])
"""
Explanation: ilevl_0意思是 0级index。
query() 用例 query() Use Cases
一个使用query()的情景是面对DataFrame对象组成的集合,并且这些对象有共同的的列名,则可以利用query方法对这个集合进行统一检索。
End of explanation
"""
df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
df
df.query('(a<b) &(b<c)')
df[(df.a < df.b) & (df.b < df.c)]
"""
Explanation: Python中query和pandas中query语法比较 query() Python versus pandas Syntax Comparison
End of explanation
"""
df.query('a < b & b < c')
df.query('a<b and b<c')
"""
Explanation: query()可以去掉圆括号, 也可以用and 代替&运算符
End of explanation
"""
df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
'c': np.random.randint(5, size=12),
'd': np.random.randint(9, size=12)})
df
df.query('a in b')
df[df.a.isin(df.b)]
df[~df.a.isin(df.b)]
df.query('a in b and c < d') #更复杂的例子
df[df.b.isin(df.a) & (df.c < df.d)] #Python语法
"""
Explanation: in 和not in 运算符 The in and not in operators
query()也支持Python中的in和not in运算符,实际上是底层调用isin
End of explanation
"""
df.query('b==["a", "b", "c"]')
df[df.b.isin(["a", "b", "c"])] #Python语法
df.query('c == [1, 2]')
df.query('c != [1, 2]')
df.query('[1, 2] in c') #使用in
df.query('[1, 2] not in c')
df[df.c.isin([1, 2])] #Python语法
"""
Explanation: ==和列表对象一起使用 Special use of the == operator with list objects
可以使用==/!=将列表和列名直接进行比较,等价于使用in/not in.
三种方法功能等价: ==/!= VS in/not in VS isin()/~isin()
End of explanation
"""
df = pd.DataFrame(np.random.randn(n, 3), columns=list('abc'))
df
df['bools']=np.random.randn(len(df))>0.5
df
df.query('bools')
df.query('not bools')
df.query('not bools') == df[~df.bools]
"""
Explanation: 布尔运算符 Boolean Operators
可以使用not或者~对布尔表达式进行取非。
End of explanation
"""
shorter = df.query('a<b<c and (not bools) or bools>2')
shorter
longer = df[(df.a < df.b) & (df.b < df.c) & (~df.bools) | (df.bools > 2)]
longer
shorter == longer
"""
Explanation: 表达式任意复杂都没关系。
End of explanation
"""
df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
'c': np.random.randn(7)})
df2
df2.duplicated('a') #只观察列a的值是否重复
df2.duplicated('a', keep='last')
df2.drop_duplicates('a')
df2.drop_duplicates('a', keep='last')
df2.drop_duplicates('a', keep=False)
"""
Explanation: query()的性能
DataFrame.query()底层使用numexptr,所以速度要比Python快,特别时当DataFrame对象非常大时。
重复数据的确定和删除 Duplicate Data
如果你想确定和去掉DataFrame对象中重复的行,pandas提供了两个方法:duplicated和drop_duplicates. 两个方法的参数都是列名。
* duplicated 返回一个布尔向量,长度等于行数,表示每一行是否重复
* drop_duplicates 则删除重复的行
默认情况下,首次遇到的行被认为是唯一的,以后遇到内容相同的行都被认为是重复的。不过两个方法都有一个keep参数来确定目标行是否被保留。
* keep='first'(默认):标记/去掉重复行除了第一次出现的那一行
* keep='last': 标记/去掉重复行除了最后一次出现的那一行
* keep=False: 标记/去掉所有重复的行
End of explanation
"""
df2.duplicated(['a', 'b']) #此时列a和b两个元素构成每一个检索的基本单位,
df2
"""
Explanation: 可以传递列名组成的列表
End of explanation
"""
df3 = pd.DataFrame({'a': np.arange(6),
'b': np.random.randn(6)},
index=['a', 'a', 'b', 'c', 'b', 'a'])
df3
df3.index.duplicated() #布尔表达式
df3[~df3.index.duplicated()]
df3[~df3.index.duplicated(keep='last')]
df3[~df3.index.duplicated(keep=False)]
"""
Explanation: 也可以检查index值是否重复来去掉重复行,方法是Index.duplicated然后使用切片操作(因为调用Index.duplicated会返回布尔向量)。keep参数同上。
End of explanation
"""
s = pd.Series([1,2,3], index=['a', 'b', 'c'])
s
s.get('a')
s.get('x', default=-1)
s.get('b')
"""
Explanation: 形似字典的get()方法
Serires, DataFrame和Panel都有一个get方法来得到一个默认值。
End of explanation
"""
df = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))
df.select(lambda x: x=='A', axis=1)
"""
Explanation: select()方法 The select() Method
Series, DataFrame和Panel都有select()方法来检索数据,这个方法作为保留手段通常其他方法都不管用的时候才使用。select接受一个函数(在label上进行操作)作为输入返回一个布尔值。
End of explanation
"""
dflookup = pd.DataFrame(np.random.randn(20, 4), columns=list('ABCD'))
dflookup
dflookup.lookup(list(range(0,10,2)), ['B','C','A','B','D'])
"""
Explanation: lookup()方法 The lookup()方法
输入行label和列label,得到一个numpy数组,这就是lookup方法的功能。
End of explanation
"""
index = pd.Index(['e', 'd', 'a', 'b'])
index
'd' in index
"""
Explanation: Index对象 Index objects
pandas中的Index类和它的子类可以被当做一个序列可重复集合(ordered multiset),允许数据重复。然而,如果你想把一个有重复值Index对象转型为一个集合这是不可以的。创建Index最简单的方法就是通过传递一个列表或者其他序列创建。
End of explanation
"""
index = pd.Index(['e', 'd', 'a', 'b'], name='something')
index.name
index = pd.Index(list(range(5)), name='rows')
columns = pd.Index(['A', 'B', 'C'], name='cols')
df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
df
df['A']
"""
Explanation: 还可以个Index命名
End of explanation
"""
dfmi = pd.DataFrame([list('abcd'),
list('efgh'),
list('ijkl'),
list('mnop')],
columns=pd.MultiIndex.from_product([['one','two'],
['first','second']]))
dfmi
"""
Explanation: 返回视图VS返回副本 Returning a view versus a copy
当对pandas对象赋值时,一定要注意避免链式索引(chained indexing)。看下面的例子:
End of explanation
"""
dfmi['one']['second']
dfmi.loc[:,('one','second')]
"""
Explanation: 比较下面两种访问方式:
End of explanation
"""
dfmi.loc[:,('one','second')]=value
#实际是
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
"""
Explanation: 上面两种方法返回的结果抖一下,那么应该使用哪种方法呢?答案是我们更推荐大家使用方法二。
dfmi['one']选择了第一级列然后返回一个DataFrame对象,然后另一个Python操作dfmi_with_one['second']根据'second'检索出了一个Series。对pandas来说,这两个操作是独立、有序执行的。而.loc方法传入一个元组(slice(None),('one','second')),pandas把这当作一个事件执行,所以执行速度更快。
为什么使用链式索引赋值为报错?
刚才谈到不推荐使用链式索引是出于性能的考虑。接下来从赋值角度谈一下不推荐使用链式索引。首先,思考Python怎么解释执行下面的代码?
End of explanation
"""
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
"""
Explanation: 但下面的代码解释后结果却不一样:
End of explanation
"""
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
foo['quux'] = value # We don't know whether this will modify df or not!
return foo
"""
Explanation: 看到__getitem__了吗?除了最简单的情况,我们很难预测他到底返回的是视图还是副本(哲依赖于数组的内存布局,这是pandas没有硬性要求的),因此不推荐使用链式索引赋值!
而dfmi.loc.__setitem__直接对dfmi进行操作。
有时候明明没有使用链式索引,也会引起SettingWithCopy警告,这是Pandas设计的bug~
End of explanation
"""
dfb = pd.DataFrame({'a' : ['one', 'one', 'two',
'three', 'two', 'one', 'six'],
'c' : np.arange(7)})
dfb
dfb['c'][dfb.a.str.startswith('o')] = 42 #虽然会引起SettingWithCopyWarning 但也能得到正确结果
pd.set_option('mode.chained_assignment','warn')
dfb[dfb.a.str.startswith('o')]['c'] = 42 #这实际上是对副本赋值!
"""
Explanation: 链式索引中顺序也很重要
此外,在链式表达式中,不同的顺序也可能导致不同的结果。这里的顺序指的是检索时行和列的顺序。
End of explanation
"""
dfc = pd.DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]})
dfc
dfc.loc[0,'A'] = 11
dfc
"""
Explanation: 正确的方式是:老老实实使用.loc
End of explanation
"""
|
mjlong/openmc
|
docs/source/pythonapi/examples/post-processing.ipynb
|
mit
|
from IPython.display import Image
import numpy as np
import matplotlib.pyplot as plt
import openmc
from openmc.statepoint import StatePoint
%matplotlib inline
"""
Explanation: This notebook demonstrates some basic post-processing tasks that can be performed with the Python API, such as plotting a 2D mesh tally and plotting neutron source sites from an eigenvalue calculation. The problem we will use is a simple reflected pin-cell.
End of explanation
"""
# Instantiate some Nuclides
h1 = openmc.Nuclide('H-1')
b10 = openmc.Nuclide('B-10')
o16 = openmc.Nuclide('O-16')
u235 = openmc.Nuclide('U-235')
u238 = openmc.Nuclide('U-238')
zr90 = openmc.Nuclide('Zr-90')
"""
Explanation: Generate Input Files
First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
"""
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
water.add_nuclide(b10, 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
"""
Explanation: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.
End of explanation
"""
# Instantiate a MaterialsFile, add Materials
materials_file = openmc.MaterialsFile()
materials_file.add_material(fuel)
materials_file.add_material(water)
materials_file.add_material(zircaloy)
materials_file.default_xs = '71c'
# Export to "materials.xml"
materials_file.export_to_xml()
"""
Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
# Use both reflective and vacuum boundaries to make life interesting
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
"""
# Create Geometry and set root Universe
geometry = openmc.Geometry()
geometry.root_universe = root_universe
# Instantiate a GeometryFile
geometry_file = openmc.GeometryFile()
geometry_file.geometry = geometry
# Export to "geometry.xml"
geometry_file.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 100
inactive = 10
particles = 5000
# Instantiate a SettingsFile
settings_file = openmc.SettingsFile()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
source_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
settings_file.set_source_space('box', source_bounds)
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 90 active batches each with 5000 particles.
End of explanation
"""
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.width = [1.26, 1.26]
plot.pixels = [250, 250]
plot.color = 'mat'
# Instantiate a PlotsFile, add Plot, and export to "plots.xml"
plot_file = openmc.PlotsFile()
plot_file.add_plot(plot)
plot_file.export_to_xml()
"""
Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
End of explanation
"""
# Run openmc in plotting mode
executor = openmc.Executor()
executor.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png')
"""
Explanation: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
End of explanation
"""
# Instantiate an empty TalliesFile
tallies_file = openmc.TalliesFile()
# Create mesh which will be used for tally
mesh = openmc.Mesh()
mesh.dimension = [100, 100]
mesh.lower_left = [-0.63, -0.63]
mesh.upper_right = [0.63, 0.63]
tallies_file.add_mesh(mesh)
# Create mesh filter for tally
mesh_filter = openmc.Filter(type='mesh', bins=[1])
mesh_filter.mesh = mesh
# Create mesh tally to score flux and fission rate
tally = openmc.Tally(name='flux')
tally.add_filter(mesh_filter)
tally.add_score('flux')
tally.add_score('fission')
tallies_file.add_tally(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a 2D mesh tally.
End of explanation
"""
# Run OpenMC!
executor.run_simulation()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the statepoint file
sp = StatePoint('statepoint.100.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here by loading the statepoint file and 'reading' the results. By default, data from the statepoint file is only read into memory when it is requested. This helps keep the memory use to a minimum even when a statepoint file may be huge.
End of explanation
"""
tally = sp.get_tally(scores=['flux'])
print(tally)
"""
Explanation: Next we need to get the tally, which can be done with the StatePoint.get_tally(...) method.
End of explanation
"""
tally.sum
"""
Explanation: The statepoint file actually stores the sum and sum-of-squares for each tally bin from which the mean and variance can be calculated as described here. The sum and sum-of-squares can be accessed using the sum and sum_sq properties:
End of explanation
"""
print(tally.mean.shape)
(tally.mean, tally.std_dev)
"""
Explanation: However, the mean and standard deviation of the mean are usually what you are more interested in. The Tally class also has properties mean and std_dev which automatically calculate these statistics on-the-fly.
End of explanation
"""
flux = tally.get_slice(scores=['flux'])
fission = tally.get_slice(scores=['fission'])
print(flux)
"""
Explanation: The tally data has three dimensions: one for filter combinations, one for nuclides, and one for scores. We see that there are 10000 filter combinations (corresponding to the 100 x 100 mesh bins), a single nuclide (since none was specified), and two scores. If we only want to look at a single score, we can use the get_slice(...) method as follows.
End of explanation
"""
flux.std_dev.shape = (100, 100)
flux.mean.shape = (100, 100)
fission.std_dev.shape = (100, 100)
fission.mean.shape = (100, 100)
fig = plt.subplot(121)
fig.imshow(flux.mean)
fig2 = plt.subplot(122)
fig2.imshow(fission.mean)
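# (Added illustration.) Labeling the two panels makes the side-by-side comparison easier
# to read; plt.subplot returns an Axes object, so set_title is available.
fig.set_title('flux')
fig2.set_title('fission rate')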
"""
Explanation: To get the bins into a form that we can plot, we can simply change the shape of the array since it is a numpy array.
End of explanation
"""
# Determine relative error
relative_error = np.zeros_like(flux.std_dev)
nonzero = flux.mean > 0
relative_error[nonzero] = flux.std_dev[nonzero] / flux.mean[nonzero]
# distribution of relative errors
ret = plt.hist(relative_error[nonzero], bins=50)
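# (Added illustration.) Since the tally arrays were reshaped to the mesh dimensions above,
# the relative error can also be viewed as a 2D map to see where the statistics are weakest.
plt.figure()
plt.imshow(relative_error)
plt.colorbar()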
"""
Explanation: Now let's say we want to look at the distribution of relative errors of our tally bins for flux. First we create a new variable called relative_error and set it to the ratio of the standard deviation and the mean, being careful not to divide by zero in case some bins were never scored to.
End of explanation
"""
sp.source
"""
Explanation: Source Sites
Source sites can be accessed from the source property. As shown below, the source sites are represented as a numpy array with a structured datatype.
End of explanation
"""
sp.source['E']
"""
Explanation: If we want, say, only the energies from the source sites, we can simply index the source array with the name of the field:
End of explanation
"""
# Create log-spaced energy bins from 1 keV to 10 MeV (energies are in MeV)
energy_bins = np.logspace(-3,1)
# Calculate pdf for source energies
probability, bin_edges = np.histogram(sp.source['E'], energy_bins, density=True)
# Make sure integrating the PDF gives us unity
print(sum(probability*np.diff(energy_bins)))
# Plot source energy PDF
plt.semilogx(energy_bins[:-1], probability*np.diff(energy_bins), linestyle='steps')
plt.xlabel('Energy (MeV)')
plt.ylabel('Probability/MeV')
"""
Explanation: Now, we can look at things like the energy distribution of source sites. Note that we don't directly use the matplotlib.pyplot.hist method since our binning is logarithmic.
End of explanation
"""
plt.quiver(sp.source['xyz'][:,0], sp.source['xyz'][:,1],
sp.source['uvw'][:,0], sp.source['uvw'][:,1],
np.log(sp.source['E']), cmap='jet', scale=20.0)
plt.colorbar()
plt.xlim((-0.5,0.5))
plt.ylim((-0.5,0.5))
"""
Explanation: Let's also look at the spatial distribution of the sites. To make the plot a little more interesting, we can also include the direction of the particle emitted from the source and color each source by the logarithm of its energy.
End of explanation
"""
|
ImAlexisSaez/deep-learning-specialization-coursera
|
course_1/week_4/assignment_1/building_your_deep_neural_network_step_by_step_v3.ipynb
|
mit
|
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
"""
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
"""
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
"""
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756]
[-0.00528172 -0.01072969]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.00865408 -0.02301539]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
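A minimal numpy sketch (with made-up 3x3 values, purely to illustrate how the bias column vector is broadcast across the columns of the product):
python
import numpy as np
W = np.arange(1, 10).reshape(3, 3)   # plays the role of the j..r entries
X = np.arange(10, 19).reshape(3, 3)  # plays the role of the a..i entries
b = np.array([[1], [2], [3]])        # the column vector (s, t, u)
Z = np.dot(W, X) + b                 # b is broadcast across all three columns
print(Z.shape)                       # (3, 3)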
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
"""
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
"""
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
"""
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
"""
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -1 / m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1 - Y), np.log(1 - AL)))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
"""
Explanation: <table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.17007265 0.2524272 ]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
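As a quick sanity check of this formula on made-up numbers (not the graded test case):
python
import numpy as np
AL = np.array([[0.8, 0.9, 0.4]])  # made-up predicted probabilities
Y = np.array([[1, 1, 0]])         # made-up labels
m = Y.shape[1]
cost = -np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / m
print(cost)  # roughly 0.28, small because the predictions agree well with the labels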
End of explanation
"""
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1 / m * np.dot(dZ, A_prev.T)
db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l] (i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
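A quick way to catch indexing mistakes in your implementation is to check the output shapes on made-up arrays (using the linear_backward function in this notebook):
python
import numpy as np
np.random.seed(0)
A_prev = np.random.randn(3, 2)  # 3 units in layer l-1, 2 examples
W = np.random.randn(1, 3)
b = np.random.randn(1, 1)
dZ = np.random.randn(1, 2)      # made-up upstream gradient
dA_prev, dW, db = linear_backward(dZ, (A_prev, W, b))
print(dA_prev.shape, dW.shape, db.shape)  # (3, 2) (1, 3) (1, 1)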
End of explanation
"""
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
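The provided helpers are not reproduced in this notebook, but per formula (11) they compute something along these lines (a sketch only; the actual dnn_utils_v2 implementations may differ in details such as how the cache is unpacked):
python
import numpy as np

def relu_backward_sketch(dA, Z):
    # g'(Z) is 1 where Z > 0 and 0 elsewhere
    return dA * (Z > 0)

def sigmoid_backward_sketch(dA, Z):
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)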
End of explanation
"""
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L - 1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation = "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
"""
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
"""
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
learning_rate -- learning rate used in the gradient descent update, scalar
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
"""
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation
"""
|
dbenn/photometry_tools
|
SkyCoordAperturePhotometry.ipynb
|
mit
|
import os
from random import random
# TODO: shouldn't need ordered dictionary now either
from collections import OrderedDict
import numpy as np
import pandas as pd
from astropy.io import fits
from astropy.visualization import astropy_mpl_style
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from matplotlib.patches import Circle
from matplotlib.offsetbox import TextArea, DrawingArea, OffsetImage, AnnotationBbox
plt.style.use(astropy_mpl_style)
%matplotlib inline
from PythonPhot import aper
import requests, math, glob
from photutils import DAOStarFinder
from astropy.stats import mad_std
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
import astropy.units as u
from photutils import aperture_photometry, CircularAperture
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Sky coordinate DSLR aperture photometry yielding untransformed magnitudes
Uses Python 3, astropy, matplotlib, PythonPhot, PhotUtils
Assumes a plate-solved image for RA/Dec determination
Definitions
Imports
End of explanation
"""
def get_ra_and_dec(stars, maglimit):
result = []
for star in stars:
vsp_template = 'https://www.aavso.org/apps/vsp/api/chart/?format=json&fov=10&star={}&maglimit={}'
query = vsp_template.format(star, maglimit)
record = requests.get(query).json()
# assume that first element corresponds to the star
if len(record["photometry"]) != 0 and record["photometry"][0]["auid"] == record["auid"]:
bands = record["photometry"][0]["bands"]
else:
bands = []
result.append({"star":record["star"], "ra":record["ra"], "dec":record["dec"], "bands":bands})
return result
"""
Explanation: Functions
RA and Dec for a list of stars
End of explanation
"""
def extract_all_sources(fits_path, fwhm, source_snr=20):
hdulist = fits.open(fits_path)
data = hdulist[0].data.astype(float)
header = hdulist[0].header
wcs = WCS(header)
bkg_sigma = mad_std(data)
daofind = DAOStarFinder(fwhm=fwhm, threshold=source_snr*bkg_sigma)
sources = daofind(data)
return sources, wcs
"""
Explanation: Extract all sources from image
End of explanation
"""
def get_local_coords(ra_decs, wcs, radius=4):
local_position_map = OrderedDict()
for ra_dec in ra_decs:
star_coord = SkyCoord("{} {}".format(ra_dec['ra'], ra_dec['dec']), unit=(u.hourangle, u.deg))
xy = SkyCoord.to_pixel(star_coord, wcs=wcs, origin=1)
x = xy[0].item(0)
y = xy[1].item(0)
for source in sources:
if(source['xcentroid']-radius <= x <= source['xcentroid']+radius) and \
source['ycentroid']-radius <= y <= source['ycentroid']+radius:
local_position_map[ra_dec["star"]] = (x, y)
return local_position_map
"""
Explanation: Convert RA and Dec to local coordinates
End of explanation
"""
def get_ref_mags_for_band(star_info_list, desired_band):
mags = {}
# target star's band list will be empty
for info in star_info_list:
for band in info["bands"]:
#print("{}: {}".format(info["star"], band))
if band["band"] == desired_band:
mags[info["star"]] = band["mag"]
break
return mags
"""
Explanation: Extract reference (check, comparison) magnitude map from ordered star information list
End of explanation
"""
def multi_file_photometry(fits_root, fits_files, data_index, coords, dataframe,
aperture_radius, inner_sky_radius, outer_sky_radius,
gain=1, zeropoint=0, suffix='.fit'):
for fits_file in fits_files:
fits_file_path = os.path.join(fits_root, fits_file)
hdus = fits.open(fits_file_path)
instr_mags = []
for x, y in coords:
time, mag = aperture_photometry(hdus[data_index], x, y,
aperture_radius, inner_sky_radius, outer_sky_radius,
gain, zeropoint)
instr_mags.append(mag)
dataframe[fits_file[0:fits_file.rindex(suffix)]] = [time] + instr_mags
"""
Explanation: Photometry of a list of FITS files, creating a table of times and instrumental magnitudes
End of explanation
"""
def aperture_photometry(hdu, x, y,
aperture_radius, inner_sky_radius, outer_sky_radius,
gain, zeropoint):
image_data = hdu.data
time = hdu.header[time_name]
mag, magerr, flux, fluxerr, sky, skyerr, badflag, outstr = \
aper.aper(image_data, x, y, phpadu=gain,
apr=aperture_radius, zeropoint=zeropoint,
skyrad=[inner_sky_radius, outer_sky_radius],
exact=True)
return time, mag[0]
"""
Explanation: Single image+coordinate photometry, returning a time and instrumental magnitude
Invoked by multi_file_photometry()
End of explanation
"""
def show_image(image_data, coord_map, aperture_size, annotate=True, vmin=10, vmax=200, figx=20, figy=10):
fig = plt.figure(figsize=(figx, figy))
plt.imshow(image_data, cmap='gray', vmin=vmin, vmax=vmax)
plt.gca().invert_yaxis()
plt.colorbar()
if annotate:
for designation in coord_map:
xy = coord_map[designation]
annotate_image(fig.axes[0], designation, xy, aperture_size)
plt.show()
"""
Explanation: Display an image with target and reference stars annotated, to sanity check local coordinates
End of explanation
"""
def annotate_image(axis, designation, xy, aperture_size):
axis.plot(xy[0], xy[1], 'o', markersize=aperture_size,
markeredgecolor='r', markerfacecolor='none',
markeredgewidth=2)
offsetbox = TextArea(designation, minimumdescent=False)
ab = AnnotationBbox(offsetbox, xy,
xybox=(-20, 40+random()*10-10),
xycoords='data',
boxcoords="offset points",
arrowprops=dict(arrowstyle="->"))
axis.add_artist(ab)
"""
Explanation: Annotate plot axis with coordinate positions and designations
Invoked by show_image()
End of explanation
"""
def standardised_magnitudes(instr_mag_df_trans, star_names, row_names, catalog_mags):
# exclude target star and check star to get list of possible comparison star names
comp_names = star_names[2:]
# obtain available comparison star names and magnitudes, ignoring any star not in catalog
avail_comp_names = [name for name in comp_names if name in catalog_mags.keys()]
avail_comp_mags = [catalog_mags[name] for name in comp_names if name in catalog_mags.keys()]
target_name = star_names[0]
#print(avail_comp_names, avail_comp_mags, target_name)
std_mags = np.array([])
for row_name in row_names:
# get instrumental magnitudes for the current row of data and compute
# standardised magnitude of the target star
comp_instr_mags = [instr_mag_df_trans.loc[row_name][comp_name] for comp_name in avail_comp_names]
target_mag = standardised_magnitude(instr_mag_df_trans.loc[row_name][target_name],
np.array(comp_instr_mags),
np.array(avail_comp_mags))
# collect standardised magnitudes for each row
std_mags = np.append(std_mags, target_mag)
# TODO: also compute/return check star mags and look at std error; is that what spreadsheet uses?
return std_mags
"""
Explanation: Compute standardised magnitudes given a data frame of all instrumental magnitudes, a list of all star names, a list of row names of interest (e.g. stk-median-g*) in the instrumental magnitude data frame, and a dictionary of comparison star magnitudes
End of explanation
"""
def standardised_magnitude(target_instr_mag, comp_instr_mags, catalog_comp_mags):
deltas = target_instr_mag - comp_instr_mags
mags = deltas + catalog_comp_mags
return mags.mean()
"""
Explanation: Compute standardised magnitude given target's instrumental magnitude, a numpy array of comparison star instrumental magnitudes and catalog comparison star magnitudes
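As a toy illustration with invented numbers: a target instrumental magnitude of -5.0 and two comparison stars with instrumental magnitudes -4.0 and -3.5 and catalog magnitudes 10.0 and 10.5 each imply a target magnitude of 9.0, so the ensemble mean is 9.0.
python
import numpy as np
target_instr = -5.0
comp_instr = np.array([-4.0, -3.5])    # invented instrumental magnitudes
comp_catalog = np.array([10.0, 10.5])  # invented catalog magnitudes
print(((target_instr - comp_instr) + comp_catalog).mean())  # 9.0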
End of explanation
"""
def write_webobs_file(path, obscode, cal_software, target, check, airmass, results, chart_id, comment):
header_template = """#TYPE=EXTENDED
#OBSCODE={0}
#SOFTWARE={1}, Python scripts, Jupyter notebook
#DELIM=,
#DATE=JD
#OBSTYPE=DSLR
#NAME,DATE,MAG,MERR,FILT,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AMASS,GROUP,CHART,NOTES
"""
if type(airmass) is float:
airmass = "{0:1.6f}".format(airmass)
result_template = "{0},{1:1.6f},{2:1.6f},{3:1.6f},{4},NO,STD,ENSEMBLE,NA,{5},{6:1.6f},{7},NA,{8},{9}\n"
with open(path, "w") as webobs:
webobs.write(header_template.format(obscode, cal_software))
for result in results:
jd, mag, mag_err, band, check_instr_mag = result
webobs.write(result_template.format(target, jd, mag, mag_err, band, check, check_instr_mag,
airmass, chart_id, comment))
"""
Explanation: Write AAVSO Extended Upload Format file suitable for upload to WebObs
the results parameter is a list of tuples containing jd, mag, mag_err, band, check, check_instr_mag for each photometry result
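A minimal usage sketch with invented values, just to show the expected shape of each argument (the real call with this notebook's variables appears further below):
python
results = [(2457000.5, 4.567890, 0.012345, "TB", -5.432100),
           (2457000.5, 4.321098, 0.010000, "TG", -5.210987)]
write_webobs_file("/tmp/webobs-example.csv", "XYZ", "IRIS", "eta Car", "000-BBR-533",
                  "NA", results, "X15962DX", "example comment")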
End of explanation
"""
# Output file directory
output_file_root = "/Users/david/aavso/dslr-photometry/working"
# WebObs file
webobs_file = "webobs.csv"
# Instrumental magnitude output file path
instr_mag_csv_file = "instr_mags.csv"
# FITS file directory
fits_root = "/Users/david/aavso/dslr-photometry/working"
# Plate-solved FITS file name
wcs_file = "stk-median-g1-wcs.fit"
# B, G, and R FITS file prefixes to identify files,
# e.g. stk-median-g matches stk-median-g1.fit, stk-median-g2.fit, ...
fits_prefixes = ["stk-median-b", "stk-median-g", "stk-median-r"]
# FITS file data HDU index
data_index = 0
# Time column name
time_name = "JD"
"""
Explanation: Inputs
Change these to suit your environment
File settings
End of explanation
"""
names = ["eta Car","000-BBR-533","000-BBR-603","000-BBS-066","000-BBR-573","000-BBR-998","000-BBR-795","000-BBR-563"]
"""
Explanation: Names or AUIDs for target and comparison stars
End of explanation
"""
maglimit = 7
"""
Explanation: Magnitude limit for comparison star lookups
End of explanation
"""
# FWHM (e.g. from PSF in IRIS)
fwhm = 6
# Aperture radii
measurement_aperture = 9
inner_sky_annulus = 12
outer_sky_annulus = 20
# ph/ADU
# Note: PythonPhot's aperture photometry function takes a phadu parameter.
# Assumption: this is photons/ADU or e-/ADU, i.e. gain.
gain=1.67
"""
Explanation: Aperture radii and gain
End of explanation
"""
target_comp_ra_dec = get_ra_and_dec(names, maglimit=maglimit)
# Question: why does 000-BBR-563 have no bands?
target_comp_ra_dec
"""
Explanation: Outputs
Obtain RA and Dec for selected AUIDs
End of explanation
"""
sources, wcs = extract_all_sources(wcs_file, fwhm=fwhm)
sources
"""
Explanation: Extract all sources from plate-solved image
End of explanation
"""
position_map = get_local_coords(target_comp_ra_dec, wcs)
position_map
"""
Explanation: Convert RA and Dec to local coordinates
End of explanation
"""
files = os.listdir(fits_root)
fits_files = []
for fits_prefix in fits_prefixes:
fits_files += sorted([file for file in files if fits_prefix in file and file.find("wcs") == -1])
"""
Explanation: Find B, G, R files in the FITS file directory
End of explanation
"""
fits_file = fits_files[5]
print(fits_file)
hdus = fits.open(os.path.join(fits_root, fits_file))
image_data = hdus[data_index].data
median = np.median(image_data)
show_image(image_data, position_map, measurement_aperture, annotate=True, vmin=10, vmax=median*4)
"""
Explanation: Aperture location sanity check by visual inspection
Arbitrarily choose the first G FITS file
End of explanation
"""
# Create empty table with time and object headers
pd.options.display.float_format = '{:,.6f}'.format
instr_mag_df = pd.DataFrame()
names = [name for name in position_map]
instr_mag_df['name'] = [time_name] + names
instr_mag_df.set_index('name', inplace=True)
# Carry out photometry on B, G, R FITS files, yielding instrumental magnitudes
positions = position_map.values()
multi_file_photometry(fits_root, fits_files, data_index, positions, instr_mag_df,
measurement_aperture, inner_sky_annulus, outer_sky_annulus, gain)
# Save photometry table as CSV
instr_mag_csv_path = os.path.join(output_file_root, instr_mag_csv_file)
instr_mag_df.T.to_csv(instr_mag_csv_path)
# Display photometry table
instr_mag_df.T
"""
Explanation: Aperture photometry, yielding instrumental magnitudes
End of explanation
"""
b_row_names = [row_name for row_name in instr_mag_df.T.index if "-b" in row_name]
g_row_names = [row_name for row_name in instr_mag_df.T.index if "-g" in row_name]
r_row_names = [row_name for row_name in instr_mag_df.T.index if "-r" in row_name]
catalog_v_mags = get_ref_mags_for_band(target_comp_ra_dec, "V")
tg = standardised_magnitudes(instr_mag_df.T, names, g_row_names, catalog_v_mags)
tg.mean(), np.median(tg), tg.std()
catalog_b_mags = get_ref_mags_for_band(target_comp_ra_dec, "B")
tb = standardised_magnitudes(instr_mag_df.T, names, b_row_names, catalog_b_mags)
tb.mean(), np.median(tb), tb.std()
obscode = "BDJB"
cal_software = "IRIS"
target = names[0]
check = names[1]
airmass = "NA" # TODO: compute (look at AAVSO spreadsheet)
chart_id = "X15962DX"
comment = "Canon 1100D; 100mm; ISO 100; f2.0; 5 sec x 20 images median stacked in groups of 5"
jd = instr_mag_df.T.iloc[0]["JD"]
check_instr_b = instr_mag_df.T.loc[b_row_names][check].mean()
check_instr_g = instr_mag_df.T.loc[g_row_names][check].mean()
results = [(jd, tb.mean(), tb.std(), "TB", check_instr_b),
(jd, tg.mean(), tg.std(), "TG", check_instr_g)]
webobs_path = os.path.join(output_file_root, webobs_file)
write_webobs_file(webobs_path, obscode, cal_software, target, check, airmass, results, chart_id, comment)
# Questions:
# - is mean or median best per T[BGR] row?
# - std() or some other std dev function (e.g. population vs sample)
# - how to compute R; use catalog B-V, V-R? may just want to report TG, TB
# - is there a role for linear regression here or only for transformation coefficients?
# - can/should we do airmass correction independent of transformation?
"""
Explanation: Differential Photometry and Standardised Magnitude
End of explanation
"""
|
jessicaowensby/We-Rise-Keras
|
notebooks/05_Transfer_Learning.ipynb
|
apache-2.0
|
from __future__ import print_function
import datetime
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from keras import backend as K
import numpy as np
now = datetime.datetime.now
batch_size = 128
num_classes = 5
epochs = 5
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
filters = 32
# size of pooling area for max pooling
pool_size = 2
# convolution kernel size
kernel_size = 3
"""
Explanation: Fine Tuning Example
Transfer learning example:
1- Train a simple convnet on the first 5 digits [0..4] of the MNIST dataset.
2- Freeze convolutional layers and fine-tune dense layers
for the classification of digits [5..9].
Get to 99.8% test accuracy after 5 epochs
for the first five digits classifier
and 99.2% for the last five digits after transfer + fine-tuning.
End of explanation
"""
if K.image_data_format() == 'channels_first':
input_shape = (1, img_rows, img_cols)
else:
input_shape = (img_rows, img_cols, 1)
def train_model(model, train, test, num_classes):
x_train = train[0].reshape((train[0].shape[0],) + input_shape)
x_test = test[0].reshape((test[0].shape[0],) + input_shape)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(train[1], num_classes)
y_test = keras.utils.to_categorical(test[1], num_classes)
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
t = now()
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
print('Training time: %s' % (now() - t))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Keras Configs
<code>~/.keras/keras.json</code>
Specify whether you will use Theano or TensorFlow as the backend, numerical options, the image channel ordering ('channels_first' or 'channels_last'), and more.
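For example, a typical configuration looks roughly like this (the exact defaults vary between Keras versions):
python
import json
# Representative ~/.keras/keras.json contents (adjust to your own setup)
config = {
    "image_data_format": "channels_last",  # or "channels_first"
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"                # or "theano"
}
print(json.dumps(config, indent=4))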
End of explanation
"""
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# create two datasets one with digits below 5 and one with 5 and above
x_train_lt5 = x_train[y_train < 5]
y_train_lt5 = y_train[y_train < 5]
x_test_lt5 = x_test[y_test < 5]
y_test_lt5 = y_test[y_test < 5]
x_train_gte5 = x_train[y_train >= 5]
y_train_gte5 = y_train[y_train >= 5] - 5
x_test_gte5 = x_test[y_test >= 5]
y_test_gte5 = y_test[y_test >= 5] - 5
"""
Explanation: the data, shuffled and split between train and test sets
End of explanation
"""
feature_layers = [
Conv2D(filters, kernel_size,
padding='valid',
input_shape=input_shape),
Activation('relu'),
Conv2D(filters, kernel_size),
Activation('relu'),
MaxPooling2D(pool_size=pool_size),
Dropout(0.25),
Flatten(),
]
classification_layers = [
Dense(128),
Activation('relu'),
Dropout(0.5),
Dense(num_classes),
Activation('softmax')
]
# create complete model
model = Sequential(feature_layers + classification_layers)
# train model for 5-digit classification [0..4]
train_model(model,
(x_train_lt5, y_train_lt5),
(x_test_lt5, y_test_lt5), num_classes)
# freeze feature layers and rebuild model
for l in feature_layers:
l.trainable = False
# transfer: train dense layers for new classification task [5..9]
train_model(model,
(x_train_gte5, y_train_gte5),
(x_test_gte5, y_test_gte5), num_classes)
"""
Explanation: define two groups of layers: feature (convolutions) and classification (dense)
End of explanation
"""
|
claudiuskerth/PhDthesis
|
Data_analysis/SNP-indel-calling/dadi/dadiExercises/example_YRI_CEU.ipynb
|
mit
|
%pwd
import dadi
"""
Explanation: Table of Contents
<p><div class="lev2 toc-item"><a data-toc-modified-id="Load-dadi-module-01" href="#Load-dadi-module"><span class="toc-item-num">0.1 </span>Load dadi module</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Import-custom-models-02" href="#Import-custom-models"><span class="toc-item-num">0.2 </span>Import custom models</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Load-the-data-03" href="#Load-the-data"><span class="toc-item-num">0.3 </span>Load the data</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Set-grid-size-for-extrapolation-04" href="#Set-grid-size-for-extrapolation"><span class="toc-item-num">0.4 </span>Set grid size for extrapolation</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Create-demographic-model-function-05" href="#Create-demographic-model-function"><span class="toc-item-num">0.5 </span>Create demographic model function</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Set-parameter-bounds-and-initial-values-06" href="#Set-parameter-bounds-and-initial-values"><span class="toc-item-num">0.6 </span>Set parameter bounds and initial values</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Optimisation-07" href="#Optimisation"><span class="toc-item-num">0.7 </span>Optimisation</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Analysis-of-optimisation-result-08" href="#Analysis-of-optimisation-result"><span class="toc-item-num">0.8 </span>Analysis of optimisation result</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Simulation-from-estimated-model-09" href="#Simulation-from-estimated-model"><span class="toc-item-num">0.9 </span>Simulation from estimated model</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Parameter-uncertainty-010" href="#Parameter-uncertainty"><span class="toc-item-num">0.10 </span>Parameter uncertainty</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Folded-data-011" href="#Folded-data"><span class="toc-item-num">0.11 </span>Folded data</a></div><div class="lev2 toc-item"><a data-toc-modified-id="Likelihood-Ratio-Test-(LRT)-between-models-012" href="#Likelihood-Ratio-Test-(LRT)-between-models"><span class="toc-item-num">0.12 </span>Likelihood Ratio Test (LRT) between models</a></div>
## Load dadi module
End of explanation
"""
import sys
sys.path # get the PYTHONPATH variable
sys.path.insert(0, '/home/claudius/Downloads/dadi') # add path to dadi at beginning of list
sys.path
import dadi, numpy
"""
Explanation: I have not installed dadi globally on huluvu. Instead, I left it in my Downloads directory '/home/claudius/Downloads/dadi'. In order for Python to find that module, I need to add that directory to the PYTHONPATH variable.
End of explanation
"""
% ll /home/claudius/Downloads/dadi/examples/YRI_CEU/
"""
Explanation: Import custom models
I am going to analyise and example data set that comes with dadi.
End of explanation
"""
! head /home/claudius/Downloads/dadi/examples/YRI_CEU/demographic_models.py
"""
Explanation: The Python script YRI_CEU.py contains example code for the analysis of these two human population samples.
End of explanation
"""
%quickref
# insert path where file with model functions resides
sys.path.insert(0, '/home/claudius/Downloads/dadi/examples/YRI_CEU/')
# load custom demographic model functions into current namespace
import demographic_models
"""
Explanation: The file demographic_models.py contains function definitions for custom demographic models.
End of explanation
"""
% ll /home/claudius/Downloads/dadi/examples/YRI_CEU/
! cat /home/claudius/Downloads/dadi/examples/YRI_CEU/YRI_CEU.fs
"""
Explanation: Load the data
End of explanation
"""
# read in the unfolded 2D SFS from file
data = dadi.Spectrum.from_file('/home/claudius/Downloads/dadi/examples/YRI_CEU/YRI_CEU.fs')
ns = data.sample_sizes
ns
"""
Explanation: This is a 2D SFS in the format understood by dadi. It pertains to the two human populations YRI and CEU. Ten individuals (i.e. 20 gene copies) of each population have been sampled. The first 21 numbers should be [YRI: 0, CEU: 0-20]. The following 21 numbers should be [YRI: 1, CEU: 0-20] and so on.
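Each entry is therefore indexed by the derived allele count in each population. For example, with the Spectrum loaded above:
python
# data[i, j] is the count of SNPs with the derived allele observed
# in i of the 20 YRI gene copies and j of the 20 CEU gene copies
print(data[1, 0])  # singletons private to YRI
print(data[0, 1])  # singletons private to CEU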
End of explanation
"""
# flatten the array and get its length
len([ elem for row in data.data for elem in row ])
21*21
"""
Explanation: Number of samples (or sample size) refers to the number of sampled gene copies.
End of explanation
"""
%quickref
# print the docstring of the fs object
%pdoc data
%pinfo data
# this prints the source code for the object
%psource data
"""
Explanation: The Spectrum is a 21 $\times$ 21 matrix.
End of explanation
"""
pts_l = [40, 50, 60]
"""
Explanation: Set grid size for extrapolation
End of explanation
"""
%psource demographic_models.prior_onegrow_mig
"""
Explanation: dadi will solve the partial differential equation at these three grid sizes and then extrapolate to an infinitely fine grid.
Create demographic model function
The demographic model we'll use is defined in the function demographic_models.prior_onegrow_mig. Let's have a look at its definition:
End of explanation
"""
func = demographic_models.prior_onegrow_mig
"""
Explanation: The function first records the required grid size.
It then initialises the phi distribution with this grid.
It then specifies a stepwise change in population size of the ancestral population. This initial ancestral population is implicitly taken as the reference population. The population size parameters nu1F, nu2F and nu2B are relative to the population size of this initial ancestral population, which is set to 1.
That means that if nu1F is greater than 1, the model specifies a stepwise increase in population size. The population
stays at this size for a time Tp, which is specified in units of $2N_{ref}$ generations.
Next the model function specifies a population split. One of the daughter populations has the same population size as the ancestral population (presumably the African one). The other daughter population starts at a population size of nu2B and then exponentially increases in size to nu2F. During this time of divergence T, the two populations exchange gene copies at a rate m in each direction.
Finally, Spectrum.from_phi converts the phi distribution (obtained by solving the partial differential equation with the specified parameter values) into the expected SFS under the model, which is returned.
End of explanation
"""
func_ex = dadi.Numerics.make_extrap_log_func(func)
"""
Explanation: Next we turn the model function into a version that can do the extrapolation.
End of explanation
"""
%psource func
"""
Explanation: The func_ex is the function we are going to use for optimisation.
Set parameter bounds and initial values
The model function takes a list of 6 parameters as its first argument. See the docstring for their description.
End of explanation
"""
upper_bounds = [100, 100, 100, 10, 3, 3]
"""
Explanation: It is necessary to confine the search space to reasonable values.
End of explanation
"""
lower_bounds = [1e-2, 1e-2, 1e-2, 0, 0, 0]
"""
Explanation: This specifies that the maximal size that the ancestral population can grow to (nu1F) is 100$\times$ its initial size. Similarly, the maximal time the ancestral population stays at this size before it splits into two populations is 3$\times 2N_{ref}$. Note that this time is 3 times the expected time to the MRCA for a sample of $n$ gene copies when $n$ is very large and under the standard neutral model (see p. 76 in Wakeley2009).
End of explanation
"""
# define starting values for model parameters
p0 = [2, 0.1, 2, 1, 0.2, 0.2]
"""
Explanation: The lower bounds of the population size parameters are set to 1/100th of the reference population size, and the lower bounds of the migration rate and time parameters are set to 0.
End of explanation
"""
%psource dadi.Misc.perturb_params
p0 = dadi.Misc.perturb_params(p0, upper_bound=upper_bounds, lower_bound=lower_bounds, fold=1.5)
"""
Explanation: Since the optimisation algorithms are not guaranteed to find the global optimum, it is important to run several optimisations for each data set, each with different starting values.
End of explanation
"""
p0
"""
Explanation: The naming of the function argument for the "number of factors to disturb by" is very unfortunate in this context. Anyway, a higher value leads to greater perturbation.
End of explanation
"""
popt_1 = dadi.Inference.optimize_log(p0, data, func_ex, pts_l, \
lower_bound=lower_bounds, upper_bound=upper_bounds, \
verbose=10, maxiter=10)
p_names = ("nu1F", "nu2B", "nu2F", "m", "Tp", "T")
for n,p in zip(p_names, popt_1):
print str(n) + "\t" + "{0:.3f}".format(p)
"""
Explanation: Optimisation
End of explanation
"""
# define starting values for model parameters
p0 = [2, 0.1, 2, 1, 0.2, 0.2]
# create new starting values for parameters
p0 = dadi.Misc.perturb_params(p0, upper_bound=upper_bounds, lower_bound=lower_bounds, fold=1.5)
# run optimisation with 10 iterations
popt_2 = dadi.Inference.optimize_log(p0, data, func_ex, pts_l, \
lower_bound=lower_bounds, upper_bound=upper_bounds, \
verbose=10, maxiter=10)
for n, p1, p2 in zip(p_names, popt_1, popt_2):
print str(n) + "\t" + "{0:.3f}".format(p1) + "\t" + "{0:.3f}".format(p2)
# define starting values for model parameters
p0 = [2, 0.1, 2, 1, 0.2, 0.2]
# create new starting values for parameters
p0 = dadi.Misc.perturb_params(p0, upper_bound=upper_bounds, lower_bound=lower_bounds, fold=1.5)
# run optimisation with 10 iterations
popt_3 = dadi.Inference.optimize_log(p0, data, func_ex, pts_l, \
lower_bound=lower_bounds, upper_bound=upper_bounds, \
verbose=10, maxiter=10)
for n, p1, p2, p3 in zip(p_names, popt_1, popt_2, popt_3):
print str(n) + "\t" + "{0:.3f}".format(p1) + "\t" + "{0:.3f}".format(p2) + "\t" + "{0:.3f}".format(p3)
"""
Explanation: Let's see how robust these estimates are to different starting values.
End of explanation
"""
# best fit parameter values (from YRY_CEU.py)
popt = [1.881, 0.0710, 1.845, 0.911, 0.355, 0.111]
# get best-fit model AFS
model = func_ex(popt, ns, pts_l)
model
model.data.sum()
"""
Explanation: With just 10 iterations, the optimisation does not seem to converge for all parameters.
Analysis of optimisation result
End of explanation
"""
# Log likelihood of the data given the model
ll = dadi.Inference.ll_multinom(model, data)
ll
%psource dadi.Inference.ll_multinom
# the optimal value of theta0 given the model
theta0 = dadi.Inference.optimal_sfs_scaling(model, data)
theta0
import pylab
%matplotlib inline
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
# plot a comparison of the model SFS with the SFS from the data
dadi.Plotting.plot_2d_comp_multinom(model, data, vmin=1, resid_range=3, pop_ids=("YRI", "CEU"))
# print the docstring of the function
%pdoc dadi.Plotting.plot_2d_comp_multinom
"""
Explanation: I do not understand what is in this model spectrum. I thought it would be expected proportions, so that the sum across the spectrum would be 1. I think these are expected counts (not proportions) assuming a $\theta$ of 1.
End of explanation
"""
# generate the core of a ms command with the optimised model parameter values
mscore = demographic_models.prior_onegrow_mig_mscore(popt)
# generate full ms command
mscommand = dadi.Misc.ms_command(1., ns, mscore, int(1e5))
"""
Explanation: Simulation from estimated model
The following requires that ms is installed.
End of explanation
"""
mscommand
import os
return_code = os.system('{0} > test.msout'.format(mscommand))
% ll
msdata = dadi.Spectrum.from_ms_file('test.msout')
dadi.Plotting.plot_2d_comp_multinom(model, theta0*msdata, vmin=1, pop_ids=['YRI', 'CEU'])
"""
Explanation: Note, the ms command specifies a $\theta$ of 1 for better efficiency. The simulated spectra can be rescaled later with the theta0 from above.
End of explanation
"""
# the examples directory contains site frequency spectra from bootstraps
% ll examples/YRI_CEU/bootstraps/
# load spectra from bootstraps of the data into an array
all_boot = [ dadi.Spectrum.from_file('examples/YRI_CEU/bootstraps/{0:02d}.fs'.format(i)) for i in range(100) ]
print ['{0:02d}.fs'.format(i) for i in range(100)]
%%time
uncerts = dadi.Godambe.GIM_uncert(func_ex, pts_l, all_boot, popt, data, multinom=True)
print 'Estimated parameter standard deviations from GIM: {0}'.format(uncerts)
"""
Explanation: The spectrum simulated with ms (averaged across iterations, I believe) is almost identical to the model spectrum. This confirms that $\delta$a$\delta$i's deterministic approximation is very good. One could now compare the ms simulated spectra to the observed spectrum.
Parameter uncertainty
In order to obtain confidence intervals for the parameter estimates, one needs to create conventional bootstraps over unlinked loci, i. e. over contigs instead of nucleotide sites. From these bootstrapped data sets one can generate site frequency spectra and estimate model parameters as for the full observed data. However, this is computationally expensive. A more efficient alternative is calculating the Godambe Information Matrix (GIM) from the bootstrapped data sets (see Coffman2016 for details).
End of explanation
"""
# These are the optimal parameters when the spectrum is folded. They can be
# found simply by passing data.fold() to the above call to optimize_log.
popt_fold = numpy.array([1.907, 0.073, 1.830, 0.899, 0.425, 0.113])
# get standard deviations for model parameters
uncerts_folded = dadi.Godambe.GIM_uncert(func_ex, pts_l, all_boot, popt_fold, data.fold(), multinom=True)
print 'Folding increases parameter uncertainties by factors of: {}'.format(uncerts_folded/uncerts)
"""
Explanation: Folded data
End of explanation
"""
# the model without migration is also defined in the demographic_models script
func_nomig = demographic_models.prior_onegrow_nomig
func_ex_nomig = dadi.Numerics.make_extrap_log_func(func_nomig)
# these are the best-fit parameters for the model without migration,
# as provided in YRI_CEU.py
popt_nomig = numpy.array([ 1.897, 0.0388, 9.677, 0.395, 0.070])
# get the expected AFS from the model without migration
model_nomig = func_ex_nomig(popt_nomig, ns, pts_l)
# get the likelihood of the data given the model without migration
ll_nomig = dadi.Inference.ll_multinom(model_nomig, data)
print 'The log likelihood of the model with migration was: {0:.1f}'.format(ll)
print 'The log likelihood of the model without migration is: {0:.1f}'.format(ll_nomig)
"""
Explanation: Outgroup information greatly increases power!
Likelihood Ratio Test (LRT) between models
The following will compare the model with migration with a model without migration, thus testing whether the inferred migration rate is significantly different from 0.
End of explanation
"""
p_lrt = popt
p_lrt[3] = 0
print p_lrt
print popt
# the first line just creates a reference, not a copy
# best fit parameter values for the model with migration (from YRY_CEU.py)
popt = [1.881, 0.0710, 1.845, 0.911, 0.355, 0.111]
p_lrt = popt[:] # copy parameter list
p_lrt[3] = 0
print p_lrt
print popt
"""
Explanation: The more complex model with migration (one parameter more) has a greater likelihood, as expected. But is that difference significant, or is the extra parameter just better able to fit noise in the data?
End of explanation
"""
adj = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, p_lrt, data, nested_indices=[3], multinom=True)
D_adj = adj * 2 * (ll - ll_nomig)
print 'Adjusted D statistic: {0:.4f}'.format(D_adj)
"""
Explanation: We need to calculate an adjustment factor for the likelihood ratio statistic, which corrects for non-independence between sites, e.g. due to linkage (see Coffman2016).
End of explanation
"""
pval = dadi.Godambe.sum_chi2_ppf(D_adj, weights=(0.5, 0.5))
print 'p-val for rejecting the no-migration model: {0:.4f}'.format(pval)
"""
Explanation: Verbatim from YRI_CEU.py:
"Because this is test of a parameter on the boundary of parameter space (m cannot be less than zero), our null distribution is an even proportion of chi^2 distributions with 0 and 1 d.o.f. To evaluate the p-value, we use the point percent function for a weighted sum of chi^2 dists."
See also the manual and Coffman2016.
End of explanation
"""
|
kingmolnar/DataScienceProgramming
|
02-Python/Introduction_class.ipynb
|
cc0-1.0
|
x = 7+3
print x, type(x)
print x
x = (x+5)*0.5
print x, type(x)
type(x)
"""
Explanation: <p style="text-align:right;color:red;font-weight:bold;font-size:16pt;padding-bottom:20px">Please, rename this notebook before editing!</p>
The Programming Language Python
References
Here are some references to freshen up on concepts:
- Self-paced online tutorials
- CodeAcademy (13h estimated time) https://www.codecademy.com/tracks/python
- Brief overview with live examples https://www.learnpython.org/en/Welcome
Books
Python for Everybody (HTML, PDF, Kindle) https://www.py4e.com/book
Python Practice Book http://anandology.com/python-practice-book/index.html
Learning Python (free, requires registration to download) https://www.packtpub.com/packt/free-ebook/learning-python
Python 2 vs Python 3
While there are a number of major differences between the versions the majority of libraries and tools that we are concerned with operate on both. Most changes in Python 3 concern the internal workings and performance. Though, there are some syntax changes, and some operations behave differently. This pages offers a comprehensive look at the key changes: http://sebastianraschka.com/Articles/2014_python_2_3_key_diff.html
We provide both versions on our cluster: the binaries python and python2 for Python 2.7, and python3 for version 3.4.
All assignments are expected to run on Python 2.7
Resources & and Important Links
Official web-site of the Python Software Foundation https://www.python.org/
API Refernce https://docs.python.org/2.7/
StackOverflow https://stackoverflow.com/questions/tagged/python
The following has been adopted from https://www.learnpython.org/
Variables and Types
One of the conveniences, and also one of the pitfalls, is that Python does not require you to explicitly declare variables before using them, as many other programming languages do. Python is not statically typed, but rather follows the object-oriented paradigm. Every variable in Python is an object.
However, the values that variables hold have a designated data type.
This tutorial will go over a few basic types of variables.
Numbers
Python supports two types of numbers - integers and floating point numbers. Basic arithmetic operations yield different results for integers and floats. Special attention needs to be given when mixing values of different types within expressions; the results might be unexpected.
To define an integer, use the following syntax:
End of explanation
"""
myfloat = 7.0
print myfloat, type(myfloat)
myfloat = float(42)
print myfloat, type(myfloat)
"""
Explanation: To define a floating point number, you may use one of the following notations:
End of explanation
"""
(numerator, denominator) = 3.5.as_integer_ratio()
print numerator, denominator, numerator/denominator, 1.0*numerator/denominator
"""
Explanation: Arithmetic operations
We can use the arithmetic operations that are common in many programming languages.
- +, -, *, /
- // is integer (floor) division, even if the operands aren't integers
- x**y is used for $x^y$
- n % k calculates the remainder (modulo) of the integer division of n by k
Try it out!
End of explanation
"""
mystring = 'hello'
print(mystring)
mystring = "hello"
print(mystring)
"""
Explanation: Strings
Strings are defined with either single quotes or double quotes; many other languages interpret the two differently.
End of explanation
"""
my_String99 = 'Don\'t worry about apostrophes'
print(my_String99)
"""
Explanation: The difference between the two is that using double quotes makes it easy to include apostrophes (whereas these would terminate the string if using single quotes)
End of explanation
"""
3*"u"
"""
Explanation: Operators
Some of the arithmetic operators can be applied to strings, though they have a different interpretation:
- + will concatenate two strings
- * multiplies a string with an integer, i.e. the result is that many copies of the original string.
- % has a very special purpose to fill in values into strings
Python provides a large number of operations to manipulate text strings. Examples are given at https://www.tutorialspoint.com/python/python_strings.htm
For the complete documentation refer to https://docs.python.org/2/library/string.html
End of explanation
"""
7.0/13.0
print "The magic number is %.2f!" % (7.0/13.0)
int("3")
float("6.3")
int(str(8)*8)/6
"""
Explanation: String Formatting
Python uses C-style string formatting to create new, formatted strings. The "%" operator is used to format a set of variables enclosed in a "tuple" (a fixed size list), together with a format string, which contains normal text together with "argument specifiers", special symbols like "%s" and "%d".
Some basic argument specifiers you should know:
%s - String (or any object with a string representation, like numbers)
%d - Integers
%f - Floating point numbers
%.<number of digits>f - Floating point numbers with a fixed amount of digits to the right of the dot.
%x/%X - Integers in hex representation (lowercase/uppercase)
End of explanation
"""
mylist = [1, 2, "three", ("a", 7)]
print len(mylist)
print mylist
mylist[0]
mylist + [7, 8, 8]
mylist * 2
"""
Explanation: Lists
Lists are a construct for holding multiple objects or values, possibly of different types. We can dynamically add, replace, or remove elements from a list.
Usually we iterate through a list in order to perform some operation, though we can also address a specific element by its position (index) in the list.
The + and * operators work on lists in a similar way as they do on strings.
Complete documentation at https://docs.python.org/2/tutorial/datastructures.html
End of explanation
"""
power_of_twos = [2**k for k in xrange(0,10)]
print power_of_twos
[k for k in xrange(0,10)]
[i*j for i in xrange(1,11) for j in xrange(1,11)]
[ [i*j for i in xrange(1,11) ] for j in xrange(1,11)]
"""
Explanation: List Comprehension
This technique comes in handy and is often used.
End of explanation
"""
x = 2.0001
print(x == 2) # prints out True
print(x == 3) # prints out False
print(x < 3) # prints out True
"""
Explanation: Conditions
Python uses boolean variables to evaluate conditions. The boolean values True and False are returned when an expression is compared or evaluated.
Notice that variable assignment is done using a single equals operator "=", whereas comparison between two variables is done using the double equals operator "==". The "not equals" operator is marked as "!=".
End of explanation
"""
name = "John"
age = 23
if name == "John" and age == 23:
print("Your name is John, and you are also 23 years old.")
if name == "John" or name == "Rick":
print("Your name is either John or Rick.")
"""
Explanation: The "and", "or" and "not" boolean operators allow building complex boolean expressions, for example:
End of explanation
"""
name = "John"
if name in ["John", "Rick"]:
print("Your name is either John or Rick.")
"""
Explanation: The "in" operator could be used to check if a specified object exists within an iterable object container, such as a list:
End of explanation
"""
x = 3
if x == 2:
print("x equals two!")
print("x equals two! ... again")
else:
print("x does not equal to two.")
print "done?"
"""
Explanation: Python uses indentation to define code blocks, instead of brackets. The standard Python indentation is 4 spaces, although tabs and any other space size will work, as long as it is consistent. Notice that code blocks do not need any termination.
Here is an example for using Python's "if" statement using code blocks:
if <statement is true>:
<do something>
....
....
elif <another statement is true>: # else if
<do something else>
....
....
else:
<do another thing>
....
....
End of explanation
"""
# Prints out the numbers 0,1,2,3,4
for x in range(5):
print(x)
# Prints out 3,4,5
for x in range(3, 6):
print(x)
# Prints out 3,5,7
for x in range(3, 8, 2):
print(x)
"""
Explanation: A statement is evaluated as true if one of the following holds:
1. The "True" boolean value is given, or is calculated using an expression such as an arithmetic comparison.
2. An object which is not considered "empty" is passed.
Here are some examples of objects which are considered empty (a short demonstration appears below):
1. An empty string: ""
2. An empty list: []
3. The number zero: 0
4. The false boolean variable: False
Loops
There are two types of loops in Python, for and while.
The "for" loop
For loops iterate over a given sequence. Here is an example:
primes = [2, 3, 5, 7]
for prime in primes:
print(prime)
For loops can iterate over a sequence of numbers using the range and xrange functions. The difference between range and xrange is that the range function returns a new list with numbers of that specified range, whereas xrange returns an iterator, which is more efficient. (Python 3 uses the range function, which acts like xrange). Note that the range function is zero based.
End of explanation
"""
count = 0
while count < 5:
print(count)
count += 1 # This is the same as count = count + 1
x
## compute the Greatest Common Denominator (GCD)
a = 18802
b = 401
while a!=b:
# put smaller number in a
(a, b) = (a, b) if a<b else (b, a)
b = b - a
print "The GCD is %d"%a
import myfirst
myfirst.gcd(15, 12)
# %load myfirst.py
#!/usr/bin/env python
def gcd(a,b):
while a!=b:
# put smaller number in a
##(a, b) = (a, b) if a<b else (b, a) #(a<b)?(a,b):(b,a)
if b>a:
(a, b) = (b, a)
b = b - a
return a
"""
Explanation: "while" loops
While loops repeat as long as a certain boolean condition is met. For example:
End of explanation
"""
# Prints out 0,1,2,3,4
count = 0
while True:
print(count)
count += 1
if count >= 5:
break
# Prints out only odd numbers - 1,3,5,7,9
for x in range(10):
# Check if x is even
if x % 2 == 0:
continue
print(x)
"""
Explanation: "break" and "continue" statements
break is used to exit a for loop or a while loop, whereas continue is used to skip the current block, and return to the "for" or "while" statement. A few examples:
End of explanation
"""
# Prints out 0,1,2,3,4 and then it prints "count value reached 5"
count=0
while(count<5):
print(count)
count +=1
else:
print("count value reached %d" %(count))
# Prints out 1,2,3,4
for i in range(1, 10):
if(i%5==0):
break
print(i)
else:
print("this is not printed because for loop is terminated because of break but not due to fail in condition")
"""
Explanation: Can we use an "else" clause with loops?
End of explanation
"""
def my_function():
print("Hello From My Function!")
my_function()
"""
Explanation: Functions and methods
Functions are a convenient way to divide your code into useful blocks, allowing us to order our code, make it more readable, reuse it and save some time. Also functions are a key way to define interfaces so programmers can share their code.
As we have seen in previous tutorials, Python makes use of blocks.
A block is an area of code written in the following format:
block_head:
1st block line
2nd block line
Here a block line is more Python code (possibly even another block), and the block head has the following format: block_keyword block_name(argument1, argument2, ...). Block keywords you already know are "if", "for", and "while".
Functions in python are defined using the block keyword "def", followed with the function's name as the block's name. For example:
End of explanation
"""
def my_function_with_args(username, greeting):
print("Hello, "+username+" , From My Function!, I wish you "+greeting)
my_function_with_args("class", "a wonderful day")
"""
Explanation: Functions may also receive arguments (variables passed from the caller to the function). For example:
End of explanation
"""
def sum_two_numbers(a, b):
return a + b
print "I'm done"
sum_two_numbers(5,19)
"""
Explanation: Functions may return a value to the caller, using the keyword- 'return' . For example:
End of explanation
"""
def my_function():
print("Hello From My Function!")
def my_function_with_args(username, greeting):
print("Hello, %s , From My Function!, I wish you %s"%(username, greeting))
def sum_two_numbers(a, b):
return a + b
# print(a simple greeting)
my_function()
#prints - "Hello, John Doe, From My Function!, I wish you a great year!"
my_function_with_args("John Doe", "a great year!")
# after this line x will hold the value 3!
x = sum_two_numbers(1,2)
"""
Explanation: How to call functions
End of explanation
"""
# Modify this function to return a list of strings as defined above
def list_benefits():
pass
# Modify this function to concatenate to each benefit - " is a benefit of functions!"
def build_sentence(benefit):
pass
def name_the_benefits_of_functions():
list_of_benefits = list_benefits()
for benefit in list_of_benefits:
print(build_sentence(benefit))
name_the_benefits_of_functions()
"""
Explanation: In this exercise you'll use an existing function,
and while adding your own to create a fully functional program.
Add a function named list_benefits() that returns the following list of strings: "More organized code", "More readable code", "Easier code reuse", "Allowing programmers to share and connect code together"
Add a function named build_sentence(info) which receives a single argument containing a string and returns a sentence starting with the given string and ending with the string " is a benefit of functions!"
Run and see all the functions work together!
End of explanation
"""
s = "Hello WORLD"
type(s)
s.swapcase()
len(s)
3.75.as_integer_ratio()
"""
Explanation: Methods
Methods are very similar to functions, with the difference that, typically, a method is associated with an object.
End of explanation
"""
# These two lines are critical to using matplotlib within the noteboook
%matplotlib inline
import matplotlib.pyplot as plt
x = range(10)
x
x = [float(i-50)/50.0 for i in range(100)]
x
sin(0.1)  # NameError at this point: sin is not defined until we import it from math below
from math import *
sin(0.1)
y = [ xx**2 for xx in x]
y2 = [xx**3 for xx in x]
plt.plot(x, y, label="x^2")
plt.plot(x, y2, label="x^3")
plt.legend(loc="best")
plt.title("Exponential Functions")
theta = [ pi*0.02*float(t-50) for t in range (100)]
theta[:10]
x = [sin(t) for t in theta]
y = [cos(t) for t in theta]
plt.figure(figsize=(6,6))
plt.plot(x,y)
plt.xlim(-3,3)
plt.ylim(-3,3)
"""
Explanation: Hint: while typing in the notebook or at the ipython prompt use the [TAB]-key after adding a "." (period) behind an object to see available methods:
Type the name of an already defined object: s
Add a period "." and hit the [TAB]-key: s. $\leftarrow$ This should show a list of available methods to a string.
Plotting something
Let's put some of our knowledge about lists and functions to use.
The following examples will create list of values, and then graph them.
We use a module of the Matplotlib library https://matplotlib.org/ The web-site provides detailed documentation and a wealth of examples.
<img src="https://matplotlib.org/_static/logo2.svg" style="top:5px;width:200px;right:5px;position:absolute" />
End of explanation
"""
|
ML4DS/ML4all
|
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering[Conflicto].ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread
"""
Explanation: Lab Session: Clustering algorithms for Image Segmentation
Author: Jesús Cid Sueiro
Jan. 2017
End of explanation
"""
name = "birds.jpg"
name = "Seeds.jpg"
birds = imread("Images/" + name)
birdsG = np.sum(birds, axis=2)
# <SOL>
plt.imshow(birdsG, cmap=plt.get_cmap('gray'))
plt.grid(False)
plt.axis('off')
plt.show()
# </SOL>
"""
Explanation: 1. Introduction
In this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract the binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering on this problem.
1.1. Load Image
Several images are provided with this notebook:
BinarySeeds.png
birds.jpg
blood_frog_1.jpg
cKyDP.jpg
Matricula.jpg
Matricula2.jpg
Seeds.png
Select and visualize image birds.jpg from file and plot it in grayscale
End of explanation
"""
# <SOL>
plt.hist(birdsG.ravel(), bins=256)
plt.show()
# </SOL>
"""
Explanation: 2. Thresholding
Select an intensity threshold by manual inspection of the image histogram
End of explanation
"""
# <SOL>
if name == "birds.jpg":
th = 256
elif name == "Seeds.jpg":
th = 650
birdsBN = birdsG > th
# If there are more white than black pixels, reverse the image
if np.sum(birdsBN) > float(np.prod(birdsBN.shape)/2):
birdsBN = 1-birdsBN
plt.imshow(birdsBN, cmap=plt.get_cmap('gray'))
plt.grid(False)
plt.axis('off')
plt.show()
# </SOL>
"""
Explanation: Plot the binary image after thresholding.
End of explanation
"""
# <SOL>
(h, w) = birdsBN.shape
bW = birdsBN * range(w)
bH = birdsBN * np.array(range(h))[:,np.newaxis]
pSet = [t for t in zip(bW.ravel(), bH.ravel()) if t!=(0,0)]
X = np.array(pSet)
# </SOL>
print X
plt.scatter(X[:, 0], X[:, 1], s=5);
plt.axis('equal')
plt.show()
"""
Explanation: 3. Dataset generation
Extract pixel coordinates dataset from image and plot them in a scatter plot.
End of explanation
"""
from sklearn.cluster import KMeans
# <SOL>
est = KMeans(100) # 100 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=5, cmap='rainbow',
linewidth=0.0)
plt.axis('equal')
plt.show()
# </SOL>
"""
Explanation: 4. k-means clustering algorithm
Use the pixel coordinates as the input data for a k-means algorithm. Plot the result of the clustering by means of a scatter plot, showing each cluster with a different colour.
End of explanation
"""
from sklearn.metrics.pairwise import rbf_kernel
# <SOL>
gamma = 5
sf = 4
Xsub = X[0::sf]
print Xsub.shape
gamma = 0.001  # note: this overrides the gamma = 5 set above
K = rbf_kernel(Xsub, Xsub, gamma=gamma)
# </SOL>
# Visualization
# <SOL>
plt.imshow(K, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
# </SOL>
"""
Explanation: 5. Spectral clustering algorithm
5.1. Affinity matrix
Compute and visualize the affinity matrix for the given dataset, using a rbf kernel with $\gamma=5$.
End of explanation
"""
# <SOL>
from sklearn.cluster import SpectralClustering
spc = SpectralClustering(n_clusters=100, gamma=gamma, affinity='rbf')
y_kmeans = spc.fit_predict(Xsub)
# </SOL>
plt.scatter(Xsub[:,0], Xsub[:,1], c=y_kmeans, s=5, cmap='rainbow', linewidth=0.0)
plt.axis('equal')
plt.show()
"""
Explanation: 5.2. Spectral clusering
Apply the spectral clustering algorithm, and show the clustering results using a scatter plot.
End of explanation
"""
|
ireapps/cfj-2017
|
exercises/06. pandas? pandas! (Part 2)-working.ipynb
|
mit
|
# to avoid errors with the FDA files, we're going to specify the encoding
# as latin_1, which is common with gov't data
# so it's a decent educated guess to start with
# main dataframe
# country code lookup dataframe
# refusal code lookup dataframe
# specify that the 'ASC_ID' column comes in as a string
# because later we're going to join on it
# run `.head()` to check the output
"""
Explanation: More fun with pandas
Let's use pandas to dive into some more complicated data.
The data
We're going to be working with FDA import refusal data from 2014 to September 2017. From the source:
The Food, Drug, and Cosmetic Act (the Act) authorizes FDA to detain a regulated product that appears to be out of compliance with the Act. The FDA district office will then issue a "Notice of FDA Action" specifying the nature of the violation to the owner or consignee. The owner or consignee is entitled to an informal hearing in order to provide testimony regarding the admissibility of the product. If the owner fails to submit evidence that the product is in compliance or fails to submit a plan to bring the product into compliance, FDA will issue another "Notice of FDA Action" refusing admission to the product. The product then has to be exported or destroyed within 90 days.
Here's the layout for the main file:
Column | Description
------ | -----------
MFG_FIRM_FEI_NUM |
LGL_NAME | Name of the Declared Manufacturer
LINE1_ADRS | Manufacturer Address
LINE2_ADRS | Manufacturer Address
CITY_NAME | Manufacturer City
PROVINCE_STATE | Manufacturer Province or State
ISO_CNTRY_CODE | 2 Letter ISO country code
PRODUCT_CODE | 5-7 Character product code
REFUSAL_DATE |
DISTRICT | FDA District where entry was made
ENTRY_NUM | CBP Entry Number
RFRNC_DOC_ID | CBP Line Number
LINE_NUM | FDA Line number
LINE_SFX_ID | FDA Line Suffix
FDA_SAMPLE_ANALYSIS | Y if there are FDA Analytical Results
PRIVATE_LAB_ANALYSIS | Y if there was a Private Lab package
REFUSAL_CHARGES | asc_id’s (1 to many) of the charges for which product was refused. If there are multiple they will be separated by a comma e.g. 320, 328, 321, 482, 218, 3320
PROD_CODE_DESC_TEXT | FDA's or Corrected Description
Come up with a list of questions to ask
As with any tool, your analysis is only as good as your questions. We'll start with a couple easy ones:
In this time period, which country had the most imports refused? (ISO_CNTRY_CODE)
Which company had the most? (MFG_FIRM_FEI_NUM)
What was the most common reason for refusing a product? (REFUSAL_CHARGES)
Let's get started!
Import pandas
Load the data into data frames
We'll use the read_csv() method to read in the data files:
data/import-refusal.csv => the main data file
data/import-refusal-charge-codes.csv => refusal code lookup file
data/country-codes.csv => country code lookup file (via)
End of explanation
"""
# convert the date strings to datetime
# make sure the conversion actually happened
# run `.head()` to check the country code output
# run `.head()` to check the output
"""
Explanation: Convert the date field to native datetime
We'll use the to_datetime() method to convert the REFUSAL_DATE column from string to datetime. (Via this S/O answer)
Why? Later on we might want to do some time-based analysis.
End of explanation
"""
|
albahnsen/PracticalMachineLearningClass
|
notebooks/11-Ensembles_Bagging.ipynb
|
mit
|
import numpy as np
# set a seed for reproducibility
np.random.seed(1234)
# generate 1000 random numbers (between 0 and 1) for each model, representing 1000 observations
mod1 = np.random.rand(1000)
mod2 = np.random.rand(1000)
mod3 = np.random.rand(1000)
mod4 = np.random.rand(1000)
mod5 = np.random.rand(1000)
# each model independently predicts 1 (the "correct response") if random number was at least 0.3
preds1 = np.where(mod1 > 0.3, 1, 0)
preds2 = np.where(mod2 > 0.3, 1, 0)
preds3 = np.where(mod3 > 0.3, 1, 0)
preds4 = np.where(mod4 > 0.3, 1, 0)
preds5 = np.where(mod5 > 0.3, 1, 0)
# print the first 20 predictions from each model
print(preds1[:20])
print(preds2[:20])
print(preds3[:20])
print(preds4[:20])
print(preds5[:20])
# average the predictions and then round to 0 or 1
ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5)/5.0).astype(int)
# print the ensemble's first 20 predictions
print(ensemble_preds[:20])
# how accurate was each individual model?
print(preds1.mean())
print(preds2.mean())
print(preds3.mean())
print(preds4.mean())
print(preds5.mean())
# how accurate was the ensemble?
print(ensemble_preds.mean())
"""
Explanation: 11 - Ensemble Methods - Bagging
by Alejandro Correa Bahnsen and Jesus Solano
version 1.5, February 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
Why are we learning about ensembling?
Very popular method for improving the predictive performance of machine learning models
Provides a foundation for understanding more sophisticated models
Lesson objectives
Students will be able to:
Define ensembling and its requirements
Identify the two basic methods of ensembling
Decide whether manual ensembling is a useful approach for a given problem
Explain bagging and how it can be applied to decision trees
Explain how out-of-bag error and feature importances are calculated from bagged trees
Explain the difference between bagged trees and Random Forests
Build and tune a Random Forest model in scikit-learn
Decide whether a decision tree or a Random Forest is a better model for a given problem
Part 1: Introduction
Ensemble learning is a widely studied topic in the machine learning community. The main idea behind
the ensemble methodology is to combine several individual base classifiers in order to have a
classifier that outperforms each of them.
Nowadays, ensemble methods are one
of the most popular and well studied machine learning techniques, and it can be
noted that since 2009 all the first-place and second-place winners of the KDD-Cup https://www.sigkdd.org/kddcup/ used ensemble methods. The core
principle in ensemble learning, is to induce random perturbations into the learning procedure in
order to produce several different base classifiers from a single training set, then combining the
base classifiers in order to make the final prediction. In order to induce the random permutations
and therefore create the different base classifiers, several methods have been proposed, in
particular:
* bagging
* pasting
* random forests
* random patches
Finally, after the base classifiers
are trained, they are typically combined using either:
* majority voting
* weighted voting
* stacking
There are three main reasons regarding why ensemble
methods perform better than single models: statistical, computational and representational. First, from a statistical point of view, when the learning set is too
small, an algorithm can find several good models within the search space that achieve the same
performance on the training set $\mathcal{S}$. Nevertheless, without a validation set, there is
a risk of choosing the wrong model. The second reason is computational; in general, algorithms
rely on some local search optimization and may get stuck in a local optimum. An ensemble may
alleviate this by letting the different base classifiers explore different regions of the search space. The last
reason is representational. In most cases, for a learning set of finite size, the true function
$f$ cannot be represented by any of the candidate models. By combining several models in an
ensemble, it may be possible to obtain a model with a larger coverage across the space of
representable functions.
Example
Let's pretend that instead of building a single model to solve a binary classification problem, you created five independent models, and each model was correct about 70% of the time. If you combined these models into an "ensemble" and used their majority vote as a prediction, how often would the ensemble be correct?
End of explanation
"""
# read in and prepare the vehicle training data
import pandas as pd
url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/vehicles_train.csv'
train = pd.read_csv(url)
train['vtype'] = train.vtype.map({'car':0, 'truck':1})
# read in and prepare the vehicle testing data
url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/vehicles_test.csv'
test = pd.read_csv(url)
test['vtype'] = test.vtype.map({'car':0, 'truck':1})
train.head()
"""
Explanation: Note: As you add more models to the voting process, the probability of error decreases, which is known as Condorcet's Jury Theorem.
What is ensembling?
Ensemble learning (or "ensembling") is the process of combining several predictive models in order to produce a combined model that is more accurate than any individual model.
Regression: take the average of the predictions
Classification: take a vote and use the most common prediction, or take the average of the predicted probabilities
For ensembling to work well, the models must have the following characteristics:
Accurate: they outperform the null model
Independent: their predictions are generated using different processes
The big idea: If you have a collection of individually imperfect (and independent) models, the "one-off" mistakes made by each model are probably not going to be made by the rest of the models, and thus the mistakes will be discarded when averaging the models.
There are two basic methods for ensembling:
Manually ensemble your individual models
Use a model that ensembles for you
Theoretical performance of an ensemble
If we assume that each one of the $T$ base classifiers has a probability $\rho$ of
being correct, the probability of an ensemble making the correct decision, assuming independence,
denoted by $P_c$, can be calculated using the binomial distribution
$$P_c = \sum_{j>T/2}^{T} {{T}\choose{j}} \rho^j(1-\rho)^{T-j}.$$
Furthermore, as shown, if $T\ge3$ then:
$$
\lim_{T \to \infty} P_c = \begin{cases}
1 & \mbox{if } \rho > 0.5 \\
0 & \mbox{if } \rho < 0.5 \\
0.5 & \mbox{if } \rho = 0.5,
\end{cases}
$$
leading to the conclusion that
$$
\rho \ge 0.5 \quad \text{and} \quad T\ge3 \quad \Rightarrow \quad P_c\ge \rho.
$$
Part 2: Manual ensembling
What makes a good manual ensemble?
Different types of models
Different combinations of features
Different tuning parameters
Machine learning flowchart created by the winner of Kaggle's CrowdFlower competition
End of explanation
"""
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor
models = {'lr': LinearRegression(),
'dt': DecisionTreeRegressor(),
'nb': GaussianNB(),
'kn': KNeighborsRegressor()}
# Train all the models
X_train = train.iloc[:, 1:]
X_test = test.iloc[:, 1:]
y_train = train.price
y_test = test.price
for model in models.keys():
models[model].fit(X_train, y_train)
# predict test for each model
y_pred = pd.DataFrame(index=test.index, columns=models.keys())
for model in models.keys():
y_pred[model] = models[model].predict(X_test)
# Evaluate each model
from sklearn.metrics import mean_squared_error
for model in models.keys():
print(model,np.sqrt(mean_squared_error(y_pred[model], y_test)))
"""
Explanation: Train different models
End of explanation
"""
np.sqrt(mean_squared_error(y_pred.mean(axis=1), y_test))
"""
Explanation: Evaluate the error of the mean of the predictions
End of explanation
"""
# set a seed for reproducibility
np.random.seed(1)
# create an array of 1 through 20
nums = np.arange(1, 21)
print(nums)
# sample that array 20 times with replacement
print(np.random.choice(a=nums, size=20, replace=True))
"""
Explanation: Comparing manual ensembling with a single model approach
Advantages of manual ensembling:
Increases predictive accuracy
Easy to get started
Disadvantages of manual ensembling:
Decreases interpretability
Takes longer to train
Takes longer to predict
More complex to automate and maintain
Small gains in accuracy may not be worth the added complexity
Part 3: Bagging
The primary weakness of decision trees is that they don't tend to have the best predictive accuracy. This is partially due to high variance, meaning that different splits in the training data can lead to very different trees.
Bagging is a general purpose procedure for reducing the variance of a machine learning method, but is particularly useful for decision trees. Bagging is short for bootstrap aggregation, meaning the aggregation of bootstrap samples.
What is a bootstrap sample? A random sample with replacement:
End of explanation
"""
# set a seed for reproducibility
np.random.seed(123)
n_samples = train.shape[0]
n_B = 10
# create ten bootstrap samples (will be used to select rows from the DataFrame)
samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(1, n_B +1 )]
samples
# show the rows for the first decision tree
train.iloc[samples[0], :]
"""
Explanation: How does bagging work (for decision trees)?
Grow B trees using B bootstrap samples from the training data.
Train each tree on its bootstrap sample and make predictions.
Combine the predictions:
Average the predictions for regression trees
Take a vote for classification trees
Notes:
Each bootstrap sample should be the same size as the original training set.
B should be a large enough value that the error seems to have "stabilized".
The trees are grown deep so that they have low bias/high variance.
Bagging increases predictive accuracy by reducing the variance, similar to how cross-validation reduces the variance associated with train/test split (for estimating out-of-sample error) by splitting many times and averaging the results.
End of explanation
"""
from sklearn.tree import DecisionTreeRegressor
# grow each tree deep
treereg = DecisionTreeRegressor(max_depth=None, random_state=123)
# DataFrame for storing predicted price from each tree
y_pred = pd.DataFrame(index=test.index, columns=[list(range(n_B))])
# grow one tree for each bootstrap sample and make predictions on testing data
for i, sample in enumerate(samples):
X_train = train.iloc[sample, 1:]
y_train = train.iloc[sample, 0]
treereg.fit(X_train, y_train)
y_pred[i] = treereg.predict(X_test)
y_pred
"""
Explanation: Build one tree for each sample
End of explanation
"""
for i in range(n_B):
print(i, np.sqrt(mean_squared_error(y_pred[i], y_test)))
"""
Explanation: Results of each tree
End of explanation
"""
y_pred.mean(axis=1)
np.sqrt(mean_squared_error(y_test, y_pred.mean(axis=1)))
"""
Explanation: Results of the ensemble
End of explanation
"""
# define the training and testing sets
X_train = train.iloc[:, 1:]
y_train = train.iloc[:, 0]
X_test = test.iloc[:, 1:]
y_test = test.iloc[:, 0]
# instruct BaggingRegressor to use DecisionTreeRegressor as the "base estimator"
from sklearn.ensemble import BaggingRegressor
bagreg = BaggingRegressor(DecisionTreeRegressor(), n_estimators=500,
bootstrap=True, oob_score=True, random_state=1)
# fit and predict
bagreg.fit(X_train, y_train)
y_pred = bagreg.predict(X_test)
y_pred
# calculate RMSE
np.sqrt(mean_squared_error(y_test, y_pred))
"""
Explanation: Bagged decision trees in scikit-learn (with B=500)
End of explanation
"""
# show the first bootstrap sample
samples[0]
# show the "in-bag" observations for each sample
for sample in samples:
print(set(sample))
# show the "out-of-bag" observations for each sample
for sample in samples:
print(sorted(set(range(n_samples)) - set(sample)))
"""
Explanation: Estimating out-of-sample error
For bagged models, out-of-sample error can be estimated without using train/test split or cross-validation!
On average, each bagged tree uses about two-thirds of the observations. For each tree, the remaining observations are called "out-of-bag" observations.
End of explanation
"""
# compute the out-of-bag R-squared score (not MSE, unfortunately!) for B=500
bagreg.oob_score_
"""
Explanation: How to calculate "out-of-bag error":
For every observation in the training data, predict its response value using only the trees in which that observation was out-of-bag. Average those predictions (for regression) or take a vote (for classification).
Compare all predictions to the actual response values in order to compute the out-of-bag error.
When B is sufficiently large, the out-of-bag error is an accurate estimate of out-of-sample error.
End of explanation
"""
# read in and prepare the churn data
# Download the dataset
import pandas as pd
import numpy as np
url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/churn.csv'
data = pd.read_csv(url)
# Create X and y
# Select only the numeric features
X = data.iloc[:, [1,2,6,7,8,9,10]].astype(np.float)
# Convert bools to floats
X = X.join((data.iloc[:, [4,5]] == 'no').astype(np.float))
y = (data.iloc[:, -1] == 'True.').astype(np.int)
X.head()
y.value_counts().to_frame('count').assign(percentage = lambda x: x/x.sum())
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
"""
Explanation: Estimating feature importance
Bagging increases predictive accuracy, but decreases model interpretability because it's no longer possible to visualize the tree to understand the importance of each feature.
However, we can still obtain an overall summary of feature importance from bagged models (a short illustration follows below):
Bagged regression trees: calculate the total amount that MSE is decreased due to splits over a given feature, averaged over all trees
Bagged classification trees: calculate the total amount that Gini index is decreased due to splits over a given feature, averaged over all trees
Part 4: Combination of classifiers - Majority Voting
The most typical form of an ensemble is made by combining $T$ different base classifiers.
Each base classifier $M(\mathcal{S}_j)$ is trained by applying algorithm $M$ to a random subset
$\mathcal{S}_j$ of the training set $\mathcal{S}$.
For simplicity we define $M_j \equiv M(\mathcal{S}_j)$ for $j=1,\dots,T$, and
$\mathcal{M}=\{M_j\}_{j=1}^{T}$ a set of base classifiers.
Then, these models are combined using majority voting to create the ensemble $H$ as follows
$$
f_{mv}(\mathcal{S},\mathcal{M}) = \max_{c \in \{0,1\}} \sum_{j=1}^T
\mathbf{1}_c(M_j(\mathcal{S})).
$$
End of explanation
"""
n_estimators = 100
# set a seed for reproducibility
np.random.seed(123)
n_samples = X_train.shape[0]
# create bootstrap samples (will be used to select rows from the DataFrame)
samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(n_estimators)]
from sklearn.tree import DecisionTreeClassifier
np.random.seed(123)
seeds = np.random.randint(1, 10000, size=n_estimators)
trees = {}
for i in range(n_estimators):
trees[i] = DecisionTreeClassifier(max_features="sqrt", max_depth=None, random_state=seeds[i])
trees[i].fit(X_train.iloc[samples[i]], y_train.iloc[samples[i]])
# Predict
y_pred_df = pd.DataFrame(index=X_test.index, columns=list(range(n_estimators)))
for i in range(n_estimators):
y_pred_df.iloc[:, i] = trees[i].predict(X_test)
y_pred_df.head()
"""
Explanation: Create 100 decision trees
End of explanation
"""
y_pred_df.sum(axis=1)[:10]
y_pred = (y_pred_df.sum(axis=1) >= (n_estimators / 2)).astype(np.int)
from sklearn import metrics
metrics.f1_score(y_pred, y_test)
metrics.accuracy_score(y_pred, y_test)
"""
Explanation: Predict using majority voting
End of explanation
"""
from sklearn.ensemble import BaggingClassifier
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,
random_state=42, n_jobs=-1, oob_score=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
"""
Explanation: Using majority voting with sklearn
End of explanation
"""
samples_oob = []
# show the "out-of-bag" observations for each sample
for sample in samples:
samples_oob.append(sorted(set(range(n_samples)) - set(sample)))
"""
Explanation: Part 5: Combination of classifiers - Weighted Voting
The majority voting approach gives the same weight to each classifier regardless of its individual performance. Why not take into account the out-of-bag (oob) performance of each classifier?
First, in the traditional approach, a
similar comparison of the votes of the base classifiers is made, but giving a weight $\alpha_j$
to each classifier $M_j$ during the voting phase
$$
f_{wv}(\mathcal{S},\mathcal{M}, \alpha)
=\max_{c \in \{0,1\}} \sum_{j=1}^T \alpha_j \mathbf{1}_c(M_j(\mathcal{S})),
$$
where $\alpha=\{\alpha_j\}_{j=1}^T$.
The calculation of $\alpha_j$ is related to the performance of each classifier $M_j$.
It is usually defined in terms of the normalized misclassification error $\epsilon$ of the base
classifier $M_j$ on the out-of-bag set $\mathcal{S}_j^{oob}=\mathcal{S}-\mathcal{S}_j$:
\begin{equation}
\alpha_j=\frac{1-\epsilon(M_j(\mathcal{S}_j^{oob}))}{\sum_{j_1=1}^T
\left(1-\epsilon(M_{j_1}(\mathcal{S}_{j_1}^{oob}))\right)}.
\end{equation}
Select each oob sample
End of explanation
"""
errors = np.zeros(n_estimators)
for i in range(n_estimators):
y_pred_ = trees[i].predict(X_train.iloc[samples_oob[i]])
errors[i] = 1 - metrics.accuracy_score(y_train.iloc[samples_oob[i]], y_pred_)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
plt.scatter(range(n_estimators), errors)
plt.xlim([0, n_estimators])
plt.title('OOB error of each tree')
"""
Explanation: Estimate the oob error of each classifier
End of explanation
"""
alpha = (1 - errors) / (1 - errors).sum()
weighted_sum_1 = ((y_pred_df) * alpha).sum(axis=1)
weighted_sum_1.head(20)
y_pred = (weighted_sum_1 >= 0.5).astype(np.int)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
"""
Explanation: Estimate $\alpha$
End of explanation
"""
clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=100, bootstrap=True,
random_state=42, n_jobs=-1, oob_score=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
errors = np.zeros(clf.n_estimators)
y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))
for i in range(clf.n_estimators):
oob_sample = ~clf.estimators_samples_[i]
y_pred_ = clf.estimators_[i].predict(X_train.values[oob_sample])
    errors[i] = 1 - metrics.accuracy_score(y_pred_, y_train.values[oob_sample])  # oob error (1 - accuracy), as in the manual version above
y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)
alpha = (1 - errors) / (1 - errors).sum()
y_pred = (np.sum(y_pred_all_ * alpha, axis=1) >= 0.5).astype(np.int)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
"""
Explanation: Using Weighted voting with sklearn
End of explanation
"""
X_train_2 = pd.DataFrame(index=X_train.index, columns=list(range(n_estimators)))
for i in range(n_estimators):
X_train_2[i] = trees[i].predict(X_train)
X_train_2.head()
from sklearn.linear_model import LogisticRegressionCV
lr = LogisticRegressionCV(cv = 5 )
lr.fit(X_train_2, y_train)
lr.coef_
y_pred = lr.predict(y_pred_df)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
"""
Explanation: Part 6: Combination of classifiers - Stacking
The stacking method consists of combining the different base classifiers by learning a
second level algorithm on top of them. In this framework, once the base
classifiers are constructed using the training set $\mathcal{S}$, a new set is constructed
where the output of the base classifiers are now considered as the features while keeping the
class labels.
Even though there is no restriction on which algorithm can be used as a second level learner,
it is common to use a linear model, such as
$$
f_s(\mathcal{S},\mathcal{M},\beta) =
g \left( \sum_{j=1}^T \beta_j M_j(\mathcal{S}) \right),
$$
where $\beta=\{\beta_j\}_{j=1}^T$, and $g(\cdot)$ is the sign function
$g(z)=\text{sign}(z)$ in the case of a linear regression, or the sigmoid function, defined
as $g(z)=1/(1+e^{-z})$, in the case of a logistic regression.
Let's first get a new training set consisting of the output of every classifier
End of explanation
"""
y_pred_all_ = np.zeros((X_test.shape[0], clf.n_estimators))
X_train_3 = np.zeros((X_train.shape[0], clf.n_estimators))
for i in range(clf.n_estimators):
X_train_3[:, i] = clf.estimators_[i].predict(X_train)
y_pred_all_[:, i] = clf.estimators_[i].predict(X_test)
lr = LogisticRegressionCV(cv=5)
lr.fit(X_train_3, y_train)
y_pred = lr.predict(y_pred_all_)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
"""
Explanation: Using sklearn
End of explanation
"""
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
y_pred = dt.predict(X_test)
metrics.f1_score(y_pred, y_test), metrics.accuracy_score(y_pred, y_test)
"""
Explanation: vs using only one dt
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/pandas_list_unique_values_in_column.ipynb
|
mit
|
# Import modules
import pandas as pd
# Set ipython's max row display
pd.set_option('display.max_row', 1000)
# Set iPython's max column width to 50
pd.set_option('display.max_columns', 50)
"""
Explanation: Title: List Unique Values In A Pandas Column
Slug: pandas_list_unique_values_in_column
Summary: List Unique Values In A Pandas Column
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Special thanks to Bob Haffner for pointing out a better way of doing it.
Preliminaries
End of explanation
"""
# Create an example dataframe
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])
df
"""
Explanation: Create an example dataframe
End of explanation
"""
#List unique values in the df['name'] column
df.name.unique()
"""
Explanation: List unique values
End of explanation
"""
|
wuafeing/Python3-Tutorial
|
02 strings and text/02.03 match strings with shell wildcard.ipynb
|
gpl-3.0
|
from fnmatch import fnmatch, fnmatchcase
fnmatch("foo.txt", "*.txt")
fnmatch("foo.txt", "?oo.txt")
fnmatch("Dat45.csv", "Dat[0-9]*")
names = ["Dat1.csv", "Dat2.csv", "config.ini", "foo.py"]
[name for name in names if fnmatch(name, "Dat*.csv")]
"""
Explanation: 2.3 Matching Strings with Shell Wildcards
Problem
You want to match text strings against the wildcard patterns commonly used in Unix shells (e.g. *.py, Dat[0-9]*.csv, etc.).
Solution
The fnmatch module provides two functions, fnmatch() and fnmatchcase(), that can be used to perform such matching. Usage is as follows:
End of explanation
"""
# False On OS X (Mac)
# True On Windows
fnmatch("foo.txt", "*.TXT")
"""
Explanation: The fnmatch() function matches patterns using the same case-sensitivity rules as the underlying operating system (which differ from system to system). For example:
End of explanation
"""
fnmatchcase("foo.txt", "*.TXT")
"""
Explanation: If this distinction matters to you, use fnmatchcase() instead. It matches exactly according to the upper and lower case of the pattern you supply. For example:
End of explanation
"""
addresses = [
'5412 N CLARK ST',
'1060 W ADDISON ST',
'1039 W GRANVILLE AVE',
'2122 N CLARK ST',
'4802 N BROADWAY',
]
"""
Explanation: An often overlooked feature of these two functions is that they are also useful for processing strings that are not filenames. For example, suppose you have a list of street addresses:
End of explanation
"""
from fnmatch import fnmatchcase
[addr for addr in addresses if fnmatchcase(addr, "* ST")]
[addr for addr in addresses if fnmatchcase(addr, "54[0-9][0-9] *CLARK*")]
"""
Explanation: You could write list comprehensions like these:
End of explanation
"""
|
cliburn/sta-663-2017
|
notebook/18E_Spark_SQL.ipynb
|
mit
|
from pyspark import SparkContext, SparkConf
conf = (SparkConf()
.setAppName('SparkSQL')
.setMaster('local[*]'))
sc = SparkContext(conf=conf)
from pyspark.sql import SQLContext
sqlc = SQLContext(sc)
"""
Explanation: Spark SQL
Official Documentation
A tour of the Spark SQL library, the spark-csv package and Spark DataFrames.
Resources
Spark tutorials: A growing bunch of accessible tutorials on Spark, mostly in Scala but a few in Python.
End of explanation
"""
pandas_df = sns.load_dataset('iris')
spark_df = sqlc.createDataFrame(pandas_df)
spark_df.show(n=3)
"""
Explanation: DataFrame from pandas
End of explanation
"""
%%bash
cat data/cars.csv
from pyspark.sql.types import *
def pad(alist):
tmp = alist[:]
n = 5 - len(alist)
for i in range(n):
tmp.append('')
return tmp
# Load a text file and convert each line to a tuple.
lines = sc.textFile('data/cars.csv')
header = lines.first() #extract header
lines = lines.filter(lambda line: line != header)
lines = lines.filter(lambda line: line)
parts = lines.map(lambda l: l.split(','))
parts = parts.map(lambda part: pad(part))
fields = [
StructField('year', IntegerType(), True),
StructField('make', StringType(), True),
StructField('model', StringType(), True),
StructField('comment', StringType(), True),
StructField('blank', StringType(), True),
]
schema = StructType(fields)
# Apply the schema to the RDD.
df0 = sqlc.createDataFrame(parts, schema)
df0.show(n=3)
"""
Explanation: DataFrame from CSV files
Using manual parsing and a schema
End of explanation
"""
df = (sqlc.read.format('com.databricks.spark.csv')
.options(header='true', inferschema='true')
.load('data/cars.csv'))
"""
Explanation: Using the spark-csv package
End of explanation
"""
df.printSchema()
df.show()
df.select(['year', 'make']).show()
"""
Explanation: Using the dataframe
End of explanation
"""
df.registerTempTable('cars')
q = sqlc.sql('select year, make from cars where year > 2000')
q.show()
"""
Explanation: To run SQL queries, we need to register the dataframe as a table
End of explanation
"""
q_df = q.toPandas()
q_df
"""
Explanation: Spark dataframes can be converted to Pandas ones
Typically, we would only convert small dataframes such as the results of SQL queries. If we could load the original dataset in memory as a pandas dataframe, why would we be using Spark?
End of explanation
"""
df = sqlc.read.json('data/durham-police-crime-reports.json')
"""
Explanation: DataFrame from JSON files
It is easier to read in JSON than CSV files because JSON is self-describing, allowing Spark SQL to infer the appropriate schema without additional hints.
As an example, we will look at Durham police crime reports from the Durham Open Data website.
End of explanation
"""
df.count()
"""
Explanation: How many records are there?
End of explanation
"""
df.printSchema()
"""
Explanation: Since this is JSON, it is possible to have a nested schema.
End of explanation
"""
df.show(n=5)
"""
Explanation: Show the top few rows.
End of explanation
"""
df.select(['fields.strdate', 'fields.chrgdesc']).show(n=5)
"""
Explanation: Make a dataframe only containing date and charges.
End of explanation
"""
df.select('fields.chrgdesc').distinct().show()
"""
Explanation: Show distinct charges - note that for an actual analysis, you would probably want to consolidate these into a smaller number of groups to account for typos, etc.
End of explanation
"""
df.groupby('fields.chrgdesc').count().sort('count', ascending=False).show()
"""
Explanation: What charges are the most common?
End of explanation
"""
df.registerTempTable('crimes')
q = sqlc.sql('''
select fields.chrgdesc, count(fields.chrgdesc) as count
from crimes
where fields.monthstamp=3
group by fields.chrgdesc
''')
q.show()
"""
Explanation: Register as table to run full SQL queries
End of explanation
"""
crimes_df = q.toPandas()
crimes_df.head()
"""
Explanation: Convert to pandas
End of explanation
"""
from odo import odo
odo('sqlite:///../data/Chinook_Sqlite.sqlite::Album', 'Album.json')
df = sqlc.read.json('Album.json')
df.show(n=3)
"""
Explanation: DataFrame from SQLite3
The official docs suggest that this can be done directly via JDBC but I cannot get it to work. As a workaround, you can convert to JSON before importing as a dataframe. If anyone finds out how to load an SQLite3 database table directly into a Spark dataframe, please let me know.
End of explanation
"""
ds = sqlc.read.text('../data/Ulysses.txt')
ds
ds.show(n=3)
def remove_punctuation(s):
import string
return s.translate(dict.fromkeys(ord(c) for c in string.punctuation))
counts = (ds.map(lambda x: remove_punctuation(x[0]))
.flatMap(lambda x: x.lower().strip().split())
.filter(lambda x: x!= '')
.map(lambda x: (x, 1))
.countByKey())
sorted(counts.items(), key=lambda x: x[1], reverse=True)[:10]
"""
Explanation: DataSets
In Scala and Java, Spark 1.6 introduced a new type called DataSet that combines the relational properties of a DataFrame with the functional methods of an RDD. This will be available in Python in a later version. However, because of the dynamic nature of Python, you can already call functional methods on a Spark Dataframe, giving most of the ease of use of the DataSet type.
End of explanation
"""
%load_ext version_information
%version_information pyspark
"""
Explanation: Optional Exercise
The crime data set includes both date and geospatial information. Consider creating an interactive map visualization of crimes in Durham by date using the bokeh package. See this example to get started. GeoJSON version of the Durham Police Crime Reports can be downloaded.
Version information
End of explanation
"""
|
Leguark/GeMpy
|
Prototype Notebook/.ipynb_checkpoints/Sandstone Project_legacy-checkpoint.ipynb
|
mit
|
T.jacobian?
# Setting extend, grid and compile
# Setting the extent
sandstone = GeoMig.Interpolator(696000,747000,6863000,6950000,-20000, 2000,
range_var = np.float32(110000),
u_grade = 9) # Range used in geomodeller
# Setting resolution of the grid
sandstone.set_resolutions(40,40,150)
sandstone.create_regular_grid_3D()
# Compiling
sandstone.theano_compilation_3D()
"""
Explanation: Sandstone Model
First we make a GeMpy instance with most of the parameters at their defaults (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the theano developers are working on allowing compiled files to be loaded, though in our case it is not a big deal).
General note. So far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I have to look more closely at what this parameter exactly means
End of explanation
"""
sandstone.load_data_csv("foliations", os.pardir+"/input_data/a_Foliations.csv")
sandstone.load_data_csv("interfaces", os.pardir+"/input_data/a_Points.csv")
pn.set_option('display.max_rows', 25)
sandstone.Foliations
"""
Explanation: Loading data from geomodeller
So there are 3 series, 2 with a single layer each and 1 with 2 layers. Therefore we need 3 potential fields, so let's begin.
End of explanation
"""
sandstone.set_series({"EarlyGranite_Series":sandstone.formations[-1],
"BIF_Series":(sandstone.formations[0], sandstone.formations[1]),
"SimpleMafic_Series":sandstone.formations[2]})
sandstone.series
sys.version_info
"""
Explanation: Defining Series
End of explanation
"""
sandstone.compute_potential_field("EarlyGranite_Series", verbose = 1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 13, figsize=(7,6), contour_lines = 20)
sandstone.potential_interfaces;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1
block = block.reshape(40,40,150)
#block = np.swapaxes(block, 0, 1)
plt.imshow(block[:,8,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax),
interpolation = "none")
"""
Explanation: Early granite
End of explanation
"""
sandstone.compute_potential_field("BIF_Series", verbose=1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 12, figsize=(7,6), contour_lines = 100)
sandstone.potential_interfaces, sandstone.layers[0].shape;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[(sandstone.Z_x<sandstone.potential_interfaces[0]) * (sandstone.Z_x>sandstone.potential_interfaces[-1])] = 1
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 2
block = block.reshape(40,40,150)
plt.imshow(block[:,13,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax),
interpolation = "none")
"""
Explanation: BIF Series
End of explanation
"""
sandstone.compute_potential_field("SimpleMafic_Series", verbose = 1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 15, figsize=(7,6), contour_lines = 20)
sandstone.potential_interfaces, sandstone.layers[0].shape;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1
block = block.reshape(40,40,150)
#block = np.swapaxes(block, 0, 1)
plt.imshow(block[:,13,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax))
"""
Explanation: Simple mafic
End of explanation
"""
# Reset the block
sandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))
# Compute the block
sandstone.compute_block_model([0,1,2], verbose = 1)
%matplotlib qt4
plot_block = sandstone.block.get_value().reshape(40,40,150)
plt.imshow(plot_block[:,13,:].T, origin = "bottom", aspect = "equal",
extent = (sandstone.xmin, sandstone.xmax, sandstone.zmin, sandstone.zmax), interpolation = "none")
"""
Explanation: Optimizing the export of lithologies
Here I am going to try to return, from the theano interpolation function, the internal type of the result (in this case DK, I guess), so that I can write another function in Python to decide which potential field I calculate at every grid_pos
End of explanation
"""
"""Export model to VTK
Export the geology blocks to VTK for visualisation of the entire 3-D model in an
external VTK viewer, e.g. Paraview.
..Note:: Requires pyevtk, available for free on: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk
**Optional keywords**:
- *vtk_filename* = string : filename of VTK file (default: output_name)
- *data* = np.array : data array to export to VKT (default: entire block model)
"""
vtk_filename = "noddyFunct2"
extent_x = 10
extent_y = 10
extent_z = 10
delx = 0.2
dely = 0.2
delz = 0.2
from pyevtk.hl import gridToVTK
# Coordinates
x = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64')
y = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64')
z = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64')
# self.block = np.swapaxes(self.block, 0, 2)
gridToVTK(vtk_filename, x, y, z, cellData = {"geology" : sol})
"""
Explanation: Export vtk
End of explanation
"""
%%timeit
sol = interpolator.geoMigueller(dips,dips_angles,azimuths,polarity, rest, ref)[0]
sandstone.block_export.profile.summary()
"""
Explanation: Performance Analysis
CPU
End of explanation
"""
interpolator.theano_set_3D()
%%timeit
sol = interpolator.geoMigueller(dips,dips_angles,azimuths,polarity, rest, ref)[0].reshape(20,20,20, order = "C")
interpolator.geoMigueller.profile.summary()
"""
Explanation: GPU
End of explanation
"""
|
Cyb3rWard0g/ThreatHunter-Playbook
|
docs/notebooks/windows/08_lateral_movement/WIN-200902020333.ipynb
|
gpl-3.0
|
from openhunt.mordorutils import *
spark = get_spark()
"""
Explanation: Remote WMI ActiveScriptEventConsumers
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2020/09/02 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging WMI ActiveScriptEventConsumers remotely to move laterally in my network.
Technical Context
One of the components of an event subscription is the event consumer. It is basically the main action that gets executed when a filter triggers (i.e., monitor for authentication events; if one occurs, trigger the consumer).
According to MS Documentation, there are several WMI consumer classes available
ActiveScriptEventConsumer -> Executes a predefined script in an arbitrary scripting language when an event is delivered to it. Example -> Running a Script Based on an Event
CommandLineEventConsumer -> Launches an arbitrary process in the local system context when an event is delivered to it. Example -> Running a Program from the Command Line Based on an Event
LogFileEventConsumer -> Writes customized strings to a text log file when events are delivered to it. Example -> Writing to a Log File Based on an Event
NTEventLogEventConsumer -> Logs a specific Message to the Windows event log when an event is delivered to it. Example -> Logging to NT Event Log Based on an Event
ScriptingStandardConsumerSetting -> Provides registration data common to all instances of the ActiveScriptEventConsumer class.
SMTPEventConsumer -> Sends an email Message using SMTP each time an event is delivered to it. Example -> Sending Email Based on an Event
The ActiveScriptEventConsumer class allows for the execution of scripting code from either JScript or VBScript engines. Finally, the WMI script host process is %SystemRoot%\system32\wbem\scrcons.exe.
Offensive Tradecraft
Threat actors can achieve remote code execution by using WMI event subscriptions. Normally, a permanent WMI event subscription is designed to persist and respond to certain events.
According to Matt Graeber, if an attacker wanted to execute a single payload however, the respective event consumer would just need to delete its corresponding event filter, consumer, and filter to consumer binding.
The advantage of this technique is that the payload runs as SYSTEM, and it avoids having a payload be displayed in plaintext in the presence of command line auditing.
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/08_lateral_movement/SDWIN-200724174200.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/covenant_wmi_remote_event_subscription_ActiveScriptEventConsumers.zip |
Analytics
Initialize Analytics Engine
End of explanation
"""
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/covenant_wmi_remote_event_subscription_ActiveScriptEventConsumers.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
"""
Explanation: Download & Process Mordor Dataset
End of explanation
"""
df = spark.sql(
'''
SELECT EventID, EventType
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND EventID = 20
AND LOWER(Message) Like '%type: script%'
'''
)
df.show(10,False)
"""
Explanation: Analytic I
Look for the creation of Event consumers of script type.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi consumer | 20 |
End of explanation
"""
df = spark.sql(
'''
SELECT EventID, SourceName
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-WMI-Activity/Operational'
AND EventID = 5861
AND LOWER(Message) LIKE '%scriptingengine = "vbscript"%'
'''
)
df.show(10,False)
"""
Explanation: Analytic II
Look for the creation of Event consumers of script type (i.e vbscript).
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-WMI-Activity/Operational | Wmi subscription created | 5861 |
End of explanation
"""
df = spark.sql(
'''
SELECT ParentImage, Image, CommandLine, ProcessId, ProcessGuid
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%scrcons%'
'''
)
df.show(10,False)
"""
Explanation: Analytic III
Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is created. This is created by svchost.exe.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
End of explanation
"""
df = spark.sql(
'''
SELECT ParentProcessName, NewProcessName, CommandLine, NewProcessId
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE '%scrcons%'
'''
)
df.show(10,False)
"""
Explanation: Analytic IV
Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is created. This is created by svchost.exe.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 |
End of explanation
"""
df = spark.sql(
'''
SELECT Image, ImageLoaded, Description, ProcessGuid
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND LOWER(ImageLoaded) IN (
'c:\\\windows\\\system32\\\wbem\\\scrcons.exe',
'c:\\\windows\\\system32\\\\vbscript.dll',
'c:\\\windows\\\system32\\\wbem\\\wbemdisp.dll',
'c:\\\windows\\\system32\\\wshom.ocx',
'c:\\\windows\\\system32\\\scrrun.dll'
)
'''
)
df.show(10,False)
"""
Explanation: Analytic V
Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is being used. You can do this by looking for a few modules being loaded by a process.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation
"""
df = spark.sql(
'''
SELECT d.`@timestamp`, c.Image, d.DestinationIp, d.ProcessId
FROM mordorTable d
INNER JOIN (
SELECT b.ImageLoaded, a.CommandLine, b.ProcessGuid, a.Image
FROM mordorTable b
INNER JOIN (
SELECT ProcessGuid, CommandLine, Image
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%scrcons.exe'
) a
ON b.ProcessGuid = a.ProcessGuid
WHERE b.Channel = "Microsoft-Windows-Sysmon/Operational"
AND b.EventID = 7
AND LOWER(b.ImageLoaded) IN (
'c:\\\windows\\\system32\\\wbem\\\scrcons.exe',
'c:\\\windows\\\system32\\\\vbscript.dll',
'c:\\\windows\\\system32\\\wbem\\\wbemdisp.dll',
'c:\\\windows\\\system32\\\wshom.ocx',
'c:\\\windows\\\system32\\\scrrun.dll'
)
) c
ON d.ProcessGuid = c.ProcessGuid
WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational"
AND d.EventID = 3
'''
)
df.show(10,False)
"""
Explanation: Analytic VI
Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is being used and add some context to it that might not be normal in your environment. You can add network connections context to look for any scrcons.exe reaching out to external hosts over the network.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
| Process | Microsoft-Windows-Sysmon/Operational | Process connected to Ip | 3 |
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation
"""
df = spark.sql(
'''
SELECT d.`@timestamp`, d.TargetUserName, c.Image, c.ProcessId
FROM mordorTable d
INNER JOIN (
SELECT b.ImageLoaded, a.CommandLine, b.ProcessGuid, a.Image, b.ProcessId
FROM mordorTable b
INNER JOIN (
SELECT ProcessGuid, CommandLine, Image
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE '%scrcons.exe'
) a
ON b.ProcessGuid = a.ProcessGuid
WHERE b.Channel = "Microsoft-Windows-Sysmon/Operational"
AND b.EventID = 7
AND LOWER(b.ImageLoaded) IN (
'c:\\\windows\\\system32\\\wbem\\\scrcons.exe',
'c:\\\windows\\\system32\\\\vbscript.dll',
'c:\\\windows\\\system32\\\wbem\\\wbemdisp.dll',
'c:\\\windows\\\system32\\\wshom.ocx',
'c:\\\windows\\\system32\\\scrrun.dll'
)
) c
ON split(d.ProcessId, '0x')[1] = LOWER(hex(CAST(c.ProcessId as INT)))
WHERE LOWER(d.Channel) = "security"
AND d.EventID = 4624
AND d.LogonType = 3
'''
)
df.show(10,False)
"""
Explanation: Analytic VII
One of the main goals is to find context that could tell us that scrcons.exe was used over the network (Lateral Movement). One way would be to add a network logon session as context to some of the previous events.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 |
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, TargetUserName,ImpersonationLevel, LogonType, ProcessName
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND ProcessName LIKE '%scrcons.exe'
'''
)
df.show(10,False)
"""
Explanation: Analytic VIII
One of the main goals is to find context that could tell us that scrcons.exe was used over the network (Lateral Movement). One way would be to add a network logon session as context to some of the previous events.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 |
End of explanation
"""
|
davicsilva/dsintensive
|
notebooks/eda-miniprojects/racial_disc/sliderule_dsi_inferential_statistics_exercise_2-Copy1.ipynb
|
apache-2.0
|
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: Examining Racial Discrimination in the US Job Market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés to black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes when presented to the employer.
Exercises
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Write a story describing the statistical significance in the context or the original problem.
Does your analysis mean that race/name is the most important factor in callback success? Why or why not? If not, how would you amend your analysis?
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Experiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states
Scipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Import the libs
End of explanation
"""
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')
data.head()
"""
Explanation: Loading the data from a stata file
End of explanation
"""
data_calls = data[['id','race','call', 'education', 'yearsexp']].loc[data['call']==1]
data_calls.head()
"""
Explanation: Dataset with résumés that received callbacks
End of explanation
"""
# total résumés in the dataset
n = data.shape[0]
# Callback / white/black-sounding name
total_call = data['id'].loc[data['call'] ==1.0].count()
call_w = data['id'].loc[(data['race'] =='w') & (data['call'] ==1.0)].count()
call_b = data['id'].loc[(data['race'] =='b') & (data['call'] ==1.0)].count()
# Summary
print("Total résumés = %d Curricula Vitae (CV)" % n)
print("Total callbacks = %d calls (%.2f%% all CV)" % (total_call,(100*(total_call/n))))
print("...")
print("...Callback for with white-sounding name = %d or %.2f%% from résumés with callbacks;" % (call_w, (100*(call_w/total_call))))
print("...Callback for black-sounding name = %d or %.2f%% from résumés with callbacks." % (call_b, (100*(call_b/total_call))))
print("...")
print("...Callback for white-sounding name is %.2f%% greater than for black-sounding names" % (100*((call_w - call_b)/call_w)))
"""
Explanation: Callbacks for white and black-sounding names (a summary)
End of explanation
"""
# create data
label_call_w = "white-sounding names - " + '{:.5}'.format(str(100*(call_w/total_call))) +"%"
label_call_b = "black-sounding names - " + '%.5s' % format(str(100*(call_b/total_call))) +"%"
names= label_call_w, label_call_b
size=[call_w, call_b]
# Create a circle for the center of the plot
inner_circle=plt.Circle( (0,0), 0.7, color='white')
# Give color names
plt.pie(size, labels=names, colors=['blue','skyblue'], labeldistance=1.0, wedgeprops = { 'linewidth' : 7, 'edgecolor' : 'white' })
p=plt.gcf()
p.gca().add_artist(inner_circle)
plt.show()
"""
Explanation: Callbacks: a visual presentation
End of explanation
"""
# Dataframe with only the résumés with callback (data_calls)
data_calls.head()
"""
Explanation: 1. What test is appropriate for this problem? Does CLT apply?
We have a problem with variables that represent categorical values: 'b', 'w'.<br>
For this type of analysis - the relationship between two categorical variables - their distribution in the dataset is often displayed in an R×C table, also referred to as a contingency table [1].
In order to do hypothesis testing with categorical variables we use the chi-square test [1].
Table 1. White-sounding and black-souding names versus callbacks
| | Callback | No callback |
|:----|:--------:|:-----------:|
|white-sounding names | X% | Y% |
| black-sounding names | Z% | W% |
Does CLT apply?
Yes: the callback outcome is a sum of many independent Bernoulli trials and both groups are large, so the sampling distribution of the callback proportions is approximately normal.
2. What are the null and alternate hypotheses?
2.1 - Null hypothesis: H0 => "Race has NOT a significant impact on the rate of callbacks for résumés".
2.2 - Alternate hypothesis: Ha => "Race has a significant impact on the rate of callbacks for résumés".
3. Compute margin of error, confidence interval, and p-value
About résumés with callback:
End of explanation
"""
# Total résumés
n = data.shape[0]
# Total callbacks
n_call = data_calls.shape[0]
# Résumés white and black sounding names
total_white = data['id'].loc[(data['race'] =='w')].count()
total_black = data['id'].loc[(data['race'] =='b')].count()
# Perc.(%) white and black sounding names from total
perc_tot_white = total_white/n
perc_tot_black = total_black/n
# Perc.(%) white and black sounding names with callback
perc_call_white = call_w/n_call
perc_call_black = call_b/n_call
print("Total résumés with callback = %d" %n_call)
print("---------------------------------")
print("Perc.(%%) white-sounding names from total = %.2f%%" %(perc_tot_white*100))
print("Perc.(%%) black-sounding names from total = %.2f%%" %(perc_tot_black*100))
print("---------------------------------")
print("Perc.(%%) white-sounding names from callbacks = %.2f%%" %(perc_call_white*100))
print("Perc.(%%) black-sounding names from total = %.2f%%" %(perc_call_black*100))
print("---------------------------------")
"""
Explanation: Summary:
End of explanation
"""
rc = data[['race','call']]
sns.barplot(x='race', y='call', data=rc)
plt.show()
"""
Explanation: Barplot: 'race' versus 'callback'
End of explanation
"""
sns.set()
# Plot years of formal education as a function of years of experience, colored by race
g = sns.lmplot(x="yearsexp", y="education", hue="race",
truncate=True, size=7, data=data_calls)
# Use more informative axis labels than are provided by default
g.set_axis_labels("Years of experience", "Years of formal education")
# Arrays with callback:
#... w_callback = white-sounding name
#... b_callback = black-sounding name
w_callback = data_calls.iloc[:, 1][data_calls.race == 'w'].values
b_callback = data_calls.iloc[:, 1][data_calls.race == 'b'].values
"""
Explanation: Investigation: is there any relation between white/black-sounding names and education versus years of experience?
End of explanation
"""
critical_value = 1.96
margin_error = np.sqrt((perc_call_white*(1-perc_call_white)/n))*critical_value
low_critical_value = perc_call_white - margin_error
high_critical_value = perc_call_white + margin_error
print("White-sounding names:")
print("---------------------")
print("Mean: ", perc_call_white)
print("Margin of error: ", margin_error)
print("---------------------")
print("Confidence interval: ")
print("...From = %.2f" %low_critical_value)
print("...To = %.2f" %high_critical_value)
"""
Explanation: Margin of error and confidence intervals for 'w' and 'b' groups
For alpha = 0.05 and degrees of freedom (df) = 1, we have χ² = 3.84, i.e. z = 1.96
White-sounding names
End of explanation
"""
critical_value = 1.96
margin_error = np.sqrt((perc_call_black*(1-perc_call_black)/n))*critical_value
low_critical_value = perc_call_black - margin_error
high_critical_value = perc_call_black + margin_error
print("Black-sounding names:")
print("---------------------")
print("...Mean: ", perc_call_black)
print("...Margin of error: ", margin_error)
print("---------------------")
print("Confidence interval: ")
print("...From = %.2f" %low_critical_value)
print("...To = %.2f" %high_critical_value)
"""
Explanation: Black-sounding names
End of explanation
"""
|
HuanglabPurdue/NCS
|
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
|
gpl-3.0
|
import matplotlib
import matplotlib.pyplot as pyplot
import numpy
import os
import time
# tensorflow
import tensorflow as tf
from tensorflow.python.training import adagrad
from tensorflow.python.training import adam
from tensorflow.python.training import gradient_descent
# python3-6 NCS. This provides the OTF and the simulated images.
import pyNCS
import pyNCS.denoisetools as ncs
# python3 and C NCS.
import pyCNCS.ncs_c as ncsC
# Generate the same random noise each time.
numpy.random.seed(1)
py_ncs_path = os.path.dirname(os.path.abspath(pyNCS.__file__))
print(py_ncs_path)
"""
Explanation: NCS using Tensorflow versus C.
This notebook compares the performance of Tensorflow versus the C library for NCS. Note that this works a little differently than the usual approach. Here we solve the entire image in a single step rather than breaking it up into lots of sub-images. This works fine, at least for the simulated image, as it isn't too large. Both Tensorflow and the C library are fairly memory efficient.
Timing was done with the CPU version of Tensorflow. The GPU version might be faster?
In order for this to work you need both the reference NCS/python3-6 Python module and the NCS/clib Python module in your Python path.
End of explanation
"""
# create normalized ideal image
fpath1 = os.path.join(py_ncs_path, "../randwlcposition.mat")
imgsz = 128
zoom = 8
Pixelsize = 0.1
NA = 1.4
Lambda = 0.7
t = time.time()
res = ncs.genidealimage(imgsz,Pixelsize,zoom,NA,Lambda,fpath1)
elapsed = time.time()-t
print('Elapsed time for generating ideal image:', elapsed)
imso = res[0]
pyplot.imshow(imso,cmap="gray")
# select variance map from calibrated map data
fpath = os.path.join(py_ncs_path, "../gaincalibration_561_gain.mat")
noisemap = ncs.gennoisemap(imgsz,fpath)
varsub = noisemap[0]*10 # increase the readout noise by 10 to demonstrate the effect of NCS algorithm
gainsub = noisemap[1]
# generate simulated data
I = 100
bg = 10
offset = 100
N = 1
dataimg = ncs.gendatastack(imso,varsub,gainsub,I,bg,offset,N)
imsd = dataimg[1]
print(imsd.shape)
alpha = 0.1
"""
Explanation: pyNCS analysis
This is basically a copy of NCS/python3-6/NCSdemo_simulation.py
End of explanation
"""
# Get the OTF mask that NCSDemo_simulation.py used.
rcfilter = ncs.genfilter(128,Pixelsize,NA,Lambda,'OTFweighted',1,0.7)
print(rcfilter.shape)
pyplot.imshow(rcfilter, cmap = "gray")
pyplot.show()
# Calculate gamma and run Python/C NCS.
gamma = varsub/(gainsub*gainsub)
# This takes ~100ms on my laptop.
out_c = ncsC.pyReduceNoise(imsd[0], gamma, rcfilter, alpha)
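# Hedged timing sketch (an addition): reproduce the rough "~100ms" figure quoted above
# using the time module that was imported earlier.
t = time.time()
out_c = ncsC.pyReduceNoise(imsd[0], gamma, rcfilter, alpha)
print("pyCNCS elapsed time: {0:.3f}s".format(time.time() - t))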
"""
Explanation: pyCNCS analysis
Mixed C and Python NCS analysis.
End of explanation
"""
f,(ax1,ax2) = pyplot.subplots(1,2,sharey=False,figsize = (8,8))
ax1.imshow(imsd[0],aspect='equal',cmap="gray")
ax2.imshow(out_c,aspect ='equal',cmap="gray")
pyplot.show()
"""
Explanation: Compare results to reference implementation.
End of explanation
"""
py_otf_mask = numpy.fft.fftshift(rcfilter.astype(numpy.float32))
FITMIN = tf.constant(1.0e-6)
tf_alpha = tf.constant(numpy.float32(alpha))
tf_data = tf.Variable(imsd[0].astype(numpy.float32), shape = (128, 128), trainable=False)
tf_gamma = tf.constant(gamma.astype(numpy.float32))
tf_rc = tf.constant(py_otf_mask*py_otf_mask/(128.0*128.0))
tf_u = tf.Variable(imsd[0].astype(numpy.float32), shape = (128, 128), trainable=True)
# Tensorflow cost function.
@tf.function
def cost():
## LL
t1 = tf.math.add(tf_data, tf_gamma)
t2 = tf.math.add(tf_u, tf_gamma)
t2 = tf.math.maximum(t2, FITMIN)
t2 = tf.math.log(t2)
t2 = tf.math.multiply(t1, t2)
t2 = tf.math.subtract(tf_u, t2)
c1 = tf.math.reduce_sum(t2)
## NC
t1 = tf.dtypes.complex(tf_u, tf.zeros_like(tf_u))
t2 = tf.signal.fft2d(t1)
t2 = tf.math.multiply(t2, tf.math.conj(t2))
t2 = tf.math.abs(t2)
t2 = tf.math.multiply(t2, tf_rc)
c2 = tf.math.reduce_sum(t2)
c2 = tf.math.multiply(tf_alpha, c2)
return tf.math.add(c1, c2)
# Gradient Descent Optimizer.
#
# This takes ~700ms on my laptop, so about 7x slower.
tf_data.assign(numpy.copy(imsd[0]))
tf_u.assign(tf_data.numpy())
for i in range(100):
if((i%10)==0):
print(cost().numpy())
opt = gradient_descent.GradientDescentOptimizer(2.0).minimize(cost)
out_tf = tf_u.numpy()
f,(ax1,ax2) = pyplot.subplots(1,2,sharey=False,figsize = (8,4))
ax1.imshow(out_c,aspect='equal',cmap="gray")
ax2.imshow(out_tf,aspect ='equal',cmap="gray")
pyplot.show()
print("Maximum pixel difference is {0:.3f}e-".format(numpy.max(numpy.abs(out_c - out_tf))))
# AdamOptimizer.
#
# This takes ~1.5s on my laptop, so about 15x slower.
tf_data.assign(numpy.copy(imsd[0]))
tf_u.assign(tf_data.numpy())
for i in range(100):
if((i%10)==0):
print(cost().numpy())
opt = adam.AdamOptimizer(0.8).minimize(cost)
out_tf_2 = tf_u.numpy()
f,(ax1,ax2) = pyplot.subplots(1,2,sharey=False,figsize = (8,4))
ax1.imshow(out_c,aspect='equal',cmap="gray")
ax2.imshow(out_tf_2,aspect ='equal',cmap="gray")
pyplot.show()
print("Maximum pixel difference is {0:.3f}e-".format(numpy.max(numpy.abs(out_c - out_tf_2))))
# Adagrad.
#
# This takes ~950ms on my laptop, so about 9.5x slower.
tf_data.assign(numpy.copy(imsd[0]))
tf_u.assign(tf_data.numpy())
for i in range(100):
if((i%10)==0):
print(cost().numpy())
opt = adagrad.AdagradOptimizer(0.8).minimize(cost)
out_tf_3 = tf_u.numpy()
f,(ax1,ax2) = pyplot.subplots(1,2,sharey=False,figsize = (8,4))
ax1.imshow(out_c,aspect='equal',cmap="gray")
ax2.imshow(out_tf_3,aspect ='equal',cmap="gray")
pyplot.show()
print("Maximum pixel difference is {0:.3f}e-".format(numpy.max(numpy.abs(out_c - out_tf_3))))
"""
Explanation: Tensorflow
End of explanation
"""
|
neurohackweek/kids_rsfMRI_motion
|
split_half_reliability/Development_Abide_Motion_Wrapper.ipynb
|
mit
|
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib import ticker
from glob import glob
import numpy as np
import os
import pandas as pd
from scipy.stats import linregress, pearsonr, spearmanr
import nibabel as nib
import urllib
import seaborn as sns
sns.set_context('notebook', font_scale=2)
sns.set_style('white')
"""
Explanation: Step by step code for abide_motion_wrapper.py
End of explanation
"""
behav_data_f = '../Phenotypic_V1_0b_preprocessed.csv'
df = pd.read_csv(behav_data_f)
"""
Explanation: Read in the phenotypic behavioural data
This is the Phenotypic_V1_0b_preprocessed1.csv file. It's saved in the DATA folder.
You can find the explanations of all the columns in the ABIDE_LEGEND_V1.02.pdf file.
We're going to load the data into a pandas data frame.
End of explanation
"""
df = df.loc[df['func_perc_fd'].notnull(), :]
df = df.loc[df['FILE_ID']!='no_filename', :]
df['AGE_YRS'] = np.floor(df['AGE_AT_SCAN'])
"""
Explanation: Our measure of interest is func_perc_fd, so let's get rid of all participants who don't have a value!
We also want to make sure we actually have the imaging data, so let's get rid of all participants whose file ID is "no_filename".
We also want to know the age in years for each participant.
End of explanation
"""
motion_thresh = 80
df_samp_motion = df.loc[df['func_perc_fd']<motion_thresh, :]
age_l, age_u = 6, 18
df_samp = df_samp_motion.loc[(df_samp_motion['AGE_YRS']>=age_l) & (df_samp_motion['AGE_YRS']<=age_u), :]
"""
Explanation: Create a stratified sample
We want to see how similar the average connectivity values are when there are no differences between the groups.
Therefore we need to split participants into matched samples.
What do they need to be matched on?!
DSM_IV_TR -- their diagnosis according to the DSM IV (0: control, 1: ASD, 2: Asp, 3: PDD)
SITE_ID -- the scanning site
AGE_YRS -- age in years
SEX -- sex (1: male, 2: female)
We also want to make sure that we sample evenly from the distribution of motion. This will prevent us from over sampling the low motion people, for which we have more data on.
Threshold your sample according to the motion/age cut offs
We're going to systematically change the upper threshold of the percent of volumes that exceed 0.2mm frame to frame dispacement.
And we're also going to select our lower and upper age limits. NOTE that these are inclusive boundaries. So for example a lower limit of 6 and an upper limit of 10 will include participants who are 6, 7, 8, 9 and 10 years old.
func_perc_fd
AGE_YRS
End of explanation
"""
plt.hist(np.array(df_samp ["func_perc_fd"]),bins=40)
plt.xlabel('func_perc_fd')
plt.ylabel('count')
"""
Explanation: Look at distribution of motion in sample
End of explanation
"""
##sort subjects based on motion
sort_column_list = ['func_perc_fd']
df_motion_sorted = df_samp.sort_values(by=sort_column_list)
#check that sorted worked!
df = df_motion_sorted[['func_perc_fd', "SUB_ID"]]
df.head(10)
#df.tail(10)
##rank subjects by motion
r=range(len(df_motion_sorted))
r_df=pd.DataFrame(r)
r_df.columns = ['rank']
r_df['newcol'] = df_motion_sorted.index
r_df.set_index('newcol', inplace=True)
r_df.index.names = [None]
df_motion_sorted_rank=pd.concat ([r_df,df_motion_sorted], axis=1)
"""
Explanation: To avoid oversampling the many low movers, we are going to split up our data into 4 motion quartiles and evenly sample from them
To do this we are going to:
* sort our sample based on motion and then add a column of their ranking
* based on our sample of motion, create motion quartile cutoffs
* create bins of subjects by motion quartile cutoffs
First we will sort our sample based on motion ('func_perc_fd')
End of explanation
"""
plt.scatter(np.array(df_motion_sorted_rank["rank"]), np.array(df_motion_sorted_rank['func_perc_fd']))
plt.xlabel('rank')
plt.ylabel('func_perc_fd')
"""
Explanation: Let's check to make sure we correctly sorted our subjects by motion
End of explanation
"""
##create bins of subjects in quartiles
l=len(df_motion_sorted_rank)
chunk=l/4
chunk1=chunk
chunk2=2*chunk
chunk3=3*chunk
chunk4=l
"""
Explanation: Now, based on our sample of motion, create motion quartile cutoffs
End of explanation
"""
first=df_motion_sorted_rank[df_motion_sorted_rank['rank']<=chunk1]
second=df_motion_sorted_rank[(df_motion_sorted_rank['rank']>chunk1) & (df_motion_sorted_rank['rank']<=chunk2)]
third=df_motion_sorted_rank[(df_motion_sorted_rank['rank']>chunk2) & (df_motion_sorted_rank['rank']<=chunk3)]
fourth=df_motion_sorted_rank[df_motion_sorted_rank['rank']>=chunk3]
"""
Explanation: Then create bins of subjects by motion quartile cutoffs
End of explanation
"""
motion_boundaries = (first.func_perc_fd.max(), second.func_perc_fd.max(), third.func_perc_fd.max())
for boundary in motion_boundaries:
print boundary
plt.hist(np.array(df["func_perc_fd"]),bins=40)
plt.xlabel('func_perc_fd')
plt.ylabel('count')
for boundary in motion_boundaries:
plt.plot((boundary, boundary), (0,350), 'k-')
"""
Explanation: Look at what our sampling looks like
End of explanation
"""
##shuffle
first_rand = first.reindex(np.random.permutation(first.index))
second_rand = second.reindex(np.random.permutation(second.index))
third_rand = third.reindex(np.random.permutation(third.index))
fourth_rand = fourth.reindex(np.random.permutation(fourth.index))
#Only keep the top 2*n/4 participants.
n=50
n_samp=(n*2)/4
n_samp
first_samp_2n = first_rand.iloc[:n_samp, :]
second_samp_2n = second_rand.iloc[:n_samp, :]
third_samp_2n = third_rand.iloc[:n_samp, :]
fourth_samp_2n = fourth_rand.iloc[:n_samp, :]
"""
Explanation: Looks good! We are evenly sampling from all motion bins
Only keep 2n/4 participants from each bin
Remember to shuffle these remaining participants to ensure you get different sub samples each time you run the code.
End of explanation
"""
#append these together
frames = [first_samp_2n, second_samp_2n, third_samp_2n,fourth_samp_2n]
final_df = pd.concat(frames)
sort_column_list = ['DSM_IV_TR', 'DX_GROUP', 'SITE_ID', 'SEX', 'AGE_AT_SCAN']
df_samp_2n_sorted = final_df.sort_values(by=sort_column_list)
"""
Explanation: Append all the samples together into one big dataframe and then sort according to matching measures
End of explanation
"""
df_grp_A = df_samp_2n_sorted.iloc[::2, :]
df_grp_B = df_samp_2n_sorted.iloc[1::2, :]
"""
Explanation: Split this data frame into two and VOILA
End of explanation
"""
from abide_motion_wrapper import split_two_matched_samples
df_A, df_B = split_two_matched_samples(df, 80, 6, 18, 200)
print df_A[['AGE_AT_SCAN', 'DX_GROUP', 'SEX']].describe()
print df_B[['AGE_AT_SCAN', 'DX_GROUP', 'SEX']].describe()
"""
Explanation: Actually this can be implemented as a function
The inputs to split_two_matched_samples are the master data frame (df), the motion threshold (motion_thresh), lower age limit (age_l), upper age limit (age_u) and the number of participants (n) in each group.
End of explanation
"""
## to grab data
for f_id in df.loc[:, 'FILE_ID']:
if not (f_id == "no_filename") and not os.path.isfile("../DATA/{}_rois_aal.1D".format(f_id)):
print f_id
testfile = urllib.URLopener()
testfile.retrieve(("https://s3.amazonaws.com/fcp-indi/data/Projects"
"/ABIDE_Initiative/Outputs/cpac/filt_noglobal/rois_aal"
"/{}_rois_aal.1D".format(f_id)),
"DATA/{}_rois_aal.1D".format(f_id))
"""
Explanation: Now that we have our groups, we are going to want to load in the actual AAL ROI time series files and make individual and group correlation matrices
We already have the aal time series files downloaded in the DATA folder, but if you wanted to download them yourself, you can use the code below
End of explanation
"""
## looking at an example aal time series file for one subject
test = '../DATA/NYU_0051076_rois_aal.1D'
tt = pd.read_csv(test, sep='\t')
tt.head()
def make_group_corr_mat(df):
"""
    This function reads in each subject's aal roi time series file and creates an roi-roi correlation matrix
    for each subject, then stacks them all together into a 3d matrix. The final outputs are that 3d matrix of all
    subjects' roi-roi correlations, a mean roi-roi correlation matrix and an roi-roi variance matrix.
**NOTE WELL** This returns correlations transformed by the Fisher z, aka arctanh, function.
"""
for i, (sub, f_id) in enumerate(df[['SUB_ID', 'FILE_ID']].values):
# read each subjects aal roi time series files
ts_df = pd.read_table('../DATA/{}_rois_aal.1D'.format(f_id))
# create a correlation matrix from the roi all time series files
corr_mat_r = ts_df.corr()
# the correlations need to be transformed to Fisher z, which is
# equivalent to the arctanh function.
corr_mat_z = np.arctanh(corr_mat_r)
# for the first subject, create a correlation matrix of zeros
# that is the same dimensions as the aal roi-roi matrix
if i == 0:
all_corr_mat = np.zeros([corr_mat_z.shape[0], corr_mat_z.shape[1], len(df)])
# now add the correlation matrix you just created for each subject to the all_corr_mat matrix (3D)
all_corr_mat[:, :, i] = corr_mat_z
# create the mean correlation matrix (ignore nas - sometime there are some...)
av_corr_mat = np.nanmean(all_corr_mat, axis=2)
    # create the group variance matrix (ignore nas - sometimes there are some...)
var_corr_mat = np.nanvar(all_corr_mat, axis=2)
return all_corr_mat, av_corr_mat, var_corr_mat
"""
Explanation: The function below (make_group_corr_mat) creates individual and group roi-roi correlation matrices by:
Reading in each subject's AAL roi time series file (in the DATA folder). Each column is an AAL ROI and the rows below correspond to its average time series.
Creating roi-roi correlation matrices for each subject
Fisher z transforming the correlation matrices
Concatenating all subjects' roi-roi matrices and creating a mean and variance roi-roi correlation matrix
End of explanation
"""
M_grA, M_grA_av, M_grA_var = make_group_corr_mat(df_A)
M_grB, M_grB_av, M_grB_var = make_group_corr_mat(df_B)
"""
Explanation: Make the group correlation matrices for the two different groups.
End of explanation
"""
sub, f_id = df[['SUB_ID', 'FILE_ID']].values[0]
ts_df = pd.read_table('../DATA/{}_rois_aal.1D'.format(f_id))
corr_mat_r = ts_df.corr()
corr_mat_z = np.arctanh(corr_mat_r)
r_array = np.triu(corr_mat_r, k=1).reshape(-1)
z_array = np.triu(corr_mat_z, k=1).reshape(-1)
sns.distplot(r_array[r_array<>0.0], label='r values')
sns.distplot(z_array[z_array<>0.0], label='z values')
plt.axvline(c='k', linewidth=0.5)
plt.legend()
plt.title('Pairwise correlation values\nfor an example subject')
sns.despine()
"""
Explanation: Check out the distributions of the r and z values in one of the correlation matrices
Just to see what happens to the data when you apply the arctanh transform.
(The answer is: not too much!)
End of explanation
"""
fig, ax_list = plt.subplots(1,2)
ax_list[0].imshow(M_grA_av, interpolation='none', cmap='RdBu_r', vmin=-1, vmax=1)
ax_list[1].imshow(M_grB_av, interpolation='none', cmap='RdBu_r', vmin=-1, vmax=1)
for ax in ax_list:
ax.set_xticklabels([])
ax.set_yticklabels([])
fig.suptitle('Comparison of average\nconnectivity matrices for two groups')
plt.tight_layout()
"""
Explanation: Visually check the average correlation matrices for the two groups
End of explanation
"""
indices = np.triu_indices_from(M_grA_av, k=1)
grA_values = M_grA_av[indices]
grB_values = M_grB_av[indices]
min_val = np.min([np.min(grA_values), np.min(grB_values)])
max_val = np.max([np.max(grA_values), np.max(grB_values)])
fig, ax = plt.subplots(figsize=(6,5))
ax.plot([np.min(grA_values), np.max(grA_values)], [np.min(grA_values), np.max(grA_values)], c='k', zorder=-1)
ax.scatter(grA_values, grB_values, color=sns.color_palette()[3], s=10, edgecolor='face')
ticks = ticker.MaxNLocator(5)
ax.xaxis.set_major_locator(ticks)
ax.yaxis.set_major_locator(ticks)
plt.xlabel('average roi-roi matrix group A ',fontsize=15)
plt.ylabel('average roi-roi matrix group B',fontsize=15)
ax.set_title('Correlation between average\nmatrices for two groups')
plt.tight_layout()
"""
Explanation: Scatter plot of the two connectivity matrices
End of explanation
"""
def calc_rsq(av_corr_mat_A, av_corr_mat_B):
"""
From wikipedia: https://en.wikipedia.org/wiki/Coefficient_of_determination
Rsq = 1 - (SSres / SStot)
SSres is calculated as the sum of square errors (where the error
is the difference between x and y).
SStot is calculated as the total sum of squares in y.
"""
# Get the data we need
inds = np.triu_indices_from(av_corr_mat_B, k=1)
x = av_corr_mat_A[inds]
y = av_corr_mat_B[inds]
# Calculate the error/residuals
res = y - x
SSres = np.sum(res**2)
# Sum up the total error in y
y_var = y - np.mean(y)
SStot = np.sum(y_var**2)
# R squared
Rsq = 1 - (SSres/SStot)
return Rsq
"""
Explanation: Looks very similar!
Now that we have the roi-roi mean correlation matrices for each group, we want to see how similar they are quantitatively
We expect them to be (about) exactly the same, thus we are going to see how far off the relationship is between these two correlation matrices to the unity line. This is a twist on the classical R squared. You can read more about it here: https://en.wikipedia.org/wiki/Coefficient_of_determination
End of explanation
"""
indices = np.triu_indices_from(M_grA_av, k=1)
grA_values = M_grA_av[indices]
grB_values = M_grB_av[indices]
min_val = np.min([np.nanmin(grA_values), np.nanmin(grB_values)])
max_val = np.max([np.nanmax(grA_values), np.nanmax(grB_values)])
mask_nans = np.logical_or(np.isnan(grA_values), np.isnan(grB_values))
fig, ax = plt.subplots(figsize=(6,5))
sns.regplot(grA_values[~mask_nans],
grB_values[~mask_nans],
color = sns.color_palette()[5],
scatter_kws={'s' : 10, 'edgecolor' : 'face'},
ax=ax)
ax.plot([min_val, max_val], [min_val, max_val], c='k', zorder=-1)
ax.axhline(0, color='k', linewidth=0.5)
ax.axvline(0, color='k', linewidth=0.5)
ticks = ticker.MaxNLocator(5)
ax.xaxis.set_major_locator(ticks)
ax.yaxis.set_major_locator(ticks)
ax.set_title('Correlation between average\nmatrices for two groups\nblack line = unity line\nblue line = best fit line')
plt.tight_layout()
"""
Explanation: Let's first visualize how far off our actual two sample correlation is from the unity line
Black line = unity line
Blue line = best fit line
End of explanation
"""
Rsq=calc_rsq( M_grA_av, M_grB_av)
Rsq
"""
Explanation: This looks like a very good fit!
Let's check what the actual Rsq is with our function - we expect it to be super high!
End of explanation
"""
def split_half_outcome(df, motion_thresh, age_l, age_u, n, n_perms=100):
"""
This function returns the R squared of how each parameter affects split-half reliability!
    It takes in a dataframe, a motion threshold, an age lower limit (age_l), an age upper limit (age_u), a sample size (n),
    and a number of permutations (n_perms, default 100). This function essentially splits the data frame
    into two matched samples (split_two_matched_samples.py), then creates mean roi-roi correlation matrices per sample
    (make_group_corr_mat.py), then calculates the R squared (calc_rsq.py) between the two samples'
    correlation matrices, and returns all the permutation coefficients of determination as an array.
"""
#set up data frame of average R squared to fill up later
Rsq_list = []
#Do this in each permutation
for i in range(n_perms):
#create two matched samples split on motion_thresh, age upper, age lower, and n
df_A, df_B = split_two_matched_samples(df, motion_thresh, age_l, age_u, n)
#make the matrix of all subjects roi-roi correlations, make the mean corr mat, and make covariance cor mat
#do this for A and then B
all_corr_mat_A, av_corr_mat_A, var_corr_mat_A = make_group_corr_mat(df_A)
all_corr_mat_B, av_corr_mat_B, var_corr_mat_B = make_group_corr_mat(df_B)
#calculate the R squared between the two matrices
Rsq = calc_rsq(av_corr_mat_A, av_corr_mat_B)
#print "Iteration " + str(i) + ": R^2 = " + str(Rsq)
#build up R squared output
Rsq_list += [Rsq]
return np.array(Rsq_list)
rsq_list = split_half_outcome(df, 50, 6, 18, 20, n_perms=100)
"""
Explanation: Run this split half calculation multiple times to get a distribution of R squared values
We want to build up a distribution of R squared values for each specific motion cutoff, age range, and N combination
End of explanation
"""
sns.distplot(rsq_list )
"""
Explanation: Plot the distribution of R squared values
End of explanation
"""
def abide_motion_wrapper(motion_thresh, age_l, age_u, n, n_perms=1000, overwrite=True):
behav_data_f = '../Phenotypic_V1_0b_preprocessed1.csv'
f_name = 'RESULTS/rsq_{:03.0f}pct_{:03.0f}subs_{:02.0f}to{:02.0f}.csv'.format(motion_thresh, n, age_l, age_u)
# By default this code will recreate files even if they already exist
# (overwrite=True)
# If you don't want to do this though, set overwrite to False and
# this step will skip over the analysis if the file already exists
if not overwrite:
# If the file exists then skip this loop
if os.path.isfile(f_name):
return
df = read_in_data(behav_data_f)
rsq_list = split_half_outcome(df, motion_thresh, age_l, age_u, n, n_perms=n_perms)
#print "R Squared list shape: " + str(rsq_list.shape)
med_rsq = np.median(rsq_list)
rsq_CI = np.percentile(rsq_list, 97.5) - np.percentile(rsq_list, 2.5)
columns = [ 'motion_thresh', 'age_l', 'age_u', 'n', 'med_rsq', 'CI_95' ]
results_df = pd.DataFrame(np.array([[motion_thresh, age_l, age_u, n, med_rsq, rsq_CI ]]),
columns=columns)
results_df.to_csv(f_name)
"""
Explanation: This is not a normal distribution, so when we want to plot the average Rsq value for a certain combination of age, motion cutoff and n, we should take the median of Rsq not the mean
We can wrap everything we just did into one big function
The function iterates through different sample sizes, age bins, and motion cutoffs for a specific amount of permutations and:
* Creates 2 split half samples
* Creates average roi-roi correlation matrices for each sample
* Calculates R squared value for fit of the two samples mean roi-roi corrlelation matrices
* Creates csvs of median Rsq and 95% confidence intervals per each motion, age, and sample size iteration
Note: the output csvs will be saved in RESULTS and will be labeled based on their specific input criteria. So for example, if the motion threshold was 50, age lower was 6, age upper was 10 and n=20, the csv output would be rsq_050pct_020subs_06to10.csv
End of explanation
"""
columns = [ 'motion_thresh', 'med_rsq', 'CI_95', 'n', 'age_l', 'age_u']
results_df = pd.DataFrame(columns = columns)
for f in glob('RESULTS/*csv'):
temp_df = pd.read_csv(f, index_col=0)
results_df = results_df.append(temp_df)
results_df.to_csv('RESULTS/SummaryRsqs.csv', index=None, columns=columns)
"""
Explanation: If you want to just run it with abide_motion_wrapper.py you will need to use loop_abide_motion_qsub_array.sh and SgeAbideMotion.sh
loop_abide_motion_qsub_array.sh loops through the ages, motion thresholds, and sample sizes of interest. To actually run the code (looping through all iterations) run SgeAbideMotion.sh. This is also where you can choose how to submit jobs (and parallelize or not)
Once you have finished running the code, you will want to summarize the data for plotting
This code grabs and formats the data for plotting into a summary file called SummaryRsqs.csv that has columns for: motion threshold, median Rsq, 95% CI for Rsq, sample size, age lower, and age upper. It will be in the RESULTS folder.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/official/migration/UJ9 Vertex SDK Custom XGBoost with pre-built training container.ipynb
|
apache-2.0
|
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex AI: Vertex AI Migration: Custom XGBoost model with pre-built training container
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ9%20Vertex%20SDK%20Custom%20XGBoost%20with%20pre-built%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ9%20Vertex%20SDK%20Custom%20XGBoost%20with%20pre-built%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
TRAIN_VERSION = "xgboost-cpu.1-1"
DEPLOY_VERSION = "xgboost-cpu.1-1"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
"""
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Iris tabular classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single Instance Training for Iris
import datetime
import os
import subprocess
import sys
import pandas as pd
import xgboost as xgb
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
args = parser.parse_args()
# Download data
iris_data_filename = 'iris_data.csv'
iris_target_filename = 'iris_target.csv'
data_dir = 'gs://cloud-samples-data/ai-platform/iris'
# gsutil outputs everything to stderr so we need to divert it to stdout.
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_data_filename),
iris_data_filename], stderr=sys.stdout)
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_target_filename),
iris_target_filename], stderr=sys.stdout)
# Load data into pandas, then use `.values` to get NumPy arrays
iris_data = pd.read_csv(iris_data_filename).values
iris_target = pd.read_csv(iris_target_filename).values
# Convert one-column 2D array into 1D array for use with XGBoost
iris_target = iris_target.reshape((iris_target.size,))
# Load data into DMatrix object
dtrain = xgb.DMatrix(iris_data, label=iris_target)
# Train XGBoost model
bst = xgb.train({}, dtrain, 20)
# Export the classifier to a file
model_filename = 'model.bst'
bst.save_model(model_filename)
# Upload the saved model file to Cloud Storage
gcs_model_path = os.path.join(args.model_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
"""
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note: when referring to it in the worker pool specification, you replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_iris.tar.gz
"""
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
"""
job = aip.CustomTrainingJob(
display_name="iris_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
"""
Explanation: Train a model
training.create-python-pre-built-container
Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
"""
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
job.run(
replica_count=1, machine_type=TRAIN_COMPUTE, base_output_dir=MODEL_DIR, sync=True
)
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
"""
Explanation: Example output:
<google.cloud.aiplatform.training_jobs.CustomTrainingJob object at 0x7feab1346710>
Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
"""
model = aip.Model.upload(
display_name="iris_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
"""
Explanation: general.import-model
Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
"""
INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]]
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Model
INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/759209241365/locations/us-central1/models/925164267982815232/operations/3458372263047331840
INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/759209241365/locations/us-central1/models/925164267982815232
INFO:google.cloud.aiplatform.models:To use this Model in another session:
INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/759209241365/locations/us-central1/models/925164267982815232')
Make batch predictions
predictions.batch-prediction
Make test items
You will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
"""
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
for i in INSTANCES:
f.write(str(i) + "\n")
! gsutil cat $gcs_input_uri
"""
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form:
[ [ content_1], [content_2] ]
content: The feature values of the test item as a list.
End of explanation
"""
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="iris_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
predictions_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=False,
)
print(batch_predict_job)
"""
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
instances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
predictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
machine_type: The type of machine to use for the batch prediction job.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
"""
batch_predict_job.wait()
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
"""
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
"""
Explanation: Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
instance: The prediction request.
prediction: The prediction response.
End of explanation
"""
DEPLOYED_NAME = "iris-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
"""
Explanation: Example Output:
{'instance': [1.4, 1.3, 5.1, 2.8], 'prediction': 2.0451931953430176}
Make online predictions
predictions.deploy-model-api
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model ID of a model already deployed to the endpoint. The percents must add up to 100.
machine_type: The type of machine to use for serving predictions.
min_replica_count: The minimum number of compute instances to provision.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
End of explanation
"""
INSTANCE = [1.4, 1.3, 5.1, 2.8]
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
predictions.online-prediction-automl
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
"""
instances_list = [INSTANCE]
prediction = endpoint.predict(instances_list)
print(prediction)
"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The predicted confidence, between 0 and 1, per class label.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
"""
endpoint.undeploy_all()
"""
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
"""
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/1_core_tensorflow.ipynb
|
apache-2.0
|
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print(tf.__version__)
"""
Explanation: Getting started with TensorFlow
Learning Objectives
1. Practice defining and performing basic operations on constant Tensors
1. Use Tensorflow's automatic differentiation capability
1. Learn how to train a linear regression from scratch with TensorFLow
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in lists and numpy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non-linear model, forcing us to manually engineer non-linear features to improve our linear model's performance.
End of explanation
"""
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
"""
Explanation: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable).
Constant values can not be changed, while variable values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x, use one of the following methods:
x.assign(new_value)
x.assign_add(value_to_be_added)
x.assign_sub(value_to_be_subtracted)
End of explanation
"""
# TODO 1: assign a new value to x
x.assign(45.8)
x
# TODO 2: add a value to x in place
x.assign_add(4.0)
x
# TODO 3: subtract a value from x in place
x.assign_sub(3.0)
x
"""
Explanation: Lab Task #1: Use the assign(..) method to assign a new value to the variable x you created above. After each, print x to verify how the value changes.
End of explanation
"""
# TODO 1: addition with tf.add and the + operator
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
d = a + b
print("c:", c)
print("d:", d)
# TODO 2: multiplication with tf.multiply and the * operator
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.multiply(a, b)
d = a * b
print("c:", c)
print("d:", d)
# TODO 3: exponential with tf.math.exp
# tf.math.exp expects floats so we need to explicitly give the type
a = tf.constant([5, 3, 8], dtype=tf.float32)
b = tf.math.exp(a)
print("b:", b)
"""
Explanation: Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does:
tf.add allows us to add the components of two tensors element-wise
tf.multiply allows us to multiply the components of two tensors element-wise
tf.subtract allows us to subtract the components of two tensors element-wise
tf.math.* contains the usual math operations to be applied on the components of a tensor
and many more...
Most of the standard arithmetic operations (tf.add, tf.subtract, etc.) are overloaded by the usual corresponding arithmetic symbols (+, -, etc.)
Lab Task #2: Create two tensorflow constants a = [5, 3, 8] and b = [3, -1, 2]. Then, compute
1. the sum of the constants a and b below using tf.add and + and verify both operations produce the same values.
2. the product of the constants a and b below using tf.multiply and * and verify both operations produce the same values.
3. the exponential of the constant a using tf.math.exp. Note, you'll need to specify the type for this operation.
End of explanation
"""
# native python list
a_py = [1, 2]
b_py = [3, 4]
"""
Explanation: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
End of explanation
"""
# TODO 1: tf.add accepts native Python lists directly
tf.add(a_py, b_py)
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
"""
Explanation: Lab Task #3: Use tf.add to compute the sum of the native python lists a_py and b_py.
End of explanation
"""
# TODO 1: tf.add accepts NumPy arrays directly
tf.add(a_np, b_np)
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
"""
Explanation: Lab Task #4: Use tf.add to compute the sum of the NumPy arrays a_np and b_np.
End of explanation
"""
# TODO 1: tf.add on native TF tensors
tf.add(a_tf, b_tf)
"""
Explanation: Lab Task #5: Use tf.add to compute the sum of the TensorFlow constants a_tf and b_tf.
End of explanation
"""
a_tf.numpy()
"""
Explanation: You can convert a native TF tensor to a NumPy array using .numpy()
End of explanation
"""
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
"""
Explanation: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
End of explanation
"""
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print("X_test:{}".format(X_test))
print("Y_test:{}".format(Y_test))
"""
Explanation: Let's also create a test dataset to evaluate our models:
End of explanation
"""
y_mean = Y.numpy().mean()
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
"""
Explanation: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
End of explanation
"""
errors = (Y_hat - Y)**2
loss = tf.reduce_mean(errors)
loss.numpy()
"""
Explanation: Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
For this simple model the loss is then:
End of explanation
"""
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
"""
Explanation: This value for the MSE loss above gives us a baseline against which to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model:
End of explanation
"""
# TODO 1
def compute_gradients(X, Y, w0, w1):
    # Record the loss computation on the tape, then differentiate w.r.t. the weights
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, w0, w1)
    return tape.gradient(loss, [w0, w1])
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
"""
Explanation: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of a tf.GradientTape instance, which will record gradient information:
python
with tf.GradientTape() as tape:
loss = # computation
This will allow us to later compute the gradients of any tensor computed within the tf.GradientTape context with respect to instances of tf.Variable:
python
gradients = tape.gradient(loss, [w0, w1])
We illustrate this procedure by computing the loss gradients with respect to the model weights:
Lab Task #6: Complete the function below to compute the loss gradients with respect to the model weights w0 and w1.
End of explanation
"""
# TODO 1
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
    dw0, dw1 = compute_gradients(X, Y, w0, w1)
    w0.assign_sub(dw0 * LEARNING_RATE)
    w1.assign_sub(dw1 * LEARNING_RATE)
    if step % 100 == 0:
        loss = loss_mse(X, Y, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
"""
Explanation: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
Lab Task #7: Complete the for loop below to train a linear regression.
1. Use compute_gradients to compute dw0 and dw1.
2. Then, re-assign the value of w0 and w1 using the .assign_sub(...) method with the computed gradient values and the LEARNING_RATE.
3. Finally, for every 100th step, we'll compute and print the loss. Use the loss_mse function we created above to compute the loss.
End of explanation
"""
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
"""
Explanation: Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set:
End of explanation
"""
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-X**2)
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
f1 = tf.ones_like(X) # Bias.
f2 = X
f3 = tf.square(X)
f4 = tf.sqrt(X)
f5 = tf.exp(X)
return tf.stack([f1, f2, f3, f4, f5], axis=1)
def predict(X, W):
return tf.squeeze(X @ W, -1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, W)
return tape.gradient(loss, W)
# TODO 2
STEPS = 2000
LEARNING_RATE = .02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
    dW = compute_gradients(Xf, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W)))
plt.figure()
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
plt.legend()
"""
Explanation: This is indeed much better!
Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
End of explanation
"""
|
idekerlab/cyrest-examples
|
notebooks/cookbook/Python-cookbook/Export.ipynb
|
mit
|
# import data from url
from py2cytoscape.data.cyrest_client import CyRestClient
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Load a sample network
network = cy.network.create_from('http://chianti.ucsd.edu/~kono/data/galFiltered.sif')
# Apply layout to the cytoscape network object
cy.layout.apply(network = network)
"""
Explanation: Export
Save Images
We can choose the format of the saved image in Python:
pdf
png
svg
To execute these examples, first, we have to import sample data.
Write an image of the specified type to the specified file, at the specified scaling factor.
Note: the file is written to the file system of the computer on which the notebook kernel is running, not necessarily the machine running Cytoscape – in those cases where they are different. It is saved to the working directory.
End of explanation
"""
# png
from IPython.display import Image
network_png = network.get_png()
Image(network_png)
"""
Explanation: Save image as png
End of explanation
"""
# svg
from IPython.display import SVG
network_svg = network.get_svg()
SVG(network_svg)
"""
Explanation: Save image as svg
End of explanation
"""
# pdf
network_pdf = network.get_pdf()
# save the file
f = open('resultImage/scale_free_500.pdf', 'wb')
f.write(network_pdf)
f.close()
"""
Explanation: Save image as pdf
End of explanation
"""
|
facaiy/book_notes
|
machine_learning/tree/decision_tree/presentation.ipynb
|
cc0-1.0
|
import pandas as pd
from IPython.display import Image
from sklearn.datasets import load_iris
data = load_iris()
# Prepare the feature data
X = pd.DataFrame(data.data,
columns=["sepal_length", "sepal_width", "petal_length", "petal_width"])
# Prepare the label data
y = pd.DataFrame(data.target, columns=['target'])
y.replace(to_replace=range(3), value=data.target_names, inplace=True)
# Assemble the samples as [features, labels]
samples = pd.concat([X, y], axis=1, keys=["x", "y"])
samples.head(5)
samples["y", "target"].value_counts()
samples["x"].describe()
"""
Explanation: Decision Trees: Principles and a Brief Implementation
Preface
Why talk about decision trees?
The principle is simple and easy to understand.
Good interpretability.
Their variants are widely used in industry: random forests, GBDT.
Going deeper
Theory, a bit of history: ID3, C4.5, CART
Engineering, implementation details:
demo
scikit-learn
spark
xgboost
Application, parameter-tuning analysis
Demonstration
Theory
Algorithms:
ID3
C4.5
C5.0
CART
CHAID
MARS
Industry jargon
Classification problems vs. regression problems
Sample = (features $x$, true value $y$)
Goal: find a model $h(\cdot)$ such that the prediction $\hat{y} = h(x)$ $\to$ the true value $y$
End of explanation
"""
Image(url="https://upload.wikimedia.org/wikipedia/commons/f/f3/CART_tree_titanic_survivors.png")
Image(url="http://scikit-learn.org/stable/_images/iris.svg")
Image(url="http://scikit-learn.org/stable/_images/sphx_glr_plot_iris_0011.png")
samples = pd.concat([X, y], axis=1)
samples.head(3)
"""
Explanation: Decision trees in three minutes
End of explanation
"""
Image(url="https://upload.wikimedia.org/wikipedia/commons/f/f3/CART_tree_titanic_survivors.png")
"""
Explanation: Engineering
Demo implementation
The core problem is, at each decision, to find a split point that makes the resulting subsets as pure as possible. This involves four questions:
How do we split the samples?
How do we evaluate the purity of a subset?
How do we find the single best split point, i.e. the one whose subsets are the purest?
How do we find the best sequence of split points, so that the final subsets are the purest overall?
End of explanation
"""
def splitter(samples, feature, threshold):
    # Split the samples by feature f and threshold t
left_nodes = samples.query("{f} < {t}".format(f=feature, t=threshold))
right_nodes = samples.query("{f} >= {t}".format(f=feature, t=threshold))
return {"left_nodes": left_nodes, "right_nodes": right_nodes}
split = splitter(samples, "sepal_length", 5)
# Left subset
x_l = split["left_nodes"].loc[:, "target"].value_counts()
x_l
# Right subset
x_r = split["right_nodes"].loc[:, "target"].value_counts()
x_r
"""
Explanation: 1. How do we split the samples?
A decision tree splits by choosing a feature $f$ and a threshold $t$, and partitioning the samples $X$ into two subsets $X_l, X_r$ at that boundary. Mathematically:
\begin{align}
X = \begin{cases}
X_l, \ \text{if } X[f] < t \
X_r, \ \text{if } X[f] \geq t
\end{cases}
\end{align}
End of explanation
"""
def calc_class_proportion(node):
    # Compute the proportion of each label in the node
y = node["target"]
return y.value_counts() / y.count()
calc_class_proportion(split["left_nodes"])
calc_class_proportion(split["right_nodes"])
"""
Explanation: 2. How do we evaluate the purity of a subset?
A common family of evaluation functions computes the proportion of each label $c_k$ in the subset, $p_k = c_k / \sum (c_k)$, and then combines the $p_k$ to describe how concentrated or spread out the class proportions are.
End of explanation
"""
|
kenjisato/intro-macro
|
doc/python/Optimal Growth (Euler).ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Computing the Optimal Growth Model by the Euler Equation
End of explanation
"""
alpha = 0.3
delta = 0.05
rho = 0.1
theta = 1
A = 1
def f(x):
return A * x**alpha
kgrid = np.linspace(0.0, 7.5, 300)
fig, ax = plt.subplots(1,1)
# Locus obtained from (EE)
kstar = ((delta + rho) / (A * alpha)) ** (1/(alpha - 1))
ax.axvline(kstar)
ax.text(kstar*1.01, 0.1, '$\dot c = 0$', fontsize=16)
# Locus obtained from (CA)
ax.plot(kgrid, f(kgrid) - delta * kgrid)
ax.text(4, 1.06*(f(4) - delta * 4), '$\dot k = 0$', fontsize=16)
# axis labels
ax.set_xlabel('$k$', fontsize=16)
ax.set_ylabel('$c$', fontsize=16)
ax.set_ylim([0.0, 1.8 * np.max(f(kgrid) - delta*kgrid)])
plt.show()
"""
Explanation: Model
Let's consider the optimal growth model,
\begin{align}
&\max\int_{0}^{\infty}e^{-\rho t}u(c(t))dt \
&\text{subject to} \
&\qquad\dot{k}(t)=f(k(t))-\delta k(t)-c(t),\
&\qquad k(0):\text{ given.} \
\end{align}
We will assume the following specific function forms when necessary
\begin{align}
u(c) &= \frac{c^{1-\theta}}{1-\theta}, \quad \theta > 0, \
f(k) &= A k^\alpha, \quad 0 < \alpha < 1, \quad A > 0
\end{align}
By using the Hamiltonian method, we have obtained the first-order dynamics of the economy
\begin{align}
\dot{c} &= \theta^{-1} c [f'(k) - \delta - \rho] & \text{(EE)} \
\dot{k} &= f(k) - \delta k - c. & \text{(CA)}
\end{align}
(EE) is the Euler equation and (CA) the capital accumulation equation.
Let's draw the phase diagram on your computer.
$\dot c = 0$ locus (EE)
$\dot c = 0$ is equivalent to
\begin{align}
f'(k) = \delta + \rho
\end{align}
Thus, the locus is a vertical line which goes through $(k^*, 0)$, where $k^*$ is the unique value that satisfies $f'(k^*) = \delta + \rho$. Under the assumption that $f(k) = Ak^\alpha$,
\begin{align}
k^* = \left(\frac{\delta + \rho}{A \alpha}\right)^\frac{1}{\alpha - 1}
\end{align}
$\dot k = 0$ locus (CA)
$\dot k = 0$ is equivalent to
\begin{align}
c = f(k) - \delta k.
\end{align}
Code for the loci
End of explanation
"""
def phase_space(kmax, gridnum, yamp=1.8, colors=['black', 'black'], labels_on=False):
kgrid = np.linspace(0.0, kmax, gridnum)
fig, ax = plt.subplots(1,1)
    # CA locus: the k-dot = 0 curve, c = f(k) - delta * k
ax.plot(kgrid, f(kgrid) - delta * kgrid, color=colors[0])
if labels_on:
ax.text(4, f(4) - delta * 4, '$\dot k = 0$', fontsize=16)
    # EE locus: the c-dot = 0 vertical line at k = k*
kstar = ((delta + rho) / (A * alpha)) ** (1/(alpha - 1))
ax.axvline(kstar, color=colors[1])
if labels_on:
ax.text(kstar*1.01, 0.1, '$\dot c = 0$', fontsize=16)
# axis labels
ax.set_xlabel('$k$', fontsize=16)
ax.set_ylabel('$c$', fontsize=16)
ax.set_ylim([0.0, yamp * np.max(f(kgrid) - delta*kgrid)])
return fig, ax
"""
Explanation: What we want to do is to draw paths on this phase space. It is convenient to have a function that returns this kind of figure.
End of explanation
"""
fig, ax = phase_space(kmax=7, gridnum=300)
"""
Explanation: You can draw the loci by calling the function as in the following.
End of explanation
"""
dt = 0.001
def f_deriv(k):
"""derivative of f"""
return A * alpha * k ** (alpha - 1)
def update(k, c):
cnew = c * (1 + (f_deriv(k) - delta - rho) * dt / theta) # D-EE
knew = k + (f(k) - delta * k - c) * dt
return knew, cnew
k_initial, c_guess = 0.4, 0.2
# Find a first-order path from the initial condition k0 and guess of c0
k0, c0 = k_initial, c_guess
k, c = [k0], [c0]
for i in range(10000):
knew, cnew = update(k[-1], c[-1])
k.append(knew)
c.append(cnew)
kgrid = np.linspace(0.0, 10., 300)
fig, ax = phase_space(10., 300)
ax.plot(k, c)
"""
Explanation: The dynamics
Discretize
\begin{align}
\dot{c} &= \theta^{-1} c [f'(k) - \delta - \rho] & \text{(EE)} \
\dot{k} &= f(k) - \delta k - c. & \text{(CA)}
\end{align}
to get the discretized dynamic equations:
\begin{align}
c(t+\Delta t) &= c(t)\left\{1 + \theta^{-1} [f'(k(t)) - \delta - \rho] \Delta t\right\} & \text{(D-EE)} \
k(t+\Delta t) &= k(t) + \left\{f(k(t)) - \delta k(t) - c(t)\right\} \Delta t. & \text{(D-CA)}
\end{align}
End of explanation
"""
def compute_path(k0, c_guess, steps, ax=None, output=True):
"""compute a path starting from (k0, c_guess) that satisfies EE and CA"""
k, c = [k0], [c_guess]
for i in range(steps):
knew, cnew = update(k[-1], c[-1])
# stop if the new values violate nonnegativity constraints
if knew < 0:
break
if cnew < 0:
break
k.append(knew)
c.append(cnew)
# plot the path if ax is given
if ax is not None:
ax.plot(k, c)
# You may want to suppress the output when you give ax.
if output:
return k, c
"""
Explanation: The blue curve shows the dynamic path of the system of differential equations. The solution moves from left to right in this case. This path doesn't seem to satisfy the transversality condition, so it's not the optimal path.
What we do next is to find the $c(0)$ that converges to the steady state. I will show you how to do this by “brute force.”
We make many guesses about $c(0)$ and check each resulting path. We need a function that creates a path starting from $(k(0), c(0))$ and lets us verify whether or not it approaches the steady state.
End of explanation
"""
k_init = 0.4
steps = 30000
fig, ax = phase_space(40, 3000)
for c_init in [0.1, 0.2, 0.3, 0.4, 0.5]:
compute_path(k_init, c_init, steps, ax, output=False)
"""
Explanation: Typical usage:
End of explanation
"""
k_init = 0.4
steps = 30000
# set of guesses about c(0)
c_guess = np.linspace(0.40, 0.50, 1000)
k_final = []
c_final = []
for c0 in c_guess:
k, c = compute_path(k_init, c0, steps, output=True)
# Final values
k_final.append(k[-1])
c_final.append(c[-1])
plt.plot(c_guess, k_final, label='lim k')
plt.plot(c_guess, c_final, label='lim c')
plt.legend()
"""
Explanation: Let's find the optimal path. The following code makes a plot that relates a guess of $c(0)$ to the final $c(t)$ and $k(t)$ for large $t$.
End of explanation
"""
cdiff = [c1 - c0 for c0, c1 in zip(c_final[:-1], c_final[1:])]
c_optimal = c_guess[cdiff.index(max(cdiff))]
c_optimal
fig, ax = phase_space(7.5, 300)
compute_path(k_init, c_optimal, steps=15000, ax=ax, output=False)
"""
Explanation: As you can clearly see, there is a critical value around 0.41. To know the exact value of the threshold, execute the following code.
End of explanation
"""
|
ramseylab/networkscompbio
|
class06_degdist_python3.ipynb
|
apache-2.0
|
import pandas
import collections
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats
import igraph
"""
Explanation: CS446/546 - Class Session 6 - Degree Distribution
In this class session we are going to plot the degree distribution of the undirected human
protein-protein interaction network (PPI), without using igraph. We'll obtain the interaction data from the Pathway Commons SIF file (in the shared/ folder) and we'll
manually compute the degree of each vertex (protein) in the network. We'll then compute
the count N(k) of vertices that have a given vertex degree k, for all k values.
Finally, we'll plot the degree distribution and discuss whether it is consistent with the
results obtained in the Jeong et al. article for the yeast PPI.
We'll start by loading all of the Python modules that we will need for this notebook. Because we'll be calling a bunch of functions from numpy and matplotlib.pyplot, we'll alias them as np and plt, respectively.
End of explanation
"""
sif_data = pandas.read_csv("shared/pathway_commons.sif",
sep="\t", names=["species1","interaction_type","species2"])
"""
Explanation: Step 1: load in the SIF file as a pandas data frame using pandas.read_csv. Make sure the column names of your data frame are species1, interaction_type, and species2. Save the data frame as the object sif_data.
End of explanation
"""
interaction_types_ppi = set(["interacts-with",
"in-complex-with"])
interac_ppi = sif_data[sif_data.interaction_type.isin(interaction_types_ppi)].copy()
"""
Explanation: Step 2: restrict the interactions to protein-protein undirected ("in-complex-with", "interacts-with"). The restricted data frame should be called interac_ppi. Then we will make a copy using copy so interac_ppi is independent of sif_data which will be convenient for this exercise.
End of explanation
"""
boolean_vec = interac_ppi['species1'] > interac_ppi['species2']
interac_ppi.loc[boolean_vec, ['species1', 'species2']] = interac_ppi.loc[boolean_vec, ['species2', 'species1']].values
"""
Explanation: Step 3: for each interaction, reorder species1 and species2 (if necessary) so that
species1 < species2 (in terms of the species names, in lexicographic order). You can make a boolean vector boolean_vec containing (for each row of the data frame interac_ppi) True if species1 > species2 (by lexicographic order) for that row, or False otherwise. You can then use the loc method on the data frame, to select rows based on boolean_vec and the two columns that you want (species1 and species2). Thanks to Garrett Bauer for suggesting this approach (which is more elegant than looping over all rows):
End of explanation
"""
# Row-label based loop: swap species1/species2 in place wherever species1 > species2
for rowid, row in interac_ppi.iterrows():
    if row['species1'] > row['species2']:
        interac_ppi.at[rowid, 'species1'] = row['species2']
        interac_ppi.at[rowid, 'species2'] = row['species1']
# Positional alternative using .iat (column 0 is species1, column 2 is species2)
for i in range(0, interac_ppi.shape[0]):
    if interac_ppi.iat[i, 0] > interac_ppi.iat[i, 2]:
        temp_name = interac_ppi.iat[i, 0]
        interac_ppi.iat[i, 0] = interac_ppi.iat[i, 2]
        interac_ppi.iat[i, 2] = temp_name
"""
Explanation: Since iterating is reasonably fast in Python, you could also do this using a for loop through all of the rows of the data frame, swapping species1 and species2 entries as needed (and in-place in the data frame) so that in the resulting data frame interac_ppi satisfies species1 < species2 for all rows.
End of explanation
"""
interac_ppi_unique = interac_ppi[["species1","species2"]].drop_duplicates()
"""
Explanation: Step 4: Restrict the data frame to only the columns species1 and species2. Use the drop_duplicates method to subset the rows of the resulting two-column data frame to only unique rows. Assign the resulting data frame object to have the name interac_ppi_unique. This is basically selecting only unique pairs of proteins, regardless of interaction type.
End of explanation
"""
vertex_degrees_ctr = collections.Counter()
allproteins = interac_ppi_unique["species1"].tolist() + interac_ppi_unique["species2"].tolist()
for proteinname in allproteins:
vertex_degrees_ctr.update([proteinname])
vertex_degrees = list(vertex_degrees_ctr.values())
"""
Explanation: Step 5: compute the degree of each vertex (though we will not associate the vertex degrees with vertex names here, since for this exercise we only need the vector of vertex degree values, not the associated vertex IDs). You'll want to create an object called vertex_degrees_ctr which is of class collections.Counter. You'll want to name the final list of vertex degrees, vertex_degrees.
End of explanation
"""
dict(list(dict(vertex_degrees_ctr).items())[0:10])
"""
Explanation: Let's print out the vertex degrees of the first 10 vertices, in whatever the key order is. Pythonistas -- anyone know of a less convoluted way to do this?
End of explanation
"""
vertex_degrees[0:10]
"""
Explanation: Let's print out the first ten entries of the vertex_degrees list. Note that we don't expect it to be in the same order as the output from the previous command above, since dict changes the order in the above.
End of explanation
"""
nbins=30
hist_res = plt.hist(np.array(vertex_degrees), bins=nbins)
hist_counts = hist_res[0]
hist_breaks = hist_res[1]
kvals = 0.5 * (hist_breaks[:-1] + hist_breaks[1:])
"""
Explanation: Step 6: Calculate the histogram of N(k) vs. k, using 30 bins, using plt.hist. You'll probably want to start by making a numpy.array from your vertex_degrees. Call the resulting object from plt.hist, hist_res. Obtain a numpy array of the bin counts as element zero from hist_res (name this object hist_counts) and obtain a numpy array of the bin breakpoints as element one from hist_res (name this object hist_breaks). Finally, you want the k values of the centers of the bins, not the breakpoint values. So you'll have to do some arithmetic to go from the 31 k values of the bin breakpoints, to a numpy array of the 30 k values of the centers of the bins. You should call that object kvals.
End of explanation
"""
kvals
"""
Explanation: Let's print the k values of the bin centers:
End of explanation
"""
hist_counts
"""
Explanation: Let's print the histogram bin counts:
End of explanation
"""
plt.loglog(kvals[1:14],
hist_counts[1:14], "o")
plt.xlabel("k")
plt.ylabel("N(k)")
plt.gca().set_xlim([50, 2000])
plt.show()
"""
Explanation: Step 7: Plot N(k) vs. k, on log-log scale (using only the first 14 points, which is plenty sufficient to see the approximately scale-free degree distribution and where it becomes exponentially suppressed at high k). For this you'll use plt.loglog. You'll probably want to adjust the x-axis limits using plt.gca().set_xlim(). To see the plot, you'll have to do plt.show().
End of explanation
"""
scipy.stats.linregress(np.log10(kvals[0:3]), np.log10(hist_counts[0:3]))
"""
Explanation: Step 8: Do a linear fit to the log10(N(k)) vs. log10(k) data (just over the range in which the relationship appears to be linear, which is the first three points here). You'll want to use scipy.stats.linregress to do the linear regression. Don't forget to log10-transform the data using np.log10.
End of explanation
"""
jeong_slope = -6.5/(np.log(45)-np.log(2))
print("%.2f" % jeong_slope)
"""
Explanation: Slope is -1.87 with SE 0.084, i.e., gamma = 1.87 with a 95% CI of about +/- 0.17.
Now let's compute the slope for the degree distribution Fig. 1b in the Jeong et al. article, for the yeast PPI. The change in ordinate over the linear range is about -6.5 in units of natural logarithm. The change in abscissa over the linear range is approximately log(45)-log(2), so we can compute the Jeong et al. slope thus:
End of explanation
"""
g = igraph.Graph.TupleList(interac_ppi_unique.values.tolist(), directed=False)
xs, ys = zip(*[(left, count) for left, _, count in
g.degree_distribution().bins()])
plt.loglog(xs, ys)
plt.show()
igraph.statistics.power_law_fit(g.degree())
"""
Explanation: How close was your slope from the human PPI, to the slope for the yeast PPI from the Jeong et al. article?
Now we'll do the same thing in just a few lines of igraph code
End of explanation
"""
|
arcyfelix/Courses
|
18-11-22-Deep-Learning-with-PyTorch/02-Introduction to PyTorch/Part 4 - Fashion-MNIST.ipynb
|
apache-2.0
|
import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,),
                                                     (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
download=True,
train=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(dataset=trainset,
batch_size=64,
shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
download=True,
train=False,
transform=transform)
testloader = torch.utils.data.DataLoader(dataset=testset,
batch_size=64,
shuffle=True)
"""
Explanation: Classifying Fashion-MNIST
Now it's your turn to build and train a neural network. You'll be using the Fashion-MNIST dataset, a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.
<img src='assets/fashion-mnist-sprite.png' width=500px>
In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.
First off, let's load the dataset through torchvision.
End of explanation
"""
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
"""
Explanation: Here we can see one of the images.
End of explanation
"""
from torch import nn, optim
# TODO: Define your network architecture here
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.h1 = nn.Linear(in_features=784,
out_features=256)
self.h2 = nn.Linear(in_features=256,
out_features=128)
self.h3 = nn.Linear(in_features=128,
out_features=64)
self.h4 = nn.Linear(in_features=64,
out_features=10)
self.relu = nn.ReLU()
self.log_softmax = nn.LogSoftmax(dim=1)
def forward(self, x):
# Flatten x
x = x.view(size=(x.shape[0], -1))
# Define the network
x = self.h1(x)
x = self.relu(x)
x = self.h2(x)
x = self.relu(x)
x = self.h3(x)
x = self.relu(x)
x = self.h4(x)
x = self.log_softmax(x)
return x
"""
Explanation: Building the network
Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
End of explanation
"""
# TODO: Create the network, define the criterion and optimizer
model = MyModel()
model
criterion = nn.NLLLoss()
optimizer = optim.Adam(params=model.parameters(),
lr=0.0025)
# TODO: Train the network here
epochs = 5
for e in range(epochs):
epoch_loss = 0
for images, labels in trainloader:
# Forward pass
log_predictions = model(images)
# Calculate the loss
loss = criterion(log_predictions, labels)
# Reset the optimizer for each batch
optimizer.zero_grad()
# Backpropagation
loss.backward()
# Applying the gradients
optimizer.step()
# Adding batch loss to the epoch loss
epoch_loss += loss.item()
else:
print(f'Epoch loss: {epoch_loss}')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)
# TODO: Calculate the class probabilities (softmax) for img
## Exp of the log_softmax will return softmax
ps = torch.exp(model(img))
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28),
ps,
version='Fashion')
"""
Explanation: Train the network
Now you should create your network and train it. First you'll want to define the criterion (something like nn.CrossEntropyLoss) and the optimizer (typically optim.SGD or optim.Adam).
Then write the training code. Remember the training pass is a fairly straightforward process:
Make a forward pass through the network to get the logits
Use the logits to calculate the loss
Perform a backward pass through the network with loss.backward() to calculate the gradients
Take a step with the optimizer to update the weights
By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
End of explanation
"""
|
Danghor/Formal-Languages
|
Ply/Conflicts-Resolved.ipynb
|
gpl-2.0
|
import ply.lex as lex
tokens = [ 'NUMBER' ]
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.value = float(t.value)
return t
literals = ['+', '-', '*', '/', '^', '(', ')']
t_ignore = ' \t'
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count('\n')
def t_error(t):
print(f"Illegal character '{t.value[0]}'")
t.lexer.skip(1)
__file__ = 'main'
lexer = lex.lex()
"""
Explanation: Resolving Conflicts Using Precedence Declarations
This file shows how shift/reduce and reduce/reduce conflicts can be resolved using operator precedence declarations.
Specification of the Scanner
We implement a minimal scanner for arithmetic expressions.
End of explanation
"""
import ply.yacc as yacc
"""
Explanation: Specification of the Parser
End of explanation
"""
start = 'expr'
"""
Explanation: The start variable of our grammar is expr, but we don't have to specify that. The default
start variable is the first variable that is defined.
End of explanation
"""
precedence = (
('left', '+', '-'),
('left', '*', '/'),
('right', '^')
)
"""
Explanation: The following operator precedence declarations declare that the operators + and - have a lower precedence than the operators * and /. Furthermore, they specify that all these operators are left associative. Operators can also be declared as right associative using the keyword right or as non-associative using the keyword nonassoc.
End of explanation
"""
def p_expr(p):
"""
expr : expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| expr '^' expr
| '(' expr ')'
| NUMBER
"""
pass
"""
Explanation: Without precedence declarations, the grammar below is ambiguous.
End of explanation
"""
def p_error(t):
pass
"""
Explanation: We define p_error to prevent a warning.
End of explanation
"""
parser = yacc.yacc(write_tables=False, debug=True)
"""
Explanation: Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.
End of explanation
"""
!cat parser.out
"""
Explanation: Let's look at the action table that is generated. Conflicts are always resolved in favour of shifting.
End of explanation
"""
|
jhconning/Dev-II
|
notebooks/EdgeworthProduction.ipynb
|
bsd-3-clause
|
edgeplot(50)
"""
Explanation: Edgeworth Box: Efficiency in production allocation
Efficiency in production
Consider a small-open economy with two production sectors -- agriculture and manufacturing -- with production in each sector taking place with constant returns to scale production functions. Producers in the agricultural sector maximize profits
$$\max_{K_A,L_A} p_A F(K_A,L_A) - w L_A - r K_A$$
And producers in manufacturing similarly maximize
$$\max_{K_M,L_M} p_M G(K_M,L_M) - w L_M - r K_M$$
In equilibrium total factor demands must equal total supplies:
$$K_A + K_M = \bar K$$
$$L_A + L_M = \bar L$$
The first order necessary conditions for an interior optimum in each sector lead to an equilibrium where the following condition must hold:
$$\frac{F_L(K_A,L_A)}{F_K(K_A,L_A)} = \frac{w}{r}
=\frac{G_L(\bar K-K_A,\bar L- L_A)}{G_K(\bar K-K_A,\bar L- L_A)} $$
Efficiency requires that the marginal rates of technical substitution (MRTS) be equalized across sectors (and across firms within a sector, which is being assumed here). In an Edgeworth box, isoquants from each sector will be tangent to a common wage-rental ratio line.
If we assume Cobb-Douglas forms $F(K,L) = K^\alpha L^{1-\alpha}$ and $G(K,L) = K^\beta L^{1-\beta}$ the efficiency condition can be used to find a closed form solution for $K_A$ in terms of $L_A$:
$$\frac{(1-\alpha)}{\alpha}\frac{K_A}{L_A} =\frac{w}{r} =\frac{(1-\beta)}{\beta}\frac{\bar K-K_A}{\bar L-L_A}$$
Rearranging the expression above we can get a closed-form expression for the efficiency locus $K_A (L_A)$:
$$K_A(L_A) = \frac{L_A \cdot \bar K}
{ \frac{\beta(1-\alpha)}{\alpha (1-\beta)} (\bar L -L_A)+L_A}$$
With this we can now plot the efficiency locus curve in an Edgeworth box.
Edgeworth Box plots
Here is an Edgeworth box depicting the situation where $L_A = 50$ units of labor are allocated to the agricultural sector and all other allocations are efficient (along the efficiency locus).
End of explanation
"""
LA = 50
interact(edgeplot, LA=(10, LBAR-10,1),
Kbar=fixed(KBAR), Lbar=fixed(LBAR),
alpha=(0.1,0.9,0.1),beta=(0.1,0.9,0.1));
"""
Explanation: If you're reading this using a jupyter server you can interact with the following plot, changing the technology parameters and position of the isoquant. If you are not this may appear blank or static.
End of explanation
"""
fig, ax = plt.subplots(figsize=(7,6))
ppf(30,alpha =0.8, beta=0.2)
"""
Explanation: The Production Possibility Frontier
The efficiency locus also allows us to trace out the production possibility frontier: by varying $L_A$ from 0 to $\bar L$ and, for every $L_A$, calculating $K_A(L_A)$ and with that efficient production $(q_A,q_B)$ where $q_A=F(K_A(L_A), L_A)$ and $q_B=G(\bar K - K_A(L_A), \bar L - L_A)$.
For Cobb-Douglas technologies the PPF will be quite straight unless $\beta$ and $\alpha$ are very different from each other.
End of explanation
"""
ssline(a=0.6, b=0.3);
"""
Explanation: Efficient resource allocation and comparative advantage in a small open economy
We have a production possibility frontier which also tells us the opportunity cost of producing different amounts of good $A$ in terms of how much of good B (via its slope or the Rate of Product Transformation (RPT) $\frac{MC_A}{MC_B}$). This is given by the slope of the PPF. The bowed out shape of the PPF tells us that the opportunity cost of producing either good is rising in its quantity.
How much of each good will the economy produce? If this is a competitive small open economy then product prices will be given by world prices. Each firm maximizes profits, which leads every firm in the A sector to increase output until $MC_A(q_A) = P_A$ and similarly in the B sector, so that in equilibrium we must have
$$\frac{MC_A}{MC_B} = \frac{P_A}{P_B}$$
and the economy will produce where the slope of the PPF exactly equals the world relative price. This is where national income valued at world prices is maximized, and the country is producing according to comparative advantage.
Consumers take this income as given and maximize utility. If we make heroic assumptions about preferences (preferences are identical and homothetic) then we can represent consumer preferences on the same diagram and we would have consumers choosing a consumption basket somewhere along the consumption possibility frontier given by the world price line passing through the production point.
If the economy is instead assumed to be closed then product prices must be calculated alongside the resource allocation. The PPF itself becomes the economy's budget constraint and we find an optimum (and equilibrium autarky domestic prices) where the community indifference curve is tangent to the PPF.
As previously noted, given our linear homogenous production technology, profit maximization in agriculture will lead firms to choose inputs to satisfy $\frac{(1-\alpha)}{\alpha}\frac{K_A}{L_A} =\frac{w}{r}$. This implies a relationship between the optimal production technique or capital-labor intensity $\frac{K_A}{L_A}$ in agriculture and the factor price ratio $\frac{w}{r}$:
$$ \frac{K_A}{L_A} = \frac{\alpha}{1-\alpha} \frac{w}{r} $$
and similarly in manufacturing
$$ \frac{K_M}{L_M} = \frac{\beta}{1-\beta} \frac{w}{r} $$
From the first order conditions we also have:
$$P_A F_L(K_A,L_A) = w = P_M G_L(K_M,L_M) $$
Note this condition states that competition has driven firms to price at marginal cost in each industry or $P_A = MC_A = w\frac{1}{F_L}$ and $P_M = MC_M = w\frac{1}{G_L}$ which in turn implies that at a market equilibrium optimum
$$\frac{P_A}{P_M} = \frac{G_L(K_M,L_M)}{F_L(K_A,L_A)}$$
This states that the world price line with slope (negative) $\frac{P_A}{P_M}$ will be tangent to the production possibility frontier, which has a slope (negative) $\frac{MC_A}{MC_M}= \frac{P_A}{P_M}$ which can also be written as $\frac{G_L}{F_L}$ or equivalently $\frac{G_K}{F_K}$. The competitive market leads producers to move resources across sectors to maximize the value of GDP at world prices.
With the Cobb Douglas technology we can write:
$$F_L = (1-\alpha) \left [ \frac{K_A}{L_A} \right]^\alpha$$
$$G_L = (1-\beta) \left [ \frac{K_M}{L_M} \right]^\beta$$
Using these expressions and the earlier expression relating $\frac{K_A}{L_A}$ and $\frac{K_M}{L_M}$ to $\frac{w}{r}$ we have:
$$\frac{P_A}{P_M}
=\frac{1-\alpha}{1-\beta}
\frac{\left [ \frac{ (1-\beta)}{\beta} \frac{w}{r} \right]^\beta}
{ \left [ \frac{ (1-\alpha)}{\alpha} \frac{w}{r} \right]^\alpha}
$$
or
$$\frac{P_A}{P_M} = \Gamma \left [ \frac{w}{r} \right]^{\beta - \alpha}
$$
where
$$\Gamma =
\frac{1-\alpha}{1-\beta}
\left ( \frac{\alpha}{1-\alpha} \right )^\alpha
\left ( \frac{1-\beta}{\beta} \right )^\beta
$$
Solving for $\frac{w}{r}$ as a function of the world prices we find an expression for the 'Stolper-Samuelson' (SS) line:
$$\frac{w}{r} = \frac{1}{\Gamma} \left [ \frac{P_A}{P_M} \right ]^\frac{1}{\beta-\alpha} $$
The Stolper Samuelson Theorem
The Stolper Samuelson theorem tells us how changes in the world relative price of products translates into changes in the relative price of factors and therefore in the distribution of income in society.
The theorem states that an increase in the relative price of a good will lead to an increase in both the relative and the real price of the factor used intensively in the production of that good (and conversely to a decline in both the real and the relative price of the other factor).
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, fixed
ALPHA = 0.6 # capital share in agriculture
BETA = 0.4 # capital share in manufacturing
KBAR = 100
LBAR = 100
p = 1 # =Pa/Pm relative price of ag goods
def F(K,L,alpha=ALPHA):
"""Agriculture Production function"""
return (K**alpha)*(L**(1-alpha))
def G(K,L,beta=BETA):
"""Manufacturing Production function"""
return (K**beta)*(L**(1-beta))
def budgetc(c1, p1, p2, I):
return (I/p2)-(p1/p2)*c1
def isoq(L, Q, mu):
return (Q/(L**(1-mu)))**(1/mu)
def edgeworth(L, Kbar=KBAR, Lbar=LBAR,alpha=ALPHA, beta=BETA):
"""efficiency locus: """
a = (1-alpha)/alpha
b = (1-beta)/beta
return b*L*Kbar/(a*(Lbar-L)+b*L)
def edgeplot(LA, Kbar=KBAR, Lbar=LBAR,alpha=ALPHA,beta=BETA):
"""Draw an edgeworth box
arguments:
LA -- labor allocated to ag, from which calculate QA(Ka(La),La)
"""
KA = edgeworth(LA, Kbar, Lbar,alpha, beta)
RTS = (alpha/(1-alpha))*(KA/LA)
QA = F(KA,LA,alpha)
QM = G(Kbar-KA,Lbar-LA,beta)
print("(LA,KA)=({:4.1f}, {:4.1f}) (QA, QM)=({:4.1f}, {:4.1f}) RTS={:4.1f}"
.format(LA,KA,QA,QM,RTS))
La = np.arange(1,Lbar)
fig, ax = plt.subplots(figsize=(7,6))
ax.set_xlim(0, Lbar)
ax.set_ylim(0, Kbar)
ax.plot(La, edgeworth(La,Kbar,Lbar,alpha,beta),'k-')
#ax.plot(La, La,'k--')
ax.plot(La, isoq(La, QA, alpha))
ax.plot(La, Kbar-isoq(Lbar-La, QM, beta),'g-')
ax.plot(LA, KA,'ob')
ax.vlines(LA,0,KA, linestyles="dashed")
ax.hlines(KA,0,LA, linestyles="dashed")
ax.text(-6,-6,r'$O_A$',fontsize=16)
ax.text(Lbar,Kbar,r'$O_M$',fontsize=16)
ax.set_xlabel(r'$L_A -- Labor$', fontsize=16)
ax.set_ylabel('$K_A - Capital$', fontsize=16)
#plt.show()
def ppf(LA,Kbar=KBAR, Lbar=LBAR,alpha=ALPHA,beta=BETA):
"""Draw a production possibility frontier
arguments:
LA -- labor allocated to ag, from which calculate QA(Ka(La),La)
"""
KA = edgeworth(LA, Kbar, Lbar,alpha, beta)
RTS = (alpha/(1-alpha))*(KA/LA)
QA = F( KA,LA,alpha)
QM = G(Kbar-KA,Lbar-LA,beta)
ax.scatter(QA,QM)
La = np.arange(0,Lbar)
Ka = edgeworth(La, Kbar, Lbar,alpha, beta)
Qa = F(Ka,La,alpha)
Qm = G(Kbar-Ka,Lbar-La,beta)
ax.set_xlim(0, Lbar)
ax.set_ylim(0, Kbar)
ax.plot(Qa, Qm,'k-')
ax.set_xlabel(r'$Q_A$',fontsize=18)
ax.set_ylabel(r'$Q_B$',fontsize=18)
plt.show()
"""
Explanation: This relationship can be seen from the formula. If agriculture were more labor intensive than manufacturing, so $\alpha < \beta$, then an increase in the relative price of agricultural goods creates an incipient excess demand for labor and an excess supply of capital (as firms try to expand production in the now more profitable labor-intensive agricultural sector and cut back production in the now relatively less profitable and capital-intensive manufacturing sector). Equilibrium will only be restored if the equilibrium wage-rental ratio falls which in turn leads firms in both sectors to adopt more capital-intensive techniques. With more capital per worker in each sector output per worker and hence real wages per worker $\frac{w}{P_a}$ and $\frac{w}{P_m}$ increase.
To be completed
Code to solve for unique HOS equilibrium as a function of world relative price $\frac{P_A}{P_M}$
Interactive plot with $\frac{P_A}{P_M}$ slider that plots equilibrium in Edgeworth box and PPF
Code section
To keep the presentation tidy I've put the code that is used in the notebook above at the end of the notebook.
Run all code below this cell first. Then return and run all code above.
End of explanation
"""
fig, ax = plt.subplots(figsize=(7,6))
ppf(20,alpha =0.8, beta=0.2)
"""
Explanation: It's interesting to note that for Cobb-Douglas technologies you really need quite a difference in capital intensities between the two technologies in order to get much curvature in the production possibility frontier.
End of explanation
"""
def wreq(p,a=ALPHA, b=BETA):
B = ((1-a)/(1-b))*(a/(1-a))**a * ((1-b)/b)**b
return B*p
def ssline(a=ALPHA, b=BETA):
p = np.linspace(0.1,10,100)
plt.title('The Stolper-Samuelson line')
plt.xlabel(r'$p = \frac{P_a}{P_m}$', fontsize=18)
plt.ylabel(r'$ \frac{w}{r}$', fontsize=18)
plt.plot(p,wreq(p, a, b));
ssline()
"""
Explanation: Code for Stolper Samuelson line
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.17/_downloads/fd79fe12dec0d8ba3f96e5d55db03054/plot_ecog.ipynb
|
bsd-3-clause
|
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Chris Holdgraf <choldgraf@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from mayavi import mlab
import mne
from mne.viz import plot_alignment, snapshot_brain_montage
print(__doc__)
"""
Explanation: Working with ECoG data
MNE supports working with more than just MEG and EEG data. Here we show some
of the functions that can be used to facilitate working with
electrocorticography (ECoG) data.
End of explanation
"""
mat = loadmat(mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat')
ch_names = mat['ch_names'].tolist()
elec = mat['elec'] # electrode positions given in meters
dig_ch_pos = dict(zip(ch_names, elec))
mon = mne.channels.DigMontage(dig_ch_pos=dig_ch_pos)
print('Created %s channel positions' % len(ch_names))
"""
Explanation: Let's load some ECoG electrode locations and names, and turn them into
a :class:mne.channels.DigMontage class.
End of explanation
"""
info = mne.create_info(ch_names, 1000., 'ecog', montage=mon)
"""
Explanation: Now that we have our electrode positions in MRI coordinates, we can create
our measurement info structure.
End of explanation
"""
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces=['pial'])
mlab.view(200, 70)
"""
Explanation: We can then plot the locations of our electrodes on our subject's brain.
<div class="alert alert-info"><h4>Note</h4><p>These are not real electrodes for this subject, so they
do not align to the cortical surface perfectly.</p></div>
End of explanation
"""
# We'll once again plot the surface, then take a snapshot.
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces='pial')
mlab.view(200, 70)
xy, im = snapshot_brain_montage(fig, mon)
# Convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in info['ch_names']])
# Define an arbitrary "activity" pattern for viz
activity = np.linspace(100, 200, xy_pts.shape[0])
# This allows us to use matplotlib to create arbitrary 2d scatterplots
_, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')
ax.set_axis_off()
plt.show()
"""
Explanation: Sometimes it is useful to make a scatterplot for the current figure view.
This is best accomplished with matplotlib. We can capture an image of the
current mayavi view, along with the xy position of each electrode, with the
snapshot_brain_montage function.
End of explanation
"""
|
SeismicPi/SeismicPi
|
Lessons/Lesson 5/Lesson 5.ipynb
|
mit
|
#find_position here
def find_position(a, t, s):
return (a-s*t)/2.0;
"""
Explanation: Lesson 5
In this lesson, we will use the find_position function from the previous lesson to implement a one-dimensional version of piTap. You can tap anywhere along a line and have the program predict where you've tapped!
Copy and paste your find_position function into the program block below.
End of explanation
"""
#WRITE YOUR CODE HERE
#ANSWER
a = 1.0
speeds = []
i = 10
while(i > 0):
i = i-1
thisTime = 1 #piTap.getTDOA()
speeds.append(a/thisTime)
"""
Explanation: Take a brief moment to review what it does. It takes in three parameters, $a$ (position of friend's house), $t$ (time distance of arrival), and $s$ (the speed you and your friend travel at). Now instead of using the metaphor of you and your friend racing home, how can this be used to model the piTap?
The setup of piTap is that there are two sensors, connected by a horizontal line. We want to be able to tap anywhere on the line between them and predict where we've tapped. This is similar to you and your friend running home, except now it is two waves that are racing towards the sensors. The sensors are analogous to the homes, the position of the tap is analogous to the location of the hangout spot, and the circular wave front is equivalent to you and your friend running left and right.
We know what the value of $a$ is, just the distance between the two sensors, which we can measure with a meter stick or ruler. We know what $t$ is since we can ask the Raspberry Pi what the time difference between the sensors is. However, we do not yet know what $s$ is, and we will need some way of calculating it. From the previous lesson, we asked "What is the TDOA if you decide to hang out at a location LEFT of your house?". The answer to this is that it is simply the time it takes your friend to run to his house from your house. Thus if we tap the board LEFT of the left-most sensor, the time distance of arrival will be the time it takes the wave to propagate from one sensor to the other. And since we know the distance between the sensors, we can use the formula that $s = \frac{\Delta p}{t} = \frac{a}{t}$.
So we could calculate $s$ by tapping the board left of the left-most sensor, retrieve the TDOA and take the quotient of the length and the TDOA. However, since we've only conducted the experiment once, this result could be rather inaccurate. In order to get a more accurate value of $s$, we will want to conduct this experiment multiple times and then take an average of all the experiment results.
Below, write a program that creates an initially empty list called "speeds" that will eventually contain ten values of $s$, calculated in ten different trials. Use a while loop to wait for ten time differences, calculate the speed for that trial and then append it to speeds.
Initialize $a$ at the very beginning to be the length of the distance between the two sensors (which you should measure with a ruler). Finally, run the code block and tap LEFT of the left-most sensor ten times.
End of explanation
"""
#WRITE YOUR CODE HERE
s = 0
for newS in speeds:
s = s + newS
s = s/10
print s
"""
Explanation: Finally, we want to calculate the average of all the values in speeds. Do this by calculating the sum of all the values in speeds, then dividing it by ten (the number of elements in speeds). Set $s$ to be this value.
End of explanation
"""
#from pymouse import PyMouse
import time
"""
Explanation: Now we know $a$ and $s$! So whenever we get a $t$ by tapping the board, we should be able to calculate the position of where we've tapped!
We've written the majority of the meat for our code. However, displaying a texual format of where we've tapped isn't so satisfying. We want something visual and exciting! So instead of just printing out where we tapped, we move our mouse to the location. The effect of this is that when we project our screen onto the board, we can set the sensors to be at the left most edge and right most edge. Whenever we tap on the line in between them, our mouse will travel to where we've tapped, creating a tocuh screen!
Writing a module that will move our mouse from scratch is rather difficult, however, someone else has done this already! The library that moves the mouse is called PyMouse. We can import this library by calling
python
from pymouse import PyMouse
Do this in the code block below.
End of explanation
"""
mouse = PyMouse() #Making a new mouse
mouse.move(0,0) #Making the mouse move to the very top left corner
time.sleep(1)
mouse.move(10, 0) #Moving the mouse right
time.sleep(1)
mouse.move(20, 0)
time.sleep(1)
mouse.move(30, 0)
time.sleep(1)
mouse.move(30, 10)
time.sleep(1)
mouse.move(30, 20)
time.sleep(1)
mouse.move(30, 30)
"""
Explanation: Now we can use this to make our mouse move to any pixel on the screen. We will first need to write a method that moves to a position $(x,y)$. The way coordinates are defined here is not quite the same as Cartesian coordinates. The upper left corner is defined as $(0,0)$. Increasing the first coordinate still makes the mouse move right, but increasing the second coordinate will make the mouse move down.
Run the code below to see what I mean
End of explanation
"""
import Tkinter as tk
root = tk.Tk()
widthPixels = root.winfo_screenwidth()
heightPixels = root.winfo_screenheight()
print (widthPixels, heightPixels)
"""
Explanation: We will now need to find the resolution of our monitor. The resolution tells us the dimensions of our display, pixels wide by pixels tall.
This is done for you below. Run the code below.
End of explanation
"""
#Write your function here.
#Solution
def tap(a, t, s):
x_coordinate = (find_position(a,t,s)/a)*widthPixels
y_coordinate = heightPixels/2
mouse.move(x_coordinate, y_coordinate)
"""
Explanation: We will want to have the line where we can tap be the halfway point between the top of the projected screen and the bottom of the projected screen. Thus the $y$ coordinate of our mouse will always be at the number of pixels tall our screen is divided by 2.
But now we need to figure out how far along the $x$-axis we will go; we can do this with ratios: $\frac{\text{find\_position}}{a} = \frac{x\text{-coordinate}}{\text{width}}$ (discuss why this is true). We then know that the x-coordinate $= \frac{\text{find\_position}}{a} \times \text{widthPixels}$
Implement a function below that, given $a$ (the width of the projection), $t$ (a time distance of arrival), and $s$ (the speed of a wave in the board), moves the mouse to the corresponding location on screen. Call it tap.
End of explanation
"""
#import boardControlLib
#b = boardControlLib.BoardControl()
while(False):
tap(a, b.getTime(), s)
time.sleep(.05)
"""
Explanation: This completes the one-dimensional PiTap! Below we've written some code that runs this method every time you tap on the board. So every time you tap on the board, we read a time difference. We've set the constants $a$ and $s$ already, so we can just call tap(a,t,s) with the new time difference, and our mouse will automatically move to where we've tapped! Run the code block below and tap on the board. Sometimes the mouse does not go exactly to where you've tapped; this is because of error/noise in the signals, but it should be pretty close!
End of explanation
"""
|
gagneurlab/concise
|
nbs/effect_prediction.ipynb
|
mit
|
from effect_demo_setup import *
from concise.models import single_layer_pos_effect as concise_model
import numpy as np
# Generate training data for the model, use a 1000bp sequence
param, X_feat, X_seq, y, id_vec = load_example_data(trim_seq_len = 1000)
# Generate the model
dc = concise_model(pooling_layer="sum",
init_motifs=["TGCGAT", "TATTTAT"],
n_splines=10,
n_covariates=0,
seq_length=X_seq.shape[1],
**param)
# Train the model
dc.fit([X_seq], y, epochs=1,
validation_data=([X_seq], y))
# In order to select the right output of a potential multitask model we have to generate a list of output labels, which will be used alongside the model itself.
model_output_annotation = np.array(["output_1"])
"""
Explanation: Variant effect prediction
The variant effect prediction parts integrated in concise are designed to extract importance scores for a single nucleotide variant in a given sequence. Predictions are made for each output individually for a multi-task model. In this short tutorial we will be using a small model to explain the basic functionality and outputs.
At the moment there are three different effect scores to be chosen from. All of them require as an input:
The input sequence with the variant with its reference genotype
The input sequence with the variant with its alternative genotype
Both aforementioned sequences in reverse-complement
Information on where (which basepair, 0-based) the mutation is placed in the forward sequences
The following variant scores are available:
In-silico mutagenesis (ISM):
Predict the outputs of the sequences containing the reference and alternative genotype of the variant and use the differential output as an effect score.
Gradient-based score
Dropout-based score
Calculating effect scores
Firstly we will need to have a trained model and a set of input sequences containing the variants we want to look at. For this tutorial we will be using a small model:
End of explanation
"""
import h5py
dataset_path = "%s/data/sample_hqtl_res.hdf5"%concise_demo_data_path
dataset = {}
with h5py.File(dataset_path, "r") as ifh:
ref = ifh["test_in_ref"].value
alt = ifh["test_in_alt"].value
dirs = ifh["test_out"]["seq_direction"].value
# This dataset is stored with forward and reverse-complement sequences in an interlaced manner
assert(dirs[0] == b"fwd")
dataset["ref"] = ref[::2,...]
dataset["alt"] = alt[::2,...]
dataset["ref_rc"] = ref[1::2,...]
dataset["alt_rc"] = alt[1::2,...]
dataset["y"] = ifh["test_out"]["type"].value[::2]
# The sequence is centered around the mutation, with the mutation occurring at position 500 when looking at the forward sequences
dataset["mutation_position"] = np.array([500]*dataset["ref"].shape[0])
"""
Explanation: As with any prediction that you want to make with a model, the input sequences have to fit the input dimensions of your model; in this case the reference and alternative sequences, in their forward and reverse-complement state, have to have the shape [?, 1000, 4].
We will be storing the dataset in a dictionary for convenience:
End of explanation
"""
from concise.effects.ism import ism
from concise.effects.gradient import gradient_pred
from concise.effects.dropout import dropout_pred
ism_result = ism(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"],
alt_rc = dataset["alt_rc"],
mutation_positions = dataset["mutation_position"],
out_annotation_all_outputs = model_output_annotation, diff_type = "diff")
gradient_result = gradient_pred(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"],
alt_rc = dataset["alt_rc"],
mutation_positions = dataset["mutation_position"],
out_annotation_all_outputs = model_output_annotation)
dropout_result = dropout_pred(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"], alt_rc = dataset["alt_rc"], mutation_positions = dataset["mutation_position"], out_annotation_all_outputs = model_output_annotation)
gradient_result
"""
Explanation: All prediction functions have the same general set of required input values. Before going into more detail of the individual prediction functions We will look into how to run them. The following input arguments are availble for all functions:
model: Keras model
ref: Input sequence with the reference genotype in the mutation position
ref_rc: Reverse complement of the 'ref' argument
alt: Input sequence with the alternative genotype in the mutation position
alt_rc: Reverse complement of the 'alt' argument
mutation_positions: Position on which the mutation was placed in the forward sequences
out_annotation_all_outputs: Output labels of the model.
out_annotation: Select for which of the outputs (in case of a multi-task model) the predictions should be calculated.
The out_annotation argument is not required. We will now run the available predictions individually.
End of explanation
"""
from concise.effects.snp_effects import effect_from_model
# Define the parameters:
params = {"methods": [gradient_pred, dropout_pred, ism],
"model": dc,
"ref": dataset["ref"],
"ref_rc": dataset["ref_rc"],
"alt": dataset["alt"],
"alt_rc": dataset["alt_rc"],
"mutation_positions": dataset["mutation_position"],
"extra_args": [None, {"dropout_iterations": 60},
{"rc_handling" : "maximum", "diff_type":"diff"}],
"out_annotation_all_outputs": model_output_annotation,
}
results = effect_from_model(**params)
"""
Explanation: The output of all functions is a dictionary; please refer to the individual chapters further on for an explanation of the individual values. Every dictionary contains pandas dataframes as values. Every column of the dataframe is named according to the values given in the out_annotation_all_outputs labels and contains the respective predicted scores.
Convenience function
For convenience there is also a function available which enables the execution of all functions in one call.
Additional arguments of the effect_from_model function are:
methods: A list of prediction functions to be executed. Using the same function more than once (even with different parameters) will overwrite the results of the previous calculation of that function.
extra_args: None or a list of the same length as 'methods'. The elements of the list are dictionaries with additional arguments that should be passed on to the respective functions in 'methods'. Arguments defined here will overwrite arguments that are passed to all methods.
**argv: Additional arguments to be passed on to all methods, e.g,: out_annotation.
End of explanation
"""
print(results.keys())
"""
Explanation: Again the returned value is a dictionary containing the results of the individual calculations, the keys are the names of the executed functions:
End of explanation
"""
|
ajgpitch/qutip-notebooks
|
examples/spin-chain-model.ipynb
|
lgpl-3.0
|
%matplotlib inline
from qutip.qip.circuit import QubitCircuit
from qutip.qip.operations import gate_sequence_product
import numpy as np
"""
Explanation: QuTiP example: Physical implementation of Spin Chain Qubit model
Author: Anubhav Vardhan (anubhavvardhan@gmail.com)
Numerical simulation added by Boxi Li (etamin1201@gmail.com)
For more information about QuTiP see http://qutip.org
End of explanation
"""
from qutip.qip.models.spinchain import CircularSpinChain
from qutip.qip.models.spinchain import LinearSpinChain
"""
Explanation: If your qutip version is lower than 4.4.1 please run the following cell
End of explanation
"""
from qutip.qip.device import CircularSpinChain, LinearSpinChain
from qutip.qip.noise import RandomNoise
"""
Explanation: Otherwise please run this cell
End of explanation
"""
N = 3
qc = QubitCircuit(N)
qc.add_gate("CNOT", targets=[0], controls=[2])
"""
Explanation: Hamiltonian:
$\displaystyle H = - \frac{1}{2}\sum_n^N h_n \sigma_z(n) - \frac{1}{2} \sum_n^{N-1} [ J_x^{(n)} \sigma_x(n) \sigma_x(n+1) + J_y^{(n)} \sigma_y(n) \sigma_y(n+1) +J_z^{(n)} \sigma_z(n) \sigma_z(n+1)]$
The linear and circular spin chain models employing the nearest neighbor interaction can be implemented using the SpinChain class.
Circuit Setup
End of explanation
"""
U_ideal = gate_sequence_product(qc.propagators())
U_ideal
"""
Explanation: The non-adjacent interactions are broken into a series of adjacent ones by the program automatically.
End of explanation
"""
p1 = CircularSpinChain(N, correct_global_phase=True)
U_list = p1.run(qc)
U_physical = gate_sequence_product(U_list)
U_physical.tidyup(atol=1e-5)
(U_ideal - U_physical).norm()
"""
Explanation: Circular Spin Chain Model Implementation
End of explanation
"""
p1.qc0.gates
"""
Explanation: The results obtained from the physical implementation agree with the ideal result.
End of explanation
"""
p1.qc1.gates
"""
Explanation: The gates are first converted to gates with adjacent interactions, moving in the direction with the least number of qubits in between.
End of explanation
"""
p1.qc2.gates
"""
Explanation: They are then converted into the basis [ISWAP, RX, RZ]
End of explanation
"""
p1.get_full_tlist()
"""
Explanation: The time for each applied gate:
End of explanation
"""
p1.plot_pulses();
"""
Explanation: The pulse can be plotted as:
End of explanation
"""
p2 = LinearSpinChain(N, correct_global_phase=True)
U_list = p2.run(qc)
U_physical = gate_sequence_product(U_list)
U_physical.tidyup(atol=1e-5)
(U_ideal - U_physical).norm()
"""
Explanation: Linear Spin Chain Model Implementation
End of explanation
"""
p2.qc0.gates
"""
Explanation: The results obtained from the physical implementation agree with the ideal result.
End of explanation
"""
p2.qc1.gates
"""
Explanation: The gates are first converted to gates with adjacent interactions, moving in the direction with the least number of qubits in between.
End of explanation
"""
p2.qc2.gates
"""
Explanation: They are then converted into the basis [ISWAP, RX, RZ]
End of explanation
"""
p2.get_full_tlist()
"""
Explanation: The time for each applied gate:
End of explanation
"""
p2.plot_pulses();
"""
Explanation: The pulse can be plotted as:
End of explanation
"""
from qutip import basis, fidelity
N = 1
plus_state = (basis(2,0) + basis(2,1)).unit()
qc = QubitCircuit(N=N)
qc.add_gate("SNOT", targets=0)
processor = LinearSpinChain(N=N)
processor.load_circuit(qc)
end_state = processor.run_state(init_state=basis(2, 0), analytical=False).states[-1]
fidelity(end_state, plus_state)
processor.add_noise(RandomNoise(rand_gen=np.random.normal, dt=0.1, loc=0.1, scale=0.2))
end_state = processor.run_state(init_state=basis(2, 0), analytical=False).states[-1]
fidelity(end_state, plus_state)
"""
Explanation: Numerical simulation
From QuTiP 4.5, we also add the possibility of numerically simulating SpinChain-based quantum computing. One only needs to add the option analytical=False in run_state to use one of the QuTiP solvers to simulate the state evolution instead of direct matrix product. Under numerical simulation, one can go beyond simulation with perfect gate operations. All the noise defined for the class Processor can also be used for SpinChain here.
End of explanation
"""
from qutip.bloch import Bloch
b = Bloch()
b.add_states([end_state, plus_state])
b.make_sphere()
"""
Explanation: As the control noise is coherent noise, the result of this noise is still a pure state. Therefore, we can visualize it on a Bloch sphere.
End of explanation
"""
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Software versions:
End of explanation
"""
|
KiranArun/A-Level_Maths
|
Integration/Integration.ipynb
|
mit
|
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Integration
End of explanation
"""
x = np.linspace(-10, 10, 201)
def f(x):
return x**2
y = f(x)
fig, ax = plt.subplots(1, figsize=(8,4))
ax.plot(x,y, 'g', label='line')
ax.fill_between(x,y, color='blue', alpha=0.3, label='area under graph')
ax.grid(True)
ax.legend()
plt.show()
"""
Explanation: Contents
1.Integral Calculus
2.Fundamental Theorem of Calculus
3.Basic Integration
- Integrating powers of x
- Integrating other basic terms
4.Definite Integrals
- Area under graph
- Area under graph for y axis
- Area between lines
- Area between lines on y axis
<a id='Integral_Calculus'></a>
Integral Calculus
How to find area under curve between a specified x
$\lim_{n\to\infty}\sum_{i=1}^n f(x_i)\Delta x_i = \int^b_a f(x)dx$
this is the area under the graph
the left side sums as many values of y in the specified x data set and weights it with the difference in x
the right side is the integral which is 1 function which takes the range of a to b
This is the Definite Integral
$\int f(x) dx$
This is the Indefinite Integral or anti-derivative
End of explanation
"""
x = np.linspace(-5, 5, 201)
def f(x):
return 6*x**2 - 20
def F(x):
return 2*x**3 - 20*x
y = f(x)
start = 60
end = 160
section = x[start:end+1]
fig, ax = plt.subplots(1, figsize=(8,4))
ax.plot(x,y, 'g', label='y = 2x')
ax.fill_between(section,f(section), color='blue', alpha=0.3, label='area under graph')
ax.plot(x[start], 0, 'om', color='purple', label='a')
ax.plot(x[end], 0, 'om', color='r', label='b')
ax.grid(True)
ax.legend()
plt.show()
print('shaded net area =', F(x[end]) - F(x[start]))
"""
Explanation: <a id='Fundamental_Theorem_of_Calculus'></a>
Fundamental Theorem of Calculus
$f(x)$ is continuous in $[a,b]$
$F(x) = \int^x_af(t)dt$
- where $x$ is in $[a,b]$
$\frac{dF}{dx} = \frac{d}{dx}\int^x_af(t)dt = f(x)$
Example:
$F(x) = \int^x_a\frac{\cos^2t}{-\sin t^2}dt$
$F\prime(x) = \frac{d}{dx}\int^x_a\frac{\cos^2t}{-\sin t^2}dt = \frac{\cos^2x}{-\sin x^2}$
Example 2:
$F(x) = \int^{x^2}_a\frac{\cos^2t}{-\sin t^2}dt$
$F\prime(x) = \frac{d}{dx}\int^{x^2}_a\frac{\cos^2t}{-\sin t^2}dt$
$= \frac{\cos^2x^2}{-\sin x^4}\times \frac{d}{dx}x^2$
$= 2x\frac{\cos^2x^2}{-\sin x^4}$
<a id='Basic_Integration'></a>
Basic Integration
<a id='Integrating_powers_of_x'></a>
Integrating powers of x
$\int Ax^ndx = \frac{A}{n+1}x^{n+1} + C$
to find the derivative we use $\frac{d}{dx}ax^n = anx^{n-1}$
we do the opposite with $\int ax^ndx = a\frac{1}{n+1}x^{n+1}$
we add $C$ as we can't find out the constant of the original function
Example
$\int 2x^5dx = \frac{1}{3}x^{6} + C$
<a id='Integrating_other_basic_terms'></a>
Integrating other basic terms
Integrating $e^{kx}$
$\int Ae^{kx + b} dx = \frac{A}{k}e^{kx + b} + C$
the derivative is $\frac{d}{dx}e^x = e^x$
to differentiate, we would use the chain rule on the function of x and $\therefore$ multiply by k
Example
$\int 3e^{9x + 2} dx = \frac{1}{3}e^{9x + 2} + C$
Integrating $\frac{1}{x}$
$\int A\frac{n}{x} dx = An\ln x + C$
$\int A\frac{f\prime(x)}{f(x)} dx = A\ln|f(x)| + C$
in the second rule, the top is caused by the chain rule
Example
$\int 2\frac{6}{x} dx = 12\ln x + C$
Example 2
$\int 2\frac{10x}{5x^2 + 3} dx = 2\ln |5x^2 + 3| + C$
Integrating $\sin x$
$\int A\sin(kx) dx = -A\frac{1}{k}\cos(kx) + C$
Example
$\int 4\sin(2x) dx = -2\cos(2x) + C$
Integrating $\cos x$
$\int A\cos(kx) dx = A\frac{1}{k}\sin(kx) + C$
Example
$\int 11\cos(3x) dx = \frac{11}{3}\sin(3x) + C$
<a id='Definite_Integrals'></a>
Definite Integrals
This is where there are defined boundaries on the x or y axis
<a id='Area_under_graph'></a>
Area under graph
$F(x) = \int f(x)dx$
$\int_a^b f(x)dx = F(b) - F(a)$
if the graph is negative, the area can be negative
the definite integral gives the net area
to find area (not net area), split into positive and negative regions and sum the magnitudes of the regions
Example
$f(x) = 6x^2$
$F(x) = 2x^3$
$\int_2^5 f(x)dx = F(5) - F(2)$
$= 2(5)^3 - 2(2)^3$
$= 234$
<a id='Area_under_graph_for_y_axis'></a>
Area under graph for y axis
$F(y) = \int f^{-1}(y)dy$
$\int_c^d f^{-1}(y)dy = F(d) - F(c)$
do the same but in terms of y
this includes taking the inverse of the line function to get a function in terms of y
Example
$f(x) = 6x^2$
$f^{-1}(y) = \left(\frac{1}{6}y\right)^{\frac{1}{2}}$
$F(y) = 4\left(\frac{1}{6}y\right)^{\frac{3}{2}}$
$\int_2^5 f^{-1}(y)dy = F(5) - F(2)$
$= 4\left(\frac{5}{6}\right)^{\frac{3}{2}} - 4\left(\frac{1}{3}\right)^{\frac{3}{2}}$
$= 2.273$
<a id='Area_between_lines'></a>
Area between lines
$\int_a^b(f(x) - g(x))dx = \int_a^bf(x)dx - \int_a^bg(x)dx$
Example
$= \int_0^1(\sqrt{x} - x^2)dx$
$= \left(\frac{2}{3}x^{\frac{3}{2}} - \frac{x^3}{3}\right)\mid^1_0$
$= \left(\frac{2}{3}1^{\frac{3}{2}} - \frac{1^3}{3}\right) - \left(\frac{2}{3}0^{\frac{3}{2}} - \frac{0^3}{3}\right)$
$= \left(\frac{2}{3} - \frac{1}{3}\right)$
$= \left(\frac{1}{3}\right)$
if more lines, separate into sections on the x axis and sum
<a id='Area_between_lines_on_y_axis'></a>
Area between lines on y axis
This works the same as area under graph on y axis but combined with the area between lines method
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.1/tutorials/fti.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Finite Time of Integration (fti)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
print(b['exptime'])
"""
Explanation: Relevant Parameters
An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually.
End of explanation
"""
b['exptime'] = 1, 'hr'
"""
Explanation: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
End of explanation
"""
print(b['fti_method'])
b['fti_method'] = 'oversample'
"""
Explanation: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
End of explanation
"""
print(b['fti_oversample'])
"""
Explanation: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled.
End of explanation
"""
b.run_compute(fti_method='none', irrad_method='none', model='fti_off')
b.run_compute(fti_method='oversample', irrad_method='none', model='fti_on')
"""
Explanation: Influence on Light Curves
End of explanation
"""
afig, mplfig = b.plot(show=True, legend=True)
"""
Explanation: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse.
End of explanation
"""
|
BDannowitz/polymath-progression-blog
|
jlab-ml-lunch-2/notebooks/00-Data-Exploration.ipynb
|
gpl-2.0
|
%matplotlib widget
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import imageio
"""
Explanation: JLab ML Lunch 2 - Data Exploration
Second ML challenge hosted
On October 30th, a test dataset will be released, and predictions must be submitted within 24 hours
Let's take a look at the training data!
End of explanation
"""
X_train = pd.read_csv("MLchallenge2_training.csv")
# There are 150 columns. Let's just see a few
X_train[['x', 'y', 'z', 'px', 'py', 'pz',
'x1', 'y1', 'z1', 'px1', 'py1', 'pz1']].head()
def plot_quiver_track(df, track_id, elev=None,
azim=None, dist=None):
# Extract the track row
track = df.loc[track_id].values
# Get all the values of each type of feature
x = [track[(6*i)] for i in range(0, 25)]
y = [track[1+(6*i)] for i in range(0, 25)]
z = [track[2+(6*i)] for i in range(0, 25)]
px = [track[3+(6*i)] for i in range(0, 25)]
py = [track[4+(6*i)] for i in range(0, 25)]
pz = [track[5+(6*i)] for i in range(0, 25)]
# I ideally would like to link the magnitude
# of the momentum to the color, but my results
# were buggy...
p_tot = np.sqrt(np.square(px) +
np.square(py) +
np.square(pz))
# Create our 3D figure
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.xaxis.set_pane_color((1,1,1,1))
ax.yaxis.set_pane_color((1,1,1,1))
ax.zaxis.set_pane_color((1,1,1,1))
# Set the three 3D plot viewing attributes
if elev is not None:
ax.elev = elev
if azim is not None:
ax.azim = azim
if dist is not None:
ax.dist = dist
# Create our quiver plot
ax.quiver(z, x, y, pz, px, py, length=14)
# Labels for clarity
ax.set_title("Track {}".format(track_id))
ax.set_xlabel("z", fontweight="bold")
ax.set_ylabel("x", fontweight="bold")
ax.set_zlabel("y", fontweight="bold")
plt.tight_layout()
return fig, ax
fig, ax = plot_quiver_track(X_train, 2)
fig.show()
gif_filename = "track-2-anim"
ax.elev = 50.
ax.azim = 90.
ax.dist = 9.
img_files = []
for n in range(0, 100):
ax.elev = ax.elev-0.4
ax.azim = ax.azim+1.5
filename = f'images/{gif_filename}/img{str(n).zfill(3)}.png'
img_files.append(filename)
plt.savefig(filename, bbox_inches='tight')
images = []
for filename in img_files:
images.append(imageio.imread(filename))
imageio.mimsave('images/track-2.gif', images)
"""
Explanation: Training Data
This shows the state vector ($x,y,z, p_x, p_y, p_z$) for the origin and 24 detector stations
Jupyter-matplotlib widget used for handy visualizations (https://github.com/matplotlib/jupyter-matplotlib)
End of explanation
"""
X_test = pd.read_csv("test_in.csv", names=X_train.columns)
X_test[['x', 'y', 'z', 'x15', 'y15', 'z15', 'x23', 'y23', 'z23']].head()
import missingno as mno
ax = mno.matrix(X_test.head(100))
"""
Explanation: Now read in the example test data
End of explanation
"""
import re
from io import StringIO
with open('test_in.csv', 'r') as f:
data_str = f.read()
data_str_io = StringIO(
re.sub(r"([-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?\n)", r",,\1", data_str)
)
X_test = pd.read_csv(data_str_io, names=X_train.columns)
X_test.head()
"""
Explanation: One caveat on the test data
The last value of each row is actually the z-value of the next step to be predicted, not the x-position
... but this isn't the same spot for each row
Just add two commas before the last number of each row
End of explanation
"""
import re
from io import StringIO
def load_test_data(filename):
with open(filename, 'r') as f:
data_str = f.read()
data_str_io = StringIO(
re.sub(r"([-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?\n)", r",,\1", data_str)
)
X_test = pd.read_csv(data_str_io, names=X_train.columns)
return X_test
"""
Explanation: This should be saved for later usage
End of explanation
"""
|
davidwhogg/Avast
|
notebooks/fakedata2.ipynb
|
mit
|
def oned_gaussian(xs, mm, sig):
return np.exp(-0.5 * (xs - mm) ** 2 / sig ** 2) / np.sqrt(2. * np.pi * sig)
def make_synth(rv, xs, ds, ms, sigs):
"""
`rv`: radial velocity in m/s (or same units as `c` above)
`xs`: `[M]` array of wavelength values
`ds`: depths at line centers
`ms`: locations of the line centers in rest wavelength
`sigs`: Gaussian sigmas of lines
"""
synths = np.ones_like(xs)
for d, m, sig in zip(ds, ms, sigs):
synths *= np.exp(d *
oned_gaussian(xs * doppler(rv), m, sig))
return synths
def make_data(N, xs, ds, ms, sigs):
"""
`N`: number of spectra to make
`xs`: `[M]` array of wavelength values
`ds`: depth-like parameters for lines
`ms`: locations of the line centers in rest wavelength
`sigs`: Gaussian sigmas of lines
"""
np.random.seed(2361794231)
M = len(xs)
data = np.zeros((N, M))
ivars = np.zeros((N, M))
rvs = 30000. * np.random.uniform(-1., 1., size=N) # 30 km/s bc Earth ; MAGIC
for n, rv in enumerate(rvs):
ivars[n, :] = 10000. # s/n = 100 ; MAGIC
data[n, :] = make_synth(rv, xs, ds, ms, sigs)
data[n, :] += np.random.normal(size=M) / np.sqrt(ivars[n, :])
return data, ivars, rvs
fwhms = [0.1077, 0.1113, 0.1044, 0.1083, 0.1364, 0.1, 0.1281,
0.1212, 0.1292, 0.1526, 0.1575, 0.1879] # FWHM of Gaussian fit to line (A)
sigs = np.asarray(fwhms) / 2. / np.sqrt(2. * np.log(2.)) # Gaussian sigma (A)
ms = [4997.967, 4998.228, 4998.543, 4999.116, 4999.508, 5000.206, 5000.348,
5000.734, 5000.991, 5001.229, 5001.483, 5001.87] # line center (A)
ds = [-0.113524, -0.533461, -0.030569, -0.351709, -0.792123, -0.234712, -0.610711,
-0.123613, -0.421898, -0.072386, -0.147218, -0.757536] # depth of line center (normalized flux)
ws = np.ones_like(ds) # dimensionless weights
dx = 0.01 # A
xs = np.arange(4998. + 0.5 * dx, 5002., dx) # A
N = 16
data, ivars, true_rvs = make_data(N, xs, ds, ms, sigs)
data = np.log(data)
data_xs = np.log(xs)
def add_tellurics(xs, all_data, true_rvs, lambdas, strengths, dx):
N, M = np.shape(all_data)
tellurics = np.ones_like(xs)
for ll, s in zip(lambdas, strengths):
tellurics *= np.exp(-s * oned_gaussian(xs, ll, dx))
plt.plot(xs, tellurics)
all_data *= np.repeat([tellurics,],N,axis=0)
return all_data
n_tellurics = 16 # magic
telluric_sig = 3.e-6 # magic
telluric_xs = np.random.uniform(data_xs[0], data_xs[-1], n_tellurics)
strengths = 0.01 * np.random.uniform(size = n_tellurics) ** 2. # magic numbers
all_data = np.exp(data)
all_data = add_tellurics(data_xs, all_data, true_rvs, telluric_xs, strengths, telluric_sig)
data = np.log(all_data)
"""
Explanation: The following is code copied from EPRV/fakedata.py to generate a realistic fake spectrum:
End of explanation
"""
def make_template(all_data, rvs, xs, dx):
"""
`all_data`: `[N, M]` array of pixels
`rvs`: `[N]` array of RVs
`xs`: `[M]` array of wavelength values
`dx`: linear spacing desired for template wavelength grid (A)
"""
(N,M) = np.shape(all_data)
all_xs = np.empty_like(all_data)
for i in range(N):
all_xs[i,:] = xs + np.log(doppler(rvs[i])) # shift to rest frame
all_data, all_xs = np.ravel(all_data), np.ravel(all_xs)
tiny = 10.
template_xs = np.arange(min(all_xs)-tiny*dx, max(all_xs)+tiny*dx, dx)
template_ys = np.nan + np.zeros_like(template_xs)
for i,t in enumerate(template_xs):
ind = (all_xs >= t-dx/2.) & (all_xs < t+dx/2.)
if np.sum(ind) > 0:
template_ys[i] = np.nanmedian(all_data[ind])
ind_nan = np.isnan(template_ys)
template_ys[ind_nan] = np.interp(template_xs[ind_nan], template_xs[~ind_nan], template_ys[~ind_nan])
return template_xs, template_ys
def subtract_template(data_xs, data, model_xs_t, model_ys_t, rvs_t):
(N,M) = np.shape(data)
data_sub = np.copy(data)
for n,v in enumerate(rvs_t):
model_ys_t_shifted = Pdot(data_xs, model_xs_t, model_ys_t, v)
data_sub[n,:] -= np.ravel(model_ys_t_shifted)
if n == 0:
plt.plot(data_xs, data[n,:], color='k')
plt.plot(data_xs, data_sub[n,:], color='blue')
plt.plot(data_xs, np.ravel(model_ys_t_shifted), color='red')
return data_sub
x0_star = true_rvs + np.random.normal(0., 100., N)
x0_t = np.zeros(N)
model_xs_star, model_ys_star = make_template(data, x0_star, data_xs, np.log(6000.01) - np.log(6000.))
model_xs_t, model_ys_t = make_template(data, x0_t, data_xs, np.log(6000.01) - np.log(6000.))
def chisq_star(rvs_star, rvs_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t):
pd_star = Pdot(data_xs, model_xs_star, model_ys_star, rvs_star)
pd_t = Pdot(data_xs, model_xs_t, model_ys_t, rvs_t)
pd = pd_star + pd_t
return np.sum((data - pd)**2 * ivars)
def chisq_t(rvs_t, rvs_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t):
pd_star = Pdot(data_xs, model_xs_star, model_ys_star, rvs_star)
pd_t = Pdot(data_xs, model_xs_t, model_ys_t, rvs_t)
pd = pd_star + pd_t
return np.sum((data - pd)**2 * ivars)
soln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
soln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
x0_star = soln_star
x0_t = soln_t
print np.std(x0_star - true_rvs)
print np.std(x0_t)
data_star = subtract_template(data_xs, data, model_xs_t, model_ys_t, x0_t)
data_t = subtract_template(data_xs, data, model_xs_star, model_ys_star, x0_star)
plt.plot(data_xs, data[0,:], color='black')
plt.plot(model_xs_star, model_ys_star, color='red')
plt.plot(model_xs_t, model_ys_t, color='green')
plt.plot(data_xs, data_star[0,:], color='blue')
plt.plot(data_xs, data_t[0,:], color='red')
true_star = np.log(make_data(N, xs, ds, ms, sigs)[0])
plt.plot(data_xs, true_star[0,:], color='k')
plt.plot(data_xs, data_star[0,:], color='blue')
plt.plot(model_xs_star, model_ys_star, color='red')
"""
Explanation: First step: generate some approximate models of the star and the tellurics using first-guess RVs.
End of explanation
"""
soln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
soln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
print(np.std(soln_star - true_rvs))
print(np.std(soln_t))
"""
Explanation: Next: use the template-subtracted data to get better RVs for star and template
End of explanation
"""
for n in range(5):
x0_star = soln_star
x0_t = soln_t
data_star = subtract_template(data_xs, data, model_xs_t, model_ys_t, x0_t)
data_t = subtract_template(data_xs, data, model_xs_star, model_ys_star, x0_star)
model_xs_star, model_ys_star = make_template(data_star, x0_star, data_xs, np.log(6000.01) - np.log(6000.))
model_xs_t, model_ys_t = make_template(data_t, x0_t, data_xs, np.log(6000.01) - np.log(6000.))
soln_star = minimize(chisq_star, x0_star, args=(x0_t, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
soln_t = minimize(chisq_t, x0_t, args=(soln_star, data_xs, data, ivars, model_xs_star, model_ys_star, model_xs_t, model_ys_t),
method='BFGS', options={'disp':True, 'gtol':1.e-2, 'eps':1.5e-5})['x']
print "iter {0}: star std = {1:.2f}, telluric std = {2:.2f}".format(n, np.std(soln_star - true_rvs), np.std(soln_t))
true_star = np.log(make_data(N, xs, ds, ms, sigs)[0])
plt.plot(data_xs, true_star[0,:], color='k')
plt.plot(data_xs, data_star[0,:], color='blue')
plt.plot(data_xs, data_t[0,:], color='red')
plt.plot(data_xs, data[0,:], color='k')
plt.plot(data_xs, data_star[0,:] + data_t[0,:], color='red')
plt.plot(data_xs, data[10,:], color='k')
plt.plot(data_xs, data_star[10,:] + data_t[10,:], color='red')
"""
Explanation: and repeat:
End of explanation
"""
|
CyberCRI/dataanalysis-herocoli-redmetrics
|
v1.52.2/Tests/2.5 Google form analysis - PCA.ipynb
|
cc0-1.0
|
%run "../Functions/2.1 Sampling.ipynb"
"""
Explanation: Google form analysis tests
Purpose: determine to what extent the current data can accurately describe correlations and the factors underlying the score.
Especially concerning the answerTemporalities[0] groups: are there underlying groups explaining the discrepancies in score? Are those groups tied to certain questions?
Table of Contents
Sorted total answers to questions
Cross-samples t-tests
biologists vs non-biologists
biologists vs non-biologists before
PCAs
<br>
<br>
<br>
End of explanation
"""
# all
#gfdf = gform.copy()
# only pairs
#gfdf = getPerfectPretestPostestPairs(gform)
# in the pairs, only volunteers
#gfdf = gfdf[~gfdf[QVolunteer].isin(yesNoPositives)]
# playtest's perfect pairs of phase 1
#gfdf = gfdfPlaytestPhase1PretestPosttestUniqueProfiles.copy()
# only the volunteers of this sample
gfdf = gfdfPlaytestPhase1PretestPosttestUniqueProfilesVolunteers.copy()
# only the pretests
pretests = gfdf[gfdf[QTemporality] == answerTemporalities[0]]
#gfdf = pretests
# only the posttests
posttests = gfdf[gfdf[QTemporality] == answerTemporalities[1]]
#gfdf = posttests
pretestPosttestConcatenation = False
saveFiles = False
gfdf.index = range(0, len(gfdf))
len(gfdf)
if not pretestPosttestConcatenation:
len(gfdf[gfdf[QTemporality] == answerTemporalities[0]]),\
len(gfdf[gfdf[QTemporality] == answerTemporalities[1]]),\
len(gfdf)
if pretestPosttestConcatenation:
pretests = pretests.sort_values(by=QUserId)
pretests.index = range(0, len(pretests))
posttests = posttests.sort_values(by=QUserId)
posttests.index = range(0, len(posttests))
pretestsbinarized = getAllBinarized(pretests)
pretestsbinarized.index = pretests.index
posttestsbinarized = getAllBinarized(posttests)
posttestsbinarized.index = posttests.index
else:
binarized = getAllBinarized(gfdf)
binarized.index = gfdf.index
if pretestPosttestConcatenation:
pretestQPrefix = "pretest_"
pretestsbinarized.columns = [pretestQPrefix + x for x in pretestsbinarized.columns.values]
pretests.columns = [pretestQPrefix + x for x in pretests.columns.values]
posttestQPrefix = "posttest_"
posttestsbinarized.columns = [posttestQPrefix + x for x in posttestsbinarized.columns.values]
posttests.columns = [posttestQPrefix + x for x in posttests.columns.values]
binarized = pd.concat([pretestsbinarized,posttestsbinarized],axis=1)
gfdf = pd.concat([pretests,posttests],axis=1)
len(binarized)
gfdf.shape, binarized.shape
if pretestPosttestConcatenation:
scorePretest = np.dot(pretestsbinarized,np.ones(len(pretestsbinarized.columns)))
scorePosttest = np.dot(posttestsbinarized,np.ones(len(posttestsbinarized.columns)))
scoreTotal = scorePretest + scorePosttest
score = scorePretest
else:
score = np.dot(binarized,np.ones(len(binarized.columns)))
dimensions = binarized.shape[1]
dimensions
binarized['class'] = 'default'
# split data table into data X and class labels y
X = binarized.iloc[:,0:dimensions].values
y = binarized.iloc[:,dimensions].values
"""
Explanation: PCAs
<a id=PCAs />
Purpose: find out which questions carry the most weight in the computation of the score.
Other leads: LDA, ANOVA.
Source for PCA: http://sebastianraschka.com/Articles/2015_pca_in_3_steps.html
End of explanation
"""
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
"""
Explanation: Standardizing
End of explanation
"""
mean_vec = np.mean(X_std, axis=0)
cov_mat = (X_std - mean_vec).T.dot((X_std - mean_vec)) / (X_std.shape[0]-1)
print('Covariance matrix \n%s' %cov_mat)
print('NumPy covariance matrix: \n%s' %np.cov(X_std.T))
"""
Explanation: 1 - Eigendecomposition - Computing Eigenvectors and Eigenvalues
Covariance Matrix
End of explanation
"""
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
#print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
"""
Explanation: eigendecomposition on the covariance matrix:
End of explanation
"""
cor_mat1 = np.corrcoef(X_std.T)
if not pd.isnull(cor_mat1).any():
eig_vals, eig_vecs = np.linalg.eig(cor_mat1)
#print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
"""
Explanation: Correlation Matrix
Eigendecomposition of the standardized data based on the correlation matrix:
End of explanation
"""
u,s,v = np.linalg.svd(X_std.T)
s
"""
Explanation: Eigendecomposition of the raw data based on the correlation matrix:
cor_mat2 = np.corrcoef(binarized.T)
eig_vals, eig_vecs = np.linalg.eig(cor_mat2)
print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
Singular Value Decomposition
End of explanation
"""
for ev in eig_vecs:
np.testing.assert_array_almost_equal(1.0, np.linalg.norm(ev))
print('Everything ok!')
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), list(eig_vecs[:,i])) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort()
eig_pairs.reverse()
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:')
for i in eig_pairs:
print(i[0])
if False:
#saved_eig_pairs = eig_pairs.copy()
np.array([len(x) for x in eig_pairs])
np.array([len(x[1]) for x in eig_pairs])
np.array([type(x[1]) for x in eig_pairs])
#np.array([len(x) for x in saved_eig_pairs])
#np.array([len(x[1]) for x in saved_eig_pairs])
#np.array([type(x[1]) for x in saved_eig_pairs])
#saved_eig_pairs[0]
eig_pairs[0]
np.array([pd.isnull(x[1]).any() for x in saved_eig_pairs]).any(),np.array([pd.isnull(x[1]).any() for x in eig_pairs]).any()
tot = sum(eig_vals)
var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
plt.bar(range(dimensions), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(dimensions), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
var_exp[:5]
cum_var_exp[:5]
"""
Explanation: 2 - Selecting Principal Components
End of explanation
"""
matrix_w = np.hstack((np.array(eig_pairs[0][1]).reshape(dimensions,1),
np.array(eig_pairs[1][1]).reshape(dimensions,1)))
print('Matrix W:\n', matrix_w)
"""
Explanation: Projection Matrix
End of explanation
"""
basecolors = ('green','red','blue','magenta','cyan','purple','yellow','black','white')
colors = basecolors
len(colors)
Y = X_std.dot(matrix_w)
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
ax = plt.subplot(111)
plt.scatter(Y[:, 0], Y[:, 1])
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title("base PCA")
plt.show()
"""
Explanation: 3 - Projection Onto the New Feature Space
End of explanation
"""
# creates a scatter plot using different colors for different classes
# answerIndices: index of 'gfdf' and 'binarized' DataFrames
# Y: 2D position in PCA for answers
# classNames: list of class names
# classes: list of series of class-index indexed UserIds
# title: str
# rainbow: whether to use rainbow colors
# figsize: for matplotlib
def classifyAndPlot(answerIndices, Y, classNames, classes, title = '', rainbow = False, figsize = (12, 8)):
%matplotlib nbagg
defaultClassName = ''
sampleSize = 0
# sets the name of the default class
for classIndex in range(0, len(classes)):
sampleSize += len(classes[classIndex])
if(sampleSize < len(answerIndices)):
if(len(classNames) == len(classes) + 1):
defaultClassName = classNames[-1]
else:
defaultClassName = 'other'
classNames.append(defaultClassName)
# y is the 'class' container
y = pd.Series(index = answerIndices, data = defaultClassName)
# set the class of each answer
for classIndex in range(0, len(classes)):
y[classes[classIndex]] = classNames[classIndex]
if (defaultClassName in y.values) and (not (defaultClassName in classNames)):
print("unexpected error: check the exhaustiveness of the provided classes")
with plt.style.context('seaborn-whitegrid'):
plots = pd.Series()
# update function to control the alpha channel
def updateAlpha(alpha):
if(len(plots) > 0):
for lab in classNames:
plots.loc[lab].set_alpha(alpha)
proxyArtists = []
for lab, col in zip(classNames,colors):
proxyArtists.append(plt.scatter([], [], label=lab, c=col, alpha=alpha, marker='o', s=150))
plots.loc[classNames[0]].axes.legend(proxyArtists, classNames, loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
# creates the slider to control the alpha channel
interact(updateAlpha, alpha=(0.0,1.0,0.01));
thisFigure = plt.figure(figsize=figsize)
ax = plt.subplot(111)
colors = basecolors
if (rainbow or len(classNames) > len(colors)):
colors = plt.cm.rainbow(np.linspace(1, 0, len(classNames)))
colors = colors[:len(classNames)]
for lab, col in zip(classNames,colors):
# y == lab is a selector:
# Y[y==lab, 0] selects all Y.x of class lab
# Y[y==lab, 0] selects all Y.y of class lab
xvalues = Y[y==lab, 0]
yvalues = Y[y==lab, 1]
#print("'" + str(lab) + "': " + str(len(xvalues)) + " values in " + str(col))
plots.loc[lab] = plt.scatter( xvalues,
yvalues,
label=lab,
c=[col],
alpha=0.2,
s=150
)
#print("scatter classes: [" + '; '.join(interactiveGraphClassNames) + "]")
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
# source https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
if(len(title) > 0):
plt.title(title)
plt.show()
return plots
"""
Explanation: classifyAndPlot
End of explanation
"""
def updateQuestionIndex(q):
question = gfdf.columns[q]
print("question " + str(q) + ": " + question)
classNames = []
classes = []
for answer in gfdf[question].value_counts(dropna = False).index:
classNames.append(str(answer))
classes.append(gfdf[gfdf[question].apply(str) == str(answer)].index)
classifyAndPlot(gfdf.index, Y, classNames, classes, title = question, rainbow = False)
#interact(updateQuestionIndex, q=(0,len(gfdf.columns),1));
#updateQuestionIndex(q)
"""
Explanation: interactive classifyAndPlot
End of explanation
"""
interactiveY = Y.copy()
interactivey = []
# the list of unique colors used
interactiveColors = []
interactiveGraphClassNames = []
interactiveGraphClasses = []
interactiveGraphPlots = np.nan
interactiveFigure = np.nan
interactiveGraphAx = np.nan
questionInteractive = np.nan
alphaInteractive = np.nan
interactiveTitle = ''
"""
Explanation: complexClassifyAndPlot
common variables
End of explanation
"""
if pretestPosttestConcatenation:
pretestPossibleAnswers = possibleAnswers.copy()
pretestPossibleAnswers.index = pretests.columns
posttestPossibleAnswers = possibleAnswers.copy()
posttestPossibleAnswers.index = posttests.columns
possibleAnswersConcat = pd.concat([pretestPossibleAnswers, posttestPossibleAnswers], axis = 0)
def classPreprocess(gfdf, question, answersToCheckAgainst = possibleAnswers):
global interactiveGraphClassNames, interactiveGraphClasses
if pretestPosttestConcatenation:
answersToCheckAgainst = possibleAnswersConcat
interactiveGraphClassNames = []
interactiveGraphClasses = []
if len(answersToCheckAgainst[question]) > 0:
interactiveGraphClassNames = answersToCheckAgainst[question].copy()
else:
interactiveGraphClassNames = [str(x) for x in gfdf[question].unique()]
interactiveGraphClassNames.sort()
for answer in interactiveGraphClassNames:
interactiveGraphClasses.append(gfdf[gfdf[question].apply(str) == answer].index)
"""
Explanation: classPreprocess
End of explanation
"""
def commonClassProcess(gfdf):
global interactivey
global interactiveGraphClassNames, interactiveGraphClasses
defaultClassName = ''
sampleSize = 0
# sets the name of the default class
for classIndex in range(0, len(interactiveGraphClasses)):
sampleSize += len(interactiveGraphClasses[classIndex])
if(sampleSize < len(gfdf.index)):
if(len(interactiveGraphClassNames) == len(interactiveGraphClasses) + 1):
defaultClassName = interactiveGraphClassNames[-1]
else:
defaultClassName = 'other'
interactiveGraphClassNames.append(defaultClassName)
# y is the 'class' container
interactivey = pd.Series(index = gfdf.index, data = defaultClassName)
# set the class of each answer
for classIndex in range(0, len(interactiveGraphClasses)):
interactivey[interactiveGraphClasses[classIndex]] = interactiveGraphClassNames[classIndex]
if (defaultClassName in interactivey.values) and (not (defaultClassName in interactiveGraphClassNames)):
print("unexpected error: check the exhaustiveness of the provided classes")
"""
Explanation: commonClassProcess
End of explanation
"""
def plotClasses(rainbow):
global alphaInteractive
global interactiveColors
global interactiveY, interactivey
global interactiveGraphClassNames
global interactiveGraphPlots, interactiveGraphAx
interactiveColors = basecolors
if (rainbow or len(interactiveGraphClassNames) > len(interactiveColors)):
interactiveColors = plt.cm.rainbow(np.linspace(1, 0, len(interactiveGraphClassNames)))
interactiveColors = interactiveColors[:len(interactiveGraphClassNames)]
if pd.isnull(interactiveGraphPlots):
interactiveGraphPlots = plt.scatter( interactiveY[:, 0],
interactiveY[:, 1],
label='-',
c='yellow',
alpha=alphaInteractive.value,
s=150
)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
#print("scatter classes: [" + '; '.join(interactiveGraphClassNames) + "]")
fullColors = interactivey.copy()
proxyArtists = []
for lab, col in zip(interactiveGraphClassNames,interactiveColors):
fullColors[interactivey == lab] = pd.Series(data = [col] * len(interactivey[interactivey == lab]), index = interactivey[interactivey == lab].index)
proxyArtists.append(plt.scatter([],
[],
label=lab,
c=col,
alpha=alphaInteractive.value,
s=150
))
interactiveGraphPlots.set_color(fullColors)
#print("for classes: [" + '; '.join(interactiveGraphClassNames) + "]: \n\tfullcolors=[" + '; '.join(fullColors) + "]")
# source https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# Put a legend to the right of the current axis
lgd = interactiveGraphAx.legend(proxyArtists, interactiveGraphClassNames, loc='center left', bbox_to_anchor=(1, 0.5))
if(len(interactiveTitle) > 0):
plt.title(interactiveTitle)
"""
Explanation: plotClasses
End of explanation
"""
# creates a scatter plot using different colors for different interactiveGraphClasses
# gfdf: base survey answers
# Y: 2D position in PCA for answers
# interactiveGraphClassNames: list of class names
# interactiveGraphClasses: list of series of class-index indexed UserIds
# title: str
# rainbow: whether to use rainbow colors
# figsize: for matplotlib
def complexClassifyAndPlot(
gfdf,
Y,
classNames = [],
classes = [],
title = '',
rainbow = False,
figsize = (12,8),
questionIndex=1,
):
%matplotlib nbagg
global questionInteractive, alphaInteractive
global interactiveGraphPlots,\
interactiveFigure,\
interactiveGraphAx,\
interactiveGraphClassNames,\
interactiveGraphClasses,\
interactivey
interactiveGraphPlots = np.nan
interactiveGraphClassNames = classNames
interactiveGraphClasses = classes
fullyInteractive = (len(interactiveGraphClassNames) == 0 or len(interactiveGraphClasses) == 0)
if fullyInteractive:
# questions to avoid:
# 1.52
#questionRange = chain(range(1,3), range(4,40), range(42,44))
# 1.52.2
#questionRange = chain(range(1,6), range(7,42), range(44,45))
#forbiddenQuestions = [QTimestamp, QAge, QRemarks, QUserId]
forbiddenQuestions = [QRemarks, QUserId]
def updateQuestionIndex(question=questionIndex):
#print("updateQuestionIndex(" + str(question) + ")")
global interactiveTitle
global interactiveGraphClassNames, interactiveGraphClasses
chosenQuestion = gfdf.columns[question]
while chosenQuestion in forbiddenQuestions:
question = (question + 1) % len(gfdf.columns)
chosenQuestion = gfdf.columns[question]
interactiveTitle = "Q" + str(question) + ": '" + chosenQuestion + "'"
classPreprocess(gfdf, chosenQuestion)
commonClassProcess(gfdf)
if pd.notnull(interactiveGraphPlots):
plotClasses(rainbow)
plt.show()
questionInteractive = IntSlider(value=questionIndex, min=0, max=len(gfdf.columns)-1, step=1)
interactive(updateQuestionIndex, question=questionInteractive)
display(questionInteractive)
with plt.style.context('seaborn-whitegrid'):
defaultAlphaValue = 0.5
# update function to control the alpha channel
def updateAlpha(alpha = defaultAlphaValue):
global interactiveColors
global interactiveGraphPlots
if pd.notnull(interactiveGraphPlots):
interactiveGraphPlots.set_alpha(alpha)
fullColors = interactivey.copy()
proxyArtists = []
for lab, col in zip(interactiveGraphClassNames,interactiveColors):
proxyArtists.append(plt.scatter([], [], label=lab, c=col, alpha=alpha, s=150))
# source https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# Put a legend to the right of the current axis
lgd = interactiveGraphAx.legend(proxyArtists, interactiveGraphClassNames, loc='center left', bbox_to_anchor=(1, 0.5))
#interactiveFigure.savefig('samplefigure', bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
# creates the slider to control the alpha channel
alphaInteractive = FloatSlider(value=defaultAlphaValue, min=0.0, max=1.0, step=0.01)
interactive(updateAlpha, alpha=alphaInteractive);
display(alphaInteractive)
interactiveFigure = plt.figure(figsize=figsize)
#interactiveGraphAx = plt.subplot(121)
interactiveGraphAx = plt.subplot(111)
if fullyInteractive:
updateQuestionIndex(questionIndex)
else:
commonClassProcess(gfdf)
plotClasses(rainbow)
#gform.loc[:, ['Name: Plasmid', 'Function: TER', 'Name: PR', 'Function - game: CDS', 'Name: TER', 'Function - biology: CDS', 'Name: RBS', 'Example: CDS', 'Name: CDS', 'Function: PR', 'Function: RBS', 'Function: Plasmid', 'Name: Operator XXX']]
#complexClassifyAndPlot(gfdf, Y, rainbow=True, figsize = (15, 5), questionIndex=12);
"""
Explanation: complexClassifyAndPlot
End of explanation
"""
## pb = 1 color with 4 subvalues not accepted to initialize n-indexed series
#fullColors = interactivey.copy()
#for lab, col in zip(interactiveGraphClassNames,interactiveColors):
# fullColors[interactivey == lab] = pd.Series(data = [col] * len(interactivey[interactivey == lab]), index = interactivey[interactivey == lab].index)
complexClassifyAndPlot(
gfdf,
Y,
classNames = [],
classes = [],
title = '',
rainbow = True,
figsize = (12,8),
questionIndex=1,
)
if saveFiles:
#if True:
import time
for qIndex in range(0, len(gfdf.columns)):
complexClassifyAndPlot(gfdf, Y, rainbow=True, figsize = (15, 5), questionIndex=qIndex);
time.sleep(0.3)
%matplotlib nbagg
time.sleep(0.1)
questionTitle = "Q" + str(qIndex) + "_'" + gfdf.columns[qIndex].replace(" ", "_").replace(":", "") + "'"
try:
interactiveFigure.savefig(questionTitle)
except:
print("- savefig failed for " + questionTitle)
"""
Explanation: tests
End of explanation
"""
sortedScore
if pretestPosttestConcatenation:
# scorePretest
# scorePosttest
# scoreTotal
score = scorePosttest - scorePretest
pcaComponent1 = interactiveY[:, 0].copy()
#pcaComponent1 = (max(pcaComponent1) - pcaComponent1)
#pcaComponent1 = pcaComponent1 * (max(score) / max(pcaComponent1))
#pcaComponent1.sort()
sortedScore = score.copy()
#sortedScore.sort()
fig = plt.figure(figsize=(12,8))
ax = plt.subplot(121)
pcaScat = plt.scatter(range(0,len(pcaComponent1)),pcaComponent1, c= 'blue', alpha=0.7)
scoreScat = plt.scatter(range(0,len(sortedScore)),sortedScore, c='red', alpha=0.7)
#ax.legend([pcaScat, scoreScat], ['pca', 'score'], loc='center left', bbox_to_anchor=(1, 0.5))
ax.legend([pcaScat, scoreScat], ['pca', 'score'], loc='center left')
plt.title("Comparison of score with the value of PCA component 1")
plt.plot()
ax2 = plt.subplot(122)
scorePcaScat = plt.scatter(pcaComponent1, sortedScore, c= 'green', alpha=0.7)
plt.title("Score vs value of PCA component 1")
plt.xlabel("PCA component 1")
plt.ylabel("score")
plt.plot()
"""
Explanation: Comparison of score with the value of PCA component 1
End of explanation
"""
if False:
answered = binarized[binarized[QBBExampleCDS] == 1]
indices = answered.index
surveys = gfdf.iloc[indices].index
classifyAndPlot(gfdf.index, Y, ['guessed', 'did not'], [surveys]);
if False:
classifyAndPlot(gfdf.index, Y, ['biologist', 'non-biologist'], [getSurveysOfBiologists(gfdf, False).index], title = 'biologists and non-biologists');
if False:
classifyAndPlot(gfdf.index, Y, ['gamer', 'non-gamer'], [getSurveysOfGamers(gfdf, True).index], title = 'gamers and non-gamers');
if False:
classNames = []
classes = []
for answer in gfdf[QInterestBiology].value_counts().index:
classNames.append(answer)
classes.append(gfdf[gfdf[QInterestBiology] == answer].index)
classNames.append('other')
classifyAndPlot(gfdf.index, Y, classNames, classes, rainbow = True, title = 'interest in biology');
"""
Explanation:
End of explanation
"""
#np.plot(score)
if False:
np.unique(score),classNames
if True:
classNames = []
classes = []
for thisScore in np.unique(score):
classNames.append(str(thisScore))
index = np.where(score == thisScore)[0]
classes.append(index)
thesePlots = classifyAndPlot(gfdf.index, Y, classNames, classes, rainbow = True, title = 'score')
if False:
classNames = []
classes = []
question = QAge
pretests = gfdf[gfdf[QTemporality] == answerTemporalities[0]]
for answer in np.sort(pretests[question].unique()):
classNames.append(str(answer))
classes.append(pretests[pretests[question] == answer].index)
classifyAndPlot(gfdf.index, Y, classNames, classes, rainbow = True, title = 'age');
"""
Explanation: TODO: find simple way to plot scores
End of explanation
"""
eig_vals
eig_vecs[0]
maxComponentIndex = np.argmax(abs(eig_vecs[0]))
binarized.columns[maxComponentIndex]
sum(eig_vecs[0]*eig_vecs[0])
eig_vecs[0]
sortedIndices = []
descendingWeights = np.sort(abs(eig_vecs[0]))[::-1]
for sortedComponent in descendingWeights:
sortedIndices.append(np.where(abs(eig_vecs[0]) == sortedComponent)[0][0])
sortedQuestions0 = pd.DataFrame(index = descendingWeights, data = binarized.columns[sortedIndices])
sortedQuestions0
def accessFirst(a):
return a[0]
sortedQuestionsLastIndex = 10
array1 = np.arange(sortedQuestionsLastIndex+1.)/(sortedQuestionsLastIndex + 1.)
import matplotlib.cm as cm
sortedQuestionsLastIndex+1,\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Accent(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Dark2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Paired(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Pastel1(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Pastel2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set1(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set2(array1)))),\
len(np.unique(np.apply_along_axis(accessFirst, 1, cm.Set3(array1))))
from matplotlib import cm
def displayQuestionsContributions(\
sortedQuestions,\
title = "Contributions of questions to component",\
sortedQuestionsLastIndex = 10\
):
colors=cm.Set3(np.arange(sortedQuestionsLastIndex+1.)/(sortedQuestionsLastIndex + 1.))
sortedQuestionsLabelsArray = np.append(sortedQuestions.values.flatten()[:sortedQuestionsLastIndex], 'others')
sortedQuestionsValuesArray = np.append(sortedQuestions.index[:sortedQuestionsLastIndex], sum(sortedQuestions.index[sortedQuestionsLastIndex:]))
fig1, ax1 = plt.subplots()
ax1.pie(sortedQuestionsValuesArray, labels=sortedQuestionsLabelsArray, autopct='%1.1f%%', startangle=100, colors = colors)
ax1.axis('equal')
# cf https://matplotlib.org/users/customizing.html
plt.rcParams['patch.linewidth'] = 0
plt.rcParams['text.color'] = '#2b2b2b'
plt.title(title)
plt.tight_layout()
plt.show()
displayQuestionsContributions(sortedQuestions0, sortedQuestionsLastIndex = 10, title = 'Contributions of questions to component 1')
sum(sortedQuestions0.index**2)
sortedIndices = []
descendingWeights = np.sort(abs(eig_vecs[1]))[::-1]
for sortedComponent in descendingWeights:
sortedIndices.append(np.where(abs(eig_vecs[1]) == sortedComponent)[0][0])
sortedQuestions1 = pd.DataFrame(index = descendingWeights, data = binarized.columns[sortedIndices])
sortedQuestions1
displayQuestionsContributions(sortedQuestions1, sortedQuestionsLastIndex = 10, title = 'Contributions of questions to component 2')
sum(sortedQuestions1.index**2)
"""
Explanation: Study of eigenvectors
End of explanation
"""
|
SealedSaint/CarND-Term1-P1
|
P1.ipynb
|
mit
|
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
"""
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called './images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="./examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="./examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
"""
#reading in an image
image = mpimg.imread('images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
"""
Explanation: Read in an Image
End of explanation
"""
zeros = np.zeros(shape=(10, 10))
nums = np.arange(0, 10)
zeros[1:4, :] = nums
print(zeros)
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def create_vertices(img):
"""
'img' is a canny transform edge image
Adjust our vertices here to be a trapezoid
The top of the trapezoid should be where we first detect edges from the center looking bottom-up
Sides of the trapezoid should extend to edges (plus buffer)
"""
ysize, xsize = img.shape[0], img.shape[1]
bottom_ignore = ysize//6
ybuffer = ysize//30
xbuffer_top = xsize//50
xbuffer_bot = xbuffer_top*2
side_search_buffer = ybuffer//2
# Let's find the last white pixel's index in the center column.
# This will give us an idea of where our region should be
# We ignore a certain portion of the bottom of the screen so we get a better region top
# - This is partly because car hoods can obsure the region
center_white = img[:ysize-bottom_ignore, xsize//2] == 255
indices = np.arange(0, center_white.shape[0])
indices[~center_white] = 0
last_white_ind = np.amax(indices)
# If our first white pixel is too close to the bottom of the screen, default back to the screen center
# region_top_y = (last_white_ind if last_white_ind < 4*ysize//5 else ysize//2) + ybuffer
region_top_y = min(last_white_ind + ybuffer, ysize-1)
# Now we need to find the x-indices for the top segment of our region
# To do this we will look left and right from our center point until we find white
y_slice_top = max(region_top_y - side_search_buffer, 0)
y_slice_bot = min(region_top_y + side_search_buffer, ysize-1)
region_top_white = np.copy(img[y_slice_top:y_slice_bot, :]) == 255
indices = np.zeros_like(region_top_white, dtype='int32')
indices[:, :] = np.arange(0, xsize)
indices[~region_top_white] = 0
# Separate into right and left sides we can grab our indices easier:
# Right side min and left side max
right_side = np.copy(indices)
right_side[right_side < xsize//2] = xsize*2 # Large number because we will take min
left_side = np.copy(indices)
left_side[left_side > xsize//2] = 0
region_top_x_left = max(np.amax(left_side) - xbuffer_top, 0)
region_top_x_right = min(np.amin(right_side) + xbuffer_top, xsize)
# Now we do the same thing for the bottom
# Look left and right from the center until we hit white
indices = np.arange(0, xsize)
region_bot_white = img[ysize-bottom_ignore, :] == 255
indices[~region_bot_white] = 0
# Separate into right and left sides we can grab our indices easier:
# Right side min and left side max
right_side = np.copy(indices)
right_side[right_side < xsize//2] = xsize*2 # Large number because we will take min
left_side = np.copy(indices)
left_side[left_side > xsize//2] = 0
region_bot_x_left = max(np.amax(left_side) - xbuffer_bot, 0)
region_bot_x_right = min(np.amin(right_side) + xbuffer_bot, xsize)
# Because of our bottom_ignore, we need to extrapolate these bottom x coords to bot of screen
left_slope = ((ysize-bottom_ignore) - region_top_y)/(region_bot_x_left - region_top_x_left)
right_slope = ((ysize-bottom_ignore) - region_top_y)/(region_bot_x_right - region_top_x_right)
# Let's check these slopes so we don't divide by 0 or inf
if abs(left_slope) < .001:
left_slope = .001 if left_slope > 0 else -.001
if abs(right_slope) < .001:
right_slope = .001 if right_slope > 0 else -.001
if abs(left_slope) > 1000:
left_slope = 1000 if left_slope > 0 else -1000
if abs(right_slope) > 1000:
right_slope = 1000 if right_slope > 0 else -1000
# b=y-mx
left_b = region_top_y - left_slope*region_top_x_left
right_b = region_top_y - right_slope*region_top_x_right
# x=(y-b)/m
region_bot_x_left = max(int((ysize-1-left_b)/left_slope), 0)
region_bot_x_right = min(int((ysize-1-right_b)/right_slope), xsize-1)
verts = [
(region_bot_x_left, ysize),
(region_top_x_left, region_top_y),
(region_top_x_right, region_top_y),
(region_bot_x_right, ysize)
]
return np.array([verts], dtype=np.int32)
def region_of_interest(img):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
verts = create_vertices(img)
cv2.fillPoly(mask, verts, ignore_mask_color)
#Let's return an image of the regioned area in lines
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
cv2.polylines(line_img, verts, isClosed=True, color=[0, 255, 0], thickness=5)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image, line_img
def draw_lines(img, lines, color=[255, 0, 0], thickness=8):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
if lines is None: return lines
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
avg_lines = average_lines(lines, img)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
# draw_lines(line_img, lines)
draw_lines(line_img, avg_lines, color=[138,43,226])
return line_img
def average_lines(lines, img):
'''
img should be a regioned canny output
'''
if lines is None: return lines
positive_slopes = []
positive_xs = []
positive_ys = []
negative_slopes = []
negative_xs = []
negative_ys = []
min_slope = .3
max_slope = 1000
for line in lines:
for x1, y1, x2, y2 in line:
slope = (y2-y1)/(x2-x1)
if abs(slope) < min_slope or abs(slope) > max_slope: continue # Filter our slopes
# We only need one point sample and the slope to determine the line equation
positive_slopes.append(slope) if slope > 0 else negative_slopes.append(slope)
positive_xs.append(x1) if slope > 0 else negative_xs.append(x1)
positive_ys.append(y1) if slope > 0 else negative_ys.append(y1)
# We need to calculate our region_top_y from the canny image so we know where to extend our lines to
ysize, xsize = img.shape[0], img.shape[1]
XX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize))
white = img == 255
YY[~white] = ysize*2 # Large number because we will take the min
region_top_y = np.amin(YY)
new_lines = []
if len(positive_slopes) > 0:
m = np.mean(positive_slopes)
avg_x = np.mean(positive_xs)
avg_y = np.mean(positive_ys)
b = avg_y - m*avg_x
# We have m and b, so with a y we can get x = (y-b)/m
x1 = int((region_top_y - b)/m)
x2 = int((ysize - b)/m)
new_lines.append([(x1, region_top_y, x2, ysize)])
if len(negative_slopes) > 0:
m = np.mean(negative_slopes)
avg_x = np.mean(negative_xs)
avg_y = np.mean(negative_ys)
b = avg_y - m*avg_x
# We have m and b, so with a y we can get x = (y-b)/m
x1 = int((region_top_y - b)/m)
x2 = int((ysize - b)/m)
new_lines.append([(x1, region_top_y, x2, ysize)])
return np.array(new_lines)
def weighted_img(initial_img, img, a=0.8, b=1., l=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, a, img, b, l)
def save_img(img, name):
mpimg.imsave('./images/output/{0}'.format(name if '.' in name else '{0}.png'.format(name)), img)
"""
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
image_names = [name for name in os.listdir("./images") if '.' in name]
image_names.sort()
print(image_names)
images = [mpimg.imread('./images/{0}'.format(name)) for name in image_names]
"""
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
def detect_lines(img, debug=False):
ysize, xsize = img.shape[0], img.shape[1]
blur_gray = gaussian_blur(grayscale(img), kernel_size=5)
ht = 150 # First detect gradients above. Then keep between low and high if connected to high
lt = ht//3 # Leave out gradients below
canny_edges = canny(blur_gray, low_threshold=lt, high_threshold=ht)
if debug: save_img(canny_edges, 'canny_edges_{0}'.format(index))
# Our region of interest will be dynamically decided on a per-image basis
regioned_edges, region_lines = region_of_interest(canny_edges)
rho = 2
theta = 3*np.pi/180
min_line_length = xsize//16
max_line_gap = min_line_length//2
threshold = min_line_length//4
lines = hough_lines(regioned_edges, rho, theta, threshold, min_line_length, max_line_gap)
# Let's combine the hough-lines with the canny_edges to see how we did
overlayed_lines = weighted_img(img, lines)
# overlayed_lines = weighted_img(weighted_img(img, region_lines, a=1), lines)
if debug: save_img(overlayed_lines, 'overlayed_lines_{0}'.format(index))
return overlayed_lines
for index, img in enumerate(images):
print('Image:', index)
# debug = (True if index == 0 else False)
debug = True
detect_lines(img, debug)
"""
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test images. Make copies into the test images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
return detect_lines(image)
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
white_output = './videos/output/white.mp4'
clip1 = VideoFileClip("./videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = './videos/output/yellow.mp4'
clip2 = VideoFileClip('./videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = './videos/output/challenge.mp4'
clip2 = VideoFileClip('./videos/challengeShadowCurve.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
|
jan-rybizki/Chempy
|
tutorials/7-Acessing Chempy paper 1 abundance tracks.ipynb
|
mit
|
%pylab inline
"""
Explanation: This notebook will help you to access the chemical abundance tracks that you see in paper 1 figure 17 but for all elements. You can use it to compare it to your own model/data.
End of explanation
"""
# Single zone models for sun, arcturus and cas with default and alternative yield set
sun_def_ab = np.load('data/paper_1_abundance_tracks/single_sun_default_abundances.npy')
sun_def_cube = np.load('data/paper_1_abundance_tracks/single_sun_default_cube.npy')
sun_alt_ab = np.load('data/paper_1_abundance_tracks/single_sun_alternative_abundances.npy')
sun_alt_cube = np.load('data/paper_1_abundance_tracks/single_sun_alternative_cube.npy')
arc_def_ab = np.load('data/paper_1_abundance_tracks/single_arc_default_abundances.npy')
arc_def_cube = np.load('data/paper_1_abundance_tracks/single_arc_default_cube.npy')
arc_alt_ab = np.load('data/paper_1_abundance_tracks/single_arc_alternative_abundances.npy')
arc_alt_cube = np.load('data/paper_1_abundance_tracks/single_arc_alternative_cube.npy')
cas_def_ab = np.load('data/paper_1_abundance_tracks/single_cas_default_abundances.npy')
cas_def_cube = np.load('data/paper_1_abundance_tracks/single_cas_default_cube.npy')
cas_alt_ab = np.load('data/paper_1_abundance_tracks/single_cas_alternative_abundances.npy')
cas_alt_cube = np.load('data/paper_1_abundance_tracks/single_cas_alternative_cube.npy')
# multizone models for sun, arcturus and cas with default and alternative yield set
mult_sun_def_ab = np.load('data/paper_1_abundance_tracks/sun_default_abundances.npy')
mult_sun_def_cube = np.load('data/paper_1_abundance_tracks/sun_default_cube.npy')
mult_sun_alt_ab = np.load('data/paper_1_abundance_tracks/sun_alternative_abundances.npy')
mult_sun_alt_cube = np.load('data/paper_1_abundance_tracks/sun_alternative_cube.npy')
mult_arc_def_ab = np.load('data/paper_1_abundance_tracks/arc_default_abundances.npy')
mult_arc_def_cube = np.load('data/paper_1_abundance_tracks/arc_default_cube.npy')
mult_arc_alt_ab = np.load('data/paper_1_abundance_tracks/arc_alternative_abundances.npy')
mult_arc_alt_cube = np.load('data/paper_1_abundance_tracks/arc_alternative_cube.npy')
mult_cas_def_ab = np.load('data/paper_1_abundance_tracks/cas_default_abundances.npy')
mult_cas_def_cube = np.load('data/paper_1_abundance_tracks/cas_default_cube.npy')
mult_cas_alt_ab = np.load('data/paper_1_abundance_tracks/cas_alternative_abundances.npy')
mult_cas_alt_cube = np.load('data/paper_1_abundance_tracks/cas_alternative_cube.npy')
"""
Explanation: We load the results from the best parameter Chempy run:
End of explanation
"""
print(sun_def_ab.dtype.names)
"""
Explanation: These are the available elements (not all are trustworthy)
End of explanation
"""
print(sun_def_cube['time'])
"""
Explanation: These are the corresponding time-steps in Gyrs, 13.5 being present-day.
End of explanation
"""
plt.plot(sun_def_ab['Fe'][1:],sun_def_ab['Mg'][1:]-sun_def_ab['Fe'][1:], label = 'single sun')
plt.plot(mult_sun_def_ab['Fe'][1:],mult_sun_def_ab['Mg'][1:]-mult_sun_def_ab['Fe'][1:], label = 'multi sun')
plt.plot(arc_def_ab['Fe'][1:],arc_def_ab['Mg'][1:]-arc_def_ab['Fe'][1:], label = 'single arc')
plt.plot(mult_arc_def_ab['Fe'][1:],mult_arc_def_ab['Mg'][1:]-mult_arc_def_ab['Fe'][1:], label = 'multi arc')
plt.plot(cas_def_ab['Fe'][1:],cas_def_ab['Mg'][1:]-cas_def_ab['Fe'][1:], label = 'single cas')
plt.plot(mult_cas_def_ab['Fe'][1:],mult_cas_def_ab['Mg'][1:]-mult_cas_def_ab['Fe'][1:], label = 'multi cas')
plt.plot(sun_alt_ab['Fe'][1:],sun_alt_ab['Mg'][1:]-sun_alt_ab['Fe'][1:],linestyle = '--', label = 'single sun alternative')
plt.plot(mult_sun_alt_ab['Fe'][1:],mult_sun_alt_ab['Mg'][1:]-mult_sun_alt_ab['Fe'][1:],linestyle = '--', label = 'multi sun alternative')
plt.plot(arc_alt_ab['Fe'][1:],arc_alt_ab['Mg'][1:]-arc_alt_ab['Fe'][1:],linestyle = '--', label = 'single arc alternative')
plt.plot(mult_arc_alt_ab['Fe'][1:],mult_arc_alt_ab['Mg'][1:]-mult_arc_alt_ab['Fe'][1:],linestyle = '--', label = 'multi arc alternative')
plt.plot(cas_alt_ab['Fe'][1:],cas_alt_ab['Mg'][1:]-cas_alt_ab['Fe'][1:],linestyle = '--', label = 'single cas alternative')
plt.plot(mult_cas_alt_ab['Fe'][1:],mult_cas_alt_ab['Mg'][1:]-mult_cas_alt_ab['Fe'][1:],linestyle = '--', label = 'multi cas alternative')
plt.xlabel('[Fe/H]')
plt.ylabel('[Mg/Fe]')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.plot(sun_def_cube['time'][1:],sun_def_ab['Mg'][1:]-sun_def_ab['Fe'][1:])
plt.xlabel('time in Gyr')
plt.ylabel('[Mg/Fe]')
"""
Explanation: Here is how you can plot them:
End of explanation
"""
print(sun_def_cube.dtype.names)
"""
Explanation: You can do this for all the available elements.
I would see the results for [Fe/H] < 1 as rough extrapolations.
Beware that you have to weight by the SFR and the age distribution of your tracer stars if you want to compare e.g. to a metallicity distribution function (a minimal weighting sketch follows below). See the section called "A note on chemical evolution tracks and 'by eye' fit" in https://github.com/jan-rybizki/Chempy/blob/master/tutorials/5-Chempy_function.ipynb.
End of explanation
"""
plt.plot(sun_def_cube['time'],sun_def_cube['sfr'], label = "SFR")
plt.plot(sun_def_cube['time'],sun_def_cube['infall'], label = "infall")
plt.plot(sun_def_cube['time'],sun_def_cube['stars'], label = "stars")
plt.plot(sun_def_cube['time'],sun_def_cube['gas'], label = "gas")
plt.plot(sun_def_cube['time'],sun_def_cube['sn1a'], label = "sn1a")
plt.plot(sun_def_cube['time'],sun_def_cube['sn2'], label = "sn2")
plt.plot(sun_def_cube['time'],sun_def_cube['mass_in_remnants'], label = "mass in remnants")
plt.yscale('log')
plt.legend()
"""
Explanation: You can of course also compare to all other values saved in the cube e.g. SN rates or sfr/gas/stellar mass
End of explanation
"""
|
prashantas/MyDataScience
|
DeepNetwork/deeplearningai_AndrewNG/Python+Basics+With+Numpy+v3.ipynb
|
bsd-2-clause
|
### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
"""
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
"""
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+ math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
"""
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
"""
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
"""
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
"""
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
"""
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
"""
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
"""
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
"""
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s =1/(1+ np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
"""
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
End of explanation
"""
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
"""
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
End of explanation
"""
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.shape[0]*image.shape[1]*image.shape[2],1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
"""
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
End of explanation
"""
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
"""
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
"""
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp,axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
"""
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
End of explanation
"""
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
"""
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
"""
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
"""
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
"""
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
#loss = np.sum(np.square(y-yhat)) ## working
loss = np.dot((y-yhat),(y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
"""
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
End of explanation
"""
|
walkon302/CDIPS_Recommender
|
notebooks/_old/Exploring_Original_Dataset.ipynb
|
apache-2.0
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pickle
from IPython.core.debugger import Tracer
import seaborn as sns
%matplotlib inline
"""
Explanation: Setup
End of explanation
"""
import tensorflow as tf
import sklearn
import h5py
import keras
from keras.preprocessing import image
from resnet50 import ResNet50
from imagenet_utils import preprocess_input, decode_predictions
model = ResNet50(weights='imagenet')
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
#model.
img_path = 'img/euro/EUROMODA-U125256-39-5.jpg'
img = image.load_img(img_path, target_size=(224, 224))
img
x = image.img_to_array(img)
x.shape
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x.shape
plt.imshow(x[0,:,:,0])
preds = model.predict(x)
plt.plot(preds.T)
preds.shape
print('Predicted:', decode_predictions(preds))
# print: [[u'n02504458', u'African_elephant']]
"""
Explanation: Running the examples
Example 1
End of explanation
"""
from vgg16 import VGG16
from keras.preprocessing import image
from imagenet_utils import preprocess_input
model = VGG16(weights='imagenet', include_top=False)
img_path = '1360x.jpeg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x.shape
features = model.predict(x)
print features
features.shape
## save out instead
# from keras.utils import plot_model
# plot_model(model, to_file='model.png')
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
"""
Explanation: Example 2
End of explanation
"""
from vgg19 import VGG19
from keras.preprocessing import image
from imagenet_utils import preprocess_input
from keras.models import Model
base_model = VGG19(include_top=False, weights='imagenet')
base_model.input
base_model.get_layer('block5_pool')
# the layers appear to be keras objects.
base_model.get_layer('block5_pool').output
# input and output appear to be tensorflow tensors.
model = Model(input=base_model.input, output=base_model.get_layer('block5_pool').output)
# this Model creates a model based on some input tensor and some output tensor.
# here we've taken the base_model, grabbed its input layer and its 'block5_pool' output,
# and created a new model that spans all the layers in between those two points.
#img_path = '1360x.jpeg'
img_path = 'img/euro/EUROMODA-U125256-39-5.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
img
block4_pool_features = model.predict(x) # now we predict based on those layers
block4_pool_features.shape # same shape as before.
np.shape(block4_pool_features.tolist()[0][0])
import itertools
flattened_list = list(itertools.chain(*block4_pool_features.tolist()[0][0])) # * will unpack
for item in flattened_list: print item
"""
Explanation: Example 3
End of explanation
"""
import numpy as np
from vgg19 import VGG19
from resnet50 import ResNet50
from xception import Xception
from keras.preprocessing import image
from imagenet_utils import preprocess_input
from keras.models import Model
import itertools
def get_middle_layer(img_path):
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
block4_pool_features = model.predict(x)
flattened_list = list(itertools.chain(*block4_pool_features.tolist()[0][0]))
return flattened_list
def dot(K, L):
if len(K) != len(L):
return 0
return sum(i[0] * i[1] for i in zip(K, L))
def similarity(item_1, item_2):
return dot(item_1, item_2) / np.sqrt(dot(item_1, item_1) * dot(item_2, item_2))
import os
import sys
base_model = ResNet50(include_top=False, weights='imagenet')
model = Model(input=base_model.input, output=base_model.get_layer('avg_pool').output)
path = 'img/baiyi'
features = dict()
for filename in os.listdir(path): # loop through all images in the folder
img_path = path + '/' + filename
features[filename] = get_middle_layer(img_path) # get the features from the middle layer
len(features['Baiyixiuzi-B978N340-5.jpg'])
import itertools
similarities = {item: similarity(features[item[0]], features[item[1]]) for item in itertools.product(features,features)}
for key, item in similarities.items():
print key[0] + '|' + key[1] + '|' + str(item)
similarities
"""
Explanation: His Code
Batch 3
python ex3_batch.py $brand
runs this on each brand
End of explanation
"""
import math
import numpy as np
import pandas as pd
import pickle
import sys
sys.path.append('4.personalization/')
import utils.kmeans as kmeans
from collections import Counter
def average(lists):
#Tracer()()
return [np.mean(i) for i in zip(*[l for l in lists])]
def cluster(lists, model):
#Tracer()()
user_cluster = kmeans.predict(np.array([l for l in lists]), model)
user_vec = [0] * model.n_clusters
for i in user_cluster: user_vec[i] += 1
return [elem / float(sum(user_vec)) for elem in user_vec]
user_log = pd.read_pickle('4.personalization/data/viewlist_imagef.pkl')
user_vec = user_log.groupby(['user_id', 'dt'])['features'].apply(lambda x: average(x))
user_log.groupby(['user_id', 'dt'])
user_log[0:100].groupby(['user_id', 'dt'])['features'].apply(lambda x: x)
f = lambda y: y**2
f(7)
#user_log[0:100].groupby(['user_id', 'dt'])['goods_no'].apply(lambda x: x)
model_path='4.personalization/utils/model.pkl'
model = kmeans.load_model(model_path)
type(model)
model.cluster_centers_.shape
user_vec = user_log.groupby(['user_id', 'dt'])['features'].apply(lambda x: cluster(x, model))
len(user_vec[0])
"""
Explanation: Creating User Vecs
End of explanation
"""
buylist = pd.read_pickle('4.personalization/data/buylist_imagef2.pkl')
buylist.head(20)
candidates = pd.read_pickle('4.personalization/data/candidates.pkl')
candidates.head()
"""
Explanation: Exploring the Data
requires the virt-env to be set up first
End of explanation
"""
candidates_cluster = pd.read_pickle('4.personalization/data/candidates_cluster.pkl')
candidates_cluster.head()
user_vec_average = pd.read_pickle('4.personalization/data/user_vec_average.pkl')
user_vec_average.head()
"""
Explanation: candidates are items
End of explanation
"""
user_vec_average_no_pv = pd.read_pickle('4.personalization/data/user_vec_average_no_pv.pkl')
user_vec_average_no_pv.head()
user_vec_cluster = pd.read_pickle('4.personalization/data/user_vec_cluster.pkl')
user_vec_cluster.head()
"""
Explanation: These feature vectors are built from the images each user previously viewed (grouped by user and date); each vector is the element-wise average of the features of those viewed images.
End of explanation
"""
len(user_vec_cluster['features'][0])
viewlist = pd.read_pickle('4.personalization/data/viewlist_imagef.pkl')
viewlist.head(100)
viewlist_exp = pd.read_pickle('4.personalization/data/viewlist_imagef_exp.pkl')
viewlist_exp.head(100)
"""
Explanation: These features also come from the images each user previously viewed, but here each viewed image is assigned to a k-means cluster and the user vector is the resulting distribution over clusters. In a later algorithm it's easier to compare similarities of items within the same cluster (rather than across every pairwise image).
End of explanation
"""
print('rows in buylist: {0}').format(str(len(buylist)))
print('rows in viewlist: {0}').format(str(len(viewlist)))
print('rows in viewlist expanded: {0}').format(str(len(viewlist_exp)))
print('rows in candidates: {0}').format(str(len(candidates)))
print('rows in candidates cluster: {0}').format(str(len(candidates_cluster)))
print('rows in user_vec_average: {0}').format(str(len(user_vec_average)))
print('number of users in buylist: {0}').format(str(len(set(buylist.user_id.unique()))))
print('number of users in viewlist: {0}').format(str(len(set(viewlist.user_id.unique()))))
print('number of users in both sets: {0}').format(str(len(set(buylist.user_id.unique()).intersection(set(viewlist.user_id.unique())))))
print('number of goods in buylist: {0}').format(str(len(set(buylist.goods_no.unique()))))
print('number of goods in viewlist: {0}').format(str(len(set(viewlist.goods_no.unique()))))
print('number of goods in both sets: {0}').format(str(len(set(buylist.goods_no.unique()).intersection(set(viewlist.goods_no.unique())))))
"""
Explanation: Expanded view list does something with the 'expand browses' and uses formar14_pv.
For instance, 2597.. had formar14 = 4, so the expanded list repeated that row 4 times.
Descriptive Statistics
End of explanation
"""
viewlist.features[3000]
"""
Explanation: Same number of goods in candidates list as in the buylist
Maybe these are the ones used for recommendations?
End of explanation
"""
for uid in viewlist_exp.user_id.unique():
print('user: {0}').format(uid)
indices = viewlist_exp.loc[viewlist_exp.user_id==uid].index.tolist()
print('places in database: {0}').format(indices)
print('')
#viewlist.dt
"""
Explanation: He has features for all 9863 goods. Does he have the images for those?
Sessions?
Also where are the person's locations in the database?
- are these individual sessions for the same user?
End of explanation
"""
uid = 18318014
indices = viewlist_exp.loc[viewlist_exp.user_id==uid].index.tolist()
indexlast = -1
print('single user: {0}').format(uid)
for index in indices:
# find product
if index-indexlast>1:
print('')
#print('new session')
print('row {0}, good number {1} date {2}').format(index,viewlist_exp.loc[index,'goods_no'],viewlist_exp.loc[index,'dt'])
indexlast = index  # plain assignment; the indices are plain ints, which have no .copy()
"""
Explanation: What are the item views for each person?
End of explanation
"""
dates_per_user = np.array([])
for uid in viewlist_exp.user_id.unique():
dates_per_user = np.append(dates_per_user,len(viewlist_exp.loc[viewlist_exp.user_id==uid,'dt'].unique()))
plt.hist(dates_per_user)
sns.despine()
plt.xlabel('number of dates per user')
dates_per_user
viewlist_exp.dt.unique()
"""
Explanation: Dates
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.20/_downloads/ad79868fcd6af353ce922b8a3a2fc362/plot_30_info.ipynb
|
bsd-3-clause
|
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
"""
Explanation: The Info data structure
This tutorial describes the :class:mne.Info data structure, which keeps track
of various recording details, and is attached to :class:~mne.io.Raw,
:class:~mne.Epochs, and :class:~mne.Evoked objects.
:depth: 2
We'll begin by loading the Python modules we need, and loading the same
example data <sample-dataset> we used in the introductory tutorial
<tut-overview>:
End of explanation
"""
print(raw.info)
"""
Explanation: As seen in the introductory tutorial <tut-overview>, when a
:class:~mne.io.Raw object is loaded, an :class:~mne.Info object is
created automatically, and stored in the raw.info attribute:
End of explanation
"""
info = mne.io.read_info(sample_data_raw_file)
print(info)
"""
Explanation: However, it is not strictly necessary to load the :class:~mne.io.Raw object
in order to view or edit the :class:~mne.Info object; you can extract all
the relevant information into a stand-alone :class:~mne.Info object using
:func:mne.io.read_info:
End of explanation
"""
print(info.keys())
print() # insert a blank line
print(info['ch_names'])
"""
Explanation: As you can see, the :class:~mne.Info object keeps track of a lot of
information about:
the recording system (gantry angle, HPI details, sensor digitizations,
channel names, ...)
the experiment (project name and ID, subject information, recording date,
experimenter name or ID, ...)
the data (sampling frequency, applied filter frequencies, bad channels,
projectors, ...)
The complete list of fields is given in :class:the API documentation
<mne.Info>.
Querying the Info object
The fields in a :class:~mne.Info object act like Python :class:dictionary
<dict> keys, using square brackets and strings to access the contents of a
field:
End of explanation
"""
print(info['chs'][0].keys())
"""
Explanation: Most of the fields contain :class:int, :class:float, or :class:list
data, but the chs field bears special mention: it contains a list of
dictionaries (one :class:dict per channel) containing everything there is
to know about a channel other than the data it recorded. Normally it is not
necessary to dig into the details of the chs field — various MNE-Python
functions can extract the information more cleanly than iterating over the
list of dicts yourself — but it can be helpful to know what is in there. Here
we show the keys for the first channel's :class:dict:
End of explanation
"""
print(mne.pick_channels(info['ch_names'], include=['MEG 0312', 'EEG 005']))
print(mne.pick_channels(info['ch_names'], include=[],
exclude=['MEG 0312', 'EEG 005']))
"""
Explanation: Obtaining subsets of channels
It is often useful to convert between channel names and the integer indices
identifying rows of the data array where those channels' measurements are
stored. The :class:~mne.Info object is useful for this task; two
convenience functions that rely on the :class:mne.Info object for picking
channels are :func:mne.pick_channels and :func:mne.pick_types.
:func:~mne.pick_channels minimally takes a list of all channel names and a
list of channel names to include; it is also possible to provide an empty
list to include and specify which channels to exclude instead:
End of explanation
"""
print(mne.pick_types(info, meg=False, eeg=True, exclude=[]))
"""
Explanation: :func:~mne.pick_types works differently, since channel type cannot always
be reliably determined from channel name alone. Consequently,
:func:~mne.pick_types needs an :class:~mne.Info object instead of just a
list of channel names, and has boolean keyword arguments for each channel
type. Default behavior is to pick only MEG channels (and MEG reference
channels if present) and exclude any channels already marked as "bad" in the
bads field of the :class:~mne.Info object. Therefore, to get all and
only the EEG channel indices (including the "bad" EEG channels) we must
pass meg=False and exclude=[]:
End of explanation
"""
print(mne.pick_channels_regexp(info['ch_names'], '^E.G'))
"""
Explanation: Note that the meg and fnirs parameters of :func:~mne.pick_types
accept strings as well as boolean values, to allow selecting only
magnetometer or gradiometer channels (via meg='mag' or meg='grad') or
to pick only oxyhemoglobin or deoxyhemoglobin channels (via fnirs='hbo'
or fnirs='hbr', respectively).
A third way to pick channels from an :class:~mne.Info object is to apply
regular expression_ matching to the channel names using
:func:mne.pick_channels_regexp. Here the ^ represents the beginning of
the string and . character matches any single character, so both EEG and
EOG channels will be selected:
End of explanation
"""
print(mne.channel_type(info, 25))
"""
Explanation: :func:~mne.pick_channels_regexp can be especially useful for channels named
according to the 10-20 <ten-twenty_>_ system (e.g., to select all channels
ending in "z" to get the midline, or all channels beginning with "O" to get
the occipital channels). Note that :func:~mne.pick_channels_regexp uses the
Python standard module :mod:re to perform regular expression matching; see
the documentation of the :mod:re module for implementation details.
<div class="alert alert-danger"><h4>Warning</h4><p>Both :func:`~mne.pick_channels` and :func:`~mne.pick_channels_regexp`
operate on lists of channel names, so they are unaware of which channels
(if any) have been marked as "bad" in ``info['bads']``. Use caution to
avoid accidentally selecting bad channels.</p></div>
Obtaining channel type information
Sometimes it can be useful to know channel type based on its index in the
data array. For this case, use :func:mne.channel_type, which takes
an :class:~mne.Info object and a single integer channel index:
End of explanation
"""
picks = (25, 76, 77, 319)
print([mne.channel_type(info, x) for x in picks])
print(raw.get_channel_types(picks=picks))
"""
Explanation: To obtain several channel types at once, you could embed
:func:~mne.channel_type in a :term:list comprehension, or use the
:meth:~mne.io.Raw.get_channel_types method of a :class:~mne.io.Raw,
:class:~mne.Epochs, or :class:~mne.Evoked instance:
End of explanation
"""
ch_idx_by_type = mne.channel_indices_by_type(info)
print(ch_idx_by_type.keys())
print(ch_idx_by_type['eog'])
"""
Explanation: Alternatively, you can get the indices of all channels of all channel types
present in the data, using :func:~mne.channel_indices_by_type,
which returns a :class:dict with channel types as keys, and lists of
channel indices as values:
End of explanation
"""
print(info['nchan'])
eeg_indices = mne.pick_types(info, meg=False, eeg=True)
print(mne.pick_info(info, eeg_indices)['nchan'])
"""
Explanation: Dropping channels from an Info object
If you want to modify an :class:~mne.Info object by eliminating some of the
channels in it, you can use the :func:mne.pick_info function to pick the
channels you want to keep and omit the rest:
End of explanation
"""
|
tensorflow/cloud
|
src/python/tensorflow_cloud/core/tests/examples/dogs_classification.ipynb
|
apache-2.0
|
!pip install tensorflow-cloud
import datetime
import os
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_cloud as tfc
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model
"""
Explanation: TensorFlow Cloud - Putting it all together
In this example, we will use all of the features outlined in the Keras cloud guide to train a state-of-the-art model to classify dog breeds using feature extraction. Let's begin by installing TensorFlow Cloud and importing a few important packages.
Setup
End of explanation
"""
if not tfc.remote():
from google.colab import files
key_upload = files.upload()
key_path = list(key_upload.keys())[0]
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = key_path
os.system(f"gcloud auth activate-service-account --key-file {key_path}")
GCP_BUCKET = "[your-bucket-name]" #@param {type:"string"}
"""
Explanation: Cloud Configuration
In order to run TensorFlow Cloud from a Colab notebook, we'll need to upload our authentication key and specify our Cloud storage bucket for image building and publishing.
End of explanation
"""
(ds_train, ds_test), metadata = tfds.load(
"stanford_dogs",
split=["train", "test"],
shuffle_files=True,
with_info=True,
as_supervised=True,
)
NUM_CLASSES = metadata.features["label"].num_classes
"""
Explanation: Model Creation
Dataset preprocessing
We'll be loading our training data from TensorFlow Datasets:
End of explanation
"""
print("Number of training samples: %d" % tf.data.experimental.cardinality(ds_train))
print("Number of test samples: %d" % tf.data.experimental.cardinality(ds_test))
print("Number of classes: %d" % NUM_CLASSES)
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
"""
Explanation: Let's visualize this dataset:
End of explanation
"""
IMG_SIZE = 224
BATCH_SIZE = 64
BUFFER_SIZE = 2
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
def input_preprocess(image, label):
image = tf.keras.applications.resnet50.preprocess_input(image)
return image, label
ds_train = ds_train.map(
input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
ds_train = ds_train.batch(batch_size=BATCH_SIZE, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=BATCH_SIZE, drop_remainder=True)
"""
Explanation: Here we will resize and rescale our images to fit into our model's input, as well as create batches.
End of explanation
"""
inputs = tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
base_model = tf.keras.applications.ResNet50(
weights="imagenet", include_top=False, input_tensor=inputs
)
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES)(x)
model = tf.keras.Model(inputs, outputs)
base_model.trainable = False
"""
Explanation: Model Architecture
We're using ResNet50 pretrained on ImageNet, from the Keras Applications module.
End of explanation
"""
MODEL_PATH = "resnet-dogs"
checkpoint_path = os.path.join("gs://", GCP_BUCKET, MODEL_PATH, "save_at_{epoch}")
tensorboard_path = os.path.join(
"gs://", GCP_BUCKET, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)
callbacks = [
# TensorBoard will store logs for each epoch and graph performance for us.
keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),
# ModelCheckpoint will save models after each epoch for retrieval later.
keras.callbacks.ModelCheckpoint(checkpoint_path),
# EarlyStopping will terminate training when val_loss ceases to improve.
keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
]
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
"""
Explanation: Callbacks using Cloud Storage
End of explanation
"""
if tfc.remote():
epochs = 500
train_data = ds_train
test_data = ds_test
else:
epochs = 1
train_data = ds_train.take(5)
test_data = ds_test.take(5)
callbacks = None
model.fit(
train_data, epochs=epochs, callbacks=callbacks, validation_data=test_data, verbose=2
)
if tfc.remote():
SAVE_PATH = os.path.join("gs://", GCP_BUCKET, MODEL_PATH)
model.save(SAVE_PATH)
"""
Explanation: Here, we're using the tfc.remote() flag to designate a smaller number of epochs than intended for the full training job when running locally. This enables easy debugging on Colab.
End of explanation
"""
requirements = ["tensorflow-datasets", "matplotlib"]
f = open("requirements.txt", 'w')
f.write('\n'.join(requirements))
f.close()
"""
Explanation: Our model requires two additional libraries. We'll create a requirements.txt which specifies those libraries:
End of explanation
"""
job_labels = {"job":"resnet-dogs"}
"""
Explanation: Let's add a job label so we can document our job logs later:
End of explanation
"""
tfc.run(
requirements_txt="requirements.txt",
distribution_strategy="auto",
chief_config=tfc.MachineConfig(
cpu_cores=8,
memory=30,
accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,
accelerator_count=2,
),
docker_config=tfc.DockerConfig(
image_build_bucket=GCP_BUCKET,
),
job_labels=job_labels,
stream_logs=True,
)
"""
Explanation: Train on Cloud
All that's left to do is run our model on Cloud. To recap, our run() call enables:
- A model that will be trained and stored on Cloud, including checkpoints
- Tensorboard callback logs that will be accessible through tensorboard.dev
- Specific python library requirements that will be fulfilled
- Customizable job labels for log documentation
- Real-time streaming logs printed in Colab
- Deeply customizable machine configuration (ours will use two Tesla T4s)
- An automatic resolution of distribution strategy for this configuration
End of explanation
"""
!tensorboard dev upload --logdir $tensorboard_path --name "ResNet Dogs"
if tfc.remote():
model = tf.keras.models.load_model(SAVE_PATH)
model.evaluate(test_data)
"""
Explanation: Evaluate your model
We'll use the cloud storage directories we saved for callbacks in order to load tensorboard and retrieve the saved model. Tensorboard logs can be used to monitor training performance in real-time
End of explanation
"""
|
geoneill12/phys202-2015-work
|
assignments/assignment04/MatplotlibExercises.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
"""
a = np.random.randn(2,100)
x = a[0,:]
y = a[1,:]
plt.figure(figsize=(5,5))
plt.scatter(x, y, color='green', alpha = 0.9)
plt.xlabel('Random Data')
plt.ylabel('y(Random Data)')
plt.title('Scatter Plot of Random Data')
"""
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
"""
b = np.random.randn(2,100)
x = b[0,:]
y = b[1,:]
plt.hist(x, bins = 10, color = 'orange', alpha = 0.9)
plt.xlabel('This is a histogram')
plt.ylabel('This is still a histogram')
plt.title('Histogram of Random Data')
"""
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation
"""
|
ozorich/phys202-2015-work
|
assignments/assignment02/ProjectEuler59.ipynb
|
mit
|
assert 65 ^ 42 == 107
assert 107 ^ 42 == 65
assert ord('a') == 97
assert chr(97) == 'a'
"""
Explanation: Project Euler: Problem 59
https://projecteuler.net/problem=59
Each character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase A = 65, asterisk (*) = 42, and lowercase k = 107.
A modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, 65 XOR 42 = 107, then 107 XOR 42 = 65.
For unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both "halves", it is impossible to decrypt the message.
Unfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.
Your task has been made easy, as the encryption key consists of three lower case characters. Using cipher.txt (in this directory), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text.
The following cell shows examples of how to perform XOR in Python and how to go back and forth between characters and integers:
End of explanation
"""
from itertools import *
encrypted=open("cipher.txt","r")
message=encrypted.read().split(",")
encrypted.close()
def key_cycler(cycles):
for n in range(cycles): #repeats the key across the message in blocks of 3; this won't translate the very last character, since the message length is 1201 (1201/3 = 400.33)
u1=key[0]^int(message[3*n])
unencrypted.insert(3*n,u1) #inserts into corresponding spot in unencrypted list
u2=key[1]^int(message[(3*n)+1])
unencrypted.insert((3*n)+1,u2)
u3=key[2]^int(message[(3*n)+2]) #XOR each message interger against its corresponding key value
unencrypted.insert((3*n)+2,u3)
"""
Explanation: Certain functions in the itertools module may be useful for computing permutations:
End of explanation
"""
length=len(message)
print(length)
repeat_times=1201/3 #gives me estimate of number of times to cycle through
print(repeat_times)
for a in range(97,123): #the values of lower case letters
for b in range(97,123):
for c in range(97,123):
key=[a,b,c] #iterates through all key values for 3 lowercase letters
unencrypted=[]
key_cycler(400) #cycles key through message and puts into unencrypted
english=[]
for i in unencrypted:
e=chr(i)
english.append(e) #converts from ACSII to character string
english="".join(english) #converts to whole string
if " the " in english: #checks to see if " the " is in message . Like suggested in the Gitter Chat I am assuming this won't appear if not correct key
print(english) # if it does appear for incorrect keys then I can remove the break and print all instance where
print(key) #" the " appears and then select which key produces a completely legible message
break #prints the key that made instance of message and then breaks the for loop so only first message with
# instances of " the " occuring is printed
"""
Explanation: The code below is what I think should work; however, it takes a long while to run, so I end up interrupting the kernel to avoid bogging down the system. Finding all the candidate key values doesn't take long, so the bottleneck must be an error in my method of cycling each key through the message. I have been trying to figure out how to use the cycle() or repeat() functions to run the key against the encrypted message. I am going to submit now but will still attempt to fix the problem and then resubmit.
End of explanation
"""
key=[97,97,97] #iterates through all key values for 3 lowercase letters
unencrypted=[]
key_cycler(400) #cycles key through message and puts into unencrypted
english=[]
for i in unencrypted:
e=chr(i)
english.append(e) #converts from ASCII codes back to characters
english="".join(english) #converts to whole string
print(english)
"""
Explanation: Test of lower half of code by using a set key
End of explanation
"""
# This cell will be used for grading, leave it at the end of the notebook.
"""
Explanation: However, this still takes too long to finish, so there must be an error in the key_cycler function
End of explanation
"""
|
jpilgram/phys202-2015-work
|
assignments/assignment03/NumpyEx03.ipynb
|
mit
|
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
"""
Explanation: Numpy Exercise 3
Imports
End of explanation
"""
def brownian(maxt, n):
"""Return one realization of a Brownian (Wiener) process with n steps and a max time of t."""
t = np.linspace(0.0,maxt,n)
h = t[1]-t[0]
Z = np.random.normal(0.0,1.0,n-1)
dW = np.sqrt(h)*Z
W = np.zeros(n)
W[1:] = dW.cumsum()
return t, W
"""
Explanation: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
a = brownian(1.0,1000)
t = a[0]
W = a[1]
assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
"""
Explanation: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
plt.plot(t,W)
plt.title('Wiener Process Simulation')
plt.xlabel('Time')
plt.ylabel('W(t)')
assert True # this is for grading
"""
Explanation: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
End of explanation
"""
# YOUR CODE HERE
#raise NotImplementedError()
dW = np.diff(W)
dW.mean(), dW.std()
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
"""
Explanation: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
End of explanation
"""
def geo_brownian(t, W, X0, mu, sigma):
"Return X(t) for geometric brownian motion with drift mu, volatility sigma."""
# YOUR CODE HERE
#raise NotImplementedError()
Result = X0*np.exp((mu - 0.5*sigma**2)*t + (sigma*W))
return Result
assert True # leave this for grading
"""
Explanation: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$
X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))}
$$
Use Numpy ufuncs and no loops in your function.
End of explanation
"""
# YOUR CODE HERE
plt.plot(t , geo_brownian(t, W, 1.0, 0.5, 0.3))
plt.title('Geometric Brownian Motion Simulation')
plt.xlabel('time')
plt.ylabel('X(t)')
#raise NotImplementedError()
assert True # leave this for grading
"""
Explanation: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
End of explanation
"""
|
Kidel/In-Codice-Ratio-OCR-with-CNN
|
Notebooks/03_Mnist-Dataset-Expansion.ipynb
|
apache-2.0
|
import os.path
from IPython.display import Image
from util import Util
u = Util()
import numpy as np
# Explicit random seed for reproducibility
np.random.seed(1337)
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.datasets import mnist
"""
Explanation: MNIST Convolutional Neural Network - Dataset Expansion
The previous experiment gave better results compared to the first one, with a higher accuracy on the test set.
However we still had lower results compared to the top results on MNIST, with error of 0.21-0.23%, while ours has around 0.55%.
From internal tests, increasing the dropout and reducing the number of epochs did not help (as we will show in this notebook using checkpoints we saved), so our last resort is to use image pre-processing to increase the dataset size and make it more generic by applying rotation, scaling and shifts.
After training on distorted images we'll do some epochs on the normal input to have some bias towards undeformed digits.
We've also hypothesized that a size of 150 for the hidden layer may have been chosen for computational reasons, so we're going to increase it.
It's worth mentioning that the authors of Regularization of Neural Networks using DropConnect and Multi-column Deep Neural Networks for Image Classification also do ensemble learning with 35 neural networks to increase the precision, while we currently don't. Their best result with a single column is pretty close to our current result.
Imports
End of explanation
"""
batch_size = 512
nb_classes = 10
nb_epoch = 800
# checkpoint path
checkpoints_filepath_800 = "checkpoints/02_MNIST_relu_weights.best.hdf5"
checkpoints_filepath_56 = "checkpoints/02_MNIST_relu_weights.best_56_epochs.hdf5"
checkpoints_filepath_new = "checkpoints/03_MNIST_weights.best.hdf5"
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters1 = 20
nb_filters2 = 40
# size of pooling area for max pooling
pool_size1 = (2, 2)
pool_size2 = (3, 3)
# convolution kernel size
kernel_size1 = (4, 4)
kernel_size2 = (5, 5)
# dense layer size
dense_layer_size1 = 150
dense_layer_size1_new = 200
# dropout rate
dropout = 0.15
# activation type
activation = 'relu'
"""
Explanation: Definitions
End of explanation
"""
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
u.plot_images(X_train[0:9], y_train[0:9])
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
"""
Explanation: Data load
End of explanation
"""
datagen = ImageDataGenerator(
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.1,
horizontal_flip=False)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)
"""
Explanation: Image preprocessing
As said in the introduction, we're going to apply random transformations: rotations of up to 30 degrees (rotation_range=30), plus vertical and horizontal shifts, zoom and scale with a range of 10% (so about 3 pixels more or 3 pixels less). We avoid flips and rotations large enough to alter the meaning of the symbol.
End of explanation
"""
model_800 = Sequential()
model_56 = Sequential()
model_new = Sequential()
def initialize_network(model, checkpoints_filepath, dropout1=dropout, dropout2=dropout, dense_layer_size1=dense_layer_size1):
model.add(Convolution2D(nb_filters1, kernel_size1[0], kernel_size1[1],
border_mode='valid',
input_shape=input_shape, name='covolution_1_' + str(nb_filters1) + '_filters'))
model.add(Activation(activation, name='activation_1_' + activation))
model.add(MaxPooling2D(pool_size=pool_size1, name='max_pooling_1_' + str(pool_size1) + '_pool_size'))
model.add(Convolution2D(nb_filters2, kernel_size2[0], kernel_size2[1]))
model.add(Activation(activation, name='activation_2_' + activation))
model.add(MaxPooling2D(pool_size=pool_size2, name='max_pooling_1_' + str(pool_size2) + '_pool_size'))
model.add(Dropout(dropout1))
model.add(Flatten())
model.add(Dense(dense_layer_size1, name='fully_connected_1_' + str(dense_layer_size1) + '_neurons'))
model.add(Activation(activation, name='activation_3_' + activation))
model.add(Dropout(dropout2))
model.add(Dense(nb_classes, name='output_' + str(nb_classes) + '_neurons'))
model.add(Activation('softmax', name='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall', 'mean_absolute_error'])
# loading weights from checkpoints
if os.path.exists(checkpoints_filepath):
model.load_weights(checkpoints_filepath)
else:
print('Warning: ' + checkpoints_filepath + ' could not be loaded')
initialize_network(model_800, checkpoints_filepath_800)
initialize_network(model_56, checkpoints_filepath_56)
initialize_network(model_new, checkpoints_filepath_new, dense_layer_size1=dense_layer_size1_new) # passed by keyword so the layer size is not silently taken as dropout1
"""
Explanation: Model definition
End of explanation
"""
# evaluation
print('evaluating 800 epochs model')
score = model_800.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
print('Test error:', (1-score[2])*100, '%')
print('evaluating 56 epochs model')
score = model_56.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
print('Test error:', (1-score[2])*100, '%')
"""
Explanation: Training and evaluation
First the evaluations for the network of the previous notebook, with 800 and 56 epochs of training.
End of explanation
"""
# checkpoint
checkpoint_new = ModelCheckpoint(checkpoints_filepath_new, monitor='val_precision', verbose=1, save_best_only=True, mode='max')
callbacks_list_new = [checkpoint_new]
# fits the model on batches with real-time data augmentation, for nb_epoch-25 (i.e. 775) epochs
history_new = model_new.fit_generator(datagen.flow(X_train, Y_train,
batch_size=batch_size,
# save_to_dir='distorted_data',
# save_format='png'
seed=1337),
samples_per_epoch=len(X_train), nb_epoch=nb_epoch-25, verbose=0,
validation_data=(X_test, Y_test), callbacks=callbacks_list_new)
# ensuring best val_precision reached during training
model_new.load_weights(checkpoints_filepath_new)
"""
Explanation: This part trains the new network (the one using image pre-processing) and then we output the scores.
We are going to use 800 epochs divided between the pre-processed images and the original ones.
End of explanation
"""
# fits the model on the clean (non-augmented) training set, for nb_epoch-775 (i.e. 25) epochs
history_new_cont = model_new.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch-775,
verbose=0, validation_data=(X_test, Y_test), callbacks=callbacks_list_new)
# ensuring best val_precision reached during training
model_new.load_weights(checkpoints_filepath_new)
print('evaluating new model')
score = model_new.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
print('Test error:', (1-score[2])*100, '%')
u.plot_history(history_new)
u.plot_history(history_new, 'precision')
u.plot_history(history_new, metric='loss', loc='upper left')
print("Continuation of training with no pre-processing")
u.plot_history(history_new_cont)
u.plot_history(history_new_cont, 'precision')
u.plot_history(history_new_cont, metric='loss', loc='upper left')
"""
Explanation: After epoch 475 nothing will be saved because precision doesn't increase anymore.
End of explanation
"""
# The predict_classes function outputs the highest probability class
# according to the trained classifier for each input example.
predicted_classes_800 = model_800.predict_classes(X_test)
predicted_classes_56 = model_56.predict_classes(X_test)
predicted_classes_new = model_new.predict_classes(X_test)
# Check which items we got right / wrong
correct_indices_800 = np.nonzero(predicted_classes_800 == y_test)[0]
incorrect_indices_800 = np.nonzero(predicted_classes_800 != y_test)[0]
correct_indices_56 = np.nonzero(predicted_classes_56 == y_test)[0]
incorrect_indices_56 = np.nonzero(predicted_classes_56 != y_test)[0]
correct_indices_new = np.nonzero(predicted_classes_new == y_test)[0]
incorrect_indices_new = np.nonzero(predicted_classes_new != y_test)[0]
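# Illustrative summary (sketch): test error rates computed directly from the
# index arrays above, for the three models compared in this notebook.
for name, wrong in [('800', incorrect_indices_800),
                    ('56', incorrect_indices_56),
                    ('new', incorrect_indices_new)]:
    print('model %s: %d / %d misclassified (%.2f%% error)'
          % (name, len(wrong), len(y_test), 100.0 * len(wrong) / len(y_test)))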
"""
Explanation: Overall the method seems to work, with the precision converging to 99.4% in the first part of the training, and reaching 99.55% in the second part.
The epochs that make the model overfit and lose val_precision are cut away by the callback function that saves the model.
Inspecting the result
Results marked with "800" are relative to the network of notebook 02 after 800 epochs, while the ones marked with "56" are for the same network but after 56 epochs.
Results marked with "new" are relative to the network that uses image pre-processing and has a fully connected layer size of 250.
End of explanation
"""
u.plot_images(X_test[correct_indices_800[:9]], y_test[correct_indices_800[:9]],
predicted_classes_800[correct_indices_800[:9]])
"""
Explanation: Examples of correct predictions (800)
End of explanation
"""
u.plot_images(X_test[incorrect_indices_800[:9]], y_test[incorrect_indices_800[:9]],
predicted_classes_800[incorrect_indices_800[:9]])
"""
Explanation: Examples of incorrect predictions (800)
End of explanation
"""
u.plot_images(X_test[correct_indices_56[:9]], y_test[correct_indices_56[:9]],
predicted_classes_56[correct_indices_56[:9]])
"""
Explanation: Examples of correct predictions (56)
End of explanation
"""
u.plot_images(X_test[incorrect_indices_56[:9]], y_test[incorrect_indices_56[:9]],
predicted_classes_56[incorrect_indices_56[:9]])
"""
Explanation: Examples of incorrect predictions (56)
End of explanation
"""
u.plot_images(X_test[correct_indices_new[:9]], y_test[correct_indices_new[:9]],
predicted_classes_new[correct_indices_new[:9]])
"""
Explanation: Examples of correct predictions (new)
End of explanation
"""
u.plot_images(X_test[incorrect_indices_new[:9]], y_test[incorrect_indices_new[:9]],
predicted_classes_new[incorrect_indices_new[:9]])
"""
Explanation: Examples of incorrect predictions (new)
End of explanation
"""
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes_800)
"""
Explanation: Confusion matrix (800)
End of explanation
"""
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes_56)
"""
Explanation: Confusion matrix (56)
End of explanation
"""
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes_new)
"""
Explanation: Confusion matrix (new)
End of explanation
"""
|
nick-youngblut/SIPSim
|
ipynb/theory/non-equilibrium_calcs.ipynb
|
mit
|
%load_ext rpy2.ipython
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
library(gridExtra)
%%R
GC2MW = function(x){
A = 313.2
T = 304.2
C = 289.2
G = 329.2
GC = G + C
AT = A + T
x = x / 100
x*GC + (1-x)*AT
}
GC2BD = function(GC){
# GC = percentage
BD = GC / 100 * 0.098 + 1.66
return(BD)
}
calc_BD_macro = function(p_i, w, B, r){
    # macroscopic buoyant density at radius r (assumed form: p_i = density at the
    # isoconcentration point, r = radial distance from it)
    return(p_i + (w * r**2) / (2 * B))
}
rpm2w2 = function(rpm){
x = 2 * pi * rpm / 60
return(x**2)
}
calc_R_c = function(r_t, r_b){
x = r_t**2 + r_t * r_b + r_b**2
return(sqrt(x/3))
}
calc_R_p = function(p_p, p_m, B, w, r_c){
# distance of the particle from the axis of rotation (at equilibrium)
x = ((p_p - p_m) * (2 * B / w)) + r_c**2
return(sqrt(x))
}
calc_S = function(l, GC){
# l = dsDNA length (bp)
MW = GC2MW(GC)
S = 0.00834 * (l * MW)**0.479 + 2.8
S = S * 1e-13
return(S)
}
calc_dif_sigma_OLD = function(L, w, r_p, S, t, B, p_p, p_m){
nom = w**2 * r_p**2 * S
denom = B * (p_p - p_m)
x = nom / denom * t - 1.26
sigma = L / exp(x)
return(sigma)
}
calc_dif_sigma = function(L, w, r_c, S, t, B, p_p, p_m){
nom = w**2 * r_c**2 * S
denom = B * (p_p - p_m)
x = nom / denom * t - 1.26
sigma = L / exp(x)
return(sigma)
}
R_p2BD = function(r_p, p_m, B, w, r_c){
# converting a distance from center of rotation of a particle to buoyant density
## inverse of `calc_R_p`
nom = (r_p**2 - r_c**2) * w
return(nom / (2 * B) + p_m)
}
sigma2BD = function(r_p, sigma, p_m, B, w, r_c){
BD_low = R_p2BD(r_p - sigma, p_m, B, w, r_c)
BD_high = R_p2BD(r_p + sigma, p_m, B, w, r_c)
return(BD_high - BD_low)
}
time2eq = function(B, p_p, p_m, w, r_c, s, L, sigma){
x = (B * (p_p - p_m)) / (w**2 * r_c**2 * s)
y = 1.26 + log(L / sigma)
return(x * y)
}
"""
Explanation: Description:
calculations for modeling fragments in a CsCl gradient under non-equilibrium conditions
Notes
Good chapter on determining G+C content from CsCl gradient analysis
http://www.academia.edu/428160/Using_Analytical_Ultracentrifugation_of_DNA_in_CsCl_Gradients_to_Explore_Large-Scale_Properties_of_Genomes
http://www.analyticalultracentrifugation.com/dynamic_density_gradients.htm
Meselson et al. - 1957 - Equilibrium Sedimentation of Macromolecules in Density Gradients
Vinograd et al. - 1963 - Band-Centrifugation of Macromolecules and Viruses
http://onlinelibrary.wiley.com.proxy.library.cornell.edu/doi/10.1002/bip.360101011/pdf
Ultracentrigation book
http://books.google.com/books?hl=en&lr=&id=vxcSBQAAQBAJ&oi=fnd&pg=PA143&dq=Measurement+of+Density+Heterogeneity+by+Sedimentation+in&ots=l8ObYN-zVv&sig=Vcldf9_aqrJ-u7nQ1lBRKbknHps#v=onepage&q&f=false
Forum info
http://stackoverflow.com/questions/18624005/how-do-i-perform-a-convolution-in-python-with-a-variable-width-gaussian
http://timstaley.co.uk/posts/convolving-pdfs-in-python/
Possible workflows:
KDE convolution
KDE of fragment GC values
bandwidth cross validation: https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/
convolution of KDE with diffusion function:
gaussian w/ mean of 0 and scale param = 44.5 (kb) / (mean fragment length)
http://www.academia.edu/428160/Using_Analytical_Ultracentrifugation_of_DNA_in_CsCl_Gradients_to_Explore_Large-Scale_Properties_of_Genomes
http://nbviewer.ipython.org/github/timstaley/ipython-notebooks/blob/compiled/probabilistic_programming/convolving_distributions_illustration.ipynb
variable KDE
variable KDE of fragment GC values where kernel sigma is determined by mean fragment length
gaussian w/ scale param = 44.5 (kb) / fragment length
Standard deviation of homogeneous DNA fragments
Vinograd et al., 1963; (band-centrifugation):
\begin{align}
\sigma^2 = \frac{r_0}{r_0^0} \left\{ \frac{r_0}{r_0^0} + 2D \left( t - t^0 \right) \right\}
\end{align}
Standard deviation of Gaussian band (assuming equilibrium), Meselson et al., 1957:
\begin{align}
\sigma^2 = -\sqrt{w} \
w = \textrm{molecular weight}
\end{align}
Standard deviation of Gaussian band at a given time, Meselson et al., 1957:
\begin{equation}
t^* = \frac{\sigma^2}{D} \left(\ln \frac{L}{\sigma} + 1.26 \right), \quad L\gg\sigma \
\sigma = \textrm{stdev of the band at equilibrium} \
L = \textrm{length of column}
\end{equation}
Gaussian within 1% of equillibrium value from center.
! assumes density gradient established at t = 0
Alternative form (from Birne and Rickwood 1978; eq 6.22):
\begin{align}
t = \frac{\beta^{\circ}(p_p - p_m)}{w^4 r_p^2 s} \left(1.26 + \ln \frac{r_b - r_t}{\sigma}\right)
\end{align}
\begin{equation}
t = \textrm{time in seconds} \
\beta^{\circ} = \beta^{\circ} \textrm{ of salt forming the density gradient (CsCl = ?)} \
p_p = \textrm{buoyant density of the the particle at equilibrium} \
p_m = \textrm{average density of the medium} \
w = \textrm{angular velocity} \
r_p = \textrm{distance (cm) of particle from from the axis of rotation (at equilibrium)} \
s = \textrm{sedimentation rate} (S_{20,w} * 10^{-13}) \
r_b = \textrm{distance to top of gradient (cm)} \
r_t = \textrm{distance to bottom of gradient (cm)} \
r_b - r_t = \textrm{length of gradient (L)}
\end{equation}
Solving for sigma:
\begin{align}
\sigma = \frac{L}{e^{\left(\frac{t w^4 r_p^2 s}{\beta^{\circ}(p_p - p_m)} - 1.26\right)}}
\end{align}
sigma (alternative; but assuming sedimentation equilibrium reached; no time component)
\begin{align}
\sigma = \frac{\theta}{M_{app}} \cdot \frac{RT}{\frac{w^2 r_c}{\beta} \cdot w^2 r_o}
\end{align}
\begin{equation}
{\theta} = \textrm{buoyant density of the macromolecules} \
M_{app} = \textrm{apparent molecular weight of the solvated macromolecules} \
R = \textrm{universal gas constant} \
T = \textrm{temperature in K} \
w = \textrm{angular velocity} \
\beta^{\circ} = \beta^{\circ} \textrm{ coef. of salt forming the density gradient} \
r_c = \textrm{isoconcentration point} \
r_o = \textrm{distance (cm) of particle from from the axis of rotation (at equilibrium)} \
\end{equation}
Clay et al., 2003 method (assumes sedimentation equilibrium)
\begin{align}
\sigma = \sqrt{\frac{\rho R T}{B^2 G M_C l}}
\end{align}
\begin{equation}
{\rho} = \textrm{buoyant density of the macromolecules} \
R = \textrm{universal gas constant} \
T = \textrm{temperature in K} \
\beta = \beta^{\circ} \textrm{ coef. of salt forming the density gradient} \
M_C = \textrm{molecular weight per base pair of dry cesium DNA} \
G = \textrm{Constant from Clay et al., 2003 (7.87x10^-10) } \
l = \textrm{fragment length (bp)} \
\end{equation}
Variables specific to the Buckley lab setup
\begin{equation}
\omega = (2\pi \times \textrm{RPM}) /60, \quad \textrm{RPM} = 55000 \
\beta^{\circ} = 1.14 \times 10^9 \
r_b = 4.85 \
r_t = 2.6 \
L = r_b - r_t \
s = S_{20,w} * 10^{-13} \
S_{20,w} = 2.8 + 0.00834 * (l*666)^{0.479}, \quad \textrm{where l = length of fragment; S in Svedberg units} \
p_m = 1.7 \
p_p = \textrm{buoyant density of the particle in CsCl} \
r_p = ? \
t = \textrm{independent variable}
\end{equation}
isoconcentration point
\begin{equation}
r_c = \sqrt{(r_t^2 + r_t * r_b + r_b^2)/3}
\end{equation}
r<sub>p</sub> in relation to the particle's buoyant density
\begin{equation}
r_p = \sqrt{ ((p_p-p_m)\frac{2\beta^{\circ}}{w}) + r_c^2 } \
p_p = \textrm{buoyant density}
\end{equation}
buoyant density of a DNA fragment in CsCl
\begin{equation}
p_p = 0.098F + 1.66, \quad \textrm{where F = G+C molar fraction}
\end{equation}
info needed on a DNA fragment to determine the sigma of its Gaussian distribution
fragment length
fragment G+C
Coding equations
End of explanation
"""
%%R -w 450 -h 300
# time to eq
calc_time2eq = function(x, B, L, rpm, r_t, r_b, sigma, p_m){
l = x[1]
GC = x[2]
s = calc_S(l, GC)
w = rpm2w2(rpm)
p_p = GC2BD(GC)
r_c = calc_R_c(r_t, r_b)
#r_p = calc_R_p(p_p, p_m, B, w, r_c)
t = time2eq(B, p_p, p_m, w, r_c, s, L, sigma)
t = t / 360
return(t)
}
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.7
l = seq(100,20000,100) # bp
GC = 1:100 # percent
sigma = 0.01
df = expand.grid(l, GC)
df$t = apply(df, 1, calc_time2eq, B=B, L=L, rpm=rpm, r_t=r_t, r_b=r_b, sigma=sigma, p_m=p_m)
colnames(df) = c('length', 'GC', 'time')
df %>% head
cols = rev(rainbow(12))
p1 = ggplot(df, aes(GC, length, fill=time)) +
geom_tile() +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
geom_hline(yintercept=4000, linetype='dashed', color='black') +
#geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='GC (%)', y='dsDNA length (bp)') +
theme_bw() +
theme(
text = element_text(size=16)
)
p1
"""
Explanation: Time to equilibrium
End of explanation
"""
%%R
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.7
l = 500 # bp
GC = 50 # percent
t = 60 * 60 * 66 # sec
S = calc_S(l, GC)
w2 = rpm2w2(rpm)
p_p = GC2BD(GC)
r_c = calc_R_c(r_t, r_b)
r_p = calc_R_p(p_p, p_m, B, w2, r_c)
sigma = calc_dif_sigma(L, w2, r_p, S, t, B, p_p, p_m)
print(sigma)
#sigma_BD = sigma2BD(r_p, sigma, p_m, B, w2, r_c)
#print(sigma_BD)
%%R
#-- alternative calculation (equilibrium sigma equation given above)
p_p = 1.7
M = l * 882
R = 8.3144598 #J mol^-1 K^-1
T = 293.15
# calc_stdev is not defined elsewhere in this notebook; the body below is an
# assumed implementation of the equilibrium sigma equation stated above
calc_stdev = function(theta, M_app, R, T, w2, r_c, B, r_o){
    (theta / M_app) * (R * T) / ((w2 * r_c / B) * (w2 * r_o))
}
calc_stdev(p_p, M, R, T, w2, r_c, B, r_p)
"""
Explanation: sigma as a function of time & fragment length
End of explanation
"""
%%R -h 300 -w 850
calc_sigma_BD = function(x, rpm, GC, r_t, r_b, p_m, B, L){
l = x[1]
t = x[2]
S = calc_S(l, GC)
w2 = rpm2w2(rpm)
p_p = GC2BD(GC)
r_c = calc_R_c(r_t, r_b)
r_p = calc_R_p(p_p, p_m, B, w2, r_c)
sigma = calc_dif_sigma(L, w2, r_p, S, t, B, p_p, p_m)
if (sigma > L){
return(NA)
} else {
return(sigma)
}
}
# params
GC = 50
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.66
# pairwise calculations of all parameters
l = 50**seq(1,3, by=0.05)
t = 6**seq(3,8, by=0.05)
df = expand.grid(l, t)
df$sigma = apply(df, 1, calc_sigma_BD, rpm=rpm, GC=GC, r_t=r_t, r_b=r_b, p_m=p_m, B=B, L=L)
colnames(df) = c('length', 'time', 'sigma')
df= df %>%
mutate(sigma = ifelse((sigma < 1e-20 | sigma > 1e20), NA, sigma))
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(time, length, fill=sigma)) +
geom_tile() +
scale_x_log10(expand=c(0,0)) +
scale_y_log10(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
#geom_hline(yintercept=4000, linetype='dashed', color='black') +
geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='Time', y='Length') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
"""
Explanation: Graphing sigma as a function of time & fragment length
End of explanation
"""
%%R -h 300 -w 850
# params
GC = 20
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.66
# pairwise calculations of all parameters
l = 50**seq(1,3, by=0.05)
t = 6**seq(3,8, by=0.05)
df = expand.grid(l, t)
df$sigma = apply(df, 1, calc_sigma_BD, rpm=rpm, GC=GC, r_t=r_t, r_b=r_b, p_m=p_m, B=B, L=L)
colnames(df) = c('length', 'time', 'sigma')
df= df %>%
mutate(sigma = ifelse((sigma < 1e-20 | sigma > 1e20), NA, sigma))
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(time, length, fill=sigma)) +
geom_tile() +
scale_x_log10(expand=c(0,0)) +
scale_y_log10(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
#geom_hline(yintercept=4000, linetype='dashed', color='black') +
geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='Time', y='Length') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
"""
Explanation: Low GC
End of explanation
"""
%%R -h 300 -w 850
# params
GC = 80
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.66
# pairwise calculations of all parameters
l = 50**seq(1,3, by=0.05)
t = 6**seq(3,8, by=0.05)
df = expand.grid(l, t)
df$sigma = apply(df, 1, calc_sigma_BD, rpm=rpm, GC=GC, r_t=r_t, r_b=r_b, p_m=p_m, B=B, L=L)
colnames(df) = c('length', 'time', 'sigma')
df= df %>%
mutate(sigma = ifelse((sigma < 1e-20 | sigma > 1e20), NA, sigma))
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(time, length, fill=sigma)) +
geom_tile() +
scale_x_log10(expand=c(0,0)) +
scale_y_log10(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
#geom_hline(yintercept=4000, linetype='dashed', color='black') +
geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='Time', y='Length') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
"""
Explanation: High GC
End of explanation
"""
%%R
calc_dif_sigma_Clay = function(rho, R, T, B, G, M, l){
sigma = sqrt((rho*R*T)/(B**2*G*M*l))
return(sigma)
}
%%R -w 850 -h 300
wrap_calc_sigma_Clay = function(x, R, T, B, G, m){
l= x[1]
GC = x[2]
rho = GC2BD(GC)
sigma = calc_dif_sigma_Clay(rho, R, T, B, G, m, l)
return(sigma)
}
# params
R = 8.3145e7
T = 293.15
G = 7.87e-10
M = 882
B = 1.14e9
l = 50**seq(1,3, by=0.05)
GC = 1:100
# pairwise calculations of all parameters
df = expand.grid(l, GC)
df$sigma = apply(df, 1, wrap_calc_sigma_Clay, R=R, T=T, B=B, G=G, m=M)
colnames(df) = c('length', 'GC', 'sigma')
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(GC, length, fill=sigma)) +
geom_tile() +
scale_y_log10(expand=c(0,0)) +
scale_x_continuous(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
labs(y='length (bp)', x='G+C') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
"""
Explanation: Plotting Clay et al,. method
End of explanation
"""
%pylab inline
import scipy as sp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import mixture
#import sklearn.mixture as mixture
"""
Explanation: --Sandbox--
Graphing the equations above
End of explanation
"""
n_frags = 10000
frag_GC = np.random.normal(0.5,0.1,n_frags)
frag_GC[frag_GC < 0] = 0
frag_GC[frag_GC > 1] = 1
frag_len = np.random.normal(10000,1000,n_frags)
ret = plt.hist2d(frag_GC, frag_len, bins=100)
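# Illustrative sketch of the "KDE of fragment GC values" workflow step described
# in the notes earlier in this notebook (assumes scipy.stats.gaussian_kde with its
# default bandwidth is acceptable; bandwidth cross-validation is left out).
from scipy import stats
gc_kde = stats.gaussian_kde(frag_GC)
gc_grid = np.linspace(0, 1, 200)
ret = plt.plot(gc_grid, gc_kde(gc_grid))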
"""
Explanation: Generating fragments
End of explanation
"""
RPM = 55000
omega = (2 * np.pi * RPM) / 60
beta_o = 1.14 * 10**9
radius_bottom = 4.85
radius_top = 2.6
col_len = radius_bottom - radius_top
density_medium = 1.7
"""
Explanation: Setting variables
End of explanation
"""
# BD from GC
frag_BD = 0.098 * frag_GC + 1.66
ret = plt.hist(frag_BD, bins=100)
sedimentation = (frag_len*666)**0.479 * 0.00834 + 2.8 # l = length of fragment
ret = plt.hist(sedimentation, bins=100)
# sedimentation as a function of fragment length
len_range = np.arange(1,10000, 100)
ret = plt.scatter(len_range, 2.8 + 0.00834 * (len_range*666)**0.479 )
# isoconcentration point
iso_point = sqrt((radius_top**2 + radius_top * radius_bottom + radius_bottom**2)/3)
iso_point
# radius of particle
#radius_particle = np.sqrt( (frag_BD - density_medium)*2*(beta_o/omega) + iso_point**2 )
#ret = plt.hist(radius_particle)
"""
Explanation: Calculation functions
End of explanation
"""
n_dists = 10
n_samp = 10000
def make_mm(n_dists):
dist_loc = np.random.uniform(0,1,n_dists)
dist_scale = np.random.uniform(0,0.1, n_dists)
dists = [mixture.NormalDistribution(x,y) for x,y in zip(dist_loc, dist_scale)]
eq_weights = np.array([1.0 / n_dists] * n_dists)
eq_weights[0] += 1.0 - np.sum(eq_weights)
return mixture.MixtureModel(n_dists, eq_weights, dists)
mm = make_mm(n_dists)
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
n_dists = 1000
mm = make_mm(n_dists)
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
n_dists = 10000
mm = make_mm(n_dists)
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
n_samp = 100000
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
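# Alternative sketch (assumption): for plain Gaussian mixtures, numpy can sample
# much faster by drawing component indices first and then all normals in one
# vectorized call, avoiding the per-sample Python loop timed above.
def sample_gmm_numpy(locs, scales, weights, n):
    comp = np.random.choice(len(locs), size=n, p=weights)
    return np.random.normal(locs[comp], scales[comp])
locs = np.random.uniform(0, 1, n_dists)
scales = np.random.uniform(0, 0.1, n_dists)
weights = np.full(n_dists, 1.0 / n_dists)
%timeit smp_vec = sample_gmm_numpy(locs, scales, weights, n_samp)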
"""
Explanation: Testing out speed of mixture models
End of explanation
"""
x = np.random.normal(3, 1, 100)
y = np.random.normal(1, 1, 100)
H, xedges, yedges = np.histogram2d(y, x, bins=100)
H
"""
Explanation: Notes:
a mixture model with many distributions (>1000) is very slow for sampling
End of explanation
"""
|
getsmarter/bda
|
module_2/M2_NB1_SourcesOfData.ipynb
|
mit
|
import pandas as pd
from pandas_datareader import data, wb
import numpy as np
import matplotlib.pylab as plt
import matplotlib
import folium
import geocoder
import wikipedia
#set plot options
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10, 8)
"""
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Sources of data
Your completion of the notebook exercises will be graded based on your ability to do the following:
Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets?
Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data?
Create: Are you able to produce notebooks that serve as computational records of a session and can be used to share your insights with others?
Notebook objectives
By the end of this notebook you will be expected to:
Use "trusted" and "untrusted" data sources to enrich your analysis;
and
Understand the implications of the five Rs on data quality from external sources.
List of exercises
Exercise 1: Enriching analysis with data from "trusted" sources.
Exercise 2: Pros and cons of using data from "untrusted" sources.
Notebook introduction
Data collection is expensive and time consuming, as Arek Stopczynski alluded to in this module's video content.
In some cases, you will be lucky enough to have existing datasets available to support your analysis. You may have datasets from previous analyses, access to data providers, or curated datasets from your organization. In many cases, however, you will not have access to the data that you require to support your analysis, and you will have to find alternate mechanisms.
The data quality requirements will differ based on the problem you are trying to solve. Taking the hypothetical case of geocoding a location, which was introduced in Module 1, the accuracy of the geocoded location does not need to be exact when you are simply trying to plot the locations of students on a map. Geocoding a location for an automated vehicle to turn off the highway, on the other hand, has an entirely different accuracy requirement.
Note:
Those of you who work in large organizations may be privileged enough to have company data governance and data quality initiatives. These efforts and teams can often add significant value both in terms of supplying company-standard curated data, and making you aware of the internal policies that need to be adhered to.
As a data analyst or data scientist, it is important to be aware of the implications of your decisions. You need to choose the appropriate set of tools and methods to deal with sourcing and supplying data.
Technology has matured in recent years, and allowed access to a host of sources of data that can be used in your analyses. In many cases you can access free resources, or obtain (at a cost) data that has been curated, is at a lower latency, or comes with a service-level agreement. Some governments have even made datasets publicly available.
You have been introduced to OpenPDS, in the video content, where the focus shifts from supplying raw data -- where the provider needs to apply security principles before sharing datasets -- to supplying answers rather than data. OpenPDS allows users to collect, store, and control access to their data, while also allowing them to protect their privacy. In this way, users still have ownership of their data, as defined by the new deal on data.
This notebook demonstrates another example of how to source external data to enrich your analyses. The Python ecosystem contains a rich set of tools and libraries that can help you to exploit the available resources.
This course will not go into detail regarding the various options to source and interact with social data from sources such as Twitter, LinkedIn, Facebook, and Google Plus. However, you should be able to find libraries that will assist you in sourcing and manipulating these sources of data.
Twitter data is a good example because, depending on the options selected by the Twitter user, every tweet contains not just the message or content that most users are aware of. It also contains a view of the network of the person, home location, location from which the message was sent, and a number of other features that can be very useful when studying networks around a topic of interest. Professor Alex Pentland pointed out the difference in what you share with the world (how you want to be seen) compared to what you actually do and believe (what you commit to). Be sure to keep these concepts in mind when you start exploring the additional sources of data. Those who are interested in the topic can start to explore the options by visiting the Twitter library on PyPI.
Start with the five Rs introduced in Module 1, and consider the following questions:
- How accurate does my dataset need to be?
- How often should the dataset be updated?
- What happens if the data provider is no longer available?
- Do I need to adhere to any organizational standards to ensure consistent reporting or integration with other applications?
- Are there any implications to getting the values wrong?
You may need to start with “untrusted” data sources as a means of validating that your analysis can be executed. Once this is done, you can replace the untrusted components with trusted and curated datasets, as your analysis matures.
<div class="alert alert-warning">
<b>Note</b>:<br>
It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears.
</div>
Load libraries and set options
End of explanation
"""
# Load the grouped_geocoded dataset from Module 1.
df1 = pd.read_csv('data/grouped_geocoded.csv',index_col=[0])
# Prepare the student location dataset for use in this example.
# We use the geometrical center by obtaining the mean location for all observed coordinates per country.
df2 = df1.groupby('country').agg({'student_count': [np.sum], 'lat': [np.mean],
'long': [np.mean]}).reset_index()
# Reset the index.
df3 = df2.reset_index(level=1, drop=True)
# Review the data
df3.head()
"""
Explanation: 1. Source additional data from public sources
This section will provide short examples to demonstrate the use of public data sources in your notebooks.
1.1 World Bank
This example demonstrates how to source data from an external source to enrich your existing analyses. You will need to combine the data sources and add additional features to the example of student locations plotted on the world map in Module 1's Notebook 3.
The specific indicator chosen has little relevance other than to demonstrate the process that you will typically follow in completing your projects. Population counts, from an untrusted source, will be added to your map, and you will use scaling factors combined with the number of students, and population size of the country to demonstrate adding external data with minimal effort.
This example makes use of the pandas-datareader module, which supports remote data access. This library has support for extracting data from various internet sources into a Pandas DataFrame. Currently, the supported sources are:
Google Finance
Enigma
Quandl
St.Louis FED (FRED)
Kenneth French’s data library
World Bank
OECD
Eurostat
Thrift Savings Plan
Nasdaq Trader symbol definitions.
This example focuses on enriching your student dataset from Module 1, using the World Bank's Development Indicators. In the following sections, you will use the data you saved in a previous exercise, add corresponding indicators for each country in the data, and find the mean location for all observed coordinates per country.
Prepare the student data
In the next code cell, you will load the data from disk, apply the groupby method to group the data by country and, for each group, find the total student count and the average of their GPS coordinates. The final dataset containing the country, student count, and averaged GPS coordinates is saved as a separate DataFrame variable.
End of explanation
"""
df3.columns = df3.columns.droplevel(1)
df3.rename(columns={'lat': "lat_mean",
'long': "long_mean"}, inplace=True)
df3.head()
"""
Explanation: The column label index has multiple levels. Although this is useful metadata, it would be better to drop multilevel labeling and, instead, rename the columns to capture this information.
End of explanation
"""
# After running this cell you can close the help by clicking the close (`X`) button in the upper right corner
wb.download?
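# pandas-datareader also provides wb.search to look up indicator codes by name;
# an illustrative query (any regular expression over the indicator names works):
wb.search('population, total').head()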
# The selected indicator is the world population, "SP.POP.TOTL", for the years from 2008 to 2016
wb_indicator = 'SP.POP.TOTL'
start_year = 2008
end_year = 2016
df4 = wb.download(indicator = wb_indicator,
country = ['all'],
start = start_year,
end = end_year)
# Review the data
df4.head()
"""
Explanation: Get and prepare the external dataset from the World Bank
Remember you can use "wb.download?" (without the quotation marks) in a separate code cell to get help on the pandas-datareader method for remote data access of the World Bank Indicators. Refer to the pandas-datareader remote data access documentation for more detailed help.
End of explanation
"""
df5 = df4.reset_index()
idx = df5.groupby(['country'])['year'].transform(max) == df5['year']
"""
Explanation: The data set contains entries for multiple years. The focus of this example is the entry corresponding to the latest year of data available for each country.
End of explanation
"""
# Create a new dataframe where entries corresponds to maximum year indexes in previous list.
df6 = df5.loc[idx,:]
# Review the data
df6.head()
"""
Explanation: You can now extract only the values that correspond to the most recent year available for each country.
End of explanation
"""
# Combine the student and population datasets.
df7 = pd.merge(df3, df6, on='country', how='left')
# Rename the columns of our merged dataset and assign to a new variable.
df8 = df7.rename(index=str, columns={('SP.POP.TOTL'): "PopulationTotal_Latest_WB"})
# Drop NAN values.
df8 = df8[~df8.PopulationTotal_Latest_WB.isnull()]
# Reset index.
df8.reset_index(inplace = True)
df8.head()
"""
Explanation: Now merge your dataset with the World Bank data.
End of explanation
"""
# Plot the combined dataset
# Set map center and zoom level
mapc = [0, 30]
zoom = 2
# Create map object.
map_osm = folium.Map(location=mapc,
tiles='Stamen Toner',
zoom_start=zoom)
# Plot each of the locations that we geocoded.
for j in range(len(df8)):
# Plot a blue circle marker for country population.
folium.CircleMarker([df8.lat_mean[j], df8.long_mean[j]],
radius=df8.PopulationTotal_Latest_WB[j]/20000000,
popup='Population',
color='#3186cc',
fill_color='#3186cc',
).add_to(map_osm)
# Plot a red circle marker for students per country.
folium.CircleMarker([df8.lat_mean[j], df8.long_mean[j]],
radius=df8.student_count[j]/50,
popup='Students',
color='red',
fill_color='red',
).add_to(map_osm)
# Show the map.
map_osm
"""
Explanation: Let's plot the data.
Note:
The visualization below does not have any meaning. The scaling factors selected are used to demonstrate the difference in population sizes, and number of students on this course, per country.
End of explanation
"""
# Your solution here
# Note: Break your logic using separate cells to break code into units that can be executed
# should you need to review individual steps.
"""
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Review the available indicators in the World Bank dataset, and select an indicator of your choice (other than the population indicator).
Using a copy of the code (from above) in the cells below, replace the population indicator with your selected indicator. Instead of returning the most recent value for your selected indicator, compute the mean and standard deviation for the years from 2006 to 2016. You will need to use the Pandas groupby().agg() chained methods, together with the following functions from NumPy:
np.mean
np.std.
You can review the data preparation section for the student data above for an example.
Add comments (lines starting with a "#") giving a brief description of your view on the observed results. Make sure to include, in one or two sentences in each case, the following:
1. A clear description why you selected the indicator.
- What your expectation was before including the data.
- What you think the results may indicate.
Important:
- Only the external data needs to be prepared. You do not need to prepare the student dataset again. Just use the student data that you prepared above and join this to the new dataset you sourced.
- Only plot the mean values for your selected indicator (not the standard deviation values).
End of explanation
"""
# Display MIT page summary from Wikipedia
print(wikipedia.summary("MIT"))
# Display a single sentence summary.
wikipedia.summary("MIT", sentences=1)
# Create variable page that contains the wikipedia information.
page = wikipedia.page("List of countries and dependencies by population")
# Display the page title.
page.title
# Display the page URL. This can be utilised to create links back to descriptions.
page.url
"""
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
1.2 Using Wikipedia as a data source
To demonstrate how quickly data can be sourced from public, "untrusted" data sources, you have been supplied with a number of sample scripts below. While these sources contain distinctly rich datasets, which you can acquire with minimal effort, they can be amended by anyone, and may not be 100% accurate. In some cases, you will have to manually transform the datasets, while in others, you might be able to use pre-built libraries.
Execute the code cells below before completing Exercise 2.
End of explanation
"""
|
empet/Plotly-plots
|
Spiral-Plot.ipynb
|
gpl-3.0
|
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
PI=np.pi
a=2
theta=np.linspace(3*PI/2, 8*PI, 400)
z=a*theta*np.exp(-1j*theta)
plt.figure(figsize=(6,6))
plt.plot(z.real, z.imag)
plt.axis('equal')
"""
Explanation: Spiral plot
Spiral plot is a method of visualizing (periodic) time series. Here we adapt it to visualize tennis tournament results for Simona Halep, No. 2 in the 2015 WTA ranking.
We generate bar charts of set results in each tournament along an Archimedean spiral of equation
$z(\theta)=a\theta e^{-i \theta}$, $a>0, \theta>3\pi/2$. Our bars are curvilinear bars, i.e. they have spiral arcs as base.
Matplotlib plot of this spiral:
End of explanation
"""
h=7.0
score={0: 0., 1:10./h, 2: 20/h, 3: 30/h, 4: 40/h, 5: 50/h, 6: 60/h, 7: 70/h}
score[6]
import plotly.plotly as py
from plotly.graph_objs import *
"""
Explanation: Each ray (starting from origin O(0,0)) crosses successive turnings of the spiral at constant distance points, namely at distance=$2\pi a$.
With our choice a=2, this distance is $4\pi=12.56637$. Hence we set the tallest bar corresponding to
a set score of 7 as having the height 10. The bar height corresponding to any set score in ${0, 1, 2, \ldots, 7}$ can be read from the following dictionary:
End of explanation
"""
import json
with open("halep2015.json") as json_file:
jdata = json.load(json_file)
print jdata['Shenzen']
"""
Explanation: Read the json file created from data posted at wtatennis.com:
End of explanation
"""
played_at=['Shenzen', 'Australian Open', 'Fed Cup', 'Dubai', 'Indiana Wells', 'Miami',
'Stuttgart', 'Madrid', 'Rome', 'French Open', 'Birmingham', 'Wimbledon', 'Toronto',
'Cincinnati', 'US Open', 'Guangzhou', 'Wuhan', 'Beijing', 'WTA Finals' ]
#define a dict giving the number of matches played by Halep in each tournament k
nmatches={ k: len(jdata[where][3:]) for (k, where) in enumerate(played_at) }
"""
Explanation: played_at is the list of tournaments Simona Halep participated in:
End of explanation
"""
def make_arc(aa, theta0, theta1, dist, nr=4):# defines the arc of spiral between theta0 and theta1,
theta=np.linspace(theta0, theta1, nr)
pts=(aa*theta+dist)*np.exp(-1j*theta)# points on spiral arc r=aa*theta
string_arc='M '
for k in range(len(theta)):
string_arc+=str(pts.real[k])+', '+str(pts.imag[k])+' L '
return string_arc
make_arc(0.2, PI+0.2, PI, 4)[1:]
"""
Explanation: The arcs of spiral are defined as Plotly SVG paths:
End of explanation
"""
def make_bar(bar_height, theta0, fill_color, rad=0.2, a=2):
theta1=theta0+rad
C=(a*theta1+bar_height)*np.exp(-1j*theta1)
D=a*theta0*np.exp(-1j*theta0)
return dict(
line=Line(color=fill_color, width=0.5
),
path= make_arc(a, theta0, theta0+rad, 0.0)+str(C.real)+', '+str(C.imag)+' '+\
make_arc(a, theta1, theta0, bar_height)[1:]+ str(D.real)+', '+str(D.imag),
type='path',
fillcolor=fill_color
)
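# Illustrative check: a single curvilinear bar for a set score of 6, starting at
# theta0 = 3*PI/2, with a literal hex color (the notebook's palette is defined below).
make_bar(score[6], 3 * PI / 2, '#dc3148')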
"""
Explanation: The function make_bar returns a Plotly dict that will be used to generate the bar shapes:
End of explanation
"""
def make_layout(title, plot_size):
axis=dict(showline=False, # hide axis line, grid, ticklabels and title
zeroline=False,
showgrid=False,
showticklabels=False,
title=''
)
return Layout(title=title,
font=Font(size=12),
xaxis=XAxis(axis),
yaxis=YAxis(axis),
showlegend=False,
width=plot_size,
height=plot_size,
margin=Margin(t=30, b=30, l=30, r=30),
hovermode='closest',
shapes=[]# below we append to shapes the dicts defining
#the bars associated to set scores
)
title='Simona Halep 2015 Tournament Results<br>Each arc of spiral corresponds to a tournament'
layout=make_layout(title, 700)
"""
Explanation: Define a function setting the plot layout:
End of explanation
"""
interM=2.0#the length of circle arc approximating an arc of spiral, between two consecutive matches
interT=3.5# between two tournaments
"""
Explanation: The bar charts corresponding to two consecutive matches in a tournament are separated by an arc of length interM,
whereas the bar charts corresponding to two consecutive tournaments are separated by a longer arc, interT:
End of explanation
"""
colors=['#dc3148','#864d7f','#9e70a2', '#caaac2','#d6c7dd', '#e6e1dd']
"""
Explanation: The bars are colored by the following rule: the bars associated to Halep's results are colored in red (colors[0]),
while the colors for opponents are chosen according to their rank (see the code below). The darker colors correspond to high ranked opponents, while the lighter ones to lower ranked opponents.
End of explanation
"""
a=2.0 # the parameter in spiral equation z(theta)=a*theta exp(-i theta)
theta0=3*PI/2 # the starting point of the spiral
Theta=[]# the list of tournament arc ends
Opponent=[]# the list of opponents in each set of all matches played by halep
rankOp=[]# rank of opponent list
middleBar=[]# theta coordinate for the middle point of each bar base
half_h=[]# the list of bar heights/2
wb=1.5# bar width along the spiral
rad=wb/(a*theta0)#the angle in radians corresponding to an arc of length wb,
#within the circle of radius a*theta
rank_Halep=[]
Halep_set_sc=[]# list of Halep set scores
Opponent_set_sc=[]# list of opponent set scores
bar_colors=[]# the list of colors assigned to each bar in bar charts
for k, where in enumerate(played_at):
nr=nmatches[k]# nr is the number of matches played by Halep in the k^th tournament
Theta.append(theta0)
for match in range(nr):
player=jdata[where][3+match].keys()[0]# opponent name in match match
rankOp.append(int(player.partition('(')[2].partition(')')[0]))#Extract opponent rank:
set_sc=jdata[where][3+match].values()[0]#set scores in match match
sets=len(set_sc)
#set bar colors according to opponent rank
if rankOp[-1] in range(1,11): col=colors[1]
elif rankOp[-1] in range(11, 21): col=colors[2]
elif rankOp[-1] in range(21, 51): col=colors[3]
elif rankOp[-1] in range(51, 101): col=colors[4]
else: col=colors[5]
for s in range(0, sets, 2):
middleBar.append(0.5*(2*theta0+rad))# get the middle of each angular interval
# defining bar base
rank_Halep+=[jdata[where][0]['rank']]
Halep_set_sc.append(set_sc[s])
half_h.append(0.5*score[set_sc[s]])# middle of bar height
bar_colors.append(colors[0])
layout['shapes'].append(make_bar(score[set_sc[s]], theta0, colors[0], rad=rad, a=2))
rad=wb/(a*theta0)
theta0=theta0+rad
middleBar.append(0.5*(2*theta0+rad))
Opponent_set_sc.append(set_sc[s+1])
half_h.append(0.5*score[set_sc[s+1]])
Opponent.append(jdata[where][3+match].keys()[0])
bar_colors.append(col)
layout['shapes'].append(make_bar(score[set_sc[s+1]], theta0, col , rad=rad, a=2))
rad=wb/(a*theta0)
theta0=theta0+rad
gapM=interM/(a*theta0)
theta0=theta0-rad+gapM
gapT=interT/(a*theta0)
Theta.append(theta0)
theta0=theta0-gapM+gapT
"""
Explanation: Get data for generating bars and data to be displayed when hovering the mouse over the plot:
End of explanation
"""
print len(bar_colors), len(middleBar), len(Opponent), len(half_h)
"""
Explanation: Check list lengths:
End of explanation
"""
nrB = len(bar_colors)
playersRank=['n']*nrB
for k in range(0,nrB, 2):
playersRank[k]=u'Halep'+' ('+'{:d}'.format(rank_Halep[k/2])+')'+'<br>'+\
'set score: '+str(Halep_set_sc[k/2])
for k in range(1, nrB, 2):
playersRank[k]=Opponent[(k-1)/2]+'<br>'+'set score: '+str(Opponent_set_sc[(k-1)/2])
players=[]# Plotly traces that define position of text on bars
for k in range(nrB):
z=(a*middleBar[k]+half_h[k])*np.exp(-1j*middleBar[k])
players.append(Scatter(x=[z.real],
y=[z.imag],
mode='markers',
marker=Marker(size=0.25, color=bar_colors[k]),
name='',
text=playersRank[k],
hoverinfo='text'
)
)
LT=len(Theta)
aa=[a-0.11]*2+[a-0.1]*3+[a-0.085]*5+[a-0.075]*5+[a-0.065]*4# here is a trick to get spiral arcs
#looking at the same distance from the bar charts
spiral=[] #Plotly traces of spiral arcs
for k in range(0, LT, 2):
X=[]
Y=[]
theta=np.linspace(Theta[k], Theta[k+1], 40)
Z=aa[k/2]*theta*np.exp(-1j*theta)
X+=Z.real.tolist()
Y+=Z.imag.tolist()
X.append(None)
Y.append(None)
spiral.append(Scatter(x=X,
y=Y,
mode='lines',
line=Line(color='#23238E', width=4),
name='',
text=played_at[k/2],
hoverinfo='text'))
data=Data(spiral+players)
fig=Figure(data=data,layout=layout)
py.sign_in('empet', 'my_api_key')
py.iplot(fig, filename='spiral-plot')
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: Define the list of strings to be displayed for each bar:
End of explanation
"""
|
pvalienteverde/ElCuadernillo
|
ElCuadernillo/20160301_TensorFlowGradientDescentWithMomentum/GradientDescentWithMoment.ipynb
|
mit
|
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline
import sys
import time
from IPython.display import Image
sys.path.append('/home/pedro/git/ElCuadernillo/ElCuadernillo/20160301_TensorFlowGradientDescentWithMomentum')
import gradient_descent_with_momentum as gdt
"""
Explanation: TensorFlow, Mini-Batch/Stochastic Gradient Descent with Momentum
End of explanation
"""
grado=4
tamano=100000
x,y,coeficentes=gdt.generar_muestra(grado,tamano)
print ("Coeficientes: ",coeficentes)
plt.plot(x,y,'.')
"""
Explanation: Input
We generate a sample of degree 4
End of explanation
"""
train_x=gdt.generar_matriz_coeficientes(x,grado) # coefficient matrix A
train_y=np.reshape(y,(y.shape[0],-1)) # column vector
learning_rate_inicial=1e-2
"""
Explanation: Problem
Compute the coefficients that best fit the sample, knowing it is of degree 4.
We generate the degree-4 coefficient matrix.
End of explanation
"""
pesos_gd,ecm_gd,tiempo_gd=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=1,
learning_rate_inicial=learning_rate_inicial,
momentum=0.0)
"""
Explanation: Solution 1: Using gradient descent
End of explanation
"""
pesos_mgd,ecm_mgd,tiempo_mgd=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=10000,
learning_rate_inicial=learning_rate_inicial,
momentum=0.0)
"""
Explanation: <img src="capturas/gradient_descent.png">
Solution 2: Using mini-batch gradient descent (num_mini_batch=10000)
End of explanation
"""
pesos_mgdm,ecm_mgdm,tiempo_mgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=10000,
learning_rate_inicial=learning_rate_inicial,
momentum=0.9)
"""
Explanation: <img src="capturas/mini_batch_gradient_descent.png">
Solution 3: Using mini-batch gradient descent with momentum (num_mini_batch=10000)
End of explanation
"""
pesos_sgdm,ecm_sgdm,tiempo_sgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=len(train_x),
learning_rate_inicial=learning_rate_inicial,
momentum=0.9)
"""
Explanation: <img src="capturas/minibatch_gradient_descent_momentum.png">
Solution 4: Using stochastic gradient descent with momentum (mini-batch size of 1)
End of explanation
"""
pesos_sgdm,ecm_sgdm,tiempo_sgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=len(train_x),
                                                                    learning_rate_inicial=1e-3, # decrease the learning rate
momentum=0.9)
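# Illustrative comparison (assumption: the ecm_* and tiempo_* values returned by
# gdt.gradient_descent_with_momentum above hold the final error and elapsed time).
for name, ecm, t in [('batch GD', ecm_gd, tiempo_gd),
                     ('mini-batch GD', ecm_mgd, tiempo_mgd),
                     ('mini-batch GD + momentum', ecm_mgdm, tiempo_mgdm),
                     ('SGD + momentum (lr=1e-3)', ecm_sgdm, tiempo_sgdm)]:
    print(name, '-> error:', ecm, 'time (s):', t)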
"""
Explanation: <img src="capturas/stocastic_gradient_descent_momentum_fail.png">
End of explanation
"""
|
elenduuche/deep-learning
|
autoencoder/Simple_Autoencoder.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
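# Quick sanity check (illustrative): the encoder compresses each 784-pixel image
# down to encoding_dim values, and the decoder maps it back to the input width.
print('Compression: {} -> {} -> {}'.format(image_size, encoding_dim, image_size))
print(encoded.shape, decoded.shape)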
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
google-research/google-research
|
aptamers_mlpd/learning/create_binned_and_super_bin_labels.ipynb
|
apache-2.0
|
import pandas as pd
import math
import numpy as np
"""
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
"""
# PD sequencing counts across experiments
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Load PD Data
with open('pd_clustered_input_data_manuscript.csv') as f:
pd_input_df = pd.read_csv(f)
"""
Explanation: Load in Data
End of explanation
"""
def generate_pos_neg_normalized_ratio(df, col_prefix):
"""Adds fraction columns to the dataframe with the calculated pos/neg ratio.
Args:
df: (pd.DataFrame) DataFrame expected to have columns [col_prefix]_positive,
[col_prefix]_negative contain read counts for the positive and negative
selection conditions, respectively.
col_prefix: (str) Prefix of the columns to use to calculate the ratio. For
example, 'round1_very_positive'.
Returns:
(pd.DataFrame) The original dataframe with three new columns:
[col_prefix]_positive_frac contains the fraction of the total positive
pool that is this sequence.
[col_prefix]_negative_frac contains the fraction of the total negative
pool that is this sequence.
[col_prefix]_pos_neg_ratio: The read-depth normalized fraction of the
sequence that ended in the positive pool.
"""
col_pos = col_prefix + '_' + 'positive'
col_neg = col_prefix + '_' + 'negative'
df[col_pos + '_frac'] = df[col_pos] / df[col_pos].sum()
df[col_neg + '_frac'] = df[col_neg] / df[col_neg].sum()
df[col_prefix + '_pos_neg_ratio'] = df[col_pos + '_frac'] / (
df[col_pos + '_frac'] + df[col_neg + '_frac'])
return df
def fraction_to_3bins (frac, min_bin=0.1, max_bin=0.9):
'''Takes a positive / (positive + negative) fraction and converts to ternary.
Args:
frac: (float) positive / (positive + negative) fraction.
min_bin: (float) Cutoff between bin 0 and bin 1.
max_bin: (float) Cutoff between bin 1 and bin 2.
Returns:
(int) Bin
'''
if math.isnan(frac):
return 0
if frac < min_bin:
return 0
elif frac > max_bin:
return 2
else:
return 1
def bins_to_super_bins (low, medium, high):
'''Take the binned labels and convert it to a single SuperBin label.
Args:
low: (int) Bin for low stringency.
medium: (int) Bin for medium stringency.
high: (int) Bin for high strigency.
Returns:
(int) SuperBin.
'''
if high == 0:
if medium == 0:
if low == 0:
# If all three bins are 0 return 0
return 0
if low == 1:
# Borderline low stringency.
return 1
if low == 2:
# Unambiguous low strigency
return 2
elif medium == 1:
if low == 1:
# If medium and low are 1 return 2
# The idea is that this added support is similar to low being = 2.
return 2
if low == 2:
# Borderline medium stringency.
return 3
elif medium == 2:
# This is an unambiguous medium stringency.
if low == 2:
return 4
elif high == 1:
# Require that anything in the potentially high bin passes low stringency.
if low == 2:
if medium == 1:
      # If medium and high are borderline, this is similar to medium = 2.
return 4
if medium == 2:
      # Borderline high stringency.
return 5
elif high == 2 and medium == 2 and low == 2:
# Unambiguous high stringency.
return 6
# The bins provide an ambiguous story and we need to exclude.
return -1
"""
Explanation: Helper Functions
End of explanation
"""
# Generate Binned and SuperBin labels as additional columns in dataframe
# Binned cols: low_3bins, med_3bins, high_3bins
# SuperBin col: super_bin
for col_prefix, stringency_level in zip(
['round2_high_no_serum', 'round2_medium_no_serum', 'round2_low_no_serum'],
['low', 'med', 'high']):
pd_input_df = generate_pos_neg_normalized_ratio(pd_input_df, col_prefix)
pd_input_df['%s_3bins' %(stringency_level)] = pd_input_df[col_prefix + '_pos_neg_ratio'].apply(fraction_to_3bins)
pd_input_df['super_bin'] = pd_input_df.apply(
lambda x: bins_to_super_bins(x.low_3bins, x.med_3bins, x.high_3bins),
axis=1)
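# Illustrative sanity check of the binning logic: map a few example
# pos/(pos+neg) fractions through fraction_to_3bins and bins_to_super_bins.
example = pd.DataFrame({'low_frac': [0.95, 0.95, 0.05],
                        'med_frac': [0.95, 0.50, 0.05],
                        'high_frac': [0.95, 0.05, 0.05]})
for col in ['low_frac', 'med_frac', 'high_frac']:
    example[col.replace('_frac', '_3bins')] = example[col].apply(fraction_to_3bins)
example['super_bin'] = example.apply(
    lambda x: bins_to_super_bins(x.low_3bins, x.med_3bins, x.high_3bins), axis=1)
example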
"""
Explanation: Create Binned and SuperBin Labels
End of explanation
"""
|
turbomanage/training-data-analyst
|
quests/endtoendml/labs/4_preproc.ipynb
|
apache-2.0
|
%pip install apache-beam[gcp]==2.13.0
"""
Explanation: <h1> Preprocessing using Cloud Dataflow </h1>
<h2>Learning Objectives</h2>
<ol>
<li>Create ML dataset using <a href="https://cloud.google.com/dataflow/">Cloud Dataflow</a></li>
<li>Simulate a dataset where no ultrasound is performed (i.e. male or female unknown as a feature)</li>
<li>Launch the Cloud Dataflow job to preprocess the data</li>
</ol>
TODO: Complete the lab notebook #TODO sections. You can refer to the solutions/ notebook for reference.
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
End of explanation
"""
import apache_beam as beam
print(beam.__version__)
"""
Explanation: After installing Apache Beam, restart your kernel by selecting "Kernel" from the menu and clicking "Restart kernel..."
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
End of explanation
"""
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
"""
Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
End of explanation
"""
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
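# Illustrative check on the 100-row sample: the hashmonth is later used to split
# train/eval roughly 75/25 via ABS(MOD(hashmonth, 4)) < 3.
(df['hashmonth'].abs() % 4 < 3).mean()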
"""
Explanation: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
End of explanation
"""
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
# TODO create logic for no_ultrasound where we only know whether its a single baby or multiple (but not how many multiple)
no_ultrasound['is_male'] = 'Unknown'
if # TODO create logic check for multiples
no_ultrasound['plurality'] = 'Multiple(2+)'
else: # TODO create logic check for single
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'max_num_workers': 6
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
"""
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
"""
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
Read from BigQuery directly using TensorFlow.
Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
<p>
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
"""
Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
End of explanation
"""
|
AlpineNow/python-alpine-api
|
doc/JupyterNotebookExamples/Introduction.ipynb
|
mit
|
import alpine as AlpineAPI
from pprint import pprint
import json
"""
Explanation: Introduction
Let's start with an example of an Alpine API session.
Initialize a session.
Take a tour of some commands.
Run a workflow and download the results.
Import the Python Alpine API and some other useful packages.
End of explanation
"""
filename = "alpine_login.conf"
with open(filename, "r") as f:
data = f.read()
conn_info = json.loads(data)
host = conn_info["host"]
port = conn_info["port"]
username = conn_info["username"]
password = conn_info["password"]
"""
Explanation: Setup
Have access to a workflow on your Alpine instance that you can run. You'll need a few pieces of information in order to log in and run the workflow. First, find the URL of the open workflow. It should look something like:
https://<AlpineHost>:<PortNum>/#workflows/<WorkflowID>
You'll also need your Alpine username and password.
I've stored my connection information in a configuration file named alpine_login.conf that looks something like this:
JSON
{
"host": "AlpineHost",
"port": "PortNum",
"username": "fakename",
"password": "12345"
}
End of explanation
"""
test_workspace_name = "API Sample Workspace"
test_workflow_name = "Data ETL"
"""
Explanation: Here are the names of a workspace and a workflow within it that we want to run.
End of explanation
"""
session = AlpineAPI.APIClient(host, port, username, password)
"""
Explanation: Create a session and log in the user.
End of explanation
"""
pprint(session.get_license())
pprint(session.get_version())
"""
Explanation: Use the API
Get information about the Alpine instance.
End of explanation
"""
pprint(session.get_status())
"""
Explanation: Find information about the logged-in user.
End of explanation
"""
len(session.user.get_list())
"""
Explanation: Find information on all users.
End of explanation
"""
user_id = session.user.get_id(username)
pprint(session.user.update(user_id, title = "Assistant to the Regional Manager"))
"""
Explanation: Find your user ID and then use it to update your user data.
End of explanation
"""
test_workspace_id = session.workspace.get_id(test_workspace_name)
session.workspace.member.add(test_workspace_id, user_id);
"""
Explanation: A similar set of commands can be used to create and update workspaces and the membership of each workspace.
End of explanation
"""
workflow_id = session.workfile.get_id(workfile_name = "Data ETL",
workspace_id = test_workspace_id)
process_id = session.workfile.process.run(workflow_id)
session.workfile.process.wait_until_finished(workflow_id = workflow_id,
process_id = process_id,
verbose = True,
query_time = 5)
"""
Explanation: Run a workflow
To run a workflow use the Process subclass of the Workfile class. The wait_until_finished method will periodically query the status of the running workflow and returns control to the user when the workflow has completed.
End of explanation
"""
flow_results = session.workfile.process.download_results(workflow_id, process_id)
pprint(flow_results, depth=2)
"""
Explanation: We can download results using the download_results method. The workflow results contain a summary of the output of each operator as well as metadata about the workflow run.
End of explanation
"""
|
sylvchev/coursIntroPython
|
cours/5-ApprendrePython-Bibliotheque.ipynb
|
gpl-3.0
|
f=open('fichiertravail.txt', 'w')
"""
Explanation: Files and input/output
To write to or read from a file, open() returns a file object; it is most commonly used with two arguments: open(filename, mode).
End of explanation
"""
f.write('Voici un test\n')
"""
Explanation: The first argument is a string containing the file name. The second argument is another string containing a few characters that describe how the file will be used. mode is 'r' when the file will only be read, 'w' for writing only (an existing file with the same name will be erased), and 'a' opens the file for appending; any data written to the file is automatically added at the end. 'r+' opens the file for both reading and writing. The mode argument is optional; 'r' is assumed if it is omitted.
To write to a file, we can use f.write(string), which writes the string of characters to the file.
End of explanation
"""
value = ('la reponse est', 42)
s = str(value)
f.write(s)
"""
Explanation: To write something other than a string, you first need to convert it into a string:
End of explanation
"""
f.close()
"""
Explanation: Once you are done with a file, close it with f.close(). After calling f.close(), any attempt to use the file object will automatically fail.
End of explanation
"""
with open('fichiertravail.txt', 'r') as f:
s = f.read()
print (s)
"""
Explanation: There is a more convenient way of working with a file, which removes the need to call close explicitly. We will see this syntax while using f.read(size), which reads some amount of data and returns it as a string. size is an optional numeric argument. When size is omitted or negative, the entire contents of the file are read and returned. Otherwise, at most size bytes are read and returned.
End of explanation
"""
with open('fichiertravail.txt', 'r') as f:
s = f.readline()
while s != '':
print (s)
s = f.readline()
"""
Explanation: f.readline() reads a single line from the file; a newline character (\n) is left at the end of the string that is read, and is only omitted on the last line of the file if the file does not end with a newline. This makes the return value unambiguous: if f.readline() returns an empty string, the end of the file has been reached, whereas a blank line is represented by '\n', a string containing only a single newline.
End of explanation
"""
with open('fichiertravail.txt', 'r') as f:
lines = f.readlines()
for i, l in enumerate(lines):
print ('ligne', i, ':', l)
"""
Explanation: f.readlines() returns a list containing all the lines of data in the file. If the optional sizehint parameter is given, it reads that many bytes from the file, plus as many more as needed to complete the last line started, and returns the list of lines read that way. This is often useful for efficient line-by-line reading without having to load the entire file into memory. The returned list is made up entirely of complete lines.
End of explanation
"""
import os
"""
Explanation: The pickle module
Strings can easily be written to and read from a file. Numbers take a bit more effort, since the read() method only returns strings, which then have to be passed to a function such as int(), which takes a string like '123' and returns its numeric value 123. However, when you want to save more complex data types such as lists, dictionaries, or class instances, things get much more complicated.
Rather than having users constantly write and debug code to save complex data types, Python provides a standard module called pickle. It is an amazing module that can take almost any Python object (even some forms of Python code!) and convert it into a string representation; this process is called pickling. Rebuilding the object from its string representation is called unpickling. Between pickling and unpickling, the string representing the object may have been stored in a file or sent to a remote machine over a network connection.
If you have an object x and a file object f opened for writing, the simplest way to pickle the object takes only one line of code:
pickle.dump(x, f)
To unpickle the object, if f is a file object opened for reading:
x = pickle.load(f)
A short round-trip example is sketched in the cell below.
File management
It is possible to use system primitives in Python. Most of the system functions are available in the os module:
End of explanation
"""
import shutil
shutil.copyfile('fichiertravail.txt', 'macopie.txt')
shutil.move('macopie.txt', 'lacopie.txt')
"""
Explanation: To see all the available functions, you can use help(os) or dir(os); a couple of common os calls are also sketched in the cell below.
For everyday file and directory management tasks, the shutil module provides an easier-to-use, higher-level interface:
End of explanation
"""
import glob
glob.glob('*.txt')
"""
Explanation: The glob module provides a function for building lists of files from wildcard searches (using *) in directories:
End of explanation
"""
import urllib2
for line in urllib2.urlopen('http://www.python.org'):
    if 'meta' in line: # only display the lines containing the meta tag
print (line)
"""
Explanation: Internet access
There are a number of modules for accessing the Internet and handling Internet protocols. Two of the simplest are urllib2, for retrieving data from URLs, and smtplib, for sending mail (a minimal smtplib sketch follows below):
End of explanation
"""
|
MichaelGrupp/evo
|
notebooks/pandas_bridge.ipynb
|
gpl-3.0
|
# magic plot configuration
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%matplotlib notebook
from IPython.display import display
"""
Explanation: pandas_bridge
The evo.tools.pandas_bridge module converts:
* evo.core.trajectory.PosePath3D
* evo.core.trajectory.PoseTrajectory3D
* evo.core.result.Result
to Pandas dataframes.
Load some required modules for this demo:
End of explanation
"""
import pandas as pd
from evo.tools import pandas_bridge
from evo.tools import file_interface
"""
Explanation: Load Pandas and pandas_bridge:
End of explanation
"""
# load a trajectory from a ROS bag...
from rosbags.rosbag1 import Reader as Rosbag1Reader
with Rosbag1Reader("../test/data/ROS_example.bag") as reader:
traj = file_interface.read_bag_trajectory(reader, "S-PTAM")
# ...or from a KITTI file...
traj = file_interface.read_kitti_poses_file("../test/data/KITTI_00_gt.txt")
# ...or from a TUM file...
traj = file_interface.read_tum_trajectory_file("../test/data/fr2_desk_ORB.txt")
"""
Explanation: Trajectories
End of explanation
"""
traj_df = pandas_bridge.trajectory_to_df(traj)
"""
Explanation: Convert a trajectory to a dataframe:
End of explanation
"""
print("First entries of the dataframe:")
display(traj_df.head())
print("Some statistics of the dataframe:")
display(traj_df.describe())
print("A plot:")
traj_df[["x", "y", "z"]].plot(kind="line", subplots=True)
"""
Explanation: Some examples for what you can do with it:
End of explanation
"""
# generate some result files
import subprocess as sp
cmd_1 = "evo_ape kitti ../test/data/KITTI_00_gt.txt ../test/data/KITTI_00_ORB.txt --save_results ../test/data/res1.zip --no_warnings"
cmd_2 = "evo_ape kitti ../test/data/KITTI_00_gt.txt ../test/data/KITTI_00_SPTAM.txt --save_results ../test/data/res2.zip --no_warnings"
sp.call(cmd_1.split(" "))
sp.call(cmd_2.split(" "))
"""
Explanation: Results
End of explanation
"""
result_1 = file_interface.load_res_file("../test/data/res1.zip")
result_2 = file_interface.load_res_file("../test/data/res2.zip")
"""
Explanation: Load results:
End of explanation
"""
result_df_1 = pandas_bridge.result_to_df(result_1)
result_df_2 = pandas_bridge.result_to_df(result_2)
result_df = pd.concat([result_df_1, result_df_2], axis="columns")
display(result_df)
"""
Explanation: Convert results into individual dataframes and concatenate them:
End of explanation
"""
display(result_df.loc["stats"])
exclude = result_df.loc["stats"].index.isin(["sse"]) # don't plot sse
result_df.loc["stats"][~exclude].plot(kind="bar")
"""
Explanation: Some examples for what you can do with it:
End of explanation
"""
|
dnc1994/MachineLearning-UW
|
ml-classification/module-10-online-learning-assignment-solution.ipynb
|
mit
|
from __future__ import division
import graphlab
"""
Explanation: Training Logistic Regression via Stochastic Gradient Ascent
The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with respect to a single coefficient.
Implement stochastic gradient ascent.
Compare convergence of stochastic gradient ascent with that of batch gradient ascent.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
"""
products = graphlab.SFrame('amazon_baby_subset.gl/')
"""
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
"""
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remove punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string manipulation functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
"""
products
"""
Explanation: The SFrame products now contains one column for each of the 193 important_words.
End of explanation
"""
train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
"""
Explanation: Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
"""
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
"""
Explanation: Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
"""
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
"""
Explanation: Note that we convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
"""
'''
produces a probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions
"""
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-10-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Quiz question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does changing the solver to stochastic gradient ascent affect the number of features?
Building on logistic regression
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver.
End of explanation
"""
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
return derivative
"""
Explanation: Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
In Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
Complete the following code block:
End of explanation
"""
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp
"""
Explanation: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood across all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.
End of explanation
"""
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
"""
Explanation: Quiz Question: Recall from the lecture and the earlier assignment, the log likelihood (without the averaging term) is given by
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
How are the functions $\ell\ell(\mathbf{w})$ and $\ell\ell_A(\mathbf{w})$ related?
Modifying the derivative for stochastic gradient ascent
Recall from the lecture that the gradient for a single data point $\color{red}{\mathbf{x}_i}$ can be computed using the following formula:
$$
\frac{\partial\ell_{\color{red}{i}}(\mathbf{w})}{\partial w_j} = h_j(\color{red}{\mathbf{x}_i})\left(\mathbf{1}[y_\color{red}{i} = +1] - P(y_\color{red}{i} = +1 | \color{red}{\mathbf{x}_i}, \mathbf{w})\right)
$$
Computing the gradient for a single data point
Do we really need to re-write all our code to modify $\partial\ell(\mathbf{w})/\partial w_j$ to $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$?
Thankfully, no! Using NumPy, we access $\mathbf{x}_i$ in the training data using feature_matrix_train[i:i+1,:]
and $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.
We compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ using the following steps:
* First, compute $P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.
* Next, compute $\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].
* Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters.
Let us follow these steps for j = 1 and i = 10:
End of explanation
"""
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
"""
Explanation: Quiz Question: The code block above computed $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ for j = 1 and i = 10. Is $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ a scalar or a 194-dimensional vector?
Modifying the derivative for using a batch of data points
Stochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encourage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment, we will simply use this technique, since the approach is very simple and will improve your results.
Given a mini-batch (or a set of data points) $\mathbf{x}_{i}, \mathbf{x}_{i+1}, \ldots, \mathbf{x}_{i+B}$, the gradient function for this mini-batch of data points is given by:
$$
\color{red}{\sum_{s = i}^{i+B}} \frac{\partial\ell_{s}}{\partial w_j} = \color{red}{\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
Computing the gradient for a "mini-batch" of data points
Using NumPy, we access the points $\mathbf{x}_i, \mathbf{x}_{i+1}, \ldots, \mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:]
and $y_i$ in the training data using sentiment_train[i:i+B].
We can compute $\color{red}{\sum_{s = i}^{i+B}} \partial\ell_{s}/\partial w_j$ easily as follows:
End of explanation
"""
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size] == +1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += step_size * derivative / batch_size
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
"""
Explanation: Quiz Question: The code block above computed
$\color{red}{\sum_{s = i}^{i+B}}\partial\ell_{s}(\mathbf{w})/{\partial w_j}$
for j = 10, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?
Quiz Question: For what value of B is the term
$\color{red}{\sum_{s = 1}^{B}}\partial\ell_{s}(\mathbf{w})/\partial w_j$
the same as the full gradient
$\partial\ell(\mathbf{w})/{\partial w_j}$?
Averaging the gradient across a batch
It is a common practice to normalize the gradient update rule by the batch size B:
$$
\frac{\partial\ell_{\color{red}{A}}(\mathbf{w})}{\partial w_j} \approx \color{red}{\frac{1}{B}} {\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
In other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.
Implementing stochastic gradient ascent
Now we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using gradient ascent:
End of explanation
"""
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
"""
Explanation: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.
End of explanation
"""
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10)
"""
Explanation: Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question: For what value of batch size B above is the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm?
Running gradient ascent using the stochastic gradient ascent implementation
Instead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote — to perform gradient ascent, it suffices to set batch_size to the number of data points in the training data. Yes, we did answer above the quiz question for you, but that is an important point to remember in the future :)
Small Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.
We now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = 1
* max_iter = 10
End of explanation
"""
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=len(feature_matrix_train), max_iter=200)
"""
Explanation: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = len(feature_matrix_train)
* max_iter = 200
End of explanation
"""
print 50000 / 100 * 2
"""
Explanation: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison between stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows):
$$
[\text{# of passes}] = \frac{[\text{# of data points touched so far}]}{[\text{size of dataset}]}
$$
Quiz Question Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points?
End of explanation
"""
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
"""
Explanation: Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
"""
Explanation: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
End of explanation
"""
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
"""
Explanation: Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
End of explanation
"""
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
"""
Explanation: Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.
Stochastic gradient ascent vs batch gradient ascent
To compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.
We are comparing:
* stochastic gradient ascent: step_size = 0.1, batch_size=100
* batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)
Write code to run stochastic gradient ascent for 200 passes using:
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
"""
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
"""
Explanation: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
End of explanation
"""
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
"""
Explanation: Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent?
It's always better
10 passes
20 passes
150 passes or more
Explore the effects of step sizes on stochastic gradient ascent
In previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.
To start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:
* initial_coefficients=np.zeros(194)
* batch_size=100
* max_iter initialized so as to run 10 passes over the data.
End of explanation
"""
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
"""
Explanation: Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot for each of the following values of step_size:
step_size = 1e-4
step_size = 1e-3
step_size = 1e-2
step_size = 1e-1
step_size = 1e0
step_size = 1e1
step_size = 1e2
For consistency, we again apply smoothing_window=30.
End of explanation
"""
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
"""
Explanation: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
End of explanation
"""
|
evanmiltenburg/python-for-text-analysis
|
Chapters-colab/Chapter_03_Strings.ipynb
|
apache-2.0
|
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
"""
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_03_Strings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
# Here are some strings:
string_1 = "Hello, world!"
string_2 = 'I ❤️ cheese' # If you are using Python 2, your computer will not like this.
string_3 = '1,2,3,4,5,6,7,8,9'
"""
Explanation: Chapter 3 - Strings
This notebook uses code snippets and explanations from this course.
In this notebook, we will focus on the string datatype. The first thing you learned was printing a simple sentence: "Hello, world!" This sentence, like any other text, was stored by Python as a string. Here are some reasons why strings are important:
Text is usually represented as a string. Text analysis is the focus of our course, so we will be dealing with strings a lot.
Strings are also used when reading in files: We tell Python which file to open by giving it a filepath, e.g. '../Data/books/HuckFinn.txt'. Don't worry about this for now; we will explain it in block 3.
At the end of this chapter, you will be able to:
define strings and understand their internal representation
understand strings as sequences
use character indices for string slicing
combine strings through printing, concatenation and insertion
compare strings using comparison operators and the in operator
understand strings as immutable objects
work with and understand string methods
understand the difference between args and kwargs
If you want to learn more about these topics, you might find the following links useful:
Documentation: String methods
Documentation: Literal String Interpolation (f-strings)
Explanation: Strings
Explanation: F-strings
Video: Strings - working with text data
Video: Strings
Video: String Indexing and Slicing
If you have questions about this chapter, please contact us (cltl.python.course@gmail.com).
1. Defining and representing strings
A string is a sequence of letters/characters which together form a whole (for instance a word, sentence or entire text). In Python, a string is a type of object for which the value is enclosed by single or double quotes. Let's define a few of them:
End of explanation
"""
# Run this cell to see the error generated by the following line.
restaurant = 'Wendy's'
"""
Explanation: There is no difference in declaring a string with single or double quotes. However, if your string contains a quote symbol it can lead to errors if you try to enclose it with the same quotes.
End of explanation
"""
restaurant = "Wendy's"
# Similarly, we can enclose a string containing double quotes with single quotes:
quotes = 'Using "double" quotes enclosed by a single quote.'
"""
Explanation: In the example above the error indicates that there is something wrong with the letter s. This is because the single quote closes the string we started, and anything after that is unexpected.
To solve this we can enclose the string in double quotes, as follows:
End of explanation
"""
restaurant = 'Wendy\'s'
print(restaurant)
restaurant = "Wendy\"s"
print(restaurant)
"""
Explanation: We can also use the escape character "\" in front of the quote, which will tell Python not to treat this specific quote as the end of the string.
End of explanation
"""
# This example also works with single-quotes.
long_string = "A very long string\n\
can be split into multiple\n\
sentences by appending a newline symbol\n\
to the end of the line."
print(long_string)
"""
Explanation: 1.1 Multi-line strings
Strings in Python can also span across multiple lines, which can be useful for when you have a very long string, or when you want to format the output of the string in a certain way. This can be achieved in two ways:
With single or double quotes, where we manually indicate that the rest of the string continues on the next line with a backslash.
With three single or double quotes.
We will first demonstrate how this would work when you use one double or single quote.
End of explanation
"""
long_string = "A very long string \
can be split into multiple \
sentences by appending a backslash \
to the end of the line."
print(long_string)
"""
Explanation: The \n or newline symbol indicates that we want to start the rest of the text on a new line in the string; the following \ indicates that we want the string to continue on the next line of the code. This difference can be quite hard to understand, but it is best illustrated with an example where we do not include the \n symbol.
End of explanation
"""
long_string = """A very long string
can also be split into multiple
sentences by enclosing the string
with three double or single quotes."""
print(long_string)
print()
another_long_string = '''A very long string
can also be split into multiple
sentences by enclosing the string
with three double or single quotes.'''
print(another_long_string)
"""
Explanation: As you can see, Python now interprets this example as a single line of text. If we use the recommended way in Python to write multiline strings, with triple double or single quotes, you will see that the \n or newline symbol is automatically included.
End of explanation
"""
long_string = "A very long string\
can be split into multiple\
sentences by appending a backslash\
to the end of the line."
print(long_string)
"""
Explanation: What will happen if you remove the backslash characters in the example? Try it out in the cell below.
End of explanation
"""
multiline_text_1 = """This is a multiline text, so it is enclosed by triple quotes.
Pretty cool stuff!
I always wanted to type more than one line, so today is my lucky day!"""
multiline_text_2 = "This is a multiline text, so it is enclosed by triple quotes.\nPretty cool stuff!\nI always wanted to type more than one line, so today is my lucky day!"
print(multiline_text_1)
print() # this just prints an empty line
print(multiline_text_2)
"""
Explanation: 1.2 Internal representation: using repr()
As we have seen above, it is possible to make strings that span multiple lines. Here are two ways to do so:
End of explanation
"""
print(multiline_text_1 == multiline_text_2)
"""
Explanation: Internally, these strings are equally represented. We can check that with the double equals sign, which checks if two objects are the same:
End of explanation
"""
# Show the internal representation of multiline_text_1.
print(repr(multiline_text_1))
print(repr(multiline_text_2))
"""
Explanation: So from this we can conclude that multiline_text_1 has the same hidden characters (in this case \n, which stands for 'new line') as multiline_text_2. You can show that this is indeed true by using the built-in repr() function (which gives you the Python-internal representation of an object).
End of explanation
"""
colors = "yellow\tgreen\tblue\tred"
print(colors)
print(repr(colors))
"""
Explanation: Another hidden character that is often used is \t, which represents tabs:
End of explanation
"""
my_string = "Sandwiches are yummy"
print(my_string[1])
print(my_string[-1])
"""
Explanation: 2. Strings as sequences
2.1 String indices
Strings are simply sequences of characters. Each character in a string therefore has a position, which can be referred to by the index number of the position. The index numbers start at 0 and then increase to the length of the string. You can also start counting backwards using negative indices. The following table shows all characters of the sentence "Sandwiches are yummy" in the first row. The second row and the third row show respectively the positive and negative indices for each character:
| Characters | S | a | n | d | w | i | c | h | e | s | | a | r | e | | y | u | m | m | y |
|----------------|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|----|----|----|
| Positive index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
| Negative index |-20|-19|-18|-17|-16|-15|-14|-13|-12|-11|-10|-9|-8|-7|-6|-5|-4|-3|-2|-1|
You can access the characters of a string as follows:
End of explanation
"""
number_of_characters = len(my_string)
print(number_of_characters) # Note that spaces count as characters too!
"""
Explanation: Length: Python has a built-in function called len() that lets you compute the length of a sequence. It works like this:
End of explanation
"""
my_string = "Sandwiches are yummy"
print(my_string[1:4])
"""
Explanation: 2.2 Slicing and indices applied to strings
Besides using single indices we can also extract a range from a string:
End of explanation
"""
print(my_string[1:4])
print(my_string[1:4:1])
print(my_string[11:14])
print(my_string[15:])
print(my_string[:9])
print('cow'[::2])
print('cow'[::-2])
# a fun trick to reverse sequences:
print(my_string[::-1])
# You can do something similar with lists (you don't have to understand this is detail now - but we'll show you an
# example already, so you've seen it):
my_list = ['a', 'bunch', 'of', 'words']
print(my_list[3])
print(my_list[2:4])
print(my_list[-1])
"""
Explanation: This is called string slicing. So how does this notation work?
python
my_string[i] # Get the character at index i.
my_string[start:end] # Get the substring starting at 'start' and ending *before* 'end'.
my_string[start:end:stepsize] # Get all characters starting from 'start', ending before 'end',
# with a specific step size.
You can also leave parts out:
python
my_string[:i] # Get the substring starting at index 0 and ending just before i.
my_string[i:] # Get the substring starting at i and running all the way to the end.
my_string[::i] # Get a string going from start to end with step size i.
You can also have negative step size. my_string[::-1] is the idiomatic way to reverse a string.
Tip: Slicing and accessing values via indices is very useful and can be applied to other Python objects that have a fixed sequence, such as lists (we will see how in the subsequent notebooks). Try to understand what is going on with string slicing - it will be very helpful in the rest of the course!
Do you know what the following statements will print?
End of explanation
"""
# This is fine, because we are creating a new string. The old one remains unchanged:
fruit = 'guanabana'
island = fruit[:5]
print(island, 'island')
print(fruit, 'fruit')
# This works because we are creating a new string and overwriting our old one
fruit = fruit[5:] + 'na'
print(fruit)
# This attempt to change the ending into `aan' does not work because now we are trying to change an existing string
fruit[4:5] = 'an'
print(fruit)
# We could do this with a list though (don't worry about this yet - it is just meant to show the contrast)
fruits = ['cherry', 'blueberry', 'banana']
fruits[2:3] = ['rasperry', 'kiwi']
fruits
# If we want to modify a string by exchanging characters, we need to do:
fruit = fruit[:4] + 'an'
print(fruit)
"""
Explanation: 3. Immutability
The mutability of an object refers to whether an object can change or not. Strings are immutable, meaning that they cannot be changed. It is possible to create a new string-object based on the old one, but we cannot modify the existing string-object. The cells below demonstrate this.
End of explanation
"""
print('a' == 'a')
print('a' != 'b')
print('a' == 'A') # string comparison is case-sensitive
print('a' < 'b') # alphabetical order
print('A' < 'a') # uppercase comes before lowercase
print('B' < 'a') # uppercase comes before lowercase
print()
print('orange' == 'Orange')
print('orange' > 'Orange')
print('orange' < 'Orange')
print('orange' > 'banana')
print('Orange' > 'banana')
"""
Explanation: The reasons why strings are immutable are beyond the scope of this notebook. Just remember that if you want to modify a string, you need to overwrite the entire string, and you cannot modify parts of it by using individual indices.
4. Comparing strings
In Python it is possible to use comparison operators (as used in conditional statements) on strings. These operators are:
== ('is the same as')
!= ('is not the same as')
< ('is smaller than')
<= ('is the same as or smaller than')
> ('is greater than')
>= ('is the same as or greater than')
Attention
'=' is used to assign a value to a variable whereas '==' is used to compare two values. If you get errors in comparisons, check if you used the correct operator.
Some of these symbols are probably familiar to you from your math classes. Most likely, you have used them before to compare numbers. However, we can also use them to compare strings!
There are a number of things we have to know about python when comparing strings:
String comparison is always case-sensitive
Internally, characters are represented as numerical values, which can be ranked. You can use the smaller than/greater than operators to put words in lexicographical order. This is similar to the alphabetical order you would use with a dictionary, except that all the uppercase letters come before all the lowercase letters (so first A, B, C, etc. and then a, b, c, etc.)
Hint: In practice, you will often use == and !=. The 'greater than' and 'smaller than' operators are used in sorting algorithms (e.g. to sort a list of strings in alphabetical order), but you will hardly ever use them directly to compare strings.
End of explanation
"""
"fun" in "function"
"I" in "Team"
"am" in "Team"
"App" in "apple" # Capitals are not the same as lowercase characters!
"apple" in "apple"
"applepie" in "apple"
"""
Explanation: Another way of comparing strings is to check whether a string is part of another string, which can be done using the in operator. It returns True if the string contains the relevant substring, and False if it doesn't. These two values (True and False) are called boolean values, or booleans for short. We'll talk about them in more detail later. Here are some examples to try (can you predict what will happen before running them?):
End of explanation
"""
print("Hello", "World")
print("Hello " + "World")
"""
Explanation: 5. Printing, concatenating and inserting strings
You will often find yourself concatenating and printing combinations of strings. Consider the following examples:
End of explanation
"""
number = 5
print("I have", number, "apples")
"""
Explanation: Even though they may look similar, there are two different things happening here. Simply said: the plus in the expression is doing concatenation, but the comma is not doing concatenation.
The 'print()' function, which we have seen many times now, will print everything in a comma-separated sequence of expressions to your screen as strings, and it will separate the results with single blanks by default. Note that you can mix types: anything that is not already a string is automatically converted to its string representation.
End of explanation
"""
number = 5
print("I have " + str(number) + " apples")
"""
Explanation: String concatenation, on the other hand, happens when we merge two strings into a single object using the + operator. No single blanks are inserted, and you cannot concatenate mixed types. So, if you want to merge a string and an integer, you will need to convert the integer to a string.
End of explanation
"""
my_string = "I have " + str(number) + " apples"
print(my_string)
"""
Explanation: Optionally, we can assign the concatenated string to a variable:
End of explanation
"""
my_string = "apples " * 5
print(my_string)
"""
Explanation: In addition to using + to concatenate strings, we can also use the multiplication sign * in combination with an integer for repeating strings (note that we again need to add a blank after 'apples' if we want it to be inserted):
End of explanation
"""
print("Hello", "World")
print("Hello" + "World")
print("Hello " + "World")
print(5, "eggs")
print(str(5), "eggs")
print(5 + " eggs")
print(str(5) + " eggs")
text = "Hello" + "World"
print(text)
print(type(text))
text = "Hello", "World"
print(text)
print(type(text))
"""
Explanation: The difference between "," and "+" when printing and concatenating strings can be confusing at first. Have a look at these examples to get a better sense of their differences.
End of explanation
"""
name = "Pia"
age = 26
country = "Austria"
residence = "The Netherlands"
introduction = "Hello. My name is " + name + ". I'm " + str(age) + " years old and I'm from " + country + \
", but I live in "+ residence +'.'
print(introduction)
"""
Explanation: 5.1 Using f-strings
We can imagine that string concatenation can get rather confusing and unreadable if we have more variables. Consider the following example:
End of explanation
"""
name="Pia"
age=26
country="Austria"
residence = "The Netherlands"
introduction = f"Hello. My name is {name}. I'm {age} years old and I'm from {country}, but I live in {residence}."
introduction
"""
Explanation: Luckily, there is a way to make the code a lot more easy to understand and nicely formatted. In Python, you can use a
string formatting mechanism called Literal String Interpolation. Strings that are formatted using this mechanism are called f-strings, after the leading character used to denote such strings, and standing for "formatted strings". It works as follows:
End of explanation
"""
text = f"Soon, I'm turning {age+1} years."
print(text)
"""
Explanation: We can even do cool stuff like this with f-strings:
End of explanation
"""
string_1 = 'Hello, world!'
print(string_1) # The original string.
print(string_1.lower()) # Lowercased.
print(string_1.upper()) # Uppercased.
"""
Explanation: Other formatting methods that you may come across include %-formatting and str.format(), but we recommend that you use f-strings because they are the most intuitive.
Using f-strings can be extremely useful if you're dealing with a lot of data you want to modify in a similar way. Suppose you want to create many new files containing data and name them according to a specific system. You can create a kind of template name and then fill in specific information using variables. (More about files later.)
6. String methods
A method is a function that is associated with an object. For example, the string-method lower() turns a string into all lowercase characters, and the string method upper() makes strings uppercase. You can call this method using the dot-notation as shown below:
End of explanation
"""
# Run this cell to see all methods for strings
dir(str)
"""
Explanation: 6.1 Learning about methods
So how do you find out what kind of methods an object has? There are two options:
Read the documentation. See here for the string methods.
Use the dir() function, which returns a list of method names (as well as attributes of the object). If you want to know what a specific method does, use the help() function.
Run the code below to see what the output of dir() looks like.
The method names that start and end with double underscores ('dunder methods') are Python-internal. They are what makes general functions like len() work (len() internally calls the string.__len__() method), and cause Python to know what to do when you, for example, use a for-loop with a string.
The other method names indicate common and useful methods.
End of explanation
"""
help(str.upper)
"""
Explanation: If you'd like to know what one of these methods does, you can just use help() (or look it up online):
End of explanation
"""
x = 'test' # Defining x.
y = x.upper() # Using x.upper(), assigning the result to variable y.
print(y) # Print y.
print(x) # Print x. It is unchanged.
"""
Explanation: It's important to note that string methods only return the result. They do not change the string itself.
End of explanation
"""
# Find out more about each of the methods used below by changing the name of the method
help(str.strip)
s = ' Humpty Dumpty sat on the wall '
print(s)
s = s.strip()
print(s)
print(s.upper())
print(s.lower())
print(s.count("u"))
print(s.count("U"))
print(s.find('sat'))
print(s.find('t', 12))
print(s.find('q', 12))
print(s.replace('sat on', 'fell off'))
words = s.split() # This returns a list, which we will talk about later.
for word in words: # But you can iterate over each word in this manner
print(word.capitalize())
print('-'.join(words))
"""
Explanation: Below we illustrate some of the string methods. Try to understand what is happening. Use the help() function to find more information about each of these methods.
End of explanation
"""
print("A message").
print("A message')
print('A message"')
"""
Explanation: Exercises
Exercise 1:
Can you identify and explain the errors in the following lines of code? Correct them please!
End of explanation
"""
my_string = "Sandwiches are yummy"
# your code here
"""
Explanation: Exercise 2:
Can you print the following? Try using both positive and negative indices.
the letter 'd' in my_string
the letter 'c' in my_string
End of explanation
"""
# your code here
"""
Explanation: Can you print the following? Try using both positive and negative indices.
make a new string containing your first name and print its first letter
print the number of letters in your name
End of explanation
"""
# your code here
"""
Explanation: Exercise 3:
Can you print all a's in the word 'banana'?
End of explanation
"""
# your code here
"""
Explanation: Can you print 'banana' in reverse ('ananab')?
End of explanation
"""
my_string = "banana"
new_string = # your code here
"""
Explanation: Can you exchange the first and last characters in my_string ('aananb')? Create a new variable new_string to store your result.
End of explanation
"""
name = "Bruce Banner"
alterego = "The Hulk"
colour = "Green"
country = "USA"
print("His name is" + name + "and his alter ego is" + alterego +
", a big" + colour + "superhero from the" + country + ".")
"""
Explanation: Exercise 4:
Find a way to fix the spacing problem below keeping the "+".
End of explanation
"""
name = "Bruce Banner"
alterego = "The Hulk"
colour = "Green"
country = "USA"
print("His name is" + name + "and his alter ego is" + alterego +
", a big" + colour + "superhero from the" + country + ".")
"""
Explanation: How would you print the same sentence using ","?
End of explanation
"""
name = "Bruce Banner"
alterego = "The Hulk"
colour = "green"
country = "the USA"
birth_year = 1969
current_year = 2017
print("His name is " + name + " and his alter ego is " + alterego +
", a big " + colour + " superhero from " + country + ". He was born in " + str(birth_year) +
", so he must be " + str(current_year - birth_year - 1) + " or " + str(current_year - birth_year) +
" years old now.")
"""
Explanation: Can you rewrite the code below using an f-string?
End of explanation
"""
my_string = "banana"
# your code here
"""
Explanation: Exercise 5:
Replace all a's by o's in 'banana' using a string method.
End of explanation
"""
my_string = "Humpty Dumpty sat on the wall"
# your code here
"""
Explanation: Remove all spaces in the sentence using a string method.
End of explanation
"""
# find out what lstrip() and rstrip() do
"""
Explanation: What do the methods lstrip() and rstrip() do? Try them out below.
End of explanation
"""
# find out what startswith() and endswith() do
"""
Explanation: What do the methods startswith() and endswith() do? Try them out below.
End of explanation
"""
|
google/starthinker
|
colabs/anonymize.ipynb
|
apache-2.0
|
!pip install git+https://github.com/google/starthinker
"""
Explanation: BigQuery Anonymize Dataset
Copies tables and views from one dataset to another and anonymizes all rows. Used to create sample datasets for dashboards.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
'auth_read':'service', # Credentials used.
'from_project':'', # Original project to read from.
'from_dataset':'', # Original dataset to read from.
  'to_project':None, # Anonymous data will be written to.
  'to_dataset':'', # Anonymous data will be written to.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter BigQuery Anonymize Dataset Recipe Parameters
Ensure you have user access to both datasets.
Provide the source project and dataset.
Provide the destination project and dataset.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'anonymize':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'service','description':'Credentials used.'}},
'bigquery':{
'from':{
'project':{'field':{'name':'from_project','kind':'string','order':1,'description':'Original project to read from.'}},
'dataset':{'field':{'name':'from_dataset','kind':'string','order':2,'description':'Original dataset to read from.'}}
},
'to':{
        'project':{'field':{'name':'to_project','kind':'string','order':3,'default':None,'description':'Anonymous data will be written to.'}},
        'dataset':{'field':{'name':'to_dataset','kind':'string','order':4,'description':'Anonymous data will be written to.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute BigQuery Anonymize Dataset
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
stephank16/enes_graph_use_case
|
prov_templates/Data_ingest_use_case_templates.ipynb
|
gpl-3.0
|
# import variable setting dictionaries from dkrz data ingest tool chain
# and remove __doc__ strings from dictionary (would clutter PROV graph visualizations)
from provtemplates import workflow_steps
from collections.abc import MutableMapping
from contextlib import suppress
def delete_keys_from_dict(dictionary, keys):
for key in keys:
with suppress(KeyError):
del dictionary[key]
for value in dictionary.values():
if isinstance(value, MutableMapping):
delete_keys_from_dict(value, keys)
workflow_dict = workflow_steps.WORKFLOW_DICT
from provtemplates import provconv
import prov.model as prov
import six
import itertools
from provtemplates import workflow_steps
ns_dict = {
'prov':'http://www.w3.org/ns/prov#',
'var':'http://openprovenance.org/var#',
'vargen':'http://openprovenance.org/vargen#',
'tmpl':'http://openprovenance.org/tmpl#',
'foaf':'http://xmlns.com/foaf/0.1/',
'ex': 'http://example.org/',
'orcid':'http://orcid.org/',
#document.set_default_namespace('http://example.org/0/')
'rdf':'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
'rdfs':'http://www.w3.org/2000/01/rdf-schema#',
'xsd':'http://www.w3.org/2001/XMLSchema#',
'ex1': 'http://example.org/1/',
'ex2': 'http://example.org/2/'
}
prov_doc01 = provconv.set_namespaces(ns_dict,prov.ProvDocument())
prov_doc02 = provconv.set_namespaces(ns_dict,prov.ProvDocument())
prov_doc03 = provconv.set_namespaces(ns_dict,prov.ProvDocument())
prov_doc1 = prov_doc01.bundle("var:data-ingest-wflow")
prov_doc2 = prov_doc02.bundle("var:data-ingest-wflow")
prov_doc3 = prov_doc03.bundle("var:data-ingest-wflow")
prov_doc01.set_default_namespace('http://enes.org/ns/ingest#')
prov_doc02.set_default_namespace('http://enes.org/ns/ingest#')
prov_doc03.set_default_namespace('http://enes.org/ns/ingest#')
def gen_bundles(workflow_dict,prov_doc):
global_in_out = prov_doc.entity('var:wf_doc')
for wflow_step, wflow_stepdict in workflow_dict.items():
nbundle = prov_doc.bundle('var:'+wflow_step)
out_node = nbundle.entity('var:'+wflow_step+'_out')
agent = nbundle.agent('var:'+wflow_step+'_agent')
activity = nbundle.activity('var:'+wflow_step+'_activity')
in_node = nbundle.entity('var:'+wflow_step+'_in')
nbundle.wasGeneratedBy(out_node,activity)
nbundle.used(activity,in_node)
nbundle.wasAssociatedWith(activity,agent)
nbundle.wasDerivedFrom(in_node,out_node)
nbundle.used(activity,global_in_out)
nbundle.wasGeneratedBy(global_in_out,activity)
def in_bundles(workflow_dict,prov_doc):
first = True
out_nodes = []
nbundle = prov_doc
for wflow_step, wflow_stepdict in workflow_dict.items():
#nbundle = prov_doc.bundle('var:'+wflow_step)
out_node = nbundle.entity('var:'+wflow_step+'_out')
agent = nbundle.agent('var:'+wflow_step+'_agent')
activity = nbundle.activity('var:'+wflow_step+'_activity')
if first:
in_node = nbundle.entity('var:'+wflow_step+'_in')
nbundle.used(activity,in_node)
first = False
out_nodes.append((nbundle,out_node,agent,activity))
return out_nodes
def chain_bundles(nodes):
'''
chaining based on "used" activity relationship
'''
i = 1
for (nbundle,out_node,agent,activity) in nodes[1:]:
(prev_bundle,prev_out,prev_agent,prev_activity) = nodes[i-1]
nbundle.used(activity,prev_out)
i += 1
for (nbundle,out_node,agent,activity) in nodes:
nbundle.wasGeneratedBy(out_node,activity)
nbundle.wasAssociatedWith(activity,agent)
def chain_hist_bundles(nodes,prov_doc):
'''
chaining based on "used" activity relationship
add an explicit end_result composing all the generated
intermediate results
'''
i = 1
for (nbundle,out_node,agent,activity) in nodes[1:]:
(prev_bundle,prev_out,prev_agent,prev_activity) = nodes[i-1]
nbundle.used(activity,prev_out)
i += 1
for (nbundle,out_node,agent,activity) in nodes:
nbundle.wasGeneratedBy(out_node,activity)
nbundle.wasAssociatedWith(activity,agent)
wf_out = prov_doc.entity("ex:wf_result")
wf_agent = prov_doc.agent("ex:workflow_handler")
wf_activity = prov_doc.activity("ex:wf_trace_composition")
prov_doc.wasGeneratedBy(wf_out,wf_activity)
prov_doc.wasAssociatedWith(wf_activity,wf_agent)
for (nbundle,out_node,agent,activity) in nodes:
prov_doc.used(wf_activity,out_node)
"""
Explanation: ENES use case 1: data ingest workflow at data center
Approach:
Step1: Generate prov template based on workflow description
(copied from existing data ingest workflow handling software)
Result: can be done in a few lines of python code
Step2: experiment with prov template expansion
core problem is the best option to
represent the workflow in a provenance graph - many different options ..
thus this notebook is used to present different options for representation of
the provenance template as a basis for discussion:
Step3: Discussion of different template representations of a specific workflow
End of explanation
"""
# generate prov_template options and print provn representation
gen_bundles(workflow_dict,prov_doc01)
print(prov_doc01.get_provn())
%matplotlib inline
prov_doc01.plot()
prov_doc01.serialize('data-ingest1.rdf',format='rdf')
"""
Explanation: Template representation variant 1
bundles for each workflow step
(characterized by output, activity, and agent with relationships)
every activity uses information from a global provenance log file (used relationship)
and every activity updates parts of a global provenance log file (was generated by relationship)
NB: this does not produce valid ProvTemplates, as multiple bundles are used
End of explanation
"""
nodes = in_bundles(workflow_dict,prov_doc2)
chain_bundles(nodes)
print(prov_doc02.get_provn())
%matplotlib inline
prov_doc02.plot()
from prov.dot import prov_to_dot
dot = prov_to_dot(prov_doc02)
prov_doc02.serialize('ingest-prov-version2.rdf',format='rdf')
dot.write_png('ingest-prov-version2.png')
"""
Explanation: Template representation variant 2:
workflow steps without bundles
workflow steps are chained (output is input to next step)
End of explanation
"""
gnodes = in_bundles(workflow_dict,prov_doc3)
chain_hist_bundles(gnodes,prov_doc3)
print(prov_doc03.get_provn())
dot = prov_to_dot(prov_doc03)
dot.write_png('ingest-prov-version3.png')
%matplotlib inline
prov_doc03.plot()
prov_doc03.serialize('data-ingest3.rdf',format='rdf')
# ------------------ to be removed --------------------------------------
# generate prov_template options and print provn representation
gen_bundles(workflow_dict,prov_doc1)
print(prov_doc1.get_provn())
nodes = in_bundles(workflow_dict,prov_doc2)
chain_bundles(nodes)
print(prov_doc2.get_provn())
gnodes = in_bundles(workflow_dict,prov_doc3)
chain_hist_bundles(gnodes,prov_doc3)
print(prov_doc3.get_provn())
%matplotlib inline
prov_doc1.plot()
prov_doc2.plot()
prov_doc3.plot()
"""
Explanation: Template representation variant 3:
workflow steps without bundles
workflow steps are chained (output is input to next step)
global workflow representation generation added
End of explanation
"""
|
blue-yonder/tsfresh
|
notebooks/examples/02 sklearn Pipeline.ipynb
|
mit
|
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from tsfresh.examples import load_robot_execution_failures
from tsfresh.transformers import RelevantFeatureAugmenter
from tsfresh.utilities.dataframe_functions import impute
"""
Explanation: Feature Selection in a sklearn pipeline
This notebook is quite similar to the first example.
This time however, we use the sklearn pipeline API of tsfresh.
If you want to learn more, have a look at the documentation.
End of explanation
"""
from tsfresh.examples.robot_execution_failures import download_robot_execution_failures
download_robot_execution_failures()
df_ts, y = load_robot_execution_failures()
"""
Explanation: Load and Prepare the Data
Check out the first example notebook to learn more about the data and format.
End of explanation
"""
X = pd.DataFrame(index=y.index)
# Split data into train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y)
"""
Explanation: We want to use the extracted features to predict for each of the robot executions, if it was a failure or not.
Therefore our basic "entity" is a single robot execution given by a distinct id.
A dataframe with these identifiers as index needs to be prepared for the pipeline.
End of explanation
"""
ppl = Pipeline([
('augmenter', RelevantFeatureAugmenter(column_id='id', column_sort='time')),
('classifier', RandomForestClassifier())
])
"""
Explanation: Build the pipeline
We build a sklearn pipeline that consists of a feature extraction step (RelevantFeatureAugmenter) with a subsequent RandomForestClassifier.
The RelevantFeatureAugmenter takes roughly the same arguments as extract_features and select_features do.
End of explanation
"""
ppl.set_params(augmenter__timeseries_container=df_ts);
"""
Explanation: <div class="alert alert-warning">
Here comes the tricky part!
The input to the pipeline will be our dataframe `X`, which has one row per identifier.
It is currently empty.
But which time series data should the `RelevantFeatureAugmenter` to actually extract the features from?
We need to pass the time series data (stored in `df_ts`) to the transformer.
</div>
In this case, df_ts contains the time series of both the train and test set. If you have different dataframes for
the train and test set, you have to call set_params twice
(see further below on how to deal with two independent data sets)
End of explanation
"""
ppl.fit(X_train, y_train)
"""
Explanation: We are now ready to fit the pipeline
End of explanation
"""
y_pred = ppl.predict(X_test)
"""
Explanation: The augmenter has used the input time series data to extract time series features for each of the identifiers in X_train and selected only the relevant ones, using the passed y_train as target.
These features have been added to X_train as new columns.
The classifier can now use these features during training.
Prediction
During inference, the augmenter only extracts the relevant features it identified in the training phase, and the classifier predicts the target using these features.
End of explanation
"""
print(classification_report(y_test, y_pred))
"""
Explanation: So, finally we inspect the performance:
End of explanation
"""
ppl.named_steps["augmenter"].feature_selector.relevant_features
"""
Explanation: You can also find out, which columns the augmenter has selected
End of explanation
"""
df_ts_train = df_ts[df_ts["id"].isin(y_train.index)]
df_ts_test = df_ts[df_ts["id"].isin(y_test.index)]
ppl.set_params(augmenter__timeseries_container=df_ts_train);
ppl.fit(X_train, y_train);
import pickle
with open("pipeline.pkl", "wb") as f:
pickle.dump(ppl, f)
"""
Explanation: <div class="alert alert-info">
In this example we passed in an empty (except the index) `X_train` or `X_test` into the pipeline.
However, you can also fill the input with other features you have (e.g. features extracted from the metadata)
or even use other pipeline components before.
</div>
Separating the time series data containers
In the example above we passed in a single df_ts into the RelevantFeatureAugmenter, which was used both for training and predicting.
During training, only the data with the ids from X_train were extracted, and during prediction the rest.
However, it is perfectly fine to call set_params twice: once before training and once before prediction.
This can be handy if you for example dump the trained pipeline to disk and re-use it only later for prediction.
You only need to make sure that the ids of the entities you use during training/prediction are actually present in the passed time series data.
End of explanation
"""
import pickle
with open("pipeline.pkl", "rb") as f:
    ppl = pickle.load(f)
ppl.set_params(augmenter__timeseries_container=df_ts_test);
y_pred = ppl.predict(X_test)
print(classification_report(y_test, y_pred))
"""
Explanation: Later: load the fitted model and do predictions on new, unseen data
End of explanation
"""
|
sysid/nbs
|
lstm/SequenceClassification_LSTM_CNN.ipynb
|
mit
|
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
# step redundant, eval part of training already
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
"""
Explanation: Sequence Classification with LSTM Recurrent Neural Networks in Python with Keras
http://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/
Sequence classification is a predictive modeling problem where you have some sequence of inputs over space or time and the task is to predict a category for the sequence.
What makes this problem difficult is that the sequences can vary in length, be comprised of a very large vocabulary of input symbols and may require the model to learn the long term context or dependencies between symbols in the input sequence.
We will map each word onto a 32-length real-valued vector. We will also limit the total number of words that we are interested in modeling to the 5000 most frequent words, and zero out the rest. Finally, the sequence length (number of words) in each review varies, so we will constrain each review to be 500 words, truncating long reviews and padding the shorter reviews with zero values.
End of explanation
"""
from keras.layers import Dropout
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length, dropout=0.2))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
# Final evaluation of the model
#scores = model.evaluate(X_test, y_test, verbose=0)
#print("Accuracy: %.2f%%" % (scores[1]*100))
"""
Explanation: You can see that this simple LSTM with little tuning achieves near state-of-the-art results on the IMDB problem. Importantly, this is a template that you can use to apply LSTM networks to your own sequence classification problems.
LSTM For Sequence Classification With Dropout
Recurrent Neural networks like LSTM generally have the problem of overfitting.
Dropout can be applied between layers using the Dropout Keras layer. We can do this easily by adding new Dropout layers between the Embedding and LSTM layers and the LSTM and Dense output layers. We can also add dropout to the input on the Embedded layer by using the dropout parameter. For example:
End of explanation
"""
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length, dropout=0.2))
model.add(LSTM(100, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
"""
Explanation: We can see dropout having the desired impact on training, with a slightly slower trend in convergence and, in this case, a lower final accuracy. The model could probably use a few more epochs of training and may achieve a higher skill (try it and see).
Alternately, dropout can be applied to the input and recurrent connections of the memory units with the LSTM precisely and separately.
Keras provides this capability with parameters on the LSTM layer, the dropout_W for configuring the input dropout and dropout_U for configuring the recurrent dropout. For example, we can modify the first example to add dropout to the input and recurrent connections as follows:
End of explanation
"""
from keras.layers.convolutional import Convolution1D
from keras.layers.convolutional import MaxPooling1D
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(Convolution1D(nb_filter=32, filter_length=3, border_mode='same', activation='relu'))
model.add(MaxPooling1D(pool_length=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
%%capture output
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=1, batch_size=64)
output.show()
"""
Explanation: We can see that the LSTM specific dropout has a more pronounced effect on the convergence of the network than the layer-wise dropout. As above, the number of epochs was kept constant and could be increased to see if the skill of the model can be further lifted.
Dropout is a powerful technique for combating overfitting in your LSTM models and it is a good idea to try both methods, but you may bet better results with the gate-specific dropout provided in Keras.
LSTM and Convolutional Neural Network For Sequence Classification
Convolutional neural networks excel at learning the spatial structure in input data.
The IMDB review data does have a one-dimensional spatial structure in the sequence of words in reviews and the CNN may be able to pick out invariant features for good and bad sentiment. These learned spatial features may then be learned as sequences by an LSTM layer.
We can easily add a one-dimensional CNN and max pooling layers after the Embedding layer which then feed the consolidated features to the LSTM. We can use a smallish set of 32 features with a small filter length of 3. The pooling layer can use the standard length of 2 to halve the feature map size.
For example, we would create the model as follows:
End of explanation
"""
|
statsmodels/statsmodels.github.io
|
v0.13.2/examples/notebooks/generated/generic_mle.ipynb
|
bsd-3-clause
|
import numpy as np
from scipy import stats
import statsmodels.api as sm
from statsmodels.base.model import GenericLikelihoodModel
"""
Explanation: Maximum Likelihood Estimation (Generic models)
This tutorial explains how to quickly implement new maximum likelihood models in statsmodels. We give two examples:
Probit model for binary dependent variables
Negative binomial model for count data
The GenericLikelihoodModel class eases the process by providing tools such as automatic numeric differentiation and a unified interface to scipy optimization functions. Using statsmodels, users can fit new MLE models simply by "plugging-in" a log-likelihood function.
Example 1: Probit model
End of explanation
"""
data = sm.datasets.spector.load_pandas()
exog = data.exog
endog = data.endog
print(sm.datasets.spector.NOTE)
print(data.exog.head())
"""
Explanation: The Spector dataset is distributed with statsmodels. You can access a vector of values for the dependent variable (endog) and a matrix of regressors (exog) like this:
End of explanation
"""
exog = sm.add_constant(exog, prepend=True)
"""
Explanation: Then, we add a constant to the matrix of regressors:
End of explanation
"""
class MyProbit(GenericLikelihoodModel):
def loglike(self, params):
exog = self.exog
endog = self.endog
q = 2 * endog - 1
return stats.norm.logcdf(q*np.dot(exog, params)).sum()
"""
Explanation: To create your own Likelihood Model, you simply need to overwrite the loglike method.
End of explanation
"""
sm_probit_manual = MyProbit(endog, exog).fit()
print(sm_probit_manual.summary())
"""
Explanation: Estimate the model and print a summary:
End of explanation
"""
sm_probit_canned = sm.Probit(endog, exog).fit()
print(sm_probit_canned.params)
print(sm_probit_manual.params)
print(sm_probit_canned.cov_params())
print(sm_probit_manual.cov_params())
"""
Explanation: Compare your Probit implementation to statsmodels' "canned" implementation:
End of explanation
"""
import numpy as np
from scipy.stats import nbinom
def _ll_nb2(y, X, beta, alph):
mu = np.exp(np.dot(X, beta))
size = 1/alph
prob = size/(size+mu)
ll = nbinom.logpmf(y, size, prob)
return ll
"""
Explanation: Notice that the GenericLikelihoodModel class provides automatic differentiation, so we did not have to provide Hessian or Score functions in order to calculate the covariance estimates.
Example 2: Negative Binomial Regression for Count Data
Consider a negative binomial regression model for count data with
log-likelihood (type NB-2) function expressed as:
$$
\mathcal{L}(\beta_j; y, \alpha) = \sum_{i=1}^n y_i ln
\left ( \frac{\alpha exp(X_i'\beta)}{1+\alpha exp(X_i'\beta)} \right ) -
\frac{1}{\alpha} ln(1+\alpha exp(X_i'\beta)) + ln \Gamma (y_i + 1/\alpha) - ln \Gamma (y_i+1) - ln \Gamma (1/\alpha)
$$
with a matrix of regressors $X$, a vector of coefficients $\beta$,
and the negative binomial heterogeneity parameter $\alpha$.
Using the nbinom distribution from scipy, we can write this likelihood
simply as:
End of explanation
"""
from statsmodels.base.model import GenericLikelihoodModel
class NBin(GenericLikelihoodModel):
def __init__(self, endog, exog, **kwds):
super(NBin, self).__init__(endog, exog, **kwds)
def nloglikeobs(self, params):
alph = params[-1]
beta = params[:-1]
ll = _ll_nb2(self.endog, self.exog, beta, alph)
return -ll
def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds):
# we have one additional parameter and we need to add it for summary
self.exog_names.append('alpha')
        if start_params is None:
# Reasonable starting values
start_params = np.append(np.zeros(self.exog.shape[1]), .5)
# intercept
start_params[-2] = np.log(self.endog.mean())
return super(NBin, self).fit(start_params=start_params,
maxiter=maxiter, maxfun=maxfun,
**kwds)
"""
Explanation: New Model Class
We create a new model class which inherits from GenericLikelihoodModel:
End of explanation
"""
import statsmodels.api as sm
medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data
medpar.head()
"""
Explanation: Two important things to notice:
nloglikeobs: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix).
start_params: A one-dimensional array of starting values needs to be provided. The size of this array determines the number of parameters that will be used in optimization.
That's it! You're done!
Usage Example
The Medpar
dataset is hosted in CSV format at the Rdatasets repository. We use the read_csv
function from the Pandas library to load the data
in memory. We then print the first few columns:
End of explanation
"""
y = medpar.los
X = medpar[["type2", "type3", "hmo", "white"]].copy()
X["constant"] = 1
"""
Explanation: The model we are interested in has a vector of non-negative integers as
dependent variable (los), and 5 regressors: Intercept, type2,
type3, hmo, white.
For estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects.
End of explanation
"""
mod = NBin(y, X)
res = mod.fit()
"""
Explanation: Then, we fit the model and extract some information:
End of explanation
"""
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('P-values: ', res.pvalues)
print('AIC: ', res.aic)
"""
Explanation: Extract parameter estimates, standard errors, p-values, AIC, etc.:
End of explanation
"""
print(res.summary())
"""
Explanation: As usual, you can obtain a full list of available information by typing
dir(res).
We can also look at the summary of the estimation results.
End of explanation
"""
res_nbin = sm.NegativeBinomial(y, X).fit(disp=0)
print(res_nbin.summary())
print(res_nbin.params)
print(res_nbin.bse)
"""
Explanation: Testing
We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.
End of explanation
"""
|
JoseGuzman/myIPythonNotebooks
|
pub/Migration velocity.ipynb
|
gpl-2.0
|
%pylab inline
import pandas as pd
# read CSV file in pandas
mydf = pd.read_csv('.data/Julie_R1_Bef_S4_cell123_Position.csv', skiprows=2)
mydf.head()
"""
Explanation: <H1>Migration velocity</H1>
<P> To compute the velocity of the trajectories of several particles, we generated a file with the 3D coordinates (Position X, Position Y and Position Z) acquired every 10 minutes.
End of explanation
"""
# get basic information
print('Number of samples %d'%len(mydf))
print('Number of particles = %d'%len(mydf['TrackID'].unique()))
print('Distance units = %s'%mydf['Unit'][0])
# get TrackIDs
TrackID = mydf['TrackID'].unique()
# select only locations, sampling points and TrackIDs
df = mydf[['Position X','Position Y', 'Position Z', 'Time','TrackID']]
df0 = df.loc[df['TrackID'] == TrackID[0]]
df1 = df.loc[df['TrackID'] == TrackID[1]]
df2 = df.loc[df['TrackID'] == TrackID[2]]
counter = 0
for i in TrackID:
mysize = len( df.loc[df['TrackID'] == i] )
counter +=mysize
print('Number of samples in TrackID = %d is %d'%(i,mysize))
print('Total number of samples %d'%counter)
df0.head() # show first values of first particle
# collect a list of 3d coordinates
P0 = list(zip(df0['Position X'], df0['Position Y'], df0['Position Z']))
P1 = list(zip(df1['Position X'], df1['Position Y'], df1['Position Z']))
P2 = list(zip(df2['Position X'], df2['Position Y'], df2['Position Z']))
P0[0] # test the values are correct
"""
Explanation: <H2>Show basic file information</H2>
End of explanation
"""
def distance(myarray):
"""
Calculate the distance between 2 3D coordinates along the
axis of the numpy array.
"""
# slice() method is useful for large arrays
# see diff in ./local/lib/python2.7/site-packages/numpy/lib/function_base.py
a = np.asanyarray(myarray)
slice1 = [slice(None)] # create a slice type object
slice2 = [slice(None)]
slice1[-1] = slice(1, None) # like array[1:]
slice2[-1] = slice(None, -1) # like array[:-1]
slice1 = tuple(slice1)
slice2 = tuple(slice2)
# calculate sqrt( dx^2 + dy^2 + dz^2)
sum_squared = np.sum( np.power(a[slice2]-a[slice1],2), axis=1)
return np.sqrt( sum_squared)
"""
Explanation: <H2>Compute Euclidean distances</H2>
End of explanation
"""
# retrieve time vector
#dt = 10 # sampling interval in minutes
dt = 0.1666 # sampling interval in hours
t0 = df0['Time'].values*dt
print(len(t0))
D0 = distance(P0) # in um
S0 = D0/10. # speed in um/min
t0 = t0[:-1] # when plotting speeds we do not need the last sampling point
plt.plot(t0, S0, color = '#006400')
plt.ylabel('Speed (um/min)'),
plt.xlabel('Time (hours)')
plt.title('Particle %d'%TrackID[0]);
"""
Explanation: <H2>Velocities</H2>
<P>This is simply the distance if sampling time is constant </P>
End of explanation
"""
print('Track duration %2.4f min'%(len(t0)*10.))
print('total traveled distances = %2.4f um'%np.sum(D0))
print('total average speed = %2.4f um/min'%S0.mean())
# retrieve time vector and calculate speed
dt = 0.1666 # sampling interval in hours
t1 = df1['Time'].values*dt
D1 = distance(P1) # in um
S1 = D1/10. #um/min
t1 = t1[:-1]
plt.plot(t1, S1, color = '#4169E1')
plt.ylabel('Speed (um/min)'),
plt.xlabel('Time (hours)')
plt.title('Particle %d'%TrackID[1]);
print('Track duration %2.4f min'%(len(t1)*10.))
print('total traveled distances = %2.4f um'%np.sum(D1))
print('total average speed = %2.4f um/min'%S1.mean())
# retrieve time vector and calculate speed
dt = 0.1666 # sampling interval in hours
t2 = df2['Time'].values*dt
D2 = distance(P2) # in um
S2 = D2/10. #um/min
t2 = t2[:-1]
plt.plot(t2, S2, color = '#800080')
plt.xlabel('Time (hours)')
plt.ylabel('Speed (um/min)'), plt.title('Particle %d'%TrackID[2]);
print('Track duration %2.4f min'%(len(t2)*10.))
print('total traveled distances = %2.4f um'%np.sum(D2))
print('total average speed = %2.4f um/min'%S2.mean())
#Overlap
plt.plot(t0, S0, color = '#006400');
plt.plot(t1, S1, color = '#4169E1');
plt.plot(t2, S2, color = '#800080');
plt.xlabel('Time (hours)');
plt.ylabel('Speed (um/min)'), plt.title('All Particles');
"""
Explanation: <H2>Particle information</H2>
End of explanation
"""
S0_norm = S0/np.max(S0)
S1_norm = S1/np.max(S1)
S2_norm = S2/np.max(S2)
#Overlap
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
ax1.plot(t0, S0_norm, color = 'darkgreen', alpha=0.5)
ax2.plot(t1, S1_norm, color = 'royalblue')
ax3.plot(t2, S2_norm, color = 'purple')
#ax3.plot(np.arange(1500), mysin, color= 'cyan')
ax3.set_xlabel('Time (hours)');
for ax in fig.axes:
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
ax.get_yaxis().set_visible(False)
ax.get_xaxis().set_visible(False)
#ax.axis('Off')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax3.get_xaxis().set_visible(True)
ax.get_xaxis().set_ticks(np.arange(0,25,5))
ax3.spines['bottom'].set_visible(True)
ax3.spines['left'].set_visible(True)
"""
Explanation: <H2>Show normalized speeds</H2>
End of explanation
"""
n = len(S0) # length of the signal
k = np.arange(n)
T = n*dt
frq = k/T # two sides frequency range
frq = frq[range(n//2)] # one side frequency range
Y0 = np.fft.fft(S0)/n # fft computing and normalization
Y0 = Y0[range(n//2)]
plt.plot(frq, abs(Y0),color = 'darkgreen') # plotting the spectrum
plt.xlabel('Freq (hours)')
plt.ylabel('|Y(freq)|')
#plt.ylim(ymax=0.02)
n = len(S1) # length of the signal
k = np.arange(n)
T = n*dt
frq = k/T # two sides frequency range
frq = frq[range(n//2)] # one side frequency range
Y1 = np.fft.fft(S1)/n # fft computing and normalization
Y1 = Y1[range(n//2)]
plt.plot(frq, abs(Y0),color = 'darkgreen') # plotting the spectrum
plt.plot(frq, abs(Y1),color = 'royalblue') # plotting the spectrum
plt.xlabel('Freq (hours)')
plt.ylabel('|Y(freq)|')
plt.ylim(ymax = 0.1)
"""
Explanation: <H2>Fourier transform</H2>
End of explanation
"""
|
google/jax
|
docs/notebooks/vmapped_log_probs.ipynb
|
apache-2.0
|
import functools
import itertools
import re
import sys
import time
from matplotlib.pyplot import *
import jax
from jax import lax
import jax.numpy as jnp
import jax.scipy as jsp
from jax import random
import numpy as np
import scipy as sp
"""
Explanation: Autobatching log-densities example
This notebook demonstrates a simple Bayesian inference example where autobatching makes user code easier to write, easier to read, and less likely to include bugs.
Inspired by a notebook by @davmre.
End of explanation
"""
np.random.seed(10009)
num_features = 10
num_points = 100
true_beta = np.random.randn(num_features).astype(jnp.float32)
all_x = np.random.randn(num_points, num_features).astype(jnp.float32)
y = (np.random.rand(num_points) < sp.special.expit(all_x.dot(true_beta))).astype(jnp.int32)
y
"""
Explanation: Generate a fake binary classification dataset
End of explanation
"""
def log_joint(beta):
result = 0.
# Note that no `axis` parameter is provided to `jnp.sum`.
result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.))
result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta))))
return result
log_joint(np.random.randn(num_features))
# This doesn't work, because we didn't write `log_prob()` to handle batching.
try:
batch_size = 10
batched_test_beta = np.random.randn(batch_size, num_features)
log_joint(np.random.randn(batch_size, num_features))
except ValueError as e:
print("Caught expected exception " + str(e))
"""
Explanation: Write the log-joint function for the model
We'll write a non-batched version, a manually batched version, and an autobatched version.
Non-batched
End of explanation
"""
def batched_log_joint(beta):
result = 0.
# Here (and below) `sum` needs an `axis` parameter. At best, forgetting to set axis
# or setting it incorrectly yields an error; at worst, it silently changes the
# semantics of the model.
result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.),
axis=-1)
# Note the multiple transposes. Getting this right is not rocket science,
# but it's also not totally mindless. (I didn't get it right on the first
# try.)
result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta.T).T)),
axis=-1)
return result
batch_size = 10
batched_test_beta = np.random.randn(batch_size, num_features)
batched_log_joint(batched_test_beta)
"""
Explanation: Manually batched
End of explanation
"""
vmap_batched_log_joint = jax.vmap(log_joint)
vmap_batched_log_joint(batched_test_beta)
"""
Explanation: Autobatched with vmap
It just works.
End of explanation
"""
@jax.jit
def log_joint(beta):
result = 0.
# Note that no `axis` parameter is provided to `jnp.sum`.
result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=10.))
result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta))))
return result
batched_log_joint = jax.jit(jax.vmap(log_joint))
"""
Explanation: Self-contained variational inference example
A little code is copied from above.
Set up the (batched) log-joint function
End of explanation
"""
def elbo(beta_loc, beta_log_scale, epsilon):
beta_sample = beta_loc + jnp.exp(beta_log_scale) * epsilon
return jnp.mean(batched_log_joint(beta_sample), 0) + jnp.sum(beta_log_scale - 0.5 * np.log(2*np.pi))
elbo = jax.jit(elbo)
elbo_val_and_grad = jax.jit(jax.value_and_grad(elbo, argnums=(0, 1)))
"""
Explanation: Define the ELBO and its gradient
End of explanation
"""
def normal_sample(key, shape):
"""Convenience function for quasi-stateful RNG."""
new_key, sub_key = random.split(key)
return new_key, random.normal(sub_key, shape)
normal_sample = jax.jit(normal_sample, static_argnums=(1,))
key = random.PRNGKey(10003)
beta_loc = jnp.zeros(num_features, jnp.float32)
beta_log_scale = jnp.zeros(num_features, jnp.float32)
step_size = 0.01
batch_size = 128
epsilon_shape = (batch_size, num_features)
for i in range(1000):
key, epsilon = normal_sample(key, epsilon_shape)
elbo_val, (beta_loc_grad, beta_log_scale_grad) = elbo_val_and_grad(
beta_loc, beta_log_scale, epsilon)
beta_loc += step_size * beta_loc_grad
beta_log_scale += step_size * beta_log_scale_grad
if i % 10 == 0:
print('{}\t{}'.format(i, elbo_val))
"""
Explanation: Optimize the ELBO using SGD
End of explanation
"""
figure(figsize=(7, 7))
plot(true_beta, beta_loc, '.', label='Approximated Posterior Means')
plot(true_beta, beta_loc + 2*jnp.exp(beta_log_scale), 'r.', label='Approximated Posterior $2\sigma$ Error Bars')
plot(true_beta, beta_loc - 2*jnp.exp(beta_log_scale), 'r.')
plot_scale = 3
plot([-plot_scale, plot_scale], [-plot_scale, plot_scale], 'k')
xlabel('True beta')
ylabel('Estimated beta')
legend(loc='best')
"""
Explanation: Display the results
Coverage isn't quite as good as we might like, but it's not bad, and nobody said variational inference was exact.
End of explanation
"""
|