repo_name (stringlengths 6-77) | path (stringlengths 8-215) | license (stringclasses, 15 values) | content (stringlengths 335-154k)
---|---|---|---|
GoogleCloudPlatform/training-data-analyst
|
blogs/explainable_ai/AI_Explanations_on_CAIP.ipynb
|
apache-2.0
|
import os
PROJECT_ID = "michaelabel-gcp-training"
os.environ["PROJECT_ID"] = PROJECT_ID
"""
Explanation: AI Explanations: Explaining a tabular data model
Overview
In this tutorial we will perform the following steps:
Build and train a Keras model.
Export the Keras model as a TF 1 SavedModel and deploy the model on Cloud AI Platform.
Compute explanations for our model's predictions using Explainable AI on Cloud AI Platform.
Dataset
The dataset used for this tutorial was created from a BigQuery Public Dataset: NYC 2018 Yellow Taxi data.
Objective
The goal is to train a model using the Keras Sequential API that predicts the total amount a customer pays (fare + tolls) for a taxi ride, given the pickup location, dropoff location, the day of the week, and the hour of the day.
This tutorial focuses more on deploying the model to AI Explanations than on the design of the model itself. We will be using preprocessed data for this lab. If you wish to know more about the data and how it was preprocessed, please see this notebook.
Before you begin
This notebook was written with running in Google Colaboratory in mind. The notebook will also run on Cloud AI Platform Notebooks or in your local environment if the proper packages are installed.
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type and select GPU for Hardware accelerator.
Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. You should skip this step.
Be sure to change the PROJECT_ID below to your project before running the cell!
End of explanation
"""
import sys
import warnings
warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# If you are running this notebook in Colab, follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
!pip install witwidget --quiet
!pip install tensorflow==1.15.2 --quiet
!gcloud config set project $PROJECT_ID
elif "DL_PATH" in os.environ:
!sudo pip install tabulate --quiet
"""
Explanation: If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth. Ignore the error message related to tensorflow-serving-api.
End of explanation
"""
BUCKET_NAME = "michaelabel-gcp-training-ml"
REGION = "us-central1"
os.environ['BUCKET_NAME'] = BUCKET_NAME
os.environ['REGION'] = REGION
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. Note that you may
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
"""
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://${BUCKET_NAME} already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET_NAME}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
"""
Explanation: Run the following cell to create your Cloud Storage bucket if it does not already exist.
End of explanation
"""
%tensorflow_version 1.x
import tensorflow as tf
import tensorflow.feature_column as fc
import pandas as pd
import numpy as np
import json
import time
# Should be 1.15.2
print(tf.__version__)
"""
Explanation: Import libraries for creating model
Import the libraries we'll be using in this tutorial. This tutorial has been tested with TensorFlow 1.15.2.
End of explanation
"""
%%bash
# Copy the data to your notebook instance
mkdir taxi_preproc
gsutil cp -r gs://cloud-training/bootcamps/serverlessml/taxi_preproc/*_xai.csv ./taxi_preproc
ls -l taxi_preproc
"""
Explanation: Downloading and preprocessing data
In this section you'll download the data to train and evaluate your model from a public GCS bucket. The original data has been preprocessed from the public BigQuery dataset linked above.
End of explanation
"""
CSV_COLUMNS = ['fare_amount', 'dayofweek', 'hourofday', 'pickuplon',
'pickuplat', 'dropofflon', 'dropofflat']
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
DTYPES = ['float32', 'str' , 'int32', 'float32' , 'float32' , 'float32' , 'float32' ]
def prepare_data(file_path):
df = pd.read_csv(file_path, usecols = range(7), names = CSV_COLUMNS,
dtype = dict(zip(CSV_COLUMNS, DTYPES)), skiprows=1)
labels = df['fare_amount']
df = df.drop(columns=['fare_amount'])
df['dayofweek'] = df['dayofweek'].map(dict(zip(DAYS, range(7)))).astype('float32')
return df, labels
train_data, train_labels = prepare_data('./taxi_preproc/train_xai.csv')
valid_data, valid_labels = prepare_data('./taxi_preproc/valid_xai.csv')
# Preview the first 5 rows of training data
train_data.head()
"""
Explanation: Read the data with Pandas
We'll use Pandas to read the training and validation data into a DataFrame. We will only use the first 7 columns of the CSV files for our models.
End of explanation
"""
# Create functions to compute engineered features in later Lambda layers
def euclidean(params):
lat1, lon1, lat2, lon2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
NUMERIC_COLS = ['pickuplon', 'pickuplat', 'dropofflon', 'dropofflat', 'hourofday', 'dayofweek']
def transform(inputs):
transformed = inputs.copy()
transformed['euclidean'] = tf.keras.layers.Lambda(euclidean, name='euclidean')([
inputs['pickuplat'],
inputs['pickuplon'],
inputs['dropofflat'],
inputs['dropofflon']])
feat_cols = {colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS}
feat_cols['euclidean'] = fc.numeric_column('euclidean')
print("BEFORE TRANSFORMATION")
print("INPUTS:", inputs.keys())
print("AFTER TRANSFORMATION")
print("TRANSFORMED:", transformed.keys())
print("FEATURES", feat_cols.keys())
return transformed, feat_cols
def build_model():
raw_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
transformed, feat_cols = transform(raw_inputs)
dense_inputs = tf.keras.layers.DenseFeatures(feat_cols.values(),
name = 'dense_input')(transformed)
h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(dense_inputs)
h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)
output = tf.keras.layers.Dense(1, activation='linear', name = 'output')(h2)
model = tf.keras.models.Model(raw_inputs, output)
return model
model = build_model()
model.summary()
# Compile the model and see a summary
optimizer = tf.keras.optimizers.Adam(0.001)
model.compile(loss='mean_squared_error', optimizer=optimizer,
metrics = [tf.keras.metrics.RootMeanSquaredError()])
tf.keras.utils.plot_model(model, to_file='model_plot.png', show_shapes=True,
show_layer_names=True, rankdir="TB")
"""
Explanation: Build, train, and evaluate our model with Keras
We'll use tf.keras to build our ML model that takes our features as input and predicts the fare amount.
But first, we will do some feature engineering. We will be utilizing tf.feature_column and tf.keras.layers.Lambda to implement our feature engineering in the model graph to simplify our serving_input_fn later.
End of explanation
"""
def load_dataset(features, labels, mode):
dataset = tf.data.Dataset.from_tensor_slices(({"dayofweek" : features["dayofweek"],
"hourofday" : features["hourofday"],
"pickuplat" : features["pickuplat"],
"pickuplon" : features["pickuplon"],
"dropofflat" : features["dropofflat"],
"dropofflon" : features["dropofflon"]},
labels
))
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.repeat().batch(256).shuffle(256*10)
else:
dataset = dataset.batch(256)
return dataset.prefetch(1)
train_dataset = load_dataset(train_data, train_labels, tf.estimator.ModeKeys.TRAIN)
valid_dataset = load_dataset(valid_data, valid_labels, tf.estimator.ModeKeys.EVAL)
"""
Explanation: Create an input data pipeline with tf.data
Per best practices, we will use tf.Data to create our input data pipeline. Our data is all in an in-memory dataframe, so we will use tf.data.Dataset.from_tensor_slices to create our pipeline.
End of explanation
"""
tf.keras.backend.get_session().run(tf.tables_initializer(name='init_all_tables'))
steps_per_epoch = 426433 // 256
model.fit(train_dataset, steps_per_epoch=steps_per_epoch, validation_data=valid_dataset, epochs=10)
# Send test instances to model for prediction
predict = model.predict(valid_dataset, steps = 1)
predict[:5]
"""
Explanation: Train the model
Now we train the model. We will specify the number of epochs for which to train the model and tell it how many steps to expect per epoch.
End of explanation
"""
## Convert our Keras model to an estimator
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='export')
print(model.input)
# We need this serving input function to export our model in the next cell
serving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
model.input
)
export_path = keras_estimator.export_saved_model(
'gs://' + BUCKET_NAME + '/explanations',
serving_input_receiver_fn=serving_fn
).decode('utf-8')
"""
Explanation: Export the model as a TF 1 SavedModel
In order to deploy our model in a format compatible with AI Explanations, we'll follow the steps below to convert our Keras model to a TF Estimator, and then use the export_saved_model method to generate the SavedModel and save it in GCS.
End of explanation
"""
!saved_model_cli show --dir $export_path --all
"""
Explanation: Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. We'll use this information when we deploy our model to AI Explanations in the next section.
End of explanation
"""
# Print the names of our tensors
print('Model input tensors: ', model.input)
print('Model output tensor: ', model.output.name)
baselines_med = train_data.median().values.tolist()
baselines_mode = train_data.mode().values.tolist()
print(baselines_med)
print(baselines_mode)
explanation_metadata = {
"inputs": {
"dayofweek": {
"input_tensor_name": "dayofweek:0",
"input_baselines": [baselines_mode[0][0]] # Thursday
},
"hourofday": {
"input_tensor_name": "hourofday:0",
"input_baselines": [baselines_mode[0][1]] # 8pm
},
"dropofflon": {
"input_tensor_name": "dropofflon:0",
"input_baselines": [baselines_med[4]]
},
"dropofflat": {
"input_tensor_name": "dropofflat:0",
"input_baselines": [baselines_med[5]]
},
"pickuplon": {
"input_tensor_name": "pickuplon:0",
"input_baselines": [baselines_med[2]]
},
"pickuplat": {
"input_tensor_name": "pickuplat:0",
"input_baselines": [baselines_med[3]]
},
},
"outputs": {
"dense": {
"output_tensor_name": "output/BiasAdd:0"
}
},
"framework": "tensorflow"
}
print(explanation_metadata)
"""
Explanation: Deploy the model to AI Explanations
In order to deploy the model to AI Explanations, we need to generate an explanation_metadata.json file and upload it to the Cloud Storage bucket with our SavedModel. Then we'll deploy the model using gcloud.
Prepare explanation metadata
We need to tell AI Explanations the names of the input and output tensors our model is expecting, which we print below.
The value for input_baselines tells the explanations service what the baseline input should be for our model. Here we're using the mode for the day-of-week and hour-of-day features and the median for the location features. That means the baseline prediction for this model will be the fare our model predicts for those baseline feature values.
End of explanation
"""
# Write the json to a local file
with open('explanation_metadata.json', 'w') as output_file:
json.dump(explanation_metadata, output_file)
!gsutil cp explanation_metadata.json $export_path
"""
Explanation: Since this is a regression model (predicting a numerical value), the baseline prediction will be the same for every example we send to the model. If this were instead a classification model, each class would have a different baseline prediction.
End of explanation
"""
MODEL = 'taxifare_explain'
os.environ["MODEL"] = MODEL
%%bash
exists=$(gcloud ai-platform models list | grep ${MODEL})
if [ -n "$exists" ]; then
echo -e "Model ${MODEL} already exists."
else
echo "Creating a new model."
gcloud ai-platform models create ${MODEL}
fi
"""
Explanation: Create the model
Now we will create our model on Cloud AI Platform if it does not already exist.
End of explanation
"""
# Each time you create a version the name should be unique
import datetime
now = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
VERSION_IG = 'v_IG_{}'.format(now)
VERSION_SHAP = 'v_SHAP_{}'.format(now)
# Create the version with gcloud
!gcloud beta ai-platform versions create $VERSION_IG \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method 'integrated-gradients' \
--num-integral-steps 25
!gcloud beta ai-platform versions create $VERSION_SHAP \
--model $MODEL \
--origin $export_path \
--runtime-version 1.15 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method 'sampled-shapley' \
--num-paths 50
# Make sure the model deployed correctly. State should be `READY` in the following log
!gcloud ai-platform versions describe $VERSION_IG --model $MODEL
!echo "---"
!gcloud ai-platform versions describe $VERSION_SHAP --model $MODEL
"""
Explanation: Create the model version
Creating the version will take ~5-10 minutes. Note that your first deploy may take longer.
End of explanation
"""
# Format data for prediction to our model
!rm -f taxi-data.txt
!touch taxi-data.txt
prediction_json = {"dayofweek": "3", "hourofday": "17", "pickuplon": "-74.0026", "pickuplat": "40.7410", "dropofflat": "40.7790", "dropofflon": "-73.8772"}
with open('taxi-data.txt', 'a') as outfile:
json.dump(prediction_json, outfile)
# Preview the contents of the data file
!cat taxi-data.txt
"""
Explanation: Getting predictions and explanations on deployed model
Now that your model is deployed, you can use the AI Platform Prediction API to get feature attributions. We'll pass it a single test example here and see which features were most important in the model's prediction. Here we'll use gcloud to call our deployed model.
Format our request for gcloud
To use gcloud to make our AI Explanations request, we need to write the JSON to a file. Our example here is for a ride from the Google office in downtown Manhattan to LaGuardia Airport at 5pm on a Tuesday afternoon.
Note that we had to write our day of the week as "3" instead of "Tue", since we encoded the days of the week outside of our model and serving input function.
End of explanation
"""
resp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_IG --json-instances='taxi-data.txt'
response_IG = json.loads(resp_obj.s)
resp_obj
resp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_SHAP --json-instances='taxi-data.txt'
response_SHAP = json.loads(resp_obj.s)
resp_obj
"""
Explanation: Making the explain request
Now we make the explanation requests. We will do this for both integrated gradients and SHAP, using the prediction JSON from above.
End of explanation
"""
explanations_IG = response_IG['explanations'][0]['attributions_by_label'][0]
explanations_SHAP = response_SHAP['explanations'][0]['attributions_by_label'][0]
predicted = round(explanations_SHAP['example_score'], 2)
baseline = round(explanations_SHAP['baseline_score'], 2 )
print('Baseline taxi fare: ' + str(baseline) + ' dollars')
print('Predicted taxi fare: ' + str(predicted) + ' dollars')
"""
Explanation: Understanding the explanations response
First, let's look at the difference between the baseline prediction and the predicted taxi fare for this example.
End of explanation
"""
from tabulate import tabulate
feature_names = valid_data.columns.tolist()
attributions_IG = explanations_IG['attributions']
attributions_SHAP = explanations_SHAP['attributions']
rows = []
for feat in feature_names:
rows.append([feat, prediction_json[feat], attributions_IG[feat], attributions_SHAP[feat]])
print(tabulate(rows,headers=['Feature name', 'Feature value', 'Attribution value (IG)', 'Attribution value (SHAP)']))
"""
Explanation: Next let's look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed our model prediction up by that amount, and vice versa for negative attribution values. Which features seem the most important? Here, the location features carry the largest attributions.
End of explanation
"""
|
rainyear/pytips
|
Tips/2016-04-13-Iterator-Tools.ipynb
|
mit
|
from itertools import cycle, count, repeat
print(count.__doc__)
counter = count()
print(next(counter))
print(next(counter))
print(list(map(lambda x, y: x+y, range(10), counter)))
odd_counter = map(lambda x: 'Odd#{}'.format(x), count(1, 2))
print(next(odd_counter))
print(next(odd_counter))
print(cycle.__doc__)
cyc = cycle(range(5))
print(list(zip(range(6), cyc)))
print(next(cyc))
print(next(cyc))
print(repeat.__doc__)
print(list(repeat('Py', 3)))
rep = repeat('p')
print(list(zip(rep, 'y'*3)))
"""
Explanation: Python Iterator Tools
0x01 introduced the concept of an iterator, i.e. an object that defines the __iter__() and __next__() methods, or an "iterable" defined more concisely through yield. In some functional programming languages (see 0x02 Functional Programming in Python), similar iterators are often used to generate lists (or sequences) of a particular form; there the iterator acts more like a data structure than a function (though in some functional languages the two are not fundamentally different). Python borrows some of the iterator constructions of APL, Haskell, and SML and implements them in itertools (the module is implemented in C; source code: /Modules/itertoolsmodule.c).
The itertools module provides the following three categories of iterator-building tools:
Infinite iterators
Iterators that combine two sequences
Combinatoric generators
1. Infinite iterators
"Infinite" means that if you iterate over them with the for...in... syntax, you will fall into an infinite loop. They include:
count(start, [step])
cycle(p)
repeat(elem [,n])
Their usage can roughly be guessed from their names. Since they iterate endlessly, we naturally do not try to pull out all of their elements one by one; instead we usually combine them with functions such as map/zip, treating them as an inexhaustible data source to be combined with finite-length iterables:
End of explanation
"""
from itertools import cycle, compress, islice, takewhile, count
# These three methods (when used appropriately) can bound an infinite iterator
# print(compress.__doc__)
print(list(compress(cycle('PY'), [1, 0, 1, 0])))
# Slice other sequences the same way you slice a list: l[start:stop:step]
# print(islice.__doc__)
print(list(islice(cycle('PY'), 0, 2)))
# A bounded version of filter
# print(takewhile.__doc__)
print(list(takewhile(lambda x: x < 5, count())))
from itertools import groupby
from operator import itemgetter
print(groupby.__doc__)
for k, g in groupby('AABBC'):
print(k, list(g))
db = [dict(name='python', script=True),
dict(name='c', script=False),
dict(name='c++', script=False),
dict(name='ruby', script=True)]
keyfunc = itemgetter('script')
db2 = sorted(db, key=keyfunc) # sorted by `script'
for isScript, langs in groupby(db2, keyfunc):
print(', '.join(map(itemgetter('name'), langs)))
from itertools import zip_longest
# The built-in zip merges based on the shorter sequence,
# while zip_longest uses the longest sequence and pads with the fillvalue argument
# (in Python 2.7 it is named izip_longest)
print(list(zip_longest('ABCD', '123', fillvalue=0)))
"""
Explanation: 2. Iterators that combine two sequences
"Combining two sequences" means taking finite sequences as input and returning a single iterator after combining them in some way. The most familiar example, zip, belongs to this category, except that zip is a built-in function. The complete set of methods in this category includes:
accumulate()
chain()/chain.from_iterable()
compress()
dropwhile()/filterfalse()/takewhile()
groupby()
islice()
starmap()
tee()
zip_longest()
We will not give an example for every method here; if you want to know how a particular one works, print(method.__doc__) is usually enough, since the itertools module only provides convenient shortcuts rather than hiding any deep algorithms. Below are examples of the few methods that I find the most interesting (see also the short sketch of accumulate/chain/starmap/tee further down).
End of explanation
"""
from itertools import product, permutations, combinations, combinations_with_replacement
print(list(product(range(2), range(2))))
print(list(product('AB', repeat=2)))
print(list(combinations_with_replacement('AB', 2)))
# Horse-race problem: permutations of the top 2 finishers among 4 horses (A^4_2)
print(list(permutations('ABCDE', 2)))
# Colored-ball problem: color combinations when drawing any 2 balls from 4 colors (C^4_2)
print(list(combinations('ABCD', 2)))
"""
Explanation: 3. Combinatoric generators
Generators for permutations and combinations:
product(*iterables, repeat=1): the Cartesian product of the input sequences
permutations(iterable, r=None): all permutations of the input sequence
combinations(iterable, r): the sorted (order-independent) version of permutations
combinations_with_replacement(iterable, r): the sorted version of the Cartesian product
End of explanation
"""
|
H4ml3t/wmarchive-examples
|
How to write results into HDFS - example.ipynb
|
mit
|
# is SparkContext already loaded?
sc
# Make sure you have a HiveContext
sqlContext
# Which is the version?
sc.version
# load a dataframe from Avro files
df = sqlContext.read.format("com.databricks.spark.avro").load("/cms/wmarchive/test/avro/2016/01/01/")
df.printSchema()
%%time
df.count()
"""
Explanation: How to write results into HDFS
After submitting a job, we will need to retrieve the result. This can be stored in HDFS or elsewhere. Depending on the output size, this may or may not be a convenient approach. If we store it in HDFS, we will need to write it in some format that lets us read it back afterwards.
To produce this test I'm using Spark 1.5.1 (Pyspark 1.5.1) and spark-avro libraries loaded like this:
bash
spark-submit --packages com.databricks:spark-avro_2.10:2.0.1 [...]
For Spark 1.3.0 use
bash
spark-submit --packages com.databricks:spark-avro_2.10:1.0.0 [...]
An example for Spark 1.3.0 is provided in a separate file.
Index:
How to store aggregation results
Example #1
Example #2
How to store selection results
Example #1
End of explanation
"""
aggregation1 = df.select("steps.performance.cpu") \
.rdd \
.flatMap(lambda cpuArrayRows: cpuArrayRows[0]) \
.map(lambda row: row.asDict()) \
.flatMap(lambda rowDict: [(k,v) for k,v in rowDict.iteritems()]) \
.reduceByKey(lambda x,y: x+y)
%%time
aggregation1.collect()
# Store the file as a simple text file
aggregation1.saveAsTextFile("wmarchive/test-plaintext-aggregation1")
%%bash
hadoop fs -text wmarchive/test-plaintext-aggregation1/*
aggregated1DF = sqlContext.createDataFrame([{v[0]:v[1] for v in aggregation1.collect()}])
# saving in Json format
aggregated1DF.toJSON().saveAsTextFile("wmarchive/test-json-aggregation1")
%%bash
hadoop fs -text wmarchive/test-json-aggregation1/*
# how to write in Avro format
aggregated1DF.write.format("com.databricks.spark.avro").save("wmarchive/test-avro-aggregation1")
%%bash
hadoop fs -text wmarchive/test-avro-aggregation1/*
"""
Explanation: Aggregation examples
First example
1) Aggregated sum of all steps.performance.cpu values. In this case the result is a single line that can be easily stored back in HDFS, also in a textual format.
End of explanation
"""
|
gammapy/PyGamma15
|
tutorials/analysis-stats/Tutorial.ipynb
|
bsd-3-clause
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Tutorial about statistical methods
The following contains a sequence of simple exercises, designed to help you get familiar with using Minuit for maximum likelihood fits and emcee to determine parameters by MCMC. Commands are generally commented out; to activate them, simply uncomment them. A few functions are still to be defined... which is part of the exercise. Have fun!
End of explanation
"""
np.random.seed(42)
y = np.random.random(10000)
x = 1./np.sqrt(y)
plt.hist(x, bins=100, range=(1,10), histtype='stepfilled',color='blue')
plt.yscale('log')
"""
Explanation: Generate a dataset to be fitted
End of explanation
"""
def nllp(a):
# here define the function
return 1.
"""
Explanation: Maximum likelihood fit of a simple power law
First define the negative-log likelihood function for a density proportional to x**(-a) in the range 1 < x < infinity
End of explanation
"""
import iminuit
# minp = iminuit.Minuit(nllp,a= ?,error_a=?, errordef=?)
# minp.migrad()
"""
Explanation: Then minimize it using iminuit
End of explanation
"""
# minp.hesse()
# minp.minos()
# minp.draw_profile('a')
"""
Explanation: Error analysis
First determine the parabolic errors using hesse() and then do a parameter scan using minos() to determine the 68% confidence level errors.
End of explanation
"""
from scipy.integrate import quad
def pdfpn(x, a):
return x**(-a)
def pdfpn_norm(a):
# here insert the calculation of the normalisation as a function of a
return 1.
def nllpn(a):
# calculate and return the proper negative-log likelihood function
return 1.
"""
Explanation: Use of an un-normalised PDF
The above example shall be modified such that the normalisation of the likelihood function, which so far was determined analytically, is now determined numerically in the fit. This is the more realistic case, since in many cases no (simple) analytical normalisation exists. As a first step, this requires loading the integration package.
End of explanation
"""
# minpn = iminuit.Minuit(nllpn, a=?, error_a=?, errordef=?)
# minpn.migrad()
"""
Explanation: Then do the same minimization steps as before.
End of explanation
"""
def pdfcn(x, a, b):
return x**(-a)*np.exp(-b*b*x)
def pdfcn_norm(a, b):
# determine the normalization
return 1.
def nllcn(a, b):
# calculate an return the negative-log likelihood function
return 1.
"""
Explanation: Extend the fit model by an exponential cutoff
The exponential cutoff is implemented by exp(-b*b*x), i.e. exponential growth is not allowed for real-valued parameters b. The implications of this ansatz shall be discussed when looking at the solution. After that, the example can be modified to use exp(-b*x).
Here the likelihood function no longer has a (simple) analytical normalisation, so we take the numerical approach directly.
End of explanation
"""
# mincn = iminuit.Minuit(nllcn, a=?, b=?, error_a=?, error_b=?, errordef=?)
# mincn.migrad()
# mincn.hesse()
# mincn.minos()
# mincn.draw_profile('a')
# mincn.draw_profile('b')
# mincn.draw_contour('a','b')
"""
Explanation: As before, use Minuit for minimisation and error analysis, but now in two dimensions. Study parabolic errors and minos errors, the latter both for the single variables and for both together.
End of explanation
"""
import emcee
"""
Explanation: Do the same analysis by MCMC
End of explanation
"""
# Define the posterior.
# for clarity the prior and likelihood are separated
# emcee requires log-posterior
def log_prior(theta):
a, b = theta
if b < 0:
return -np.inf # log(0)
else:
return 0.
def log_likelihood(theta, x):
a, b = theta
return np.sum(-a*np.log(x) - b*b*x)
def log_posterior(theta, x):
a , b = theta
# construct and the log of the posterior
return 1.
"""
Explanation: emcee requires as input the log-likelihood of the posterior in the parameters a and b. In the following it is composed of the log of the prior and the log-likelihood of the data. Initially use a simple uniform prior in a and b with the constraint b>0. Afterwards one can play with the prior to see how strongly it affects the result.
End of explanation
"""
ndim = 2 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nburn = 100 # "burn-in" period to let chains stabilize
nsteps = 1000 # number of MCMC steps to take
# random starting point
np.random.seed(0)
starting_guesses = np.random.random((nwalkers, ndim))
"""
Explanation: Here we'll set up the computation. emcee combines multiple "walkers", each of which is its own MCMC chain. The number of trace results will be nwalkers * nsteps
End of explanation
"""
#sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[x])
#%time sampler.run_mcmc(starting_guesses, nsteps)
#print("done")
"""
Explanation: run the MCMC (and time it using IPython's %time magic)
End of explanation
"""
#emcee_trace = sampler.chain[:, nburn:, :].reshape(-1, ndim).T
#len(emcee_trace[0])
"""
Explanation: sampler.chain is of shape (nwalkers, nsteps, ndim). Before analysis, throw out the burn-in points and reshape.
End of explanation
"""
# plt.hist(emcee_trace[0], 100, range=(?,?) , histtype='stepfilled', color='cyan')
# plt.hist(emcee_trace[1], 100, range=(?,?) , histtype='stepfilled', color='cyan')
# plt.plot(emcee_trace[0],emcee_trace[1],',k')
"""
Explanation: Analyse the results. Plot the projected (marginalized) posteriors for the parameters a and b and also the joint density as sampled by the MCMC.
End of explanation
"""
def compute_sigma_level(trace1, trace2, nbins=20):
"""From a set of traces, bin by number of standard deviations"""
L, xbins, ybins = np.histogram2d(trace1, trace2, nbins)
L[L == 0] = 1E-16
logL = np.log(L)
shape = L.shape
L = L.ravel()
# obtain the indices to sort and unsort the flattened array
i_sort = np.argsort(L)[::-1]
i_unsort = np.argsort(i_sort)
L_cumsum = L[i_sort].cumsum()
L_cumsum /= L_cumsum[-1]
xbins = 0.5 * (xbins[1:] + xbins[:-1])
ybins = 0.5 * (ybins[1:] + ybins[:-1])
return xbins, ybins, L_cumsum[i_unsort].reshape(shape)
#xbins, ybins, sigma = compute_sigma_level(emcee_trace[0], emcee_trace[1])
#plt.contour(xbins, ybins, sigma.T, levels=[0.683, 0.955])
#plt.plot(emcee_trace[0], emcee_trace[1], ',k', alpha=0.1)
"""
Explanation: As a final step, generate 2-dim Bayesian confidence-level contours containing 68.3% and 95.5% probability content. For that, define convenient plot functions and use them. Overlay the contours with the scatter plot.
End of explanation
"""
|
georgetown-analytics/machine-learning
|
examples/bbengfort/bikeshare/bikeshare.ipynb
|
mit
|
import os
import sys
sys.path.append("/Users/benjamin/Repos/ddl/yellowbrick")
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_context('notebook')
sns.set_style('whitegrid')
"""
Explanation: Bikeshare Ridership
Notebook to predict the number of riders per day for a bike share network based on the season of year and the given weather.
Notebook Setup
End of explanation
"""
data = pd.read_csv('bikeshare.csv')
data.head()
data.riders.mean()
from sklearn.model_selection import train_test_split as tts
features = [
'season', 'year', 'month', 'hour', 'holiday', 'weekday', 'workingday',
'weather', 'temp', 'feelslike', 'humidity', 'windspeed',
]
target = 'registered' # can be one of 'casual', 'registered', 'riders'
X = data[features]
y = data[target]
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2)
"""
Explanation: Data Loading
End of explanation
"""
from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import r2_score
# OLS
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f}".format(r2,me))
# L2 and L1 Regularization
alphas = np.logspace(-10, 0, 200)
from sklearn.linear_model import RidgeCV
model = RidgeCV(alphas=alphas)
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f} alpha={:0.3f}".format(r2,me, model.alpha_))
from sklearn.linear_model import LassoCV
model = LassoCV(alphas=alphas)
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f} alpha={:0.3f}".format(r2,me, model.alpha_))
from sklearn.linear_model import ElasticNetCV
model = ElasticNetCV(alphas=alphas)
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f}".format(r2,me))
sns.boxplot(y=target, data=data)
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
model = Pipeline([
('poly', PolynomialFeatures(2)),
('lasso', LassoCV(alphas=alphas)),
])
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f} alpha={:0.3f}".format(r2,me, model.named_steps['lasso'].alpha_))
model = Pipeline([
('poly', PolynomialFeatures(2)),
('ridge', RidgeCV(alphas=alphas)),
])
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f} alpha={:0.3f}".format(r2,me, model.named_steps['ridge'].alpha_))
model = Pipeline([
('poly', PolynomialFeatures(3)),
('ridge', RidgeCV(alphas=alphas)),
])
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f} alpha={:0.3f}".format(r2,me, model.named_steps['ridge'].alpha_))
model = Pipeline([
('poly', PolynomialFeatures(4)),
('ridge', RidgeCV(alphas=alphas)),
])
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f} alpha={:0.3f}".format(r2,me, model.named_steps['ridge'].alpha_))
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f}".format(r2,me))
"""
Explanation: Do Some Regression
End of explanation
"""
import pickle
with open('forest-riders.pkl', 'wb') as f:
pickle.dump(model, f)
with open('forest-riders.pkl', 'rb') as f:
model = pickle.load(f)
model.predict(X_test)
from sklearn.ensemble import AdaBoostRegressor
model = AdaBoostRegressor()
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f}".format(r2,me))
from sklearn.linear_model import BayesianRidge
model = BayesianRidge()
model.fit(X_train, y_train)
yhat = model.predict(X_test)
r2 = r2_score(y_test, yhat)
me = mse(y_test, yhat)
print("r2={:0.3f} MSE={:0.3f}".format(r2,me))
from sklearn.neighbors import KNeighborsRegressor
model = KNeighborsRegressor(5)
model.fit(X_train, y_train)
model.score(X_test, y_test)
"""
Explanation: Save the Forests!
End of explanation
"""
|
JustinNoel1/ML-Course
|
exercise-sessions/Session-11/Session-11.ipynb
|
apache-2.0
|
import numpy as np
import pandas as pd
"""
Explanation: Problem Set 11
First the exercise:
* What is the maximum depth of a decision tree trained on $N$ samples?
The decision tree must make a proper split at each node, so the number of samples in each node must decrease by at least one as we move down one level. Hence the maximum depth of a decision tree is $N-1$.
* If we train a decision tree to an arbitrary depth, what will be the training error?
Assuming the training data assigns unique labels to samples with identical features, this will be zero. If we train a decision tree to arbitrary depth, we will end up with a tree where each leaf contains only samples with identical features. If each of these samples has the same label, then any of the standard rules (voting, averaging) will return the correct response.
* How can we alter a loss function to help regularize a decision tree?
One of the simplest ways is to add to our loss function an increasing function of the depth of the node. For example, we could just add $\lambda |D|$ or perhaps $\lambda 2^{|D|}$, where $\lambda$ is an appropriate hyperparameter (probably very small). One should choose the growth of this regularization term so that it does not dominate the unregularized cost function while splits are still yielding improvements at the desired rate (a short sketch of such a penalized loss appears just below).
Python Lab
Now let us load our standard libraries.
End of explanation
"""
big_df = pd.read_csv("UCI_Credit_Card.csv")
big_df.head()
len(big_df)
len(big_df.dropna())
df = big_df.drop(labels = ['ID'], axis = 1)
labels = df['default.payment.next.month']
df.drop('default.payment.next.month', axis = 1, inplace = True)
num_samples = 25000
train_x, train_y = df[0:num_samples], labels[0:num_samples]
test_x, test_y = df[num_samples:], labels[num_samples:]
test_x.head()
train_y.head()
"""
Explanation: Let us load the credit card dataset and extract a small dataframe of numerical features to test on.
End of explanation
"""
class bin_transformer(object):
def __init__(self, df, num_quantiles = 2):
self.quantiles = df.quantile(np.linspace(1./num_quantiles, 1.-1./num_quantiles,num_quantiles-1))
def transform(self, df):
new = pd.DataFrame()
fns = {}
for col_name in df.axes[1]:
for ix, q in self.quantiles.iterrows():
quart = q[col_name]
new[col_name+str(ix)] = (df[col_name] >= quart)
fns[col_name+str(ix)] =(col_name, lambda x: x[col_name]>=quart)
return new, fns
transformer = bin_transformer(df,5)
train_x_t, tr_fns = transformer.transform(train_x)
test_x_t, test_fns = transformer.transform(test_x)
train_x_t.head()
tr_fns
"""
Explanation: Now let us write our transformation function.
End of explanation
"""
def bdd_cross_entropy(pred, label):
return -np.mean(label*np.log(pred+10**(-20)))
def MSE(pred,label):
return np.mean((pred-label)**2)
def acc(pred,label):
return np.mean((pred>=0.5)==(label == 1))
"""
Explanation: Now let us build some simple loss functions for 1d labels.
End of explanation
"""
def find_split(x, y, loss, verbose = False):
min_ax = None
base_loss = loss(np.mean(y),y)
min_loss = base_loss
N = len(x)
for col_name in x.axes[1]:
mask = x[col_name]
num_pos = np.sum(mask)
num_neg = N - num_pos
pos_y = np.mean(y[mask])
neg_y = np.mean(y[~mask])
l = (num_pos*loss(pos_y, y[mask]) + num_neg*loss(neg_y, y[~mask]))/N
if verbose:
print("Column {0} split has improved loss {1}".format(col_name, base_loss-l))
if l < min_loss:
min_loss = l
min_ax = col_name
return min_ax, min_loss
find_split(train_x_t, train_y, MSE, verbose = True)
find_split(train_x_t, train_y, bdd_cross_entropy, verbose = 0)
find_split(train_x_t, train_y, acc, verbose = 0)
np.mean(train_y[train_x_t['PAY_00.8']])
np.mean(train_y[~train_x_t['PAY_00.8']])
np.mean(train_y[train_x_t['AGE0.2']])
np.mean(train_y[~train_x_t['AGE0.2']])
"""
Explanation: Now let us define the find split function.
End of explanation
"""
|
yunqu/PYNQ
|
boards/Pynq-Z1/base/notebooks/arduino/arduino_grove_gesture.ipynb
|
bsd-3-clause
|
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
"""
Explanation: Grove Gesture Example
This example shows how to use the
Grove gesture sensor on the board.
The gesture sensor can detect 10 gestures as follows:
| Raw value read by sensor | Gesture |
|--------------------------|--------------------|
| 0 | No detection |
| 1 | forward |
| 2 | backward |
| 3 | right |
| 4 | left |
| 5 | up |
| 6 | down |
| 7 | clockwise |
| 8 | counter-clockwise |
| 9 | wave |
For this notebook, a PYNQ Arduino shield is also required.
The grove gesture sensor is attached to the I2C interface on the shield.
This grove sensor should also work with PMOD interfaces on the board.
End of explanation
"""
from pynq.lib.arduino import Grove_Gesture
from pynq.lib.arduino import ARDUINO_GROVE_I2C
sensor = Grove_Gesture(base.ARDUINO, ARDUINO_GROVE_I2C)
"""
Explanation: 1. Instantiate the sensor object
End of explanation
"""
sensor.set_speed(240)
"""
Explanation: 2. Set speed
There are currently two modes available: far and near.
The corresponding frame rates are 120 and 240 fps, respectively.
For more information, please refer to Grove gesture sensor.
End of explanation
"""
from time import sleep
for i in range(10):
print(sensor.read_gesture())
sleep(3)
"""
Explanation: 3. Read gestures
The following code will read 10 gestures within 30 seconds.
Try to change your gesture in front of the sensor and check the results.
End of explanation
"""
|
relf/smt
|
tutorial/SMT_Noise.ipynb
|
bsd-3-clause
|
import numpy as np
import matplotlib.pyplot as plt
from smt.surrogate_models import KRG
# defining the training data
xt = np.array([0.0, 1.0, 2.0, 2.5, 4.0])
yt = np.array([0.0, 1.0, 1.5, 1.1, 1.0])
# defining the models
sm_noise_free = KRG() # noise-free Kriging model
sm_noise_fixed = KRG(noise0=[1e-6]) # noisy Kriging model with fixed variance
sm_noise_estim = KRG(noise0=[1e-6], eval_noise=True) # noisy Kriging model with estimated variance
# training the models
sm_noise_free.set_training_values(xt, yt)
sm_noise_free.train()
sm_noise_fixed.set_training_values(xt, yt)
sm_noise_fixed.train()
sm_noise_estim.set_training_values(xt, yt)
sm_noise_estim.train()
# predictions
x = np.linspace(0, 4, 100).reshape(-1, 1)
y_noise_free = sm_noise_free.predict_values(x) # predictive mean
var_noise_free = sm_noise_free.predict_variances(x) # predictive variance
y_noise_fixed = sm_noise_fixed.predict_values(x) # predictive mean
var_noise_fixed = sm_noise_fixed.predict_variances(x) # predictive variance
y_noise_estim = sm_noise_estim.predict_values(x) # predictive mean
var_noise_estim = sm_noise_estim.predict_variances(x) # predictive variance
# plotting predictions +- 3 std confidence intervals
plt.rcParams['figure.figsize'] = [17, 4]
fig, axes = plt.subplots(1, 3)
axes[0].fill_between(np.ravel(x),
np.ravel(y_noise_free-3*np.sqrt(var_noise_free)),
np.ravel(y_noise_free+3*np.sqrt(var_noise_free)),
alpha=0.2, label='3-sd confidence intervals')
axes[0].scatter(xt, yt, label="training data")
axes[0].plot(x, y_noise_free, label='mean')
axes[0].set_title('noise-free Kriging model')
axes[0].legend(loc=0)
axes[0].set_xlabel(r'$x$')
axes[0].set_ylabel(r'$y$')
axes[1].fill_between(np.ravel(x),
np.ravel(y_noise_fixed-3*np.sqrt(var_noise_fixed)),
np.ravel(y_noise_fixed+3*np.sqrt(var_noise_fixed)),
alpha=0.2, label='3-sd confidence intervals')
axes[1].scatter(xt, yt, label="training data")
axes[1].plot(x, y_noise_fixed, label='mean')
axes[1].set_title('Kriging model with fixed noise')
axes[1].set_xlabel(r'$x$')
axes[1].set_ylabel(r'$y$')
axes[2].fill_between(np.ravel(x),
np.ravel(y_noise_estim-3*np.sqrt(var_noise_estim)),
np.ravel(y_noise_estim+3*np.sqrt(var_noise_estim)),
alpha=0.2, label='3-sd confidence intervals')
axes[2].scatter(xt, yt, label="training data")
axes[2].plot(x, y_noise_estim, label='mean')
axes[2].set_title('Kriging model with estimated noise')
axes[2].set_xlabel(r'$x$')
axes[2].set_ylabel(r'$y$')
plt.show()
"""
Explanation: <div class="jumbotron text-left"><b>
This tutorial describes how to use the SMT toolbox with an additive noise term
</b></div>
Andrés F. LOPEZ-LOPERA ONERA/DTIS/M2CI - January 2021
<p class="alert alert-success" style="padding:1em">
To use SMT models, please follow this link: https://github.com/SMTorg/SMT/blob/master/README.md. The documentation is available here: http://smt.readthedocs.io/en/latest/
</p>
Reference paper: https://www.sciencedirect.com/science/article/pii/S0965997818309360?via%3Dihub
Preprint: https://www.researchgate.net/profile/Mohamed_Amine_Bouhlel/publication/331976718_A_Python_surrogate_modeling_framework_with_derivatives/links/5cc3cebd299bf12097829631/A-Python-surrogate-modeling-framework-with-derivatives.pdf
Cite us:
M.-A. Bouhlel, J. T. Hwang, N. Bartoli, R. Lafage, J. Morlier, J .R.R.A Martins (2019), A Python surrogate modeling framework with derivatives, Advances in Engineering Software, 102662
1. Problem statement
In this notebook, we focus on surrogate Kriging models accouting for a noise term:
$$ y(\mathbf{x}_i) = f(\mathbf{x}_i) + \varepsilon_i, \quad i = 1, \ldots, n,$$
with $\mathbf{x} \in \mathbb{R}^d$, $y \in \mathbb{R}$. Note that $f$ is a (latent) noise-free function and that $\varepsilon_1, \ldots, \varepsilon_n$ are additive noises.
For Kriging purposes, we assume that the latent function $f$ is GP-distributed, i.e. $f \sim \mathcal{GP}(\mu, k)$, and that $\varepsilon_1, \ldots, \varepsilon_n \sim \mathcal{N}(0, \Omega)$ are additive Gaussian noises. For the latter, we assume that they are mutually independent and independent of $f$. This means that the covariance matrix $\Omega$ is given by:
$$\Omega = \begin{bmatrix} \tau_1^2 & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & \tau_n^2 \end{bmatrix},$$
with noise variances $\tau_1^2, \ldots, \tau_n^2 \in \mathbb{R}^+$. Then, due to the linearity, $y$ is also GP-distributed.
Two cases will be considered in the following.
Homoscedastic (default) case: the noise variances are considered to be equal, i.e.: $\tau^2 = \tau_1^2 = \cdots = \tau_n^2$. The value of $\tau^2$ can be estimated via maximum likelihood.
Heteroscedastic case: the noise variances vary across the observations, i.e.: $\tau_1^2 \ne \cdots \ne \tau_n^2$. Those variances can only be estimated if repetitions of the observations are given. Developments here are based on pointwise sensible estimates [1] and on the implementations in [2].
References
[1] Bruce Ankenman, Barry L. Nelson and Jeremy Staum (2010). "Stochastic Kriging for Simulation Metamodeling." Operations Research. Vol. 58, No. 2 , pp. 371-382
[2] Olivier Roustant, David Ginsbourger and Yves Deville (2012). "DiceKriging, DiceOptim: Two R Packages for the analysis of computer experiments by kriging-based metamodeling and optimization." Journal of Statistical Software, Vol. 51, No. 1, pp. 1-55.
2. Homoscedastic Kriging example
To account for an homoscedastic noise term, you can control two main parameters:
noise0: the initial noise variance. By default, noise0 = 0.
eval_noise: a flag indicator to estimate the noise variance. By default, eval_noise = False.
<div class="alert alert-info fade in" id="d110">
<p>Remark: by default, no noise is considered in the database.</p>
</div>
Example 2.1: model comparisons
Next, we test the performance of different Kriging-based models under noise-free and noisy considerations. For the noisy case, we either manually fix or estimate the noise variance parameter via maximum likelihood.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from smt.surrogate_models import KRG
# defining the toy example
def target_fun(x):
return np.cos(5*x)
nobs = 50 # number of observations
np.random.seed(0) # a seed for reproducibility
xt = np.random.uniform(size=nobs) # design points
y_free_noise = target_fun(xt) # noise-free observations
# adding a random noise to observations
yt = target_fun(xt) + np.random.normal(scale=0.05, size=nobs)
# training the model
sm = KRG(eval_noise=True)
sm.set_training_values(xt, yt)
sm.train()
# predictions
x = np.linspace(0, 1, 100).reshape(-1, 1)
y = sm.predict_values(x) # predictive mean
var = sm.predict_variances(x) # predictive variance
# plotting predictions +- 3 std confidence intervals
plt.rcParams['figure.figsize'] = [8, 4]
plt.fill_between(np.ravel(x),
np.ravel(y-3*np.sqrt(var)),
np.ravel(y+3*np.sqrt(var)),
alpha=0.2, label='3-sd confidence intervals')
plt.scatter(xt, yt, label="training noisy data")
plt.plot(x, y, label='mean')
plt.plot(x, target_fun(x), label='target function')
plt.title('Kriging model with noise observations')
plt.legend(loc=0)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.show()
"""
Explanation: Example 2.2: noisy observations
We now consider an example with noisy observations. In that case, a noise term is mandatory since we are not interested in interpolating the data. By setting eval_noise = True, the noise variance is estimated via maximum likelihood.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from smt.surrogate_models import KRG
# defining the training data
xt = np.array([0.0, 1.0, 2.0, 2.5, 4.0])
yt = np.array([0.0, 1.0, 1.5, 1.1, 1.0])
# defining the noise variance per observed data
noise0 = [0.05, 0.001, 0.01, 0.03, 0.05]
# the noise0 must be of the same length as yt. If its length is equal
# to one, the same noise variance is considered everywhere (homoscedastic case)
sm = KRG(noise0=noise0, use_het_noise=True)
sm.set_training_values(xt, yt)
sm.train()
x = np.linspace(0, 4, 100).reshape(-1, 1)
y = sm.predict_values(x)
var = sm.predict_variances(x)
# plotting the resulting Kriging model
plt.fill_between(np.ravel(x), np.ravel(y-3*np.sqrt(var)),
np.ravel(y+3*np.sqrt(var)), alpha=0.2,
label='3-sd confidence intervals')
plt.scatter(xt, yt, label="training data")
plt.plot(x, y, label='mean')
plt.title('heteroscedastic Kriging model with given noise variances')
plt.legend(loc=0)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.show()
"""
Explanation: 3. Heteroscedastic Kriging example
To account for a heteroscedastic noise term, you only need to set use_het_noise=True. In that case, you need to either provide the noise variances for each observation point (see Example 3.1) or consider observations with repetitions (see Example 3.2).
Example 3.1: model with user-predefined noise variances
In some applications, we have access to error bars (noise variances). Those variances noise0 can be passed as a list or np.ndarray to enrich the Kriging model.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from smt.surrogate_models import KRG
# defining the training data
xt = np.array([0.0, 1.0, 2.0, 2.5, 4.0])
yt = np.array([0.0, 1.0, 1.5, 1.1, 1.0])
# adding noisy repetitions
xt_full = xt.copy()
yt_full = yt.copy()
for i in range(4):
xt_full = np.concatenate((xt_full, xt))
np.random.seed(i)
yt_full = np.concatenate((yt_full,
yt + np.std(yt)*np.random.uniform(size=yt.shape)))
# training the model
sm = KRG(use_het_noise=True, eval_noise=True)
sm.set_training_values(xt_full, yt_full)
sm.train()
# predictions
x = np.linspace(0, 4, 100).reshape(-1, 1)
y = sm.predict_values(x)
var = sm.predict_variances(x)
# plotting the resulting Kriging model
plt.fill_between(np.ravel(x), np.ravel(y-3*np.sqrt(var)),
np.ravel(y+3*np.sqrt(var)), alpha=0.2,
label='3-sd confidence intervals')
plt.scatter(xt_full, yt_full, label="training data with repetitions")
plt.plot(x, y, label='mean')
plt.title('heteroscedastic Kriging model with repetitions')
plt.legend(loc=0)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.show()
"""
Explanation: Example 3.2: observations with repetitions
If considering observations with repetitions, the noise variance can be estimated according to [1,2]. In that case, there is no need to define noise0 but you must provide the repetitions.
End of explanation
"""
|
dwaithe/ONBI_image_analysis
|
day2_colocalisation/.ipynb_checkpoints/2015 Correlation and Colocalisation practical-checkpoint.ipynb
|
gpl-2.0
|
#This line is very important: (It turns on the inline visuals!)
%pylab inline
a = [2,9,32,12,14,6,9,23,4,5,13,6,7,92,21,45];
b = [7,21,4,2,92,9,9,6,13,12,45,5,6,23,14,32];
#Please calculate the dot product of the vectors 'a' and 'b'.
#You may use any method you like. If get stuck. Check:
#http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html
#If you rearrange the numbers in 'b', what sequence will give
#the highest dot-product magnitude?
"""
Explanation: Introduction to Correlation and Colocalisation with Python.
Reading images
Dominic Waithe 2015 (c)
Exercise: See the similarities between the dot-product and correlation. Apply correlation to images to obtain a metric of colocalisation/similarity. Use colocalisation to assess the quality of registration.
We start with two lists of numbers (or two vectors or arrays, as they are known). Please find the dot product of the two vectors. The dot product formula is as follows:<img src="dotProduct.png">
In Python there is more than one way to find the dot product of two vectors. It can be performed using 'for loops' or through vectorised notation.
End of explanation
"""
#The cross-correlation algorithm is another name for the Pearson's test.
#Here it is written in code form and utilising the builtin functions:
c = [0,1,2]
d = [3,4,5]
rho = np.average((c-np.average(c))*(d-np.average(d)))/(np.std(c)*np.std(d))
print('rho',round(rho,3))
#equally you can write
rho = np.dot(c-np.average(c),d-np.average(d))/sqrt(((np.dot(c-np.average(c),c-np.average(c)))*np.dot(d-np.average(d),d-np.average(d))))
print('rho',round(rho,3))
#Why is the rho for c and d, 1.0?
#Edit the variables c and d and find the pearson's value for 'a' and 'b'.
#What happens when you correlate 'a' with 'a'?
#Here is an image from the Fiji practical
from tifffile import imread as imreadtiff
im = imreadtiff('neuron.tif')
print('image dimensions',im.shape, ' im dtype:',im.dtype)
subplot(2,2,1)
imshow(im[0,:,:],cmap='Blues_r')
subplot(2,2,2)
imshow(im[1,:,:],cmap='Greens_r')
subplot(2,2,3)
imshow(im[2,:,:],cmap='Greys_r')
subplot(2,2,4)
imshow(im[3,:,:],cmap='Reds_r')
"""
Explanation: The Pearson's test
Exercise: See the similarities
The above example shows you how two number sequences can be compared with nothing more complicated than the dot product. This works as long as the sequences consist of the same numbers but in a shuffled order. To compare different sequences with the original, we normalise by the magnitude of the vectors. To include this step, we use a more complicated equation:
<img src="eqn_full.gif">
https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
https://en.wikipedia.org/wiki/Cross-correlation
Hopefully you can see that the top of this equation is very similar to the dot-product, except that it is centered on zero (subtraction of mu, the mean) and the variance is normalised (division by the standard deviations).
Because the equation is normalised, a perfectly correlated sequence yields a rho value of 1.0. A perfectly random comparison yields 0, and two anti-correlated sequences will yield a value of -1.0.
End of explanation
"""
a = im[0,:,:].reshape(-1)
b = im[3,:,:].reshape(-1)
#Calculate the Pearson's coefficient (rho) for image channels 0 and 3.
#You should hopefully obtain a value 0.829
#from tifffile import imread as imreadtiff
im = imreadtiff('composite.tif')
#The organisation of this file is not simple. It is also a 16-bit image.
print("shape of im: ",im.shape,"bit-depth: ",im.dtype)
#We can assess the image data like so.
CH0 = im[0,0,:,:]
CH1 = im[1,0,:,:]
#Single channels visualisation can handle 16-bit
subplot(2,2,1)
imshow(CH0,cmap='Reds_r')
subplot(2,2,2)
imshow(CH1,cmap='Greens_r')
subplot(2,2,3)
#RGB data have to range between 0 and 255 in each channel and be int (8-bit).
imRGB = np.zeros((CH0.shape[0],CH0.shape[1],3))
imRGB[:,:,0] = CH0/255.0
imRGB[:,:,1] = CH1/255.0
imshow(255-imRGB.astype(int8))
#What is the current Pearson's value for this image?
"""
Explanation: Pearson's comparison of microscopy derived images
End of explanation
"""
rho_max = 0
#This moves one of your images with respect to the other.
for c in range(1,40):
for r in range(1,40):
#We need to dynamically sample our image.
temp = CH0[c:-40+c,r:-40+r].reshape(-1);
#The -40 makes sure they are the same size.
ref = CH1[:-40,:-40].reshape(-1);
rho = np.dot(temp-np.average(temp),ref-np.average(ref))/sqrt(((np.dot(temp-np.average(temp),temp-np.average(temp)))*np.dot(ref-np.average(ref),ref-np.average(ref))))
#You will need to work out the highest rho value that is recorded.
#You will then need to find the coordinates of this high rho.
#You will then need to provide a visualisation with the image translated.
"""
Explanation: Maybe remove so not to clash with Mark's.
Last challenge
Exercise: The above image is not registered. Can you devise a way of registering this image using the Pearson's test, as a measure for the similarity of the image in different positions. hint you will need to move one of the images relative to the other and measure the colocalisation in this position. The best localisation will have the highest rho value. Produce an image of your fully registered image.
End of explanation
"""
|
schoolie/bokeh
|
examples/howto/charts/deep_dive-attributes.ipynb
|
bsd-3-clause
|
from bokeh.charts.attributes import AttrSpec, ColorAttr, MarkerAttr
"""
Explanation: Bokeh Charts Attributes
One of Bokeh Charts' main contributions is a flexible interface for applying unique attributes based on the unique values in the column(s) of a DataFrame.
Internally, the bokeh chart uses the AttrSpec to define the mapping, but allows the user to pass in their own spec, or utilize a function to produce a customized one.
End of explanation
"""
attr = AttrSpec(items=[1, 2, 3], iterable=['a', 'b', 'c'])
attr.attr_map
"""
Explanation: Simple Examples
The AttrSpec assigns values in the iterable to values in items.
End of explanation
"""
attr[1]
"""
Explanation: You will see that the key in the mapping is a tuple, and it will always be a tuple. The mapping works like this because AttrSpecs are often used with the pandas DataFrame groupby method. The groupby method can return a single value or a tuple of values when used with multiple columns, so this just keeps the two cases consistent.
However, you can still access the values in the following way:
End of explanation
"""
color = ColorAttr(items=[1, 2, 3])
color.attr_map
"""
Explanation: The ColorAttr is just a custom AttrSpec that has a default palette as the iterable, but can be customized, and will likely provide some other color generation functionality.
End of explanation
"""
color = ColorAttr(items=list(range(0, 10)))
color.attr_map
"""
Explanation: Let's assume that you don't know how many unique items you are working with, but you have defined the things that you want to assign the items to. The AttrSpec will automatically cycle the iterable for you. This is important for exploratory analysis.
End of explanation
"""
from bokeh.sampledata.autompg import autompg as df
df.head()
color_attr = ColorAttr(df=df, columns=['cyl', 'origin'])
color_attr.attr_map
"""
Explanation: Because there are only 6 unique colors in the default palette, the palette repeats starting on the 7th item.
Using with Pandas
End of explanation
"""
color_attr.series
"""
Explanation: You will notice that this is similar to a pandas series with a MultiIndex, which is seen below.
End of explanation
"""
from bokeh.charts.data_source import ChartDataSource
fill_color = ColorAttr(columns=['cyl', 'origin'])
ds = ChartDataSource.from_data(df)
ds.join_attrs(fill_color=fill_color).head()
"""
Explanation: You can think of this as a SQL table with 3 columns, two of which are an index. You can imagine how you might join this view data into the original data source to assign these colors to the associated rows.
Combining with ChartDataSource
End of explanation
"""
# add new column
df['large_displ'] = df['displ'] >= 350
fill_color = ColorAttr(columns=['cyl', 'origin'])
line_color = ColorAttr(columns=['large_displ'])
ds = ChartDataSource.from_data(df)
ds.join_attrs(fill_color=fill_color, line_color=line_color).head(10)
"""
Explanation: Multiple Attributes
End of explanation
"""
line_color = ColorAttr(df=df, columns=['large_displ'], palette=['Green', 'Red'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head(10)
"""
Explanation: Custom Iterable
You will see that the output contains the combined chart_index and the columns for both attributes. The values of each are joined in based on the original assignment. For example, line_color only has two colors because the large_displ column only has two values.
If we want to change the colors assigned to the True/False values, we can modify the ColorAttr.
End of explanation
"""
df_sorted = df.sort(columns=['large_displ'], ascending=False)
line_color = ColorAttr(df=df_sorted, columns=['large_displ'], palette=['Green', 'Red'], sort=False)
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
"""
Explanation: Altering Attribute Assignment Order
You may not want to assign the values in the order in which they occurred, so you have five options.
Pre order the data and tell the attribute not to sort.
Make the column a categorical and set the order.
Specify the sort options to the AttrSpec
Manually specify the items in the order you want them to be assigned.
Specify the iterable in the order you want.
1. Pre order the data
End of explanation
"""
df.sort(columns='large_displ').head()
import pandas as pd
df_cat = df.copy()
# create the categorical and set the default (ascending)
df_cat['large_displ'] = pd.Categorical.from_array(df.large_displ).reorder_categories([True, False])
# we don't have to sort here, but doing it so you can see the order that the attr spec will see
df_cat.sort(columns='large_displ').head()
line_color = ColorAttr(df=df_cat, columns=['large_displ'], palette=['Green', 'Red'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
"""
Explanation: 2. Make the column a categorical and set the order
We'll show the default sort order of a boolean column, which is ascending.
End of explanation
"""
# the items will be sorted descending (uses same sorting options as pandas)
line_color = ColorAttr(df=df, columns=['large_displ'], palette=['Green', 'Red'], sort=True, ascending=False)
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
"""
Explanation: 3. Specify the sort options to the AttrSpec
End of explanation
"""
# remove df so the items aren't auto-calculated
# still need column name for when palette is joined into the dataset
line_color = ColorAttr(columns=['large_displ'], items=[True, False], palette=['Green', 'Red'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
"""
Explanation: 4. Manually specify the items in the order you want them
End of explanation
"""
line_color = ColorAttr(df=df, columns=['large_displ'], palette=['Red', 'Green'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
"""
Explanation: 5. Change the order of the iterable
End of explanation
"""
|
Diyago/Machine-Learning-scripts
|
DEEP LEARNING/NLP/LSTM RNN/Toxic multiclass prediction Glove + Bidirection LSTM.ipynb
|
apache-2.0
|
EMBEDDING_FILE = f'glove.6B.50d.txt'
TRAIN_DATA_FILE = f'train.csv'
TEST_DATA_FILE = f'test.csv'
"""
Explanation: We include the GloVe word vectors in our input files. To include these in your kernel, simply click 'input files' at the top of the notebook, and search 'glove' in the 'datasets' section.
End of explanation
"""
embed_size = 50 # how big is each word vector
max_features = 20000 # how many unique words to use (i.e. the number of rows in the embedding matrix)
maxlen = 100 # max number of words in a comment to use
"""
Explanation: Set some basic config parameters:
End of explanation
"""
train = pd.read_csv(TRAIN_DATA_FILE)
test = pd.read_csv(TEST_DATA_FILE)
list_sentences_train = train["comment_text"].fillna("_na_").values
list_classes = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
y = train[list_classes].values
list_sentences_test = test["comment_text"].fillna("_na_").values
"""
Explanation: Read in our data and replace missing values:
End of explanation
"""
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(list_sentences_train))
list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train)
list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test)
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
X_te = pad_sequences(list_tokenized_test, maxlen=maxlen)
"""
Explanation: Standard keras preprocessing, to turn each comment into a list of word indexes of equal length (with truncation or padding as needed).
End of explanation
"""
def get_coefs(word,*arr):
return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.strip().split()) for o in open(EMBEDDING_FILE))
"""
Explanation: Read the glove word vectors (space delimited strings) into a dictionary from word->vector.
End of explanation
"""
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
emb_mean,emb_std
word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
"""
Explanation: Use these vectors to create our embedding matrix, with random initialization for words that aren't in GloVe. We'll use the same mean and standard deviation as the GloVe embeddings when generating the random initialization.
End of explanation
"""
inp = Input(shape=(maxlen,))
x = Embedding(max_features, embed_size, weights=[embedding_matrix])(inp)
x = Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(x)
x = GlobalMaxPool1D()(x)
x = Dense(50, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
"""
Explanation: Simple bidirectional LSTM with two fully connected layers. We add some dropout to the LSTM since even 2 epochs is enough to overfit.
End of explanation
"""
model.fit(X_t, y, batch_size=128, epochs=2, validation_split=0.35)
"""
Explanation: Now we're ready to fit our model! Use validation_split when not submitting.
End of explanation
"""
y_test = model.predict([X_te], batch_size=1024, verbose=1)
sample_submission = pd.read_csv('sample_submission.csv')
sample_submission[list_classes] = y_test
sample_submission.to_csv('submission.csv', index=False)
"""
Explanation: And finally, get predictions for the test set and prepare a submission CSV:
End of explanation
"""
|
qutip/qutip-notebooks
|
docs/guide/Eseries.ipynb
|
lgpl-3.0
|
%matplotlib inline
import numpy as np
from pylab import *
from qutip import *
"""
Explanation: Eseries Class
Contents
Exponential-Series Representation of Quantum Objects
Applications of Exponential-Series
End of explanation
"""
es1 = eseries(sigmax(), 1j)
"""
Explanation: <a id='exponential'></a>
Exponential-Series Representation of Time-Dependent Quantum Objects
The eseries object in QuTiP is a representation of an exponential-series expansion of time-dependent quantum objects (a concept borrowed from the quantum optics toolbox).
An exponential series is parameterized by its amplitude coefficients $c_i$ and rates $r_i$, so that the series takes the form
$E(t) = \sum_i c_i e^{r_i t}$. The coefficients are typically quantum objects (i.e. states, operators, etc.), so that the value of the eseries also is a quantum object, and the rates can be either real or complex numbers (describing decay rates and oscillation frequencies, respectively). Note that all amplitude coefficients in an exponential series must be of the same dimensions and composition.
In QuTiP, an exponential series object is constructed by creating an instance of the class eseries:
End of explanation
"""
omega = 1.0
es2 = (eseries(0.5 * sigmax(), 1j * omega) + eseries(0.5 * sigmax(), -1j * omega))
"""
Explanation: where the first argument is the amplitude coefficient (here, the sigma-X operator), and the second argument is the rate. The eseries in this example represents the time-dependent operator $\sigma_x e^{i t}$. To add more terms to an eseries object we simply add objects using the + operator:
End of explanation
"""
es2 = eseries([0.5 * sigmax(), 0.5 * sigmax()], [1j * omega, -1j * omega])
"""
Explanation: The eseries in this example represents the operator $0.5 \sigma_x e^{i\omega t} + 0.5 \sigma_x e^{-i\omega t}$, which is the exponential series representation of $\sigma_x \cos(\omega t)$. Alternatively, we can also specify a list of amplitudes and rates when the eseries is created:
End of explanation
"""
es2
"""
Explanation: We can inspect the structure of an eseries object by printing it to the standard output console:
End of explanation
"""
esval(es2, 0.0) # equivalent to es2.value(0.0)
es2.value(0)
"""
Explanation: and we can evaluate it at time $t$ by using the esval function or the value method:
End of explanation
"""
times = [0.0, 1.0 * np.pi, 2.0 * np.pi]
esval(es2, times)
es2.value(times)
"""
Explanation: or for a list of times [0.0, 1.0 * pi, 2.0 * pi]:
End of explanation
"""
es3 = (eseries([0.5*sigmaz(), 0.5*sigmaz()], [1j, -1j]) +
eseries([-0.5j*sigmax(), 0.5j*sigmax()], [1j, -1j]))
rho = fock_dm(2, 1)
es3_expect = expect(rho, es3)
es3_expect
"""
Explanation: To calculate the expectation value of a time-dependent operator represented by an eseries, we use the expect function. For example, consider the operator $\sigma_z \cos(\omega t) + \sigma_x\sin(\omega t)$, and say we would like to know the expectation value of this operator for a spin in its excited state (rho = fock_dm(2,1) produces this state):
End of explanation
"""
es3_expect.value([0.0, pi/2])
"""
Explanation: Note that the expectation value of the eseries object, expect(rho, es3), is itself an eseries, but with amplitude coefficients that are c-numbers instead of quantum operators. To evaluate the c-number eseries at the times in times we use es3_expect.value(times) or equivalently esval(es3_expect, times).
End of explanation
"""
psi0 = basis(2,1)
H = sigmaz()
L = liouvillian(H, [sqrt(1.0) * destroy(2)])
es = ode2es(L, psi0)
"""
Explanation: <a id='applications'></a>
Applications of Exponential Series
The exponential series formalism can be useful for the time-evolution of quantum systems. One approach to calculating the time evolution of a quantum system is to diagonalize its Hamiltonian (or Liouvillian, for dissipative systems) and to express the propagator (e.g., $\exp(-iHt) \rho \exp(iHt)$) as an exponential series.
The QuTiP functions ode2es and essolve use this method to evolve quantum systems in time. The exponential series approach is particularly suitable for cases where the same system is to be evolved for many different initial states, since the diagonalization only needs to be performed once (as opposed to, e.g., the ODE solver, which would need to be run independently for each initial state).
As an example, consider a spin-1/2 with a Hamiltonian pointing in the $\sigma_z$ direction that is subject to noise causing relaxation. For a spin that is originally in the up state, we can create an eseries object describing its dynamics by using the ode2es function:
End of explanation
"""
es
"""
Explanation: The ode2es function diagonalizes the Liouvillian $L$ and creates an exponential series with the correct eigenfrequencies and amplitudes for the initial state
$\psi_0$ (psi0).
We can examine the resulting eseries object by printing a text representation:
End of explanation
"""
es.value([0.0, 1.0])
"""
Explanation: or by evaluating it at arbitrary points in time (here at 0.0 and 1.0):
End of explanation
"""
es_expect = expect(sigmaz(), es)
"""
Explanation: and the expectation value of the exponential series can be calculated using the expect function:
End of explanation
"""
times = linspace(0.0, 10.0, 100)
sz_expect = es_expect.value(times)
plot(times, sz_expect, lw=2)
xlabel("Time", fontsize=14)
ylabel("Expectation value of sigma-z", fontsize=14)
show()
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/guide.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: The result es_expect is now an exponential series with c-numbers as amplitudes, which easily can be evaluated at arbitrary times:
End of explanation
"""
|
sujitpal/intro-dl-talk-code
|
src/01-nonlinearity.ipynb
|
unlicense
|
from __future__ import division, print_function
from sklearn.cross_validation import train_test_split
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.utils import np_utils
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def read_dataset(filename):
Z = np.loadtxt(filename, delimiter=",")
y = Z[:, 0]
X = Z[:, 1:]
return X, y
def plot_dataset(X, y):
Xred = X[y==0]
Xblue = X[y==1]
plt.scatter(Xred[:, 0], Xred[:, 1], color='r', marker='o')
plt.scatter(Xblue[:, 0], Xblue[:, 1], color='b', marker='o')
plt.xlabel("X[0]")
plt.ylabel("X[1]")
plt.show()
"""
Explanation: Effect of Number and Depth of Hidden Units on Nonlinearity
This notebook looks at the effect of increasing the number of hidden layers and the number of hidden units in each layer in order to model non-linear data.
The code is adapted from Simple end-to-end Tensorflow examples blog post by Jason Baldridge. The ideas here are identical, except the implementation uses Keras instead of Tensorflow.
Imports and setup
End of explanation
"""
X, y = read_dataset("../data/linear.csv")
X = X[y != 2]
y = y[y != 2].astype("int")
print(X.shape, y.shape)
plot_dataset(X, y)
"""
Explanation: Linearly Separable Data
Our first dataset is linearly separable as seen in the scatter plot below.
End of explanation
"""
Y = np_utils.to_categorical(y, 2)
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, test_size=0.3, random_state=0)
"""
Explanation: Our y values need to be in sparse one-hot encoding format, so we convert the labels to this format. We then split the dataset 70% for training and 30% for testing.
End of explanation
"""
model = Sequential()
model.add(Dense(2, input_shape=(2,)))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
"""
Explanation: Construct a model with an input layer which takes 2 inputs, and a softmax output layer. The softmax activation takes the scores from each output line and converts them to probabilities. There is no non-linear activation in this network. The equation is given by:
<img src="files/linear_eqn.png"/>
Training this model for 50 epochs yields an accuracy of 82.8% on the test set. (The learned weights defining this linear boundary are inspected just below.)
End of explanation
"""
X, y = read_dataset("../data/moons.csv")
y = y.astype("int")
print(X.shape, y.shape)
plot_dataset(X, y)
Y = np_utils.to_categorical(y, 2)
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, test_size=0.3, random_state=0)
"""
Explanation: Linearly non-separable data #1
The data below is the moons dataset. The two clusters cannot be separated by a straight line.
End of explanation
"""
model = Sequential()
model.add(Dense(50, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
"""
Explanation: A network with the same configuration as above produces an accuracy of 85.67% on the test set, as opposed to 92.7% on the linear dataset.
Let us add a hidden layer of 50 hidden units and a Rectified Linear Unit (ReLU) activation to induce some non-linearity in the model. This gives us an accuracy of 89.3%.
End of explanation
"""
model = Sequential()
model.add(Dense(50, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dense(100))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
"""
Explanation: Let's add another layer; additional layers (combined with non-linear activations) let the network model more non-linearity. We add another hidden layer with 100 units, also with a ReLU activation. This brings our accuracy up to 92%. The separation is still mostly linear, with just the beginnings of non-linearity.
End of explanation
"""
X, y = read_dataset("../data/saturn.csv")
y = y.astype("int")
print(X.shape, y.shape)
plot_dataset(X, y)
"""
Explanation: Linearly non-separable data #2
This is the saturn dataset. The data is definitely not linearly separable unless one applies a radial function to project onto a sphere and cut horizontally across the sphere. We will not do this, since our objective is to investigate the effect of hidden layers and hidden units.
End of explanation
"""
model = Sequential()
model.add(Dense(50, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dense(100))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
"""
Explanation: The previous network (which produced 90.5% accuracy on the test data for the moons dataset) produces 90.3% accuracy on the Saturn data. You can see the boundary becoming non-linear.
End of explanation
"""
model = Sequential()
model.add(Dense(1024, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(128))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
"""
Explanation: Let's increase the number of hidden layers from two to three and make each layer much wider. The hidden layers still use ReLU activations, and we have added Dropout after the first two. Using this, our accuracy goes up to 98.8%. The separation boundary is now definitely non-linear.
End of explanation
"""
|
marcinofulus/teaching
|
Python4physicists_SS2017/Python4hum-Jupyter_intro-from0.ipynb
|
gpl-3.0
|
for i in range(4):
print(i)
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
X = np.linspace(-np.pi, np.pi, 656)
F = np.sin(1/(X**2+0.07))
plt.plot(X,F)
"""
Explanation: Jupyter notebook
Ways of interacting with a computer program:
text terminal
GUI
notebook (NEW!)
Jupyter
A "notebook"-style environment
<img src="http://jupyter.org/assets/main-logo.svg" width="20%" align="right">
A notebook is:
a document
an environment
a "web" application
Jupyterhub
<img src="http://jupyter.org/assets/hublogo.svg" width="50%" align="right">
Jupyterhub
<img src="figs/what_is_jupyterhub.png" align='center'>
* http://www.slideshare.net/willingc/jupyterhub-a-thing-explainer-overview-66104290 *
alfa - jupyterhub @ UŚ
<img src="http://jupyter.org/assets/hublogo.svg" width='120px' align='right'>
address: http://alfa.smcebi.us.edu.pl
Requires a modern web browser.
Information:
Admin: Mirosław Ziółkowski
website: http://www.smcebi.us.edu.pl/infrastruktura-it/
smcebi.edu.pl
password change
Jupyter notebook
text cells (markdown cells) - example
code cells
kernel - the engine that executes the code cells
the list of available kernels is long....
the $\LaTeX$ system:
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
Markdown
asdfa sdfa sdf adf df
asdf asfd s
asdf asdf
asd fas f
asd fa
a dsfa
asd fa
sadf asd
End of explanation
"""
from ipywidgets import interact
def f(x):
print(x)
interact(f, x=10);
from ipywidgets import widgets
w = widgets.IntSlider(min=0,max=10,value=3,step=1,width="430px")
w
w.value
w.value = 4
"""
Explanation: widgets
End of explanation
"""
from math import sin,cos
%timeit sin(cos(sin(1.23)))
import numpy as np
%timeit np.sum(np.sin(np.random.randn(int(1e6))))
%timeit np.sum(np.sin(np.random.randn(int(1e6))))
%time np.sum(np.sin(np.random.randn(int(1e6))))
%%sh
pwd
for i in `ls`
do
echo Plik: $i
done
"""
Explanation: Notebook magics
Cells starting with
"%" - line magics
"%%" - cell magics
Line magics - example
End of explanation
"""
from ipywidgets import widgets
w = widgets.Text()
w
def mojcallback(w):
print("OK ----",w.value)
w.on_submit(mojcallback)
w.value = "dads"
"""
Explanation: Widget callback example
End of explanation
"""
|
shngli/Data-Mining-Python
|
Mining massive datasets/MapReduce SVM.ipynb
|
gpl-3.0
|
from collections import defaultdict
import math
# determine if an integer n is a prime number
def isPrime(n):
if n == 2:
return True
if n%2 == 0 or n <= 1:
return False
sqr = int(math.sqrt(n)) + 1
for divisor in range(3, sqr, 2):
if n%divisor == 0:
return False
return True
# Output the prime divisors of each integer
reduce = defaultdict(list)
def map(integer):
output = []
for i in range(2, integer):
if isPrime(i) and integer%i == 0:
output.append(i)
return output
# Input list of integers
integer = [15, 21, 24, 30, 49]
# Print every integer and its prime divisor(s)
# eg. The prime divisors of 15 are 3 & 5
for n in integer:
print "Integer:", n
primeDivisor = map(n)
print "Prime divisor(s):", primeDivisor
for key in primeDivisor:
reduce[key].append(n)
for key, values in reduce.items():
print "prime divisor and the sum of integers:", key, ",", sum(values)
"""
Explanation: MapReduce / SVM
Question 1
Suppose our input data to a map-reduce operation consists of integer values (the keys are not important). The map function takes an integer i and produces the list of pairs (p,i) such that p is a prime divisor of i. For example, map(12) = [(2,12), (3,12)]. The reduce function is addition. That is, reduce(p, [i1, i2, ..., ik]) is (p, i1 + i2 +...+ ik). Compute the output, if the input is the set of integers 15, 21, 24, 30, 49.
End of explanation
"""
import numpy as np
import itertools
M = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],])
v = np.array([1, 2, 3, 4])
def mr(M, v):
t = []
mr, mc = M.shape
    for i in range(mr):   # i indexes the rows of M (the output keys)
        for j in range(mc):   # j indexes the columns of M / the entries of v
t.append((i, M[i, j] * v[j]))
t = sorted(t, key=lambda x:x[0])
for x in t:
print (x[0]+1, x[1])
r = np.zeros((mr, 1))
for key, vals in itertools.groupby(t, key=lambda x:x[0]):
vals = [x[1] for x in vals]
r[key] = sum(vals)
print '%s, %s' % (key, sum(vals))
return r.transpose()
#print np.dot(M, v.transpose())
print mr(M, v)
"""
Explanation: Question 2
Use the matrix-vector multiplication and apply the Map function to this matrix and vector:
$$M = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{pmatrix}, \qquad v = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}$$
Then, identify the key-value pairs that are the output of Map.
End of explanation
"""
from IPython.display import Image
Image(filename='relations.jpeg')
"""
Explanation: Question 3
Suppose we have the following relations:
End of explanation
"""
import numpy as np
import itertools
R = [(0, 1),
(1, 2),
(2, 3)]
S = [(0, 1),
(1, 2),
(2, 3)]
def hash_join(R, S):
h = {}
for a, b in R:
h.setdefault(b, []).append(a)
j = []
for b, c in S:
if not h.has_key(b):
continue
for r in h[b]:
j.append( (r, b, c) )
return j
def mr(R, S):
m = []
for a, b in R:
m.append( (b, ('R', a)) )
for b, c in S:
m.append( (b, ('S', c)) )
m = sorted(m, key=lambda x:x[0])
r = []
for key, vals in itertools.groupby(m, key=lambda x:x[0]):
vals = [x[1] for x in vals]
print key, vals
rs = [x for x in vals if x[0] == 'R']
ss = [x for x in vals if x[0] == 'S']
for ri in rs:
for si in ss:
r.append( (ri[1], key, si[1]) )
return r
print hash_join(R, S)
print mr(R, S)
"""
Explanation: and we take their natural join. Apply the Map function to the tuples of these relations. Then, construct the elements that are input to the Reduce function. Identify these elements.
End of explanation
"""
from IPython.display import Image
Image(filename='svm1.jpeg')
"""
Explanation: Question 4
The figure below shows two positive points (purple squares) and two negative points (green circles):
End of explanation
"""
import math
import numpy as np
P = [((5, 4), 1),
((8, 3), 1),
((3, 3), -1),
((7, 2), -1)]
def line(pl0, pl1, p):
dx, dy = pl1[0] - pl0[0], pl1[1] - pl0[1]
a = abs((pl1[1] - pl0[1]) * p[0] - (pl1[0] - pl0[0]) * p[1] + pl1[0]*pl0[1] - pl0[0]*pl1[1])
return a / math.sqrt(dx*dx + dy*dy)
def closest(L, pts):
dist = [line(L[0][0], L[1][0], x[0]) for x in pts]
ix = np.argmin(dist)
return pts[ix], dist[ix]
def solve(A, B):
# find the point in B closest to the line through both points in A
p, d = closest(A, B)
M = np.hstack((
np.array([list(x[0]) for x in A] + [list(p[0])]),
np.ones((3, 1))))
b = np.array([x[1] for x in A] + [p[1]])
x = np.linalg.solve(M, b)
return x, d
S = [solve([a for a in P if a[1] == 1], [a for a in P if a [1] == -1]),
solve([a for a in P if a[1] == -1], [a for a in P if a [1] == 1])]
ix = np.argmax([x[1] for x in S])
x = S[ix][0]
print 'w1 = %0.2f' % x[0]
print 'w2 = %0.2f' % x[1]
print 'b = %0.2f' % x[2]
"""
Explanation: That is, the training data set consists of:
- (x1,y1) = ((5,4),+1)
- (x2,y2) = ((8,3),+1)
- (x3,y3) = ((7,2),-1)
- (x4,y4) = ((3,3),-1)
Our goal is to find the maximum-margin linear classifier for this data. In easy cases, the shortest line between a positive and negative point has a perpendicular bisector that separates the points. If so, the perpendicular bisector is surely the maximum-margin separator. Alas, in this case, the closest pair of positive and negative points, x2 and x3, have a perpendicular bisector that misclassifies x1 as negative, so that won't work.
The next-best possibility is that we can find a pair of points on one side (i.e., either two positive or two negative points) such that a line parallel to the line through these points is the maximum-margin separator. In these cases, the limit to how far from the two points the parallel line can get is determined by the closest (to the line between the two points) of the points on the other side. For our simple data set, this situation holds.
Consider all possibilities for boundaries of this type, and express the boundary as w.x+b=0, such that w.x+b≥1 for positive points x and w.x+b≤-1 for negative points x. Assuming that w = (w1,w2), identify the value of w1, w2, and b.
End of explanation
"""
Image(filename='newsvm4.jpeg')
"""
Explanation: Question 5
Consider the following training set of 16 points. The eight purple squares are positive examples, and the eight green circles are negative examples.
End of explanation
"""
import numpy as np
pos = [(5, 10),
(7, 10),
(1, 8),
(3, 8),
(7, 8),
(1, 6),
(3, 6),
(3, 4)]
neg = [(5, 8),
(5, 6),
(7, 6),
(1, 4),
(5, 4),
(7, 4),
(1, 2),
(3, 2)]
C = [(x, 1) for x in pos] + [(x, -1) for x in neg]
w, b = np.array([-1, 1]), -2
d = np.dot(np.array([list(x[0]) for x in C]), w) + b
print("Points"+"\t"+"Slack")
for i, m in enumerate(np.sign(d) == np.array([x[1] for x in C])):
if C[i][1] == 1:
slack = 1 - d
else:
slack = 1 + d
#print "%s %d %0.2f %0.2f" % (C[i][0], C[i][1], d[i], slack[i])
print "%s\t%0.2f" % (C[i][0], slack[i])
"""
Explanation: We propose to use the diagonal line with slope +1 and intercept +2 as a decision boundary, with positive examples above and negative examples below. However, like any linear boundary for this training set, some examples are misclassified. We can measure the goodness of the boundary by computing all the slack variables that exceed 0, and then using them in one of several objective functions. In this problem, we shall only concern ourselves with computing the slack variables, not an objective function.
To be specific, suppose the boundary is written in the form w.x+b=0, where w = (-1,1) and b = -2. Note that we can scale the three numbers involved as we wish, and so doing changes the margin around the boundary. However, we want to consider this specific boundary and margin. Determine the slack for each of the 16 points.
End of explanation
"""
Image(filename='gold.jpeg')
Image(filename='dectree1.jpeg')
"""
Explanation: Question 6
Below we see a set of 20 points and a decision tree for classifying the points.
End of explanation
"""
A = 0
S = 1
pos = [(28,145),
(38,115),
(43,83),
(50,130),
(50,90),
(50,60),
(50,30),
(55,118),
(63,88),
(65,140)]
neg = [(23,40),
(25,125),
(29,97),
(33,22),
(35,63),
(42,57),
(44, 105),
(55,63),
(55,20),
(64,37)]
def classify(p):
if p[A] < 45:
return p[S] >= 110
else:
return p[S] >= 75
e = [p for p, v in zip(pos, [classify(x) for x in pos]) if not v] + \
[p for p, v in zip(neg, [classify(x) for x in neg]) if v]
print e
"""
Explanation: To be precise, the 20 points represent (Age,Salary) pairs of people who do or do not buy gold jewelry. Age (appreviated A in the decision tree) is the x-axis, and Salary (S in the tree) is the y-axis. Those that do are represented by gold points, and those that do not by green points. The 10 points of gold-jewelry buyers are:
(28,145), (38,115), (43,83), (50,130), (50,90), (50,60), (50,30), (55,118), (63,88), and (65,140).
The 10 points of those that do not buy gold jewelry are:
(23,40), (25,125), (29,97), (33,22), (35,63), (42,57), (44, 105), (55,63), (55,20), and (64,37).
Some of these points are correctly classified by the decision tree and some are not. Determine the classification of each point, and then indicate the points that are misclassified.
End of explanation
"""
|
d00d/quantNotebooks
|
Notebooks/quantopian_research_public/notebooks/lectures/p-Hacking_and_Multiple_Comparisons_Bias/notebook.ipynb
|
unlicense
|
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
"""
Explanation: p-Hacking and Multiple Comparisons Bias
By Delaney Mackenzie and Maxwell Margenot.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Multiple comparisons bias is a pervasive problem in statistics, data science, and in general forecasting/predictions. The short explanation is that the more tests you run, the more likely you are to get an outcome that you want/expect. If you ignore the multitude of tests that failed, you are clearly setting yourself up for failure by misinterpreting what's going on in your data.
A particularly common example of this is when looking for relationships in large data sets comprising many independent series or variables. In this case you run a test each time you evaluate whether a relationship exists between a set of variables.
Statistics Merely Illuminates This Issue
Most folks also fall prey to multiple comparisons bias in real life. Any time you make a decision you are effectively taking an action based on an hypothesis. That hypothesis is often tested. You can end up unknowingly making many tests in your daily life.
An example might be deciding which medicine is helping cure a cold you have. Many people will take multiple medicines at once to try and get rid of symptoms. You may think that a certain medicine worked, when in reality none did and the cold just happened to start getting better at some point.
The point here is that this problem doesn't stem from statistical testing and p-values. Rather, these techniques give us much more information about the problem and when it might be occuring.
End of explanation
"""
X = pd.Series(np.random.normal(0, 1, 100))
Y = X
r_s = stats.spearmanr(Y, X)
print 'Spearman Rank Coefficient: ', r_s[0]
print 'p-value: ', r_s[1]
"""
Explanation: Refresher: Spearman Rank Correlation
Please refer to this lecture for more full info, but here is a very brief refresher on Spearman Rank Correlation.
It's a variation of correlation that takes into account the ranks of the data. This can help with weird distributions or outliers that would confuse other measures. The test also returns a p-value, which is key here.
A higher coefficient means a stronger estimated relationship.
End of explanation
"""
X = pd.Series(np.random.normal(0, 1, 100))
Y = X + np.random.normal(0, 1, 100)
r_s = stats.spearmanr(Y, X)
print 'Spearman Rank Coefficient: ', r_s[0]
print 'p-value: ', r_s[1]
"""
Explanation: If we add some noise our coefficient will drop.
End of explanation
"""
# Setting a cutoff of 5% means that there is a 5% chance
# of us getting a significant p-value given no relationship
# in our data (false positive).
# NOTE: This is only true if the test's assumptions have been
# satisfied and the test is therefore properly calibrated.
# All tests have different assumptions.
cutoff = 0.05
X = pd.Series(np.random.normal(0, 1, 100))
Y = X + np.random.normal(0, 1, 100)
r_s = stats.spearmanr(Y, X)
print 'Spearman Rank Coefficient: ', r_s[0]
if r_s[1] < cutoff:
print 'There is significant evidence of a relationship.'
else:
print 'There is not significant evidence of a relationship.'
"""
Explanation: p-value Refresher
For more info on p-values see this lecture. What's important to remember is they're used to test a hypothesis given some data. Here we are testing the hypothesis that a relationship exists between two series given the series values.
IMPORTANT: p-values must be treated as binary.
A common mistake is that p-values are treated as more or less significant. This is bad practice as it allows for what's known as p-hacking and will result in more false positives than you expect. Effectively, you will be too likely to convince yourself that relationships exist in your data.
To treat p-values as binary, a cutoff must be set in advance. Then the p-value must be compared with the cutoff and treated as significant/not signficant. Here we'll show this.
The Cutoff is our Significance Level
We can refer to the cutoff as our significance level because a lower cutoff means that results which pass it are significant at a higher level of confidence. So if you have a cutoff of 0.05, then even on random data 5% of tests will pass based on chance. A cutoff of 0.01 reduces this to 1%, which is a more stringent test. We can therefore have more confidence in our results.
End of explanation
"""
df = pd.DataFrame()
"""
Explanation: Experiment - Running Many Tests
We'll start by defining a data frame.
End of explanation
"""
N = 20
T = 100
for i in range(N):
X = np.random.normal(0, 1, T)
X = pd.Series(X)
name = 'X%s' % i
df[name] = X
df.head()
"""
Explanation: Now we'll populate it by adding N randomly generated timeseries of length T.
End of explanation
"""
cutoff = 0.05
significant_pairs = []
for i in range(N):
for j in range(i+1, N):
Xi = df.iloc[:, i]
Xj = df.iloc[:, j]
results = stats.spearmanr(Xi, Xj)
pvalue = results[1]
if pvalue < cutoff:
significant_pairs.append((i, j))
"""
Explanation: Now we'll run a test on all pairs within our data looking for instances where our p-value is below our defined cutoff of 5%.
End of explanation
"""
(N * (N-1) / 2) * 0.05
"""
Explanation: Before we check how many significant results we got, let's work out how many we'd expect. The formula for the number of pairs given N series is
$$\frac{N(N-1)}{2}$$
There are no relationships in our data as it's all randomly generated. If our test is properly calibrated we should expect a false positive rate of 5% given our 5% cutoff. Therefore we should expect the following number of pairs that achieved significance based on pure random chance.
End of explanation
"""
len(significant_pairs)
"""
Explanation: Now let's compare to how many we actually found.
End of explanation
"""
def do_experiment(N, T, cutoff=0.05):
df = pd.DataFrame()
# Make random data
for i in range(N):
X = np.random.normal(0, 1, T)
X = pd.Series(X)
name = 'X%s' % i
df[name] = X
significant_pairs = []
# Look for relationships
for i in range(N):
for j in range(i+1, N):
Xi = df.iloc[:, i]
Xj = df.iloc[:, j]
results = stats.spearmanr(Xi, Xj)
pvalue = results[1]
if pvalue < cutoff:
significant_pairs.append((i, j))
return significant_pairs
num_experiments = 100
results = np.zeros((num_experiments,))
for i in range(num_experiments):
# Run a single experiment
result = do_experiment(20, 100, cutoff=0.05)
# Count how many pairs
n = len(result)
# Add to array
results[i] = n
"""
Explanation: We shouldn't expect the numbers to match too closely here on a consistent basis as we've only run one experiment. If we run many of these experiments we should see a convergence to what we'd expect.
Repeating the Experiment
End of explanation
"""
np.mean(results)
"""
Explanation: The average over many experiments should be closer.
End of explanation
"""
def get_pvalues_from_experiment(N, T):
df = pd.DataFrame()
# Make random data
for i in range(N):
X = np.random.normal(0, 1, T)
X = pd.Series(X)
name = 'X%s' % i
df[name] = X
pvalues = []
# Look for relationships
for i in range(N):
for j in range(i+1, N):
Xi = df.iloc[:, i]
Xj = df.iloc[:, j]
results = stats.spearmanr(Xi, Xj)
pvalue = results[1]
pvalues.append(pvalue)
return pvalues
"""
Explanation: Visualizing What's Going On
What's happening here is that p-values should be uniformly distributed, given no signal in the underlying data. Basically, they carry no information whatsoever and will be equally likely to be 0.01 as 0.99. Because they're popping out randomly, you will expect a certain percentage of p-values to be underneath any threshold you choose. The lower the threshold the fewer will pass your test.
Let's visualize this by making a modified function that returns p-values.
End of explanation
"""
pvalues = get_pvalues_from_experiment(10, 100)
plt.hist(pvalues)
plt.ylabel('Frequency')
plt.title('Observed p-value');
"""
Explanation: We'll now collect a bunch of p-values. As usual, we'll want to collect quite a number of them to get a sense of how the underlying distribution looks. If we only collect a few, the histogram will be noisy, like this:
End of explanation
"""
pvalues = get_pvalues_from_experiment(50, 100)
plt.hist(pvalues)
plt.ylabel('Frequency')
plt.title('Observed p-value');
"""
Explanation: Let's dial up our N parameter to get a better sense. Keep in mind that the number of p-values will increase at a rate of
$$\frac{N (N-1)}{2}$$
or approximately quadratically. Therefore we don't need to increase N by much.
End of explanation
"""
pvalues = get_pvalues_from_experiment(50, 100)
plt.vlines(0.01, 0, 150, colors='r', linestyle='--', label='0.01 Cutoff')
plt.vlines(0.05, 0, 150, colors='r', label='0.05 Cutoff')
plt.hist(pvalues, label='P-Value Distribution')
plt.legend()
plt.ylabel('Frequency')
plt.title('Observed p-value');
"""
Explanation: Starting to look pretty flat, as we expected. Lastly, just to visualize the process of drawing a cutoff, we'll draw two artificial lines.
End of explanation
"""
num_experiments = 100
results = np.zeros((num_experiments,))
for i in range(num_experiments):
# Run a single experiment
result = do_experiment(20, 100, cutoff=0.01)
# Count how many pairs
n = len(result)
# Add to array
results[i] = n
np.mean(results)
"""
Explanation: We can see that with a lower cutoff we should expect to get fewer false positives. Let's check that with our above experiment.
End of explanation
"""
(N * (N-1) / 2) * 0.01
"""
Explanation: And finally compare it to what we expected.
End of explanation
"""
num_experiments = 100
results = np.zeros((num_experiments,))
N = 20
T = 100
desired_level = 0.05
num_tests = N * (N - 1) / 2
new_cutoff = desired_level / num_tests
for i in range(num_experiments):
# Run a single experiment
result = do_experiment(20, 100, cutoff=new_cutoff)
# Count how many pairs
n = len(result)
# Add to array
results[i] = n
np.mean(results)
"""
Explanation: Sensitivity / Specificity Tradeoff
As with any adjustment of p-value cutoff, we have a tradeoff. A lower cutoff decreases the rate of false positives, but also decreases the chance we find a real relationship (true positive). So you can't just decrease your cutoff to solve this problem. (A small numerical illustration of this tradeoff appears at the end of the notebook.)
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Reducing Multiple Comparisons Bias
You can't really eliminate multiple comparisons bias, but you can reduce how much it impacts you. To do so we have two options.
Option 1: Run fewer tests.
This is often the best option. Rather than just sweeping around hoping you hit an interesting signal, use your expert knowledge of the system to develop a great hypothesis and test that. This process of exploring the data, coming up with a hypothesis, then gathering more data and testing the hypothesis on the new data is considered the gold standard in statistical and scientific research. It's crucial that the data set on which you develop your hypothesis is not the one on which you test it. Because you found the effect while exploring, the test will likely pass and not really tell you anything. What you want to know is how consistent the effect is. Moving to new data and testing there will not only mean you only run one test, but will be an 'unbiased estimator' of whether your hypothesis is true. We discuss this a lot in other lectures.
Option 2: Adjustment Factors and the Bonferroni Correction
WARNING: This section gets a little technical. Unless you're comfortable with significance levels, we recommend looking at the code examples first and maybe reading the linked articles before fully diving into the text.
If you must run many tests, try to correct your p-values. This means applying a correction factor to the cutoff you desire to obtain the one actually used when determining whether p-values are significant. The most conservative and common correction factor is the Bonferroni correction.
Example: Bonferroni Correction
The concept behind Bonferroni is quite simple. It just says that if we run $m$ tests, and we have a significance level/cutoff of $a$, then we should use $a/m$ as our new cutoff when determining significance. The math works out because of the following.
Let's say we run $m$ tests. We should expect to see $ma$ false positives based on random chance that pass our cutoff. If we instead use $a/m$ as our cutoff, then we should expect to see $m \cdot a/m = a$ tests that pass our cutoff. Therefore we are back to our desired false positive rate of $a$.
Let's try it on our experiment above.
End of explanation
"""
|
probml/pyprobml
|
notebooks/misc/GCP_CC_TPU_Pod_Slice_JAX.ipynb
|
mit
|
# Hints from :
# https://medium.com/analytics-vidhya/how-to-access-files-from-google-cloud-storage-in-colab-notebooks-8edaf9e6c020
# https://stackoverflow.com/questions/57772453/login-on-colab-with-gcloud-without-service-account
"""
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/GCP_CC_TPU_Pod_Slice_JAX.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
from google.colab import auth
auth.authenticate_user()
"""
Explanation: Authenticate GCP
End of explanation
"""
!curl https://sdk.cloud.google.com | bash
"""
Explanation: Install GCloud SDK into a new directory
End of explanation
"""
%%file example.py
# The following code snippet will be run on all TPU hosts
import jax
# The total number of TPU cores in the Pod
device_count = jax.device_count()
# The number of TPU cores attached to this host
local_device_count = jax.local_device_count()
# The psum is performed over all mapped devices across the Pod
xs = jax.numpy.ones(jax.local_device_count())
r = jax.pmap(lambda x: jax.lax.psum(x, 'i'), axis_name='i')(xs)
# Print from a single host to avoid duplicated output
if jax.process_index() == 0:
print('global device count:', jax.device_count())
print('local device count:', jax.local_device_count())
print('pmap result:', r)
"""
Explanation: Run the following commands in Colab's terminal
Install GCloud Alpha components
bash
gcloud1="/root/google-cloud-sdk/bin/gcloud"
$gcloud1 components install alpha
Set your GCP Project ID
bash
project_id="YOUR_PROJECT_ID"
$gcloud1 config set project $project_id
Create your TPU VM per the instructions
$gcloud1 alpha compute tpus tpu-vm create *YOUR_TPU_VM_NAME* \
--zone us-east1-d \
--accelerator-type v3-32 \
--version v2-alpha
Install JAX on the pod slice
bash
$gcloud1 alpha compute tpus tpu-vm ssh *YOUR_TPU_VM_NAME* \
--zone us-east1-d \
--worker=all \
--command="pip install 'jax[tpu]>=0.2.16' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html"
Run the cell below to write example.py (commands for copying it to the workers and running it are sketched below)
End of explanation
"""
|
dietmarw/EK5312_ElectricalMachines
|
Chapman/Ch3-Example_3-01.ipynb
|
unlicense
|
%pylab notebook
"""
Explanation: Electric Machinery Fundamentals 5th edition
Chapter 3 (Code examples)
Example 3-1
Calculate the net magnetic field produced by a three-phase stator.
Import the PyLab namespace (provides a set of useful commands and constants like $\pi$)
End of explanation
"""
bmax = 1 # Normalize bmax to 1
freq = 60 # 60 Hz
w = 2*pi*freq # angular velocity (rad/s)
"""
Explanation: Set up the basic conditions:
End of explanation
"""
t = linspace(0, 1./60, 100) # 100 values for one period
Baa = sin(w*t) * (cos(0) + 1j*sin(0))
Bbb = sin(w*t-2*pi/3) * (cos(2*pi/3) + 1j*sin(2*pi/3))
Bcc = sin(w*t+2*pi/3) * (cos(-2*pi/3) + 1j*sin(-2*pi/3))
"""
Explanation: First, generate the three component magnetic fields
End of explanation
"""
Bnet = Baa + Bbb + Bcc
"""
Explanation: Calculate Bnet:
End of explanation
"""
circle = 1.5 * (cos(w*t) + 1j*sin(w*t))
"""
Explanation: Calculate a circle representing the expected maximum value of Bnet (1.5 times the per-phase maximum bmax for a balanced three-phase winding):
End of explanation
"""
# First set up the figure, the axis, and the plot element we want to animate
fig = figure()
ax = axes(xlim=(-1.6, 2.5), ylim=(-1.6, 1.6), aspect='equal')
laa, lbb, lcc, lnet = ax.plot([], [], 'k', # black
[], [], 'b', # blue
[], [], 'm', # magenta
[], [], 'r', # red
lw=2)
ax.legend(('Baa', 'Bbb', 'Bcc', 'Bnet'), loc=4)
# initialization function: plot the background of each frame
def init():
ax.plot(real(circle), imag(circle), 'g');
return
# animation function. This is called sequentially
def animate(i):
laa.set_data([0, real(Baa[i])], [0, imag(Baa[i])])
lbb.set_data([0, real(Bbb[i])], [0, imag(Bbb[i])])
lcc.set_data([0, real(Bcc[i])], [0, imag(Bcc[i])])
lnet.set_data([0, real(Bnet[i])], [0, imag(Bnet[i])])
return laa, lbb, lcc, lnet
from matplotlib import animation
# call the animator.
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=50);
"""
Explanation: Plot the magnitude and direction of the resulting magnetic fields. Note that Baa is black, Bbb is blue, Bcc is magenta, and Bnet is red.
End of explanation
"""
from IPython.display import HTML
HTML(anim.to_html5_video())
"""
Explanation: The animation above might be a bit "skippy" due to the browser performance trying to cope with the inline animation.
The solution to a smooth animation is to the animation as a video and embed it right here:
End of explanation
"""
|
yevheniyc/C
|
1d_Biopython_Cookbook/Chapter_1.ipynb
|
mit
|
p = (4, 5, 6, 7)
x, y, z, w = p # x -> 4
data = ['ACME', 50, 91.1, (2012, 12, 21)]
name, _, price, date = data # name -> 'ACME', date -> (2012, 12, 21)
s = 'Hello'
a, b, c, d, e = s # a -> H
p = (4, 5)
x, y, z = p # "ValueError"
"""
Explanation: Chapter 1 - Data Structures and Algorithms
1.1 Unpacking a Sequence into Separate Variables:
Problem:
Unpacking tuple/sequence into a collection of variables
Solution:
Any sequence/iterable can be unpacked into variables using an assignment operation. The number and structure of the variables must match the items in the sequence:
End of explanation
"""
def drop_first_last(grades):
""" Drop first and last exams, then average the rest. """
first, *middle, last = grades
return avg(middle)
def arbitrary_numbers():
""" Name and email followed by phone number(s). """
record = ('Dave', 'dave@example.com', '555-555-5555', '555-555-5544')
name, email, *phone_numbers = record # phone_number always a list
return phone_numbers
def recent_to_first_n():
""" Most recent quarter compares to the average of the first n. """
sales_records = ('23.444', '234.23', '0', 23.12, '15.56')
*trailing_qtrs, current_qtr = sales_record
trailing_avg = sum(trailing_qtrs) / len(trailing_qtrs)
return avg_comparison(trailing_avg, current_qtr)
"""
Explanation: 1.2 Unpacking Elements from Iterables of Arbitrary Length:
Problem:
Unpacking unknown number of elements in tuple/sequence/iterables into variables
Solution:
Use "star expressions" for handling multiples:
End of explanation
"""
####### 1 ##############
records = [ ('foo', 1, 2), ('bar', 'hello'), ('foo', 3, 4) ]
def do_foo(x, y):
print('foo', x, y)
def do_bar(s):
print('bar', s)
for tag, *args in records:
if tag == 'foo':
do_foo(*args)
elif tag == 'bar':
do_bar(*args)
#########################
######## 2 ##############
line = 'nobody:*:-2:-2:Unprivileged User:/var/empty:/usr/bin/false'
uname, *fields, homedir, sh = line.split(':') # uname -> nobody
#########################
######### 3 #############
record = ('ACME', 50, 123, 45, (12, 18, 2012))
name, *_, (*_, year) = record # name and year
#########################
######### 4 #############
def sum(items):
""" Recursions are not recommended w/ Python. """
head, *tail = items
    return head + sum(tail) if tail else head
#########################
"""
Explanation: Discussion:
This is often used with iterables of unknown (arbitrary) length and a known pattern: "everything after element 1 is a number".
Handy when iterating over a sequence of tuples of varying length or of tagged tuples.
Handy when unpacking with string processing operations
Handy when unpacking and throwing away some variables
Handy when splitting a list into head and tail components, which could be used to implement recursive solutions.
End of explanation
"""
from collections import deque
def search(lines, pattern, history=5):
""" Returns a line that matches the pattern and 5 previous lines"""
previous_lines = deque(maxlen=history) # a generator of a list with max length
for line in lines:
if pattern in line:
yield line, previous_lines
previous_lines.append(line)
# Example use on a file
if __name__ == '__main__':
with open('somefile.txt') as f:
for line, prevlines in search(f, 'python', 5):
for pline in prevlines:
print(pline, end='')
print(line, end='')
print('-' * 20)
"""
Explanation: 1.3 Keeping the Last N Items (in list queue with deque):
Problem:
Keep a limited history of the last few items seen during iteration or processing.
Solution:
Use collections.deque: perform a simple text search on a sequence of lines and yield matching lines with previous N lines of conext when found:
End of explanation
"""
######## 1, 2, 3 ########
q = deque(maxlen=3)
q.append(1)
q.appendleft(4)
q.pop() # 1
q.popleft() # 4
#########################
"""
Explanation: Generator functions (with yield) are common when searching for items. This decouples the process of searching from the code that uses results:
deque(maxlen=5) uses fixed-size queue; although we could append/delete items from a list, this is more elegant/faster
Handly when a simple queue structure is needed; without maxlen, use pop/append
Popping/appending/popleft/appendleft has O(1) vs O(N) complexity
End of explanation
"""
import heapq
nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]
print(heapq.nlargest(3, nums)) # [42, 37 ,23]
print(heapq.nsmallest(3, nums)) # [-4, 1, 2]
heap = list(nums)
heapq.heapify(heap)   # heap[0] is now the smallest item
heapq.heappop(heap)   # -4
# use key parameter to use with complicated data structures
portfolio = [
{'name': 'IBM', 'shares': 100, 'price': 91.1},
{'name': 'AAPL', 'shares': 50, 'price': 543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
cheap = heapq.nsmallest(3, portfolio, key=lambda s: s['price'])
expensive = heapq.nlargest(3, portfolio, key=lambda s: s['price'])
# if N is close to the size of the items:
sorted(nums)[:N] # a better approach
"""
Explanation: 1.4 Finding the Largest or Smallest N Items
Problem:
Make a list of the largest or smallest N items in a collection.
Solution:
The heapq module has nlargest() and nsmallest()
End of explanation
"""
import heapq
class PriorityQueue:
def __init__(self):
self._queue = []
self._index = 0
def __repr__(self):
return 'PriorityQueue({}) with index({})'.format(self._queue, self._index)
def push(self, item, priority):
heapq.heappush(self._queue, (-priority, self._index, item)) # heappush(list, ())
self._index += 1
def pop(self):
return heapq.heappop(self._queue)[-1] # self_queue includes [(priority, index, item)]
class Item:
def __init__(self, name):
self.name = name
def __repr__(self):
return 'Item({!r})'.format(self.name)
q = PriorityQueue()
print(q)
q.push(Item('foo'), 1)
print(q)
q.push(Item('bar'), 5)
print(q)
q.push(Item('spam'), 4)
print(q)
q.push(Item('grok'), 1)
print(q)
q.pop() # -> Item('bar')
print(q)
q.pop() # -> Item('spam')
print(q)
q.pop() # -> Item('foo')
print(q)
q.pop() # -> Item('grok')
print(q)
# foo and grok were popped in the same order in which they were inserted
"""
Explanation: Discussion
When looking for the N smallest/largest items, heapq provides superior performance. The data is first converted into a list ordered as a heap (underneath), after which heap[0] is always the smallest item.
1.5 Implementing a Priority Queue
Problem:
Implement a queue that sorts items by a given priority and always returns the item with the highest priority on each pop operation.
Solution:
Use heapq to implement a simple priority queue
End of explanation
"""
from collections import defaultdict
d = defaultdict(list) # multiple values will be added to a list
d['a'].append(1)
d['a'].append(2)
d['b'].append(4)
d = defaultdict(set) # multiple values will be added to a set
d['a'].add(1)
d['b'].add(2)
d['a'].add(5)
# Messier setdefault
d = {}
d.setdefault('a', []).append(1)
d.setdefault('a', []).append(2) # will add to the existing list
# Even messier
d = {}
for key, value in pairs:
if key not in d:
d[key] = []
d[key].append(value)
# Best!
d = defaultdict(list)
for key, value in pairs:
d[key].append(value)
"""
Explanation: Discussion:
This recipe focuses on the use of the heapq module. The functions heapq.heappush() and heapq.heappop() insert and remove items from the list _queue so that the first item in the list has the highest priority.
heappop() and heappush() have O(log N) complexity
the queue consists of tuples (-priority, index, item); the priority is negated so that items with the highest priority sort to the front of _queue
the index value is used to properly order items that share the same priority; it also keeps the tuples comparable:
By introducing the extra index and making (priority, index, item) tuples, you avoid this problem entirely, since no two tuples will ever have the same value for index (and Python never bothers to compare the remaining tuple values once the result of comparison can be determined):
a = (1, 0, Item('foo'))
b = (5, 1, Item('bar'))
c = (1, 2, Item('grok'))
a < b # True
a < c # True
we can use this queue for communication between threads, but we will need to add appropriate locking and signaling (look ahead)
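A rough sketch of the locking part only (the signaling needed for blocking consumers is omitted, and the wrapper class name is made up for illustration):
import threading
class LockedPriorityQueue(PriorityQueue):
    def __init__(self):
        super().__init__()
        self._lock = threading.Lock()
    def push(self, item, priority):
        with self._lock:
            super().push(item, priority)
    def pop(self):
        with self._lock:
            return super().pop()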
1.6 Mapping Keys to Multiple Values in a Dictionary
Problem:
Make a dictionary that maps keys to more than one value (multidict)
Solution:
A dictionary is a mapping where each key is mapped to a single value. When mapping keys to multiple values, we need to store multiple values in a different container: list or set.
Use lists to preserve the insertion order of the items
Use sets to eliminate duplicates (when we don't care about the order)
Use defaultdict in the collections to construct such structure:
defaultdict automatically initializes the value for a key the first time it is accessed
note that defaultdict creates a default entry for any key you access, even if you only read it
if we don't want that behavior, use setdefault (it is messier, however)
End of explanation
"""
from collections import OrderedDict
d = OrderedDict()
d['foo'] = 1
d['bar'] = 2
d['spam'] = 3
d['grok'] = 4
for key in d:
print(key, d[key]) # -> 'foo 1', 'bar 2', 'spam 3', 'grok 4'
# Use when serializing JSON
import json
json.dumps(d) # -> '{"foo": 1, "bar": 2, "spam": 3, "grok": 4}'
"""
Explanation: 1.7 Keeping Dictionaries in Order
Problem:
Control the order of items in a dictionary when iterating or serializing
Solution:
Use OrderedDict from the collections to control dictionary order. It is particularly useful when building a mapping that later will be serialized or encoded into a different format. For example, when controlling the order of fields appearing in a JSON encoding, first build the data in OrderedDict and then json dump.
End of explanation
"""
prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
# to get calculated values first reverse and zip
min_price = min(zip(prices.values(), prices.keys())) # (10.75, 'FB')
max_price = max(zip(prices.values(), prices.keys())) # (612.78, 'AAPL')
# to rank the data use zip with sorted
prices_sorted = sorted(zip(prices.values(), prices.keys())) # [(10.75, 'FB'), (37.2, 'HPQ')...]
# the iterator can be consumed only once
prices_and_names = zip(prices.values(), prices.keys())
print(min(prices_and_names)) # result OK
print(max(prices_and_names)) # ValueError: max() arg is an empty sequence
"""
Explanation: Discussion:
An OrderedDict is expensive - be careful when handling very large numbers of items (say, more than 100,000):
it internally maintains a doubly linked list that orders the keys according to insertion order; when a new item is first inserted, it is placed at the end of this list; subsequent reassignment of an existing key doesn't change the order.
be aware that the size of an OrderedDict is more than twice that of a normal dictionary, due to the extra linked list that's created.
if you are building a data structure involving a large number of OrderedDict instances (e.g. reading more than 100,000 lines of a CSV file into a list of OrderedDict instances), weigh the memory cost first!
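A quick way to get a rough feel for the overhead (sys.getsizeof only gives an approximate picture, and exact byte counts vary by Python version):
import sys
from collections import OrderedDict
d = dict.fromkeys(range(1000))
od = OrderedDict.fromkeys(range(1000))
sys.getsizeof(d), sys.getsizeof(od)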
1.8 Calculating with Dictionaries
Problem:
Performing various calculations (min, max, sort) on a dictionary
Solution:
Reverse keys and values, then perform a calculation function on the zip result.
Important:
1. max/min/sort is performed on the keys
2. if the keys are the same, max/min/sort is then based on the values
3. zip creates an iterator, which can only be consumed once
End of explanation
"""
#### 1 #############
min(prices) # 'AAPL'
max(prices) # 'IBM'
#### 2 ############
min(prices.values()) # 10.75
max(prices.values()) # 612.78
#### 3 ############
min(prices, key=lambda k: prices[k]) # 'FB'
max(prices, key=lambda k: prices[k]) # 'AAPL' -> perform the calculation on the values and return the key
# to get the value as well as the key, additionally:
min_key = min(prices, key=lambda k: prices[k])
min_value = prices[min(prices, key=lambda k: prices[k])]
#### 4, 5 #########
prices = { 'AAA' : 45.23, 'ZZZ': 45.23 }
min(zip(prices.values(), prices.keys())) # (45.23, 'AAA')
max(zip(prices.values(), prices.keys())) # (45.23, 'ZZZ')
"""
Explanation: Discussion:
Common reductions on a dictionary process the keys and not the values
This is probably not what you want, as calculations are usually performed on the values
In addition to the value result, we often need to know the corresponding key
That is why the zip solution works really well and is not too clunky
As noted before, if the values in the (value, key) pairs are equal, the keys are used to break the tie
For clear example on lambda functions and key attributes go to:
https://wiki.python.org/moin/HowTo/Sorting
End of explanation
"""
a={
'x' : 1,
'y' : 2,
'z' : 3
}
b={
'w' : 10,
'x' : 11,
'y' : 2
}
# find keys in common
a.keys() & b.keys() # {'x', 'y'}
# find keys in a that are not in b
a.keys() - b.keys() # {'z'}
# find (key, value) pairs in common
a.items() & b.items() # {('y', 2)}
# alter/filter dictionary contents - make a new dict with selected keys removed
c = { key: a[key] for key in a.keys() - {'z', 'w'}} # {'x': 1, 'y': 2}
"""
Explanation: 1.9 Finding Commonalities in Two Dictionaries
Problem:
Find out what two different dictionaries have in common (keys, values, etc.)
Solution:
Perform common set operations using the keys() or items() methods
End of explanation
"""
###### 1 #########
def dedupe(items):
''' Yield items not seen before; add each new item to seen and check later items against it. '''
seen = set()
for item in items:
if item not in seen:
yield item
seen.add(item)
a = [1, 5, 2, 1, 9, 1, 5, 10]
list(dedupe(a)) # [1, 5, 2, 9, 10]
##### 2 ##########
def dedupe(items, key=None): # key is similar to min/max/sorted
''' Purpose of the key argument is to specify a function(lambda)
that converts sequence items into a hashable type for the
purposes of duplicate detection.
'''
seen = set()
for item in items:
val = item if key is None else key(item) # key could be lambda of values, keys, etc.
if val not in seen:
yield item
seen.add(val)
a = [ {'x':1, 'y':2}, {'x':1, 'y':3}, {'x':1, 'y':2}, {'x':2, 'y':4}]
# remove duplicates based on x/y values
list(dedupe(a, key=lambda d: (d['x'], d['y']))) # [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 2, 'y': 4}]
##### 3 #########
# remove duplicates based on x values - for each item in "a" sequence execute the lambda function
list(dedupe(a, key=lambda d: d['x'])) # [{'x': 1, 'y': 2}, {'x': 2, 'y': 4}]
"""
Explanation: Discussion:
The keys() method of a dictionary returns a keys-view object that exposes the keys. Key views support set operations: unions, intersections, and differences.
The items() method of a dictionary returns an items-view object consisting of (key, value) pairs. This object supports similar set operations and can be used to perform operations such as finding out which key-value pairs two dictionaries have in common.
Although similar, the values() method of a dictionary does not support the set operations described in this recipe. In part, this is because, unlike keys, the items contained in a values view aren't guaranteed to be unique. However, if you must perform such calculations, they can be accomplished by simply converting the values to a set first.
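For example, a minimal sketch using the dictionaries a and b defined above:
set(a.values()) & set(b.values())   # values common to both dictionaries, {2} here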
1.10 Removing Duplicates from a Sequence while Maintaining Order
Problem:
Eliminate the duplicate values in a sequence, but preserve the order
Solution:
If the values in the sequence are hashable, use a set and a generator (this preserves the original order).
If a sequence consists of unhashable types (dicts) use the key/lambda combo
The key/lambda combo also works well when eliminating duplicates based on the values of a single field, attribute, or a larger data structure
For an amazing explanation of iterables, iterators, generators and yield:
http://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do-in-python
End of explanation
"""
# let's eliminate duplicate lines from a file using the dedupe(items, key=None) generator
with open('somefile.txt', 'r') as f:
# the generator will spit out a single value (line) at a time,
# while keeping track (a pointer) to where it is located during each yield
for line in dedupe(f):
# process unique lines
pass
"""
Explanation: Discussion:
To eliminate duplicates without preserving an order use a set
The generator function makes this extremely general purpose: it is not tied to list processing and works just as well on files, for example
End of explanation
"""
|
kyledef/jammerwebscraper
|
Scrape Newsday.ipynb
|
mit
|
# Import dependencies (i.e. packages that extend the standard language to perform specific [advanced] functionality)
import urllib
import urllib2
from datetime import datetime, date, timedelta
from bs4 import BeautifulSoup
"""
Explanation: Web Scraping in Python Series
Introduction
This notebook provides a simple example of using Python to extract information from a website. Note this is part 1 of 4. Participants are not expected to have any experience in Python or any background in web scraping; however, some understanding of HTML will be useful.
This draws inspiration from http://web.stanford.edu/~zlotnick/TextAsData/Web_Scraping_with_Beautiful_Soup.html
End of explanation
"""
# Step 1 - create a function to generates a list(array) of dates
def genDatesNewsDay(start_date = date.today(), num_days = 3):
# date_list = [start_date - timedelta(days=x) for x in range(0, num_days)] # generate a list of dates
# While we expand the above line for beginners understanding
date_list = []
for d in range(0, num_days):
temp = start_date - timedelta(days=d)
date_list.append(temp.strftime('%Y-%-m-%d'))# http://strftime.org/ used a reference
return date_list
# Step 2 -Test the generated URL to ensure they point to
def traverseDatesNewsDay(func, start_date = date.today(), num_days = 3):
base_url="http://www.newsday.co.tt/archives/"
dates_str_list = genDatesNewsDay(start_date, num_days)
for date in dates_str_list:
url = base_url + date
func(url)
def printDate(date):
print(date)
traverseDatesNewsDay(printDate)
from dateutil.relativedelta import relativedelta
# http://www.guardian.co.tt/archive/2017-02?page=3
base_url = "http://www.guardian.co.tt/archive/"
# print date.today().strftime("%Y-%-m")
dates_str_list = []
page_content_list = []
for i in range(0, 12):
d = date.today() - relativedelta(months=+i)
page_url = base_url + d.strftime("%Y-%-m")
dates_str_list.append(page_url)
try:
page_content_list.append( urllib.urlopen(page_url).read() )
except:
print "Unable to find content for {0}".format(page_url)
print "Generated {0} urls and retrieved {1} pages".format(len(dates_str_list), len(page_content_list))
url = dates_str_list[0]
# Define the request headers here as well (they are also defined inside fetch_content below)
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3004.3 Safari/537.36"
accept_language = "en-GB,en-US;q=0.8,en;q=0.6"
request = urllib2.Request("http://www.guardian.co.tt/archive/2017-2")
request.add_header('User-Agent', user_agent)
request.add_header('Accept-Language', accept_language)
content = urllib2.build_opener().open(request).read()
def fetch_content(url):
user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3004.3 Safari/537.36"
accept_language="en-GB,en-US;q=0.8,en;q=0.6"
request = urllib2.Request(url)
request.add_header('User-Agent', user_agent)
request.add_header('Accept-Language', accept_language)
content = urllib2.build_opener().open(request).read()
return content
beau = BeautifulSoup(content, "html5lib")
main_block = beau.find(id="block-system-main")
links = main_block.find_all("div", class_="view-content")[0].find_all('a')
last = main_block.find("li", class_="pager-last last")
max_pages = int(last.find("a")['href'].split("=")[1])
pages_list = range(1, max_pages+1)
# len(links)
# url = "http://www.guardian.co.tt/archive/2017-2"
# page = pages_list[0]
# url = "{0}?page={1}".format(url, page)
# content = fetch_content(url)
base_url = "http://www.guardian.co.tt/"
stories = []
stories_links = []
for pg in links:
url = base_url + pg['href']
stories_links.append(url)
stories.append( fetch_content(url) )
first = True
emo_count = {
"anger" : 0,
"disgust": 0,
"fear" : 0,
"joy" : 0,
"sadness": 0
}
socio_count = {
"openness_big5": 0,
"conscientiousness_big5": 0,
"extraversion_big5" : 0,
"agreeableness_big5" : 0,
"emotional_range_big5": 0
}
for story in stories:
beau = BeautifulSoup(story, "html5lib")
# main_block = beau.find("h1", class_="title")
paragraphs = beau.find(id="block-system-main").find_all("p")
page_text = ""
for p in paragraphs:
page_text += p.get_text()
tone_analyzer = getAnalyser()
res = tone_analyzer.tone(page_text)
tone = res['document_tone']['tone_categories']
emo = tone[0]['tones'] # we want the emotional tone
soci= tone[2]['tones'] # we also want the social tone
e_res = processTone(emo)
emo_count[e_res['tone_id']] += 1
s_res = processTone(soci)
socio_count[s_res['tone_id']] += 1
for e in emo_count:
print("{0} articles were classified with the emotion {1}".format(emo_count[e], e))
for s in socio_count:
print("{0} articles were classified as {1}".format(socio_count[s], s))
# Step 3 - Read content and process page
def processPage(page_url):
print("Attempting to read content from {0}".format(page_url))
page_content = urllib.urlopen(page_url).read()
beau = BeautifulSoup(page_content, "html5lib")
tables = beau.find_all("table") #https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all
for i in range(0,13):
named_sec = tables[i].h3
if named_sec:
print("i {0} produced {1}".format(i,named_sec))
article_links = beau.find_all("a", 'title')
print("Found {0} tables and {1} articles".format(len(tables), len(article_links)))
# traverseDatesNewsDay(processPage,num_days = 1)
"""
Explanation: From analysing the Newsday archive website we see that the URL follows a parsable convention
http://www.newsday.co.tt/archives/YYYY-M-DD.html
So our general approach will be as follows:
1. Generate dates in the expected form between a starting and an ending date
2. Test to ensure the dates generated are valid (refine step 1 based on the results)
3. Read the content and process it based on our goal for scraping the page
End of explanation
"""
# Integrating IBM Watson
import json
from watson_developer_cloud import ToneAnalyzerV3
from local_settings import *
def getAnalyser():
tone_analyzer = ToneAnalyzerV3(
username= WATSON_CREDS['username'],
password= WATSON_CREDS['password'],
version='2016-05-19')
return tone_analyzer
# tone_analyzer = getAnalyser()
# tone_analyzer.tone(text='A word is dead when it is said, some say. Emily Dickinson')
def analysePage(page_url):
page_content = urllib.urlopen(page_url).read()
beau = BeautifulSoup(page_content, "html5lib")
tables = beau.find_all("table") #https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all
article_links = beau.find_all("a", 'title')
print("Found {0} tables and {1} articles".format(len(tables), len(article_links)))
for i in article_links:
print i
# traverseDatesNewsDay(analysePage,num_days = 1)
page_content = urllib.urlopen("http://www.newsday.co.tt/archives/2017-2-2").read()
beau = BeautifulSoup(page_content, "html5lib")
tables = beau.find_all("table") #https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all
article_links = beau.find_all("a", 'title')
print("Found {0} tables and {1} articles".format(len(tables), len(article_links)))
def processTone(tone):
large = tone[0]['score']
large_i = 0
for i in range(1, len(tone)):
if tone[i]['score'] > large:
large = tone[i]['score']
large_i = i
return tone[large_i]
"""
Explanation: The Purpose (Goal) of Scraping
Our main purpose in developing this exercise was to test the claim that the majority of the news published is negative. To do this we need to capture the sentiment of the information extracted from the links. While we could develop sentiment analysis tools in Python ourselves, the process of training and validating them is too much work at this time. Therefore, we use the IBM Watson Tone Analyzer API. We selected this API because it provides a greater amount of detail than a binary positive/negative result.
To use the Watson API from Python:
We installed the pip package
bash
pip install --upgrade watson-developer-cloud
We created an account (free for 30 days)
https://tone-analyzer-demo.mybluemix.net/
Use the API reference to build the application
http://www.ibm.com/watson/developercloud/tone-analyzer/api/v3/?python#
Created a local_settings.py file that contains the credentials retrieved from signing up
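Based on how the code above uses WATSON_CREDS, the credentials file is assumed to look roughly like this (the values are placeholders, not real credentials):
# local_settings.py
WATSON_CREDS = {
    'username': 'your-watson-username',
    'password': 'your-watson-password',
}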
End of explanation
"""
first = True
emo_count = {
"anger" : 0,
"disgust": 0,
"fear" : 0,
"joy" : 0,
"sadness": 0
}
socio_count = {
"openness_big5": 0,
"conscientiousness_big5": 0,
"extraversion_big5" : 0,
"agreeableness_big5" : 0,
"emotional_range_big5": 0
}
for i in article_links:
res = tone_analyzer.tone(i['title'])
tone = res['document_tone']['tone_categories']
emo = tone[0]['tones'] # we want the emotional tone
soci= tone[2]['tones'] # we also want the social tone
e_res = processTone(emo)
emo_count[e_res['tone_id']] += 1
s_res = processTone(soci)
socio_count[s_res['tone_id']] += 1
for e in emo_count:
print("{0} articles were classified with the emotion {1}".format(emo_count[e], e))
for s in socio_count:
print("{0} articles were classified as {1}".format(socio_count[s], s))
"""
Explanation: An understanding of the structure of the response is provided in the API reference:
https://www.ibm.com/watson/developercloud/tone-analyzer/api/v3/?python#post-tone
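A small usage sketch of the processTone helper defined earlier, on a hand-made tones list (the scores are made-up values):
tones = [{'tone_id': 'anger', 'score': 0.1}, {'tone_id': 'joy', 'score': 0.7}]
processTone(tones)   # -> {'tone_id': 'joy', 'score': 0.7}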
End of explanation
"""
|
Danghor/Algorithms
|
Python/Chapter-09/Union-Find-OO.ipynb
|
gpl-2.0
|
class UnionFind:
def __init__(self, M):
self.mParent = { x: x for x in M }
self.mHeight = { x: 1 for x in M }
"""
Explanation: An Object-Oriented Implementation of the Union-Find Algorithm
The class UnionFind maintains two member variables:
- mParent is a dictionary that assigns each node to its parent node.
Initially, all nodes point to themselves.
- mHeight is a dictionary that stores the height of the trees. If $x$ is a node, then
$\texttt{mHeight}[x]$ is the height of the tree rooted at $x$.
Initially, all trees contain but a single node and therefore have the height $1$.
End of explanation
"""
def find(self, x):
p = self.mParent[x]
if p == x:
return x
return self.find(p)
UnionFind.find = find
del find
"""
Explanation: Given an element $x$ from the set $M$, the function $\texttt{self}.\texttt{find}(x)$
returns the ancestor of $x$ that is at the root of the tree containing $x$.
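A common refinement that is not used in this notebook is path compression; a minimal sketch (the method name is hypothetical), following the same attach-to-class pattern used above:
def find_compressing(self, x):
    if self.mParent[x] != x:
        self.mParent[x] = self.find_compressing(self.mParent[x])  # link x directly to its root
    return self.mParent[x]
UnionFind.find_compressing = find_compressing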
End of explanation
"""
def union(self, x, y):
root_x = self.find(x)
root_y = self.find(y)
if root_x != root_y:
if self.mHeight[root_x] < self.mHeight[root_y]:
self.mParent[root_x] = root_y
elif self.mHeight[root_x] > self.mHeight[root_y]:
self.mParent[root_y] = root_x
else:
self.mParent[root_y] = root_x
self.mHeight[root_x] += 1
UnionFind.union = union
def partition(M, R):
UF = UnionFind(M)
for x, y in R:
UF.union(x, y)
Roots = { x for x in M if UF.find(x) == x }
return [{y for y in M if UF.find(y) == r} for r in Roots]
def demo():
M = set(range(1, 10))
R = { (1, 4), (7, 9), (3, 5), (2, 6), (5, 8), (1, 9), (4, 7) }
P = partition(M, R)
return P
P = demo()
P
"""
Explanation: Given two elements $x$ and $y$ and an object $o$ of type UnionFind, the call $o.\texttt{union}(x, y)$ changes the unionFind object $o$ so that afterwards the equation
$$ o.\texttt{find}(x) = o.\texttt{find}(y) $$
holds.
End of explanation
"""
|
tensorflow/probability
|
tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
%matplotlib inline
import functools
import matplotlib.pyplot as plt; plt.style.use('ggplot')
import numpy as np
import seaborn as sns; sns.set_context('notebook')
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(physical_devices) > 0:
tf.config.experimental.set_memory_growth(physical_devices[0], True)
"""
Explanation: Bayesian Gaussian Mixture Model and Hamiltonian MCMC
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Bayesian_Gaussian_Mixture_Model"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this colab we'll explore sampling from the posterior of a Bayesian Gaussian Mixture Model (BGMM) using only TensorFlow Probability primitives.
Model
For $k\in\{1,\ldots, K\}$ mixture components each of dimension $D$, we'd like to model $i\in\{1,\ldots,N\}$ iid samples using the following Bayesian Gaussian Mixture Model:
$$\begin{align}
\theta &\sim \text{Dirichlet}(\text{concentration}=\alpha_0)\\
\mu_k &\sim \text{Normal}(\text{loc}=\mu_{0k}, \text{scale}=I_D)\\
T_k &\sim \text{Wishart}(\text{df}=5, \text{scale}=I_D)\\
Z_i &\sim \text{Categorical}(\text{probs}=\theta)\\
Y_i &\sim \text{Normal}(\text{loc}=\mu_{z_i}, \text{scale}=T_{z_i}^{-1/2})
\end{align}$$
Note, the scale arguments all have cholesky semantics. We use this convention because it is that of TF Distributions (which itself uses this convention in part because it is computationally advantageous).
Our goal is to generate samples from the posterior:
$$p\left(\theta, \{\mu_k, T_k\}_{k=1}^K \Big| \{y_i\}_{i=1}^N, \alpha_0, \{\mu_{0k}\}_{k=1}^K\right)$$
Notice that $\{Z_i\}_{i=1}^N$ is not present--we're interested in only those random variables which don't scale with $N$. (And luckily there's a TF distribution which handles marginalizing out $Z_i$.)
It is not possible to directly sample from this distribution owing to a computationally intractable normalization term.
Metropolis-Hastings algorithms are a technique for sampling from intractable-to-normalize distributions.
TensorFlow Probability offers a number of MCMC options, including several based on Metropolis-Hastings. In this notebook, we'll use Hamiltonian Monte Carlo (tfp.mcmc.HamiltonianMonteCarlo). HMC is often a good choice because it can converge rapidly, samples the state space jointly (as opposed to coordinatewise), and leverages one of TF's virtues: automatic differentiation. That said, sampling from a BGMM posterior might actually be better done by other approaches, e.g., Gibb's sampling.
End of explanation
"""
class MVNCholPrecisionTriL(tfd.TransformedDistribution):
"""MVN from loc and (Cholesky) precision matrix."""
def __init__(self, loc, chol_precision_tril, name=None):
super(MVNCholPrecisionTriL, self).__init__(
distribution=tfd.Independent(tfd.Normal(tf.zeros_like(loc),
scale=tf.ones_like(loc)),
reinterpreted_batch_ndims=1),
bijector=tfb.Chain([
tfb.Shift(shift=loc),
tfb.Invert(tfb.ScaleMatvecTriL(scale_tril=chol_precision_tril,
adjoint=True)),
]),
name=name)
"""
Explanation: Before actually building the model, we'll need to define a new type of distribution. From the model specification above, it's clear we're parameterizing the MVN with an inverse covariance matrix, i.e., [precision matrix](https://en.wikipedia.org/wiki/Precision_(statistics%29). To accomplish this in TF, we'll need to roll our own Bijector. This Bijector will use the forward transformation:
Y = tf.linalg.triangular_solve(tf.linalg.matrix_transpose(chol_precision_tril), X, adjoint=True) + loc.
And the log_prob calculation is just the inverse, i.e.:
X = tf.linalg.matmul(chol_precision_tril, X - loc, adjoint_a=True).
Since all we need for HMC is log_prob, this means we avoid ever calling tf.linalg.triangular_solve (as would be the case for tfd.MultivariateNormalTriL). This is advantageous since tf.linalg.matmul is usually faster owing to better cache locality.
End of explanation
"""
def compute_sample_stats(d, seed=42, n=int(1e6)):
x = d.sample(n, seed=seed)
sample_mean = tf.reduce_mean(x, axis=0, keepdims=True)
s = x - sample_mean
sample_cov = tf.linalg.matmul(s, s, adjoint_a=True) / tf.cast(n, s.dtype)
sample_scale = tf.linalg.cholesky(sample_cov)
sample_mean = sample_mean[0]
return [
sample_mean,
sample_cov,
sample_scale,
]
dtype = np.float32
true_loc = np.array([1., -1.], dtype=dtype)
true_chol_precision = np.array([[1., 0.],
[2., 8.]],
dtype=dtype)
true_precision = np.matmul(true_chol_precision, true_chol_precision.T)
true_cov = np.linalg.inv(true_precision)
d = MVNCholPrecisionTriL(
loc=true_loc,
chol_precision_tril=true_chol_precision)
[sample_mean, sample_cov, sample_scale] = [
t.numpy() for t in compute_sample_stats(d)]
print('true mean:', true_loc)
print('sample mean:', sample_mean)
print('true cov:\n', true_cov)
print('sample cov:\n', sample_cov)
"""
Explanation: The tfd.Independent distribution turns independent draws of one distribution, into a multivariate distribution with statistically independent coordinates. In terms of computing log_prob, this "meta-distribution" manifests as a simple sum over the event dimension(s).
Also notice that we took the adjoint ("transpose") of the scale matrix. This is because if precision is inverse covariance, i.e., $P=C^{-1}$ and if $C=AA^\top$, then $P=BB^{\top}$ where $B=A^{-\top}$.
Since this distribution is kind of tricky, let's quickly verify that our MVNCholPrecisionTriL works as we think it should.
End of explanation
"""
dtype = np.float64
dims = 2
components = 3
num_samples = 1000
bgmm = tfd.JointDistributionNamed(dict(
mix_probs=tfd.Dirichlet(
concentration=np.ones(components, dtype) / 10.),
loc=tfd.Independent(
tfd.Normal(
loc=np.stack([
-np.ones(dims, dtype),
np.zeros(dims, dtype),
np.ones(dims, dtype),
]),
scale=tf.ones([components, dims], dtype)),
reinterpreted_batch_ndims=2),
precision=tfd.Independent(
tfd.WishartTriL(
df=5,
scale_tril=np.stack([np.eye(dims, dtype=dtype)]*components),
input_output_cholesky=True),
reinterpreted_batch_ndims=1),
s=lambda mix_probs, loc, precision: tfd.Sample(tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs),
components_distribution=MVNCholPrecisionTriL(
loc=loc,
chol_precision_tril=precision)),
sample_shape=num_samples)
))
def joint_log_prob(observations, mix_probs, loc, chol_precision):
"""BGMM with priors: loc=Normal, precision=Inverse-Wishart, mix=Dirichlet.
Args:
observations: `[n, d]`-shaped `Tensor` representing Bayesian Gaussian
Mixture model draws. Each sample is a length-`d` vector.
mix_probs: `[K]`-shaped `Tensor` representing random draw from
`Dirichlet` prior.
loc: `[K, d]`-shaped `Tensor` representing the location parameter of the
`K` components.
chol_precision: `[K, d, d]`-shaped `Tensor` representing `K` lower
triangular `cholesky(Precision)` matrices, each being sampled from
a Wishart distribution.
Returns:
log_prob: `Tensor` representing joint log-density over all inputs.
"""
return bgmm.log_prob(
mix_probs=mix_probs, loc=loc, precision=chol_precision, s=observations)
"""
Explanation: Since the sample mean and covariance are close to the true mean and covariance, it seems like the distribution is correctly implemented. Now, we'll use MVNCholPrecisionTriL and tfp.distributions.JointDistributionNamed to specify the BGMM model. For the observational model, we'll use tfd.MixtureSameFamily to automatically integrate out the $\{Z_i\}_{i=1}^N$ draws.
End of explanation
"""
true_loc = np.array([[-2., -2],
[0, 0],
[2, 2]], dtype)
random = np.random.RandomState(seed=43)
true_hidden_component = random.randint(0, components, num_samples)
observations = (true_loc[true_hidden_component] +
random.randn(num_samples, dims).astype(dtype))
"""
Explanation: Generate "Training" Data
For this demo, we'll sample some random data.
End of explanation
"""
unnormalized_posterior_log_prob = functools.partial(joint_log_prob, observations)
initial_state = [
tf.fill([components],
value=np.array(1. / components, dtype),
name='mix_probs'),
tf.constant(np.array([[-2., -2],
[0, 0],
[2, 2]], dtype),
name='loc'),
tf.linalg.eye(dims, batch_shape=[components], dtype=dtype, name='chol_precision'),
]
"""
Explanation: Bayesian Inference using HMC
Now that we've used TFD to specify our model and obtained some observed data, we have all the necessary pieces to run HMC.
To do this, we'll use a partial application to "pin down" the things we don't want to sample. In this case that means we need only pin down observations. (The hyper-parameters are already baked in to the prior distributions and not part of the joint_log_prob function signature.)
End of explanation
"""
unconstraining_bijectors = [
tfb.SoftmaxCentered(),
tfb.Identity(),
tfb.Chain([
tfb.TransformDiagonal(tfb.Softplus()),
tfb.FillTriangular(),
])]
@tf.function(autograph=False)
def sample():
return tfp.mcmc.sample_chain(
num_results=2000,
num_burnin_steps=500,
current_state=initial_state,
kernel=tfp.mcmc.SimpleStepSizeAdaptation(
tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
step_size=0.065,
num_leapfrog_steps=5),
bijector=unconstraining_bijectors),
num_adaptation_steps=400),
trace_fn=lambda _, pkr: pkr.inner_results.inner_results.is_accepted)
[mix_probs, loc, chol_precision], is_accepted = sample()
"""
Explanation: Unconstrained Representation
Hamiltonian Monte Carlo (HMC) requires the target log-probability function be differentiable with respect to its arguments. Furthermore, HMC can exhibit dramatically higher statistical efficiency if the state-space is unconstrained.
This means we'll have to work out two main issues when sampling from the BGMM posterior:
$\theta$ represents a discrete probability vector, i.e., must be such that $\sum_{k=1}^K \theta_k = 1$ and $\theta_k>0$.
$T_k$ represents an inverse covariance matrix, i.e., must be such that $T_k \succ 0$, i.e., is positive definite.
To address this requirement we'll need to:
transform the constrained variables to an unconstrained space
run the MCMC in unconstrained space
transform the unconstrained variables back to the constrained space.
As with MVNCholPrecisionTriL, we'll use Bijectors to transform random variables to unconstrained space.
The Dirichlet is transformed to unconstrained space via the softmax function.
Our precision random variable is a distribution over positive semidefinite matrices. To unconstrain these we'll use the FillTriangular and TransformDiagonal bijectors. These convert vectors to lower-triangular matrices and ensure the diagonal is positive. The former is useful because it enables sampling only $d(d+1)/2$ floats rather than $d^2$.
End of explanation
"""
acceptance_rate = tf.reduce_mean(tf.cast(is_accepted, dtype=tf.float32)).numpy()
mean_mix_probs = tf.reduce_mean(mix_probs, axis=0).numpy()
mean_loc = tf.reduce_mean(loc, axis=0).numpy()
mean_chol_precision = tf.reduce_mean(chol_precision, axis=0).numpy()
precision = tf.linalg.matmul(chol_precision, chol_precision, transpose_b=True)
print('acceptance_rate:', acceptance_rate)
print('avg mix probs:', mean_mix_probs)
print('avg loc:\n', mean_loc)
print('avg chol(precision):\n', mean_chol_precision)
loc_ = loc.numpy()
ax = sns.kdeplot(loc_[:,0,0], loc_[:,0,1], shade=True, shade_lowest=False)
ax = sns.kdeplot(loc_[:,1,0], loc_[:,1,1], shade=True, shade_lowest=False)
ax = sns.kdeplot(loc_[:,2,0], loc_[:,2,1], shade=True, shade_lowest=False)
plt.title('KDE of loc draws');
"""
Explanation: We'll now execute the chain and print the posterior means.
End of explanation
"""
|
balarsen/pymc_learning
|
StateSpace/Bayesian state space estimation in Python via Metropolis-Hastings.ipynb
|
bsd-3-clause
|
%matplotlib inline
import numpy as np
import pandas as pd
import pymc as mc
from scipy import signal
import statsmodels.api as sm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
"""
Explanation: Bayesian state space estimation in Python via Metropolis-Hastings
This post demonstrates how to use the Statsmodels (http://www.statsmodels.org/) tsa.statespace package along with PyMC to very simply estimate the parameters of a state space model via the Metropolis-Hastings algorithm (a Bayesian posterior simulation technique).
Although the technique is general to any state space model available in Statsmodels and also to any custom state space model, the provided example is in terms of the local level model and the equivalent ARIMA(0,1,1) model.
End of explanation
"""
# True values
T = 1000
sigma2_eps0 = 3
sigma2_eta0 = 10
# Simulate data
np.random.seed(1234)
eps = np.random.normal(scale=sigma2_eps0**0.5, size=T)
eta = np.random.normal(scale=sigma2_eta0**0.5, size=T)
mu = np.cumsum(eta)
y = mu + eps
# Plot the time series
fig, ax = plt.subplots(figsize=(13,2))
ax.plot(y);
ax.set(xlabel='$T$', title='Simulated series');
"""
Explanation: Suppose we have a time series $Y_T \equiv \{y_t\}_{t=0}^T$ which we model as a local level process:
$$
\begin{aligned}
y_t &= \mu_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \\
\mu_{t+1} &= \mu_t + \eta_t, \qquad \eta_t \sim N(0, \sigma_\eta^2)
\end{aligned}
$$
In this model, there are two unknown parameters, which we collect in a vector $\psi$, so that $\psi = (\sigma_\varepsilon^2, \sigma_\eta^2)$; let's set their true values as follows (denoted with the subscript 0):
$$\psi_0 = (\sigma_{\varepsilon,0}^2, \sigma_{\eta,0}^2) = (3, 10)$$
Finally, we also must specify the prior $\mu_0 \sim N(m_0, P_0)$ to initialize the Kalman filter.
Set $T = 1000$.
End of explanation
"""
# Priors
precision = mc.Gamma('precision', 2, 4)
ratio = mc.Gamma('ratio', 2, 1)
# Likelihood calculated using the state-space model
class LocalLevel(sm.tsa.statespace.MLEModel):
def __init__(self, endog):
# Initialize the state space model
super(LocalLevel, self).__init__(endog, k_states=1,
initialization='approximate_diffuse',
loglikelihood_burn=1)
# Initialize known components of the state space matrices
self.ssm['design', :] = 1
self.ssm['transition', :] = 1
self.ssm['selection', :] = 1
@property
def start_params(self):
return [1. / np.var(self.endog), 1.]
@property
def param_names(self):
return ['h_inv', 'q']
def update(self, params, transformed=True):
params = super(LocalLevel, self).update(params, transformed)
h, q = params
sigma2_eps = 1. / h
sigma2_eta = q * sigma2_eps
self.ssm['obs_cov', 0, 0] = sigma2_eps
self.ssm['state_cov', 0, 0] = sigma2_eta
# Instantiate the local level model with our simulated data
ll_mod = LocalLevel(y)
# Create the stochastic (observed) component
@mc.stochastic(dtype=LocalLevel, observed=True)
def local_level(value=ll_mod, h=precision, q=ratio):
return value.loglike([h, q], transformed=True)
# Create the PyMC model
ll_mc = mc.Model((precision, ratio, local_level))
# Create a PyMC sample
ll_sampler = mc.MCMC(ll_mc)
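# A sketch (not part of the original cell) of how sampling might then be run with PyMC 2;
# the iteration counts below are illustrative only.
ll_sampler.sample(iter=1000, burn=100, thin=5)
precision_trace = ll_sampler.trace('precision')[:]   # posterior draws for the precision h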
"""
Explanation: It turns out it will be convenient to write the model in terms of the precision of $\varepsilon_t$, defined by $h^{-1} \equiv \sigma_\varepsilon^2$, and the ratio of the variances, $q \equiv \sigma_\eta^2 / \sigma_\varepsilon^2$, so that $q h^{-1} = \sigma_\eta^2$.
Then our error terms can be written:
$$\varepsilon_t \sim N(0, h^{-1}), \qquad \eta_t \sim N(0, q h^{-1})$$
And the true values are:
$$h_0^{-1} = 1/3 = 0.33, \qquad q_0 = 10/3 = 3.33$$
To take a Bayesian approach to this problem, we assume that $\psi$ is a random variable, and we want to learn about the values of $\psi$ based on the data $Y_T$; in fact we want a density $p(\psi | Y_T)$. To do this, we use Bayes' rule to write:
$$p(\psi | Y_T) = \frac{p(Y_T | \psi)\, p(\psi)}{p(Y_T)}$$
or
$$\underbrace{p(\psi | Y_T)}_{\text{posterior}} \propto \underbrace{p(Y_T | \psi)}_{\text{likelihood}} \; \underbrace{p(\psi)}_{\text{prior}}$$
The object of interest is the posterior; to obtain it we need to specify a prior density for the unknown parameters and the likelihood function of the model.
Prior
We will use the following priors:
Precision
Since the precision must be positive, but has no theoretical upper bound, we use a Gamma prior:
$$h \sim \text{Gamma}(\alpha_h, \beta_h)$$
to be specific, the density is written:
$$p(h) = \frac{\beta_h^{\alpha_h}}{\Gamma(\alpha_h)} h^{\alpha_h - 1} e^{-\beta_h h}$$
and we set the hyperparameters as $\alpha_h = 2, \beta_h = 2$. In this case, we have $E(h) = \alpha_h / \beta_h = 1$ and also $E(h^{-1}) = E(\sigma_\varepsilon^2) = 1$.
Ratio of variances
Similarly, the ratio of variances must be positive but has no theoretical upper bound, so we again use an (independent) Gamma prior:
$$q \sim \text{Gamma}(\alpha_q, \beta_q)$$
and we set the same hyperparameters, so $\alpha_q = 2, \beta_q = 2$. Since $E(q) = 1$, our prior is one of equal variances. We then have $E(\sigma_\eta^2) = E(q h^{-1}) = E(q) E(h^{-1}) = 1$.
Initial state prior
As noted above, the Kalman filter must be initialized with $\mu_0 \sim N(m_0, P_0)$. We will use the following approximately diffuse prior:
$$\mu_0 \sim N(0, 10^6)$$
Likelihood
For given parameters, the likelihood of this model can be calculated via the prediction error decomposition, using an application of the Kalman filter iterations.
Posterior Simulation: Metropolis-Hastings
One option for describing the posterior is via MCMC posterior simulation methods. The Metropolis-Hastings algorithm is simple and only requires the ability to evaluate the prior densities and the likelihood. The priors have known densities, and the likelihood function can be computed using the state space models from the Statsmodels tsa.statespace package. We will use the PyMC package to streamline specification of priors and sampling in the Metropolis-Hastings case.
The statespace package is meant to make it easy to specify and evaluate state space models. Below, we create a new LocalLevel class. Among other things, it inherits from MLEModel a loglike method which we can use to evaluate the likelihood at various parameters.
End of explanation
"""
|
ling7334/tensorflow-get-started
|
mnist/TensorFlow_Mechanics_101.ipynb
|
apache-2.0
|
data_sets = input_data.read_data_sets(FLAGS.train_dir, FLAGS.fake_data)
"""
Explanation: TensorFlow Mechanics 101
Code: tensorflow/examples/tutorials/mnist/
The goal of this tutorial is to show how to use TensorFlow to train and evaluate a simple feed-forward neural network that recognizes handwritten digits, using the (classic) MNIST dataset. The intended audience is experienced machine-learning practitioners interested in using TensorFlow.
These tutorials are therefore not intended to teach the fundamentals of machine learning.
Before starting this tutorial, please make sure you have installed TensorFlow as described in the installation guide.
Tutorial files
This tutorial references the following files:
File|Purpose
-----|-----
mnist.py | The code that builds a fully connected MNIST model.
fully_connected_feed.py | The main code that trains the built MNIST model on the downloaded dataset, feeding the data into the model as a feed dictionary.
Simply run fully_connected_feed.py directly to start training.
Prepare the data
MNIST is a classic problem in machine learning: look at 28x28-pixel grayscale images of handwritten digits and determine which digit (0-9) each image represents.
For more information, see Yann LeCun's MNIST page or Chris Olah's visualizations of MNIST.
Download
At the top of the run_training() method, the input_data.read_data_sets() function ensures that the correct data has been downloaded to your local training folder, then unpacks that data and returns a dictionary of DataSet instances.
End of explanation
"""
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
mnist.IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
"""
Explanation: Note: the fake_data flag is used for unit testing; readers may safely ignore it.
Dataset|Purpose
---|---
data_sets.train|55000 images and labels, used as the main training set.
data_sets.validation|5000 images and labels, used to iteratively validate training accuracy.
data_sets.test|10000 images and labels, used for the final test of trained accuracy.
Inputs and placeholders
The placeholder_inputs() function creates two tf.placeholder ops that define the shape of the inputs, including the batch_size; the actual training examples will be fed into the graph later. images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
mnist.IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
End of explanation
"""
with tf.name_scope('hidden1'):
"""
Explanation: Further down, in the training loop, the full image and label datasets are sliced to fit the batch_size set for each op, the placeholder ops are filled to match that batch_size, and the data is then passed into sess.run() via the feed_dict argument.
Build the graph
After creating placeholders for the data, the graph is built from mnist.py following a three-stage pattern: inference(), loss(), and training().
inference() — builds the graph as far as needed to run the network forward and make predictions.
loss() — adds to the inference graph the ops required to generate the loss.
training() — adds to the loss graph the ops required to compute and apply gradients.
Inference
The inference() function builds the graph as far as needed to return the tensor containing the output predictions.
It takes the images placeholder as input and, on top of it, builds a pair of fully connected layers with ReLU (Rectified Linear Units) activation, followed by a ten-node linear layer specifying the output logits.
Each layer is created under a unique tf.name_scope, and everything created within that scope gets that name as a prefix.
End of explanation
"""
weights = tf.Variable(
tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
name='weights')
biases = tf.Variable(tf.zeros([hidden1_units]),
name='biases')
"""
Explanation: Within the defined scope, the weights and biases used by each layer are created as tf.Variable instances with their desired shapes:
End of explanation
"""
hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
logits = tf.matmul(hidden2, weights) + biases
"""
Explanation: For example, when these layers are created under the hidden1 scope, the unique name given to the weights variable will be "hidden1/weights".
Each variable is given an initializer op as part of its construction.
In this most common case, the weights are initialized with tf.truncated_normal, given a 2-D tensor shape where the first dimension is the number of units in the layer the weights connect from, and the second dimension is the number of units in the layer the weights connect to. For the first layer, named hidden1, the shape is [IMAGE_PIXELS, hidden1_units], because the weights connect the image inputs to the hidden1 layer. tf.truncated_normal generates a random distribution with the given mean and standard deviation.
The biases are then initialized with tf.zeros so that they all start at zero, and their shape is simply the number of units in the layer they connect to.
The graph's three primary ops — two tf.nn.relu ops wrapping tf.matmul for the hidden layers, and one extra tf.matmul for the logits — are built in sequence, each connected to its tf.Variable instances and to either the input placeholder or the output tensor of the previous layer.
End of explanation
"""
labels = tf.to_int64(labels)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits, name='xentropy')
"""
Explanation: Finally, the logits tensor containing the output is returned.
Loss
The loss() function further builds the graph by adding the required loss ops.
First, the values in labels_placeholder are converted to 64-bit integers. Then the tf.nn.sparse_softmax_cross_entropy_with_logits op automatically produces one-hot labels from labels_placeholder
and compares them with the output logits from inference().
End of explanation
"""
loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
"""
Explanation: It then uses tf.reduce_mean to average the cross entropy values across the batch dimension (the first dimension) and uses that value as the total loss.
End of explanation
"""
tf.summary.scalar('loss', loss)
"""
Explanation: Finally, the tensor containing the loss value is returned.
Note: cross entropy is an idea from information theory that lets us describe how badly a neural network's predictions would fare, given what is actually true. For more details, see the blog post Visual Information Theory (http://colah.github.io/posts/2015-09-Visual-Information/)
Training
The training() function adds the ops required to minimize the loss via gradient descent.
First, it takes the loss tensor from loss() and hands it to tf.summary.scalar, which, when used with a tf.summary.FileWriter (see below), emits summary values into the events file. Here it emits the current snapshot value of the loss each time summaries are written out.
End of explanation
"""
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
"""
Explanation: 接下来,我们实例化一个tf.train.GradientDescentOptimizer,负责按照所要求的学习效率(learning rate)应用梯度下降法(gradients)。
End of explanation
"""
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
"""
Explanation: We then create a variable to hold the global training step counter, and the tf.train.Optimizer.minimize op both updates the trainable weights in the system and increments the global step. By convention this op is named train_op, and it is what a TensorFlow session must run in order to induce one full step of training (see below).
End of explanation
"""
with tf.Graph().as_default():
"""
Explanation: Train the model
Once the graph is built, the user code in fully_connected_feed.py drives iterative training and evaluation in a loop.
The graph
At the top of the run_training() function is a Python with statement indicating that all of the built ops are to be associated with the default global tf.Graph instance.
End of explanation
"""
sess = tf.Session()
"""
Explanation: A tf.Graph instance is a collection of ops that may be executed together as a group. Most TensorFlow uses only need to rely on the single default graph.
More complicated uses with multiple graphs are possible, but beyond the scope of this tutorial.
The session
Once all of the build preparation has been completed and all of the necessary ops generated, a tf.Session is created for running the graph.
End of explanation
"""
with tf.Session() as sess:
"""
Explanation: Alternatively, a Session may be created inside a with block, limiting its scope:
End of explanation
"""
init = tf.global_variables_initializer()
sess.run(init)
"""
Explanation: The empty parameter list to Session indicates that this code will attach to (or create, if not yet created) the default local session.
Immediately after creating the session, all of the tf.Variable instances are initialized by calling tf.Session.run on their initialization op.
End of explanation
"""
for step in xrange(FLAGS.max_steps):
sess.run(train_op)
"""
Explanation: The tf.Session.run method runs the complete subset of the graph that corresponds to the ops passed in as arguments. In this initial call, the init op contains only the variables' initializers, grouped with tf.group. None of the rest of the graph is run here; that happens in the training loop below.
Training loop
After initializing the variables in the session, training may begin.
Each training step is driven by user code, and the simplest loop that can do useful training is:
End of explanation
"""
images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
FLAGS.fake_data)
"""
Explanation: However, the example in this tutorial is slightly more complicated, because the input data must be sliced for each step to match the previously generated placeholders.
Feed the graph
For each step, the code generates a feed dictionary that contains the examples on which to train for that step, keyed by the placeholder ops they represent.
In the fill_feed_dict() function, the given DataSet is queried for its next batch_size set of images and labels, and tensors matching the placeholders are filled with the next batch of images and labels.
End of explanation
"""
feed_dict = {
images_placeholder: images_feed,
labels_placeholder: labels_feed,
}
"""
Explanation: A Python dictionary object is then created with the placeholders as keys and the representative input tensors as values.
End of explanation
"""
for step in xrange(FLAGS.max_steps):
feed_dict = fill_feed_dict(data_sets.train,
images_placeholder,
labels_placeholder)
_, loss_value = sess.run([train_op, loss],
feed_dict=feed_dict)
"""
Explanation: This dictionary is then passed into sess.run() as the feed_dict parameter, providing the input examples for that step of training.
Check the status
The code specifies the two values it wants to fetch in its run call: [train_op, loss].
End of explanation
"""
if step % 100 == 0:
print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
"""
Explanation: Because there are two values to fetch, sess.run() returns a tuple with two items. Each Tensor in the list of values to fetch corresponds to a numpy array in the returned tuple, filled with the value of that tensor during this training step. Since train_op produces no output, its corresponding element in the returned tuple is None and is thus discarded. However, the value of the loss tensor may become NaN if the model diverges during training, so we capture this value and log it.
Assuming that training runs fine without NaNs, the training loop prints a simple status line every 100 steps to let the user know the state of training.
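Returning to the NaN point above: a guard such as the following (not part of the tutorial code) could be added after the run call to stop training on divergence:
import math
assert not math.isnan(loss_value), 'Model diverged with loss = NaN'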
End of explanation
"""
summary = tf.summary.merge_all()
"""
Explanation: Visualize the status
In order to emit the events file used by TensorBoard, all of the summaries (in this case only one) are collected into a single op during the graph-building phase.
End of explanation
"""
summary_writer = tf.summary.FileWriter(FLAGS.train_dir, sess.graph)
"""
Explanation: After the session is created, a tf.summary.FileWriter may be instantiated to write the events file, which contains both the graph itself and the values of the summaries.
End of explanation
"""
summary_str = sess.run(summary, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, step)
"""
Explanation: Lastly, the events file is updated with the latest summary values every time summary is run, and the output is passed to the writer's add_summary() function.
End of explanation
"""
saver = tf.train.Saver()
"""
Explanation: Once the events file has been written, TensorBoard may be pointed at the training folder to display the values from the summaries.
Note: for more information about how to build and run TensorBoard, please see the accompanying tutorial TensorBoard: Visualizing Learning.
Save a checkpoint
In order to emit a checkpoint file that may later be used to restore the model for further training or evaluation, we instantiate a tf.train.Saver.
End of explanation
"""
saver.save(sess, FLAGS.train_dir, global_step=step)
"""
Explanation: In the training loop, the tf.train.Saver.save method is called periodically to write a checkpoint file to the training directory with the current values of all the trainable variables.
End of explanation
"""
saver.restore(sess, FLAGS.train_dir)
"""
Explanation: At some later point, the model parameters can then be reloaded with the tf.train.Saver.restore method in order to continue training.
End of explanation
"""
print('Training Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.train)
print('Validation Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.validation)
print('Test Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.test)
"""
Explanation: Evaluate the model
Every thousand training steps, the code attempts to evaluate the model against the training and test datasets. The do_eval function is called three times, for the training, validation, and test datasets respectively.
End of explanation
"""
eval_correct = mnist.evaluation(logits, labels_placeholder)
"""
Explanation: Note that more complicated usage would usually sequester the data_sets.test dataset to be checked only after a significant amount of hyperparameter tuning. For the sake of the simpler MNIST problem, however, we evaluate against all of the data here.
Build the eval graph
Before entering the training loop, the evaluation() function in mnist.py should be called, with the same logits and labels parameters as the loss() function. This is done in order to build the Eval op first.
End of explanation
"""
eval_correct = tf.nn.in_top_k(logits, labels, 1)
"""
Explanation: The evaluation() function simply generates a tf.nn.in_top_k op that automatically scores each model output as correct if the true label can be found among the K most likely predictions. In this tutorial, we set K to 1 to only consider a prediction correct if it matches the true label.
End of explanation
"""
for step in xrange(steps_per_epoch):
feed_dict = fill_feed_dict(data_set,
images_placeholder,
labels_placeholder)
true_count += sess.run(eval_correct, feed_dict=feed_dict)
"""
Explanation: Eval output
One can then create a loop that fills a feed_dict and calls sess.run() with the eval_correct op in order to evaluate the model on the given dataset.
End of explanation
"""
precision = true_count / num_examples
print(' Num examples: %d Num correct: %d Precision @ 1: %0.04f' %
(num_examples, true_count, precision))
"""
Explanation: The true_count variable accumulates all of the predictions that the in_top_k op has determined to be correct. From there, the precision is obtained by simply dividing by the total number of examples.
End of explanation
"""
|
ocelot-collab/ocelot
|
demos/ipython_tutorials/9_thz_source.ipynb
|
gpl-3.0
|
# To activate interactive matplolib in notebook
# %matplotlib notebook
from ocelot import *
from ocelot.gui import *
import time
#Initial Twiss parameters
tws0 = Twiss()
tws0.beta_x = 29.171
tws0.beta_y = 29.171
tws0.alpha_x = 10.955
tws0.alpha_y = 10.955
tws0.gamma_x = 4.148367385417024
tws0.gamma_y = 4.148367385417024
tws0.E = 0.005
# Drifts
D0 = Drift(l=3.52)
D1 = Drift(l=0.3459)
D2 = Drift(l=0.2043)
D3 = Drift(l=0.85)
D4 = Drift(l=0.202)
D5 = Drift(l=0.262)
D6 = Drift(l=2.9)
D8 = Drift(l=1.8)
D9 = Drift(l=0.9)
D11 = Drift(l=1.31)
D12 = Drift(l=0.81)
D13 = Drift(l=0.50)
D14 = Drift(l=1.0)
D15 = Drift(l=1.5)
D18 = Drift(l=0.97)
D19 = Drift(l=2.3)
D20 = Drift(l=2.45)
# Quadrupoles
q1 = Quadrupole(l=0.3, k1=-1.537886, eid='Q1')
q2 = Quadrupole(l=0.3, k1=1.435078, eid='Q2')
q3 = Quadrupole(l=0.2, k1=1.637, eid='Q3')
q4 = Quadrupole(l=0.2, k1=-2.60970, eid='Q4')
q5 = Quadrupole(l=0.2, k1=3.4320, eid='Q5')
q6 = Quadrupole(l=0.2, k1=-1.9635, eid='Q6')
q7 = Quadrupole(l=0.2, k1=-0.7968, eid='Q7')
q8 = Quadrupole(l=0.2, k1=2.7285, eid='Q8')
q9 = Quadrupole(l=0.2, k1=-3.4773, eid='Q9')
q10 = Quadrupole(l=0.2, k1=0.780, eid='Q10')
q11 = Quadrupole(l=0.2, k1=-1.631, eid='Q11')
q12 = Quadrupole(l=0.2, k1=1.762, eid='Q12')
q13 = Quadrupole(l=0.2, k1=-1.8, eid='Q13')
q14 = Quadrupole(l=0.2, k1=1.8, eid='Q14')
q15 = Quadrupole(l=0.2, k1=-1.8, eid='Q15')
# SBends
b1 = SBend(l=0.501471120927, angle=0.1327297047, e2=0.132729705, tilt=1.570796327, eid='B1')
b2 = SBend(l=0.501471120927, angle=-0.1327297047, e1=-0.132729705, tilt=1.570796327, eid='B2')
b3 = SBend(l=0.501471120927, angle=-0.1327297047, e2=-0.132729705, tilt=1.570796327, eid='B3')
b4 = SBend(l=0.501471120927, angle=0.1327297047, e1=0.132729705, tilt=1.570796327, eid='B4')
# Cavitys
c1 = Cavity(l=1.0377, v=0.01815975, freq=1300000000.0, eid='C1')
c3 = Cavity(l=0.346, v=0.0024999884, phi=180.0, freq=3900000000.0, eid='C3')
und = Undulator(lperiod=0.2, nperiods=20, Kx=30)
start_und = Marker()
end = Marker()
# Lattice
cell = (D0, c1, D1, c1, D1, c1, D1, c1, D1, c1, D1, c1, D1, c1, D1, c1, D2, q1, D3,
q2, D4, c3, D5, c3, D5, c3, D5, c3, D5, c3, D5, c3, D5, c3, D5, c3, D6, q3, D6,
q4, D8, q5, D9, q6, D9, q7, D11, q8, D12, q9, D13, b1, D14, b2, D15, b3, D14, b4, D13,
q10, D9, q11, D18, q12, D19, q13, D19, q14, D19, q15, D20, start_und, und, D14, end)
lat = MagneticLattice(cell, stop=start_und)
tws = twiss(lat, tws0)
plot_opt_func(lat, tws, legend=False, fig_name=100)
plt.show()
"""
Explanation: This notebook was created by Sergey Tomin (sergey.tomin@desy.de). Source and license info is on GitHub. July 2019.
Tutorial N9. Simple accelerator based THz source.
In this tutorial we will focus on another feature of the SR module (see PFS tutorial N1. Synchrotron radiation module. Web version), namely the calculation of coherent radiation.
Details and limitations of the SR module in this mode can be found in G. Geloni, T. Tanikawa and S. Tomin, Dynamical effects on superradiant THz emission from an undulator. J. Synchrotron Rad. (2019). 26, 737-749
As a first step we consider a simple accelerator with an electron beam formation system (bunch compressor). The undulator parameters are chosen to generate radiation in the THz range.
Contents
Accelerator
Lattice
Simple compression scenario
Tracking up to undulator
Coherent radiation from the beam
<a id='accelerator'></a>
Accelerator
The accelerator includes an accelerating module, a linearizer (third-harmonic cavity) and a bunch compressor. In other words, we reproduce a simplified version of the XFEL injector without the injector dogleg.
Lattice
End of explanation
"""
from ocelot.utils import *
R56, T566, U5666, Sref = chicane_RTU(yoke_len=0.5, dip_dist=D14.l * np.cos(b1.angle), r=b1.l/b1.angle, type="c")
print("bunch compressor R56 = ", R56, " m")
"""
Explanation: We can also find the main parameters of the chicane with chicane_RTU(yoke_len, dip_dist, r, type)
End of explanation
"""
import scipy.optimize
# M*a = b
k = 2*np.pi/3e8*1.3e9
n = 3
M = np.array([[1, 0, 1, 0],
[0, -k, 0, -(n*k)],
[-k**2, 0, -(n*k)**2, 0],
[0, k**3, 0, (n*k)**3]])
b = np.array([125, -1300, 0, 0])
def F(x):
V1 = x[0]
phi1 = x[1]
V13 = x[2]
phi3 = x[3]
V = np.array([V1*np.cos(phi1*np.pi/180),
V1*np.sin(phi1*np.pi/180),
V13*np.cos(phi3*np.pi/180),
V13*np.sin(phi3*np.pi/180)]).T
return np.dot(M, V) - b
x = scipy.optimize.broyden1(F, [150, 10, 20, 190])
V1, phi1, V13, phi13 = x
print("V1 = ", V1, " MeV")
print("phi1 = ", phi1)
print("V13 = ", V13, " MeV")
print("phi13 = ", phi13)
"""
Explanation: <a id='compression'></a>
Simple compression scenario
We consider a simple compression scheme with an accelerating module, a third-harmonic linearizer and a magnetic chicane. For a full picture of compression techniques, I would recommend:
* I. Zagorodnov and M. Dohlus, Semianalytical modeling of multistage bunch compression with collective effects
* and M. Dohlus, T. Limberg, and P. Emma, ICFA Beam
Dynamics Newsletter 38, 15 (2005)
To compress a bunch longitudinally, the time of flight through some section must be shorter for the tail of the bunch than it is for the head. The usual technique starts out by introducing a correlation between the longitudinal position of the particles in the bunch and their energy using a radio frequency (RF) accelerating system.
At the end of a linac which induces an energy chirp $\delta' = \frac{1}{E_0}\frac{dE}{ds}$, the mapping of
longitudinal position and relative energy deviation of an electron is
\begin{equation}
\begin{split}
s_1 &= s_0 \\
\delta_1 &= \delta' s_0 + \delta_{i}
\end{split}
\end{equation}
where $\delta_{i} = \frac{\Delta E_i}{E_0}$ is the uncorrelated energy spread along the bunch length.
The transformation of the longitudinal coordinate in compressor BC can be approximated by the expression up to first order:
\begin{equation}
\begin{split}
s_2 &= s_1 - R_{56}\delta_1 = (1 - \delta' R_{56}) s_0 + R_{56}\delta_{i}\\
\delta_2 &= \delta_1
\end{split}
\end{equation}
$$\quad$$
Taking an ensemble average over all particles in the bunch, and using $<s_0 \delta_{i}> = 0$ by definition, the second moment of the distribution $\sigma_{s_2} = <s_2^2>^{1/2}$ is:
\begin{equation}
\sigma_{s_2} = \sqrt{ (1 - \delta' R_{56})^2 \sigma_{s_0}^2 + R_{56}^2\sigma_{\delta_{i}}^2 }
\end{equation}
The compression factor is:
\begin{equation}
C = \frac{\sigma_{s_0}}{ \sigma_{s_2}}
\end{equation}
Suppose the uncorrelated energy spread is small and we choose $\delta' = -10$ and $R_{56} = -0.048$ m as calculated above; in that case the compression factor after the chicane is
$$
C = \frac{1}{1 - \delta' R_{56}} = 1.9
$$
The non-linearities of both the accelerating RF fields and the longitudinal dispersion
can distort the longitudinal phase space. A higher harmonic RF system can be used to compensate the non-linearities of the fundamental frequency system and the higher order longitudinal dispersion in the magnetic chicanes.
To linearize longitudinal phase space, a working point for RF phases and amplitudes must be found for the fundamental frequency and the $n$-th harmonic system (n = 3 for the European XFEL).
The relation between the normalized RF amplitudes and phases of the fundamental and the $n$-th harmonic system is
\begin{equation}
\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & -k & 0 & -n k \\
-k^2 & 0 & -(n k)^2 & 0 \\
0 & k^3 & 0 & (n k)^3
\end{bmatrix}
\begin{bmatrix}
V_1 \cos(\phi_1)\\
V_1 \sin(\phi_1)\\
V_{13} \cos(\phi_{13}) \\
V_{13} \sin(\phi_{13})
\end{bmatrix} = \frac{1}{e}
\begin{bmatrix}
E_1 - E_0\\
E_1\delta_2' - E_0 \delta_0'\\
E_1\delta_2'' - E_0 \delta_0''\\
E_1\delta_2''' - E_0 \delta_0'''
\end{bmatrix}
\end{equation}
$$\quad$$
In our case we assume an initial beam energy $E_0 = 5$ MeV and $\delta_0' = \delta_0'' = \delta_0''' = 0$.
$$\quad$$
For the final energy we choose $E_1 = 130$ MeV and $\delta_2' = -10$ with $\delta_2'' = \delta_2''' = 0$.
So the vector on the right-hand side will be
\begin{equation}
\begin{bmatrix}
E_1 - E_0\\
E_1\delta_2' - E_0 \delta_0'\\
E_1\delta_2'' - E_0 \delta_0''\\
E_1\delta_2''' - E_0 \delta_0'''
\end{bmatrix} =
\begin{bmatrix}
125\\
-1300 \\
0\\
0
\end{bmatrix}
\end{equation}
<div class="alert alert-block alert-info">
<b>Note:</b> We calculated $R_{56}$ only for the chicane and did not take the undulator into account.
</div>
In our case undulator has high
$$
R_{56} = -\frac{L_u}{\gamma}(1 + K^2/2) \approx -0.028 \quad m
$$
So, to first order, the total compression after the undulator will be
$$
C = \frac{1}{1 - \delta' (R_{56}^{BC} + R_{56}^{und})} \approx 4.1
$$
End of explanation
"""
# set the new parameters
# NOTE: in OCELOT the cavity voltage is in [GeV], so the calculated voltage (in MV) needs a factor of 1/1000,
# and it is shared over the 8 cavities of the main RF module and of the linearizer
c1.v = V1/8/1000
c1.phi = phi1
c3.v = V13/8/1000
c3.phi = phi13
# and update lattice
lat.update_transfer_maps()
"""
Explanation: Now we update the cavity parameters in the lattice.
End of explanation
"""
np.random.seed(10)
parray = generate_parray(sigma_x=0.0001, sigma_px=2e-05, sigma_y=None, sigma_py=None,
sigma_tau=0.001, sigma_p=0.0001, chirp=0.0, charge=0.5e-09,
nparticles=300, energy=0.005, tau_trunc=None)
show_e_beam(parray,nparts_in_slice=50,smooth_param=0.1, nbins_x=50, nbins_y=50, nfig=10)
plt.show()
"""
Explanation: Generate electron beam
End of explanation
"""
navi = Navigator(lat)
tws_track, parray = track(lat, parray, navi)
show_e_beam(parray, nfig=201)
plt.show()
parray.E
"""
Explanation: Tracking up to undulator
End of explanation
"""
from ocelot.rad import *
lat = MagneticLattice(cell, start=start_und, stop=end)
screen = Screen()
screen.z = 1000.0
screen.size_x = 15
screen.size_y = 15
screen.nx = 1
screen.ny = 1
screen.start_energy = 0.001 # eV
screen.end_energy = 3e-3 # eV
screen.num_energy = 1001
# to estimate radiation properties we need to create beam class
beam = Beam()
beam.E = 0.13
# NOTE: this function only estimates the spontaneous emission properties
print_rad_props(beam, K=und.Kx, lu=und.lperiod, L=und.l, distance=screen.z)
start = time.time()
screen_i = coherent_radiation(lat, screen, parray, accuracy=1)
print()
print("time exec: ", time.time() - start, " s")
show_flux(screen_i, unit="mm", title="")
"""
Explanation: <a id='coherent'></a>
Coherent radiation from the beam
End of explanation
"""
show_e_beam(parray, nfig=203)
plt.show()
"""
Explanation: Beam after the undulator.
As you can see, the beam was compressed in the undulator by approximately a factor of two, as calculated in the simple compression scenario above.
End of explanation
"""
n = 100
x = screen.beam_traj.x(n)
y = screen.beam_traj.y(n)
z = screen.beam_traj.z(n)
plt.title("trajectory of " + str(n)+"th particle")
plt.plot(z, x, label="X")
plt.plot(z, y, label="Y")
plt.xlabel("Z [m]")
plt.ylabel("X/Y [m]")
plt.legend()
plt.show()
"""
Explanation: Electron trajectories
In some cases, it is worth checking the trajectory of the particle used to calculate the radiation.
For this purpose, a special object BeamTraject is attached to the object screen after radiation calculation:
screen.beam_traj = BeamTraject()
To retrieve a trajectory you need to specify the index of the electron you are interested in, for example:
x = screen.beam_traj.x(n=0)
End of explanation
"""
|
hanezu/cs231n-assignment
|
17-assignment2/TensorFlow.ipynb
|
mit
|
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
"""
Explanation: What's this TensorFlow business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook)
What is it?
TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray.
Why?
Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
How will I learn TensorFlow?
TensorFlow has many excellent tutorials available, including those from Google themselves.
Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
Load Datasets
End of explanation
"""
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[5408, 10])
b1 = tf.get_variable("b1", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
h1 = tf.nn.relu(a1)
h1_flat = tf.reshape(h1,[-1,5408])
y_out = tf.matmul(h1_flat,W1) + b1
return y_out
y_out = simple_model(X,y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
"""
Explanation: Example Model
Some useful utilities
Remember that our image data is initially N x H x W x C, where:
* N is the number of datapoints
* H is the height of each image in pixels
* W is the width of each image in pixels
* C is the number of channels (usually 3: R, G, B)
This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.
The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Hinge loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
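As a hint, here is a small sketch of the arithmetic behind the 5408 (assuming the 7x7, stride-2, VALID convolution used in simple_model above); the 10 is simply the number of CIFAR-10 classes:

    H = W = 32; F = 7; S = 2       # input size, filter size, stride (VALID padding)
    out = (H - F) // S + 1         # spatial output size = 13
    print(out * out * 32)          # 13 * 13 * 32 = 5408 flattened features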
TensorFlow Details
In TensorFlow, much like in our previous notebooks, we'll first specifically initialize our variables, and then our network model.
End of explanation
"""
def run_model(session, predict, loss_val, Xd, yd,
epochs=1, batch_size=64, print_every=100,
training=None, plot_losses=False):
# have tensorflow compute accuracy
correct_prediction = tf.equal(tf.argmax(predict,1), y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# shuffle indicies
train_indicies = np.arange(Xd.shape[0])
np.random.shuffle(train_indicies)
training_now = training is not None
# setting up variables we want to compute (and optimizing)
# if we have a training function, add that to things we compute
variables = [mean_loss,correct_prediction,accuracy]
if training_now:
variables[-1] = training
# counter
iter_cnt = 0
for e in range(epochs):
# keep track of losses and accuracy
correct = 0
losses = []
# make sure we iterate over the dataset once
for i in range(int(math.ceil(Xd.shape[0]/batch_size))):
# generate indicies for the batch
start_idx = (i*batch_size)%X_train.shape[0]
idx = train_indicies[start_idx:start_idx+batch_size]
# create a feed dictionary for this batch
feed_dict = {X: Xd[idx,:],
y: yd[idx],
is_training: training_now }
# get batch size
actual_batch_size = yd[idx].shape[0]
# have tensorflow compute loss and correct predictions
# and (if given) perform a training step
loss, corr, _ = session.run(variables,feed_dict=feed_dict)
# aggregate performance stats
losses.append(loss*actual_batch_size)
correct += np.sum(corr)
# print every now and then
if training_now and (iter_cnt % print_every) == 0:
print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
.format(iter_cnt,loss,np.sum(corr)/actual_batch_size))
iter_cnt += 1
total_correct = correct/Xd.shape[0]
total_loss = np.sum(losses)/Xd.shape[0]
print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
.format(total_loss,total_correct,e+1))
if plot_losses:
plt.plot(losses)
plt.grid(True)
plt.title('Epoch {} Loss'.format(e+1))
plt.xlabel('minibatch number')
plt.ylabel('minibatch loss')
plt.show()
return total_loss,total_correct
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
"""
Explanation: TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
BatchNorm: https://www.tensorflow.org/api_docs/python/tf/contrib/layers/batch_norm
Training the model on one epoch
While we have defined a graph of operations above, in order to execute TensorFlow Graphs, by feeding them input data and computing the results, we first need to create a tf.Session object. A session encapsulates the control and state of the TensorFlow runtime. For more information, see the TensorFlow Getting started guide.
Optionally we can also specify a device context such as /cpu:0 or /gpu:0. For documentation on this behavior see this TensorFlow guide
You should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below
End of explanation
"""
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
pass
pass
y_out = complex_model(X,y,is_training)
"""
Explanation: Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:
7x7 Convolutional Layer with 32 filters and stride of 1
ReLU Activation Layer
Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
2x2 Max Pooling layer with a stride of 2
Affine layer with 1024 output units
ReLU Activation Layer
Affine layer from 1024 input units to 10 outputs
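For reference, one possible sketch of such a model is given below; treat it as an illustration of the relevant TensorFlow 1.x APIs rather than the intended solution. The VALID padding and the resulting 13x13x32 flattened size are assumptions, not part of the specification:

    def complex_model(X, y, is_training):
        # 7x7 conv (32 filters, stride 1) -> ReLU -> spatial batchnorm -> 2x2 max pool (stride 2)
        # -> affine 1024 -> ReLU -> affine 10
        Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
        bconv1 = tf.get_variable("bconv1", shape=[32])
        a1 = tf.nn.conv2d(X, Wconv1, strides=[1, 1, 1, 1], padding='VALID') + bconv1  # -> 26x26x32
        h1 = tf.nn.relu(a1)
        h1_bn = tf.layers.batch_normalization(h1, training=is_training)
        h1_pool = tf.nn.max_pool(h1_bn, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')  # -> 13x13x32
        h1_flat = tf.reshape(h1_pool, [-1, 13 * 13 * 32])
        W1 = tf.get_variable("W1", shape=[13 * 13 * 32, 1024])
        b1 = tf.get_variable("b1", shape=[1024])
        h2 = tf.nn.relu(tf.matmul(h1_flat, W1) + b1)
        W2 = tf.get_variable("W2", shape=[1024, 10])
        b2 = tf.get_variable("b2", shape=[10])
        return tf.matmul(h2, W2) + b2

Note that tf.layers.batch_normalization also creates update ops for its running statistics, which need to be run during training.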
End of explanation
"""
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
"""
Explanation: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
End of explanation
"""
try:
with tf.Session() as sess:
with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
print("no gpu found, please use Google Cloud if you want GPU acceleration")
# rebuild the graph
# trying to start a GPU throws an exception
# and also trashes the original graph
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = complex_model(X,y,is_training)
"""
Explanation: You should see the following from the run above
(64, 10)
True
GPU!
Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.
End of explanation
"""
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
mean_loss = None
optimizer = None
pass
train_step = optimizer.minimize(mean_loss)
"""
Explanation: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created provided above).
Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information
* Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
* Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
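A minimal sketch of what those ~3 lines might look like (this assumes softmax cross-entropy on one-hot labels, mirroring the hinge-loss example earlier; it is one reasonable choice, not the only one):

    total_loss = tf.losses.softmax_cross_entropy(tf.one_hot(y, 10), logits=y_out)
    mean_loss = tf.reduce_mean(total_loss)
    optimizer = tf.train.RMSPropOptimizer(1e-3)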
End of explanation
"""
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
"""
Explanation: Train the model
Below we'll create a session and train the model over one epoch. You should see a loss of 3.0 - 5.0 and an accuracy of 0.2 to 0.3. There will be some variation due to random seeds and differences in initialization
End of explanation
"""
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
"""
Explanation: Check the accuracy of the model.
Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.5 to 2.0 with an accuracy of 0.3 to 0.4.
End of explanation
"""
# Feel free to play with this cell
def my_model(X,y,is_training):
pass
pass
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = my_model(X,y,is_training)
mean_loss = None
optimizer = None
train_step = optimizer.minimize(mean_loss)
pass
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,10,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
"""
Explanation: Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above.
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Pooling vs Strided Convolution: Do you use max pooling or just stride convolutions?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
Use TensorFlow Scope: Use TensorFlow scope and/or tf.layers to make it easier to write deeper networks. See this tutorial for making how to use tf.layers.
Use Learning Rate Decay: As the notes point out, decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the Tensorflow documentation for learning rate decay.
Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get down to a 1x1 feature map (1, 1, Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture).
Regularization: Add l2 weight regularization, or perhaps use Dropout as in the TensorFlow MNIST tutorial
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
You should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
Model ensembles
Data augmentation
New Architectures
ResNets where the input from the previous layer is added to the output.
DenseNets where inputs into previous layers are concatenated together.
This blog has an in-depth overview
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at >= 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network.
Have fun and happy training!
End of explanation
"""
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
"""
Explanation: Describe what you did here
In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network
Tell us here
Test Set - Do this only once
Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/official/custom/sdk-custom-image-classification-online.ipynb
|
apache-2.0
|
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
"""
Explanation: Custom training and online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK for Python to train and deploy a custom image classification model for online prediction.
Dataset
The dataset used for this tutorial is the cifar10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which of ten classes an image belongs to: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.
The steps performed include:
Create a Vertex AI custom job for training a model.
Train a TensorFlow model.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest (preview) version of Vertex SDK for Python.
End of explanation
"""
! pip install {USER_FLAG} --upgrade google-cloud-storage
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
! pip install {USER_FLAG} --upgrade pillow
"""
Explanation: Install the pillow library for loading images.
End of explanation
"""
! pip install {USER_FLAG} --upgrade numpy
"""
Explanation: Install the numpy library for manipulation of image data.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
import os
PROJECT_ID = ""
if not os.getenv("IS_TESTING"):
# Get your Google Cloud project ID from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
"""
Explanation: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
"""
Explanation: Otherwise, set your project ID here.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
"""
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import gapic as aip
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import Vertex SDK for Python
Import the Vertex SDK for Python into your Python environment and initialize it.
End of explanation
"""
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
"""
Explanation: Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
See the locations where accelerators are available.
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.
End of explanation
"""
TRAIN_VERSION = "tf-gpu.2-1"
DEPLOY_VERSION = "tf2-gpu.2-1"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
"""
Explanation: Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction
End of explanation
"""
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
"""
Explanation: Tutorial
Now you are ready to start creating your own custom-trained model with CIFAR10.
Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Define the command args for the training script
Prepare the command-line arguments to pass to your training script.
- args: The command line arguments to pass to the corresponding Python module. In this example, they will be:
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
"""
%%writefile task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
MODEL_DIR = os.getenv("AIP_MODEL_DIR")
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(MODEL_DIR)
"""
Explanation: Training script
In the next cell, you will write the contents of the training script, task.py. In summary:
Get the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. This variable is set by the training service.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(MODEL_DIR)) to the specified model directory.
End of explanation
"""
job = aiplatform.CustomTrainingJob(
display_name=JOB_NAME,
script_path="task.py",
container_uri=TRAIN_IMAGE,
requirements=["tensorflow_datasets==1.3.0"],
model_serving_container_image_uri=DEPLOY_IMAGE,
)
MODEL_DISPLAY_NAME = "cifar10-" + TIMESTAMP
# Start the training
if TRAIN_GPU:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
)
else:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_count=0,
)
"""
Explanation: Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters:
display_name: The user-defined name of this training pipeline.
script_path: The local path to the training script.
container_uri: The URI of the training container image.
requirements: The list of Python package dependencies of the script.
model_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.
Use the run function to start training, which takes the following parameters:
args: The command line arguments to be passed to the Python script.
replica_count: The number of worker replicas.
model_display_name: The display name of the Model if the script produces a managed Model.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.
End of explanation
"""
DEPLOYED_NAME = "cifar10_deployed-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU.name,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=None,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
"""
Explanation: Deploy the model
Before you use your model to make predictions, you need to deploy it to an Endpoint. You can do this by calling the deploy function on the Model resource. This will do two things:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
The function takes the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
starting_replica_count: The number of compute instances to initially provision.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
Traffic split
The traffic_split parameter is specified as a Python dictionary. You can deploy more than one instance of your model to an endpoint, and then set the percentage of traffic that goes to each instance.
You can use a traffic split to introduce a new model gradually into production. For example, if you had one existing model in production with 100% of the traffic, you could deploy a new model to the same endpoint, direct 10% of traffic to it, and reduce the original model's traffic to 90%. This allows you to monitor the new model's performance while minimizing the disruption to the majority of users.
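For example, a split between a newly uploaded model and a model already deployed on the endpoint could look like the following (the deployed model ID shown here is purely hypothetical):

    # "0" refers to the model being uploaded in this deploy call; the other key is the
    # ID of a model already deployed on the endpoint (hypothetical value)
    TRAFFIC_SPLIT = {"0": 10, "1234567890123456789": 90}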
Compute instance scaling
You can specify a single instance (or node) to serve your online prediction requests. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.
If you want to use multiple nodes to serve your online prediction requests, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
Endpoint
The method will block until the model is deployed and eventually return an Endpoint object. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
"""
# Download the images
! gsutil -m cp -r gs://cloud-samples-data/ai-platform-unified/cifar_test_images .
"""
Explanation: Make an online prediction request
Send an online prediction request to your deployed model.
Get test data
Download images from the CIFAR dataset and preprocess them.
Download the test images
Download the provided set of images from the CIFAR dataset:
End of explanation
"""
import numpy as np
from PIL import Image
# Load image data
IMAGE_DIRECTORY = "cifar_test_images"
image_files = [file for file in os.listdir(IMAGE_DIRECTORY) if file.endswith(".jpg")]
# Decode JPEG images into numpy arrays
image_data = [
np.asarray(Image.open(os.path.join(IMAGE_DIRECTORY, file))) for file in image_files
]
# Scale and convert to expected format
x_test = [(image / 255.0).astype(np.float32).tolist() for image in image_data]
# Extract labels from image name
y_test = [int(file.split("_")[1]) for file in image_files]
"""
Explanation: Preprocess the images
Before you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.
x_test:
Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:
You can extract the labels from the image filenames. Each image's filename format is "image_{LABEL}_{IMAGE_NUMBER}.jpg"
End of explanation
"""
predictions = endpoint.predict(instances=x_test)
y_predicted = np.argmax(predictions.predictions, axis=1)
correct = sum(y_predicted == np.array(y_test))
total = len(y_predicted)
print(
f"Correct predictions = {correct}, Total predictions = {total}, Accuracy = {correct/total}"
)
"""
Explanation: Send the prediction request
Now that you have test images, you can use them to send a prediction request. Use the Endpoint object's predict function, which takes the following parameters:
instances: A list of image instances. According to your custom model, each image instance should be a 3-dimensional matrix of floats. This was prepared in the previous step.
The predict function returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:
Confidence level for the prediction (predictions), between 0 and 1, for each of the ten classes.
You can then run a quick evaluation on the prediction results:
1. np.argmax: Convert each list of confidence levels to a label
2. Compare the predicted labels to the actual labels
3. Calculate accuracy as correct/total
End of explanation
"""
deployed_model_id = endpoint.list_models()[0].id
endpoint.undeploy(deployed_model_id=deployed_model_id)
"""
Explanation: Undeploy the model
To undeploy your Model resource from the serving Endpoint resource, use the endpoint's undeploy method with the following parameter:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed. You can retrieve the deployed models using the endpoint's deployed_models property.
Since this is the only deployed model on the Endpoint resource, you can omit traffic_split.
End of explanation
"""
delete_training_job = True
delete_model = True
delete_endpoint = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete the training job
job.delete()
# Delete the model
model.delete()
# Delete the endpoint
endpoint.delete()
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil -m rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Training Job
Model
Endpoint
Cloud Storage Bucket
End of explanation
"""
|
IS-ENES-Data/submission_forms
|
dkrz_forms/Templates/ESGF_replication_submission_form.ipynb
|
apache-2.0
|
# Evaluate this cell to identifiy your form
from dkrz_forms import form_widgets, form_handler, checks
form_infos = form_widgets.show_selection()
# Evaluate this cell to generate your personal form instance
form_info = form_infos[form_widgets.FORMS.value]
sf = form_handler.init_form(form_info)
form = sf.sub.entity_out.report
"""
Explanation: ESGF replication request form
This form is intended to request data to be replicated from other ESGF nodes to be made
locally available in the DKRZ CMIP data pool.
The specification of a requested data collection is based on the search facets describing the data collection. These facets correspond directly to the search categories you use to find data in one of the ESGF portals (e.g. https://esgf-data.dkrz.de/).
Attention: To be able to check your data replication requests before submission it is recommended to have a working [synda replication tool](http://prodiguer.github.io/synda/) installation available. If you use the hosted submission form at https://data-forms.dkrz.de:8080 this is the case by default.
Specification of ESGF data to be replicated
To be able to automate the data replication process as much as possible we recommend the following steps, which are supported in this form. In case you have problems with this approach please contact us directly via mail (esgf-replication 'at' dkrz.de).
Step 1: define your data request based on the search facets you need to characterize the data collection in one of the ESGF portals.
Step 2: write down your facet selection choices in the specific format supported by the synda replication tool:
The specification is based on so-called selection files (see examples for a set of examples);
specify the selection files characterizing your request in this part.
Step 3: Test and check your selection file(s) with respect to correctness
Step 4: Provide information on the context of your request
Step 5: Generate the file lists associated with your replica request and check your selection file(s) with respect to the data volume addressed
Step 6: Submit your replication request
General remarks:
We recommend installing the synda application at your lab if you have recurring needs for data to be made available at DKRZ; this way you can prepare and verify your replication request at your lab.
We recommend splitting your request into a set of small, well-defined selection files instead of specifying one complex file that characterizes your complete data needs.
Identify your form
Evaluate the following cell ("SHIFT-ENTER"); you will then see a list of all your forms. Please select the one you are currently working on (the name must match the name at the top of this page!).
End of explanation
"""
# provide the list of selection file names (.txt files)
# detailed, characterizing file names prefered ..
# e.g. sel_file_list = ["cmip5_mpi-m_rcp_1.txt","cmip5_smhi_rcp_0.txt"]
form.selection_files = ["...","..."] # strings in a list
#---- generation of input fields for your files
text_w = form_widgets.get_selection_files(form.selection_files)
form_widgets.gen_text_widgets(text_w)
"""
Explanation: Step 2: Edit and store your replica selection file(s)
Please provide the facet values characterizing your data request. You can find the appropriate settings either
- by using an ESGF portal and remembering your search facets or
- by playing around with the cells below until your request is fully specified or
- by installing the synda tool at your lab and using the tool directly at home - just copy the tested synda selection files into the slots below..
An example selection file looks like:
project="CMIP5"
model="CNRM-CM5 CSIRO-Mk3-6-0"
experiment="historical amip"
ensemble="r1i1p1"
variable[atmos][mon]="tasmin tas psl"
variable[ocean][fx]="areacello sftof"
variable[land][mon]="mrsos,nppRoot,nep"
variable[seaIce][mon]="sic evap"
variable[ocnBgchem][mon]="dissic fbddtalk"
You can store your request using the cells below by adding %%writefile selection/myfilename.txt as the first line (a short example is shown after the list below). Please choose "myfilename" carefully so that you can later remember which dataset this file characterizes, e.g. %%writefile erich_cmip5_atmos_vars_for_exp1.txt
store your selection files using the cells below
please provide the names of your selection files in the cell below and evaluate it ("SHIFT-ENTER")
for each file name an input field is generated to be filled with your data specification (use "copy-paste" to provide your selection files).
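For illustration, such a storage cell could look like the following (the file name and facet values are only an example, reusing the facets shown above):

    %%writefile selection/erich_cmip5_atmos_vars_for_exp1.txt
    project="CMIP5"
    model="CNRM-CM5"
    experiment="historical"
    ensemble="r1i1p1"
    variable[atmos][mon]="tas psl"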
End of explanation
"""
%%bash
# The following command searches for data sets matching your request
synda search -s ./selection/your_selection_file_to_be_checked.txt
# other helpfull commands
# synda check -s ./selection/...
# synda -h
# Final check always should be done with the following command
# - it shows the volume of data associated to your request
# synda show -s ./selection/...
"""
Explanation: Step3 (optional): Check your selection file(s)
using the cells below you can directly interact with the synda tool to check your selection files. The generic syntax is:
- synda <command> -s ./selection/<your_selection_file>
see also the example synda cells in the Appendix of this form
End of explanation
"""
form.file_list = utils.get_file_list(form.selection_files)
print(form.file_list)
"""
Explanation: Step 4 (optional): Generate file list associated to your request
By evaluating the cell below, the file list associated with your request at this time is generated.
In case this fails, please first try to debug your request using the cells above or by directly installing synda at your site.
If problems persist, please continue with the form submission below - we will try to resolve your request by direct interaction with our data managers.
End of explanation
"""
form_handler.save_form(sf,"..my comment..") # add a comment to remember this specific
form_handler.email_form_info(sf) # do not change
form_handler.form_submission(sf) # do not change
"""
Explanation: Step 5: Provide additional information with respect to your request
to be completed
info on:
update frequency requested (new versions)
when data can be deleted
scientific/project context this data is needed for ...
...
Step 6: Submit your data replication request
Please provide the file names of the selection files you tested above and which you now want to submit to the DKRZ data managers.
End of explanation
"""
%%bash
# synda dump tas GFDL-ESM2M -F line -f -C size,filenam
synda variable tas
# synda search cmip5 MOHC HadGEM2-A amip4
# synda search cmip5 mon atmos -l 1000xCO2 mon atmos Amon r1i1p1
%%bash
synda -h
%%bash
synda check selection -s selection/test.txt
"""
Explanation: Appendix: Example synda calls
Play around with synda.
Explore Metadata
Example synda calls to search and explore metadata:
End of explanation
"""
|
w4zir/ml17s
|
lectures/lec07-logistic-regression.ipynb
|
mit
|
from IPython.display import Image
Image(filename='images/06_03.jpg', width=1000)
"""
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 7: Logistic Regression
Overview
Logistic Regression
Resources
Credits
<br>
<br>
K - Nearest Neighbor Classifier
End of explanation
"""
Image(filename='images/07_01.png', width=500)
"""
Explanation: <br>
<img style="float: left;" src="images/06_04.png" width=500>
<br>
Logistic Regression
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.23/_downloads/f574d1e7527e4460eb09a16f6f836e35/60_maxwell_filtering_sss.ipynb
|
bsd-3-clause
|
import os
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import mne
from mne.preprocessing import find_bad_channels_maxwell
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60)
"""
Explanation: Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save on memory:
End of explanation
"""
fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')
crosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif')
"""
Explanation: Background on SSS and Maxwell filtering
Signal-space separation (SSS) :footcite:TauluKajola2005,TauluSimola2006
is a technique based on the physics
of electromagnetic fields. SSS separates the measured signal into components
attributable to sources inside the measurement volume of the sensor array
(the internal components), and components attributable to sources outside
the measurement volume (the external components). The internal and external
components are linearly independent, so it is possible to simply discard the
external components to reduce environmental noise. Maxwell filtering is a
related procedure that omits the higher-order components of the internal
subspace, which are dominated by sensor noise. Typically, Maxwell filtering
and SSS are performed together (in MNE-Python they are implemented together
in a single function).
Like SSP <tut-artifact-ssp>, SSS is a form of projection. Whereas SSP
empirically determines a noise subspace based on data (empty-room recordings,
EOG or ECG activity, etc) and projects the measurements onto a subspace
orthogonal to the noise, SSS mathematically constructs the external and
internal subspaces from spherical harmonics_ and reconstructs the sensor
signals using only the internal subspace (i.e., does an oblique projection).
<div class="alert alert-danger"><h4>Warning</h4><p>Maxwell filtering was originally developed for Elekta Neuromag® systems,
and should be considered *experimental* for non-Neuromag data. See the
Notes section of the :func:`~mne.preprocessing.maxwell_filter` docstring
for details.</p></div>
The MNE-Python implementation of SSS / Maxwell filtering currently provides
the following features:
Basic bad channel detection
(:func:~mne.preprocessing.find_bad_channels_maxwell)
Bad channel reconstruction
Cross-talk cancellation
Fine calibration correction
tSSS
Coordinate frame translation
Regularization of internal components using information theory
Raw movement compensation (using head positions estimated by MaxFilter)
cHPI subtraction (see :func:mne.chpi.filter_chpi)
Handling of 3D (in addition to 1D) fine calibration files
Epoch-based movement compensation as described in
:footcite:TauluKajola2005 through :func:mne.epochs.average_movements
Experimental processing of data from (un-compensated) non-Elekta
systems
Using SSS and Maxwell filtering in MNE-Python
For optimal use of SSS with data from Elekta Neuromag® systems, you should
provide the path to the fine calibration file (which encodes site-specific
information about sensor orientation and calibration) as well as a crosstalk
compensation file (which reduces interference between Elekta's co-located
magnetometer and paired gradiometer sensor units).
End of explanation
"""
raw.info['bads'] = []
raw_check = raw.copy()
auto_noisy_chs, auto_flat_chs, auto_scores = find_bad_channels_maxwell(
raw_check, cross_talk=crosstalk_file, calibration=fine_cal_file,
return_scores=True, verbose=True)
print(auto_noisy_chs) # we should find them!
print(auto_flat_chs) # none for this dataset
"""
Explanation: Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
<div class="alert alert-danger"><h4>Warning</h4><p>It is critical to mark bad channels in ``raw.info['bads']`` *before*
calling :func:`~mne.preprocessing.maxwell_filter` in order to prevent
bad channel noise from spreading.</p></div>
Let's see if we can automatically detect it.
End of explanation
"""
bads = raw.info['bads'] + auto_noisy_chs + auto_flat_chs
raw.info['bads'] = bads
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.find_bad_channels_maxwell` needs to operate on
a signal without line noise or cHPI signals. By default, it simply
applies a low-pass filter with a cutoff frequency of 40 Hz to the
data, which should remove these artifacts. You may also specify a
different cutoff by passing the ``h_freq`` keyword argument. If you
set ``h_freq=None``, no filtering will be applied. This can be
useful if your data has already been preconditioned, for example
using :func:`mne.chpi.filter_chpi`,
:func:`mne.io.Raw.notch_filter`, or :meth:`mne.io.Raw.filter`.</p></div>
Now we can update the list of bad channels in the dataset.
End of explanation
"""
# Only select the data for gradiometer channels.
ch_type = 'grad'
ch_subset = auto_scores['ch_types'] == ch_type
ch_names = auto_scores['ch_names'][ch_subset]
scores = auto_scores['scores_noisy'][ch_subset]
limits = auto_scores['limits_noisy'][ch_subset]
bins = auto_scores['bins'] # The windows that were evaluated.
# We will label each segment by its start and stop time, with up to 3
# digits before and 3 digits after the decimal place (1 ms precision).
bin_labels = [f'{start:3.3f} – {stop:3.3f}'
for start, stop in bins]
# We store the data in a Pandas DataFrame. The seaborn heatmap function
# we will call below will then be able to automatically assign the correct
# labels to all axes.
data_to_plot = pd.DataFrame(data=scores,
columns=pd.Index(bin_labels, name='Time (s)'),
index=pd.Index(ch_names, name='Channel'))
# First, plot the "raw" scores.
fig, ax = plt.subplots(1, 2, figsize=(12, 8))
fig.suptitle(f'Automated noisy channel detection: {ch_type}',
fontsize=16, fontweight='bold')
sns.heatmap(data=data_to_plot, cmap='Reds', cbar_kws=dict(label='Score'),
ax=ax[0])
[ax[0].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
for x in range(1, len(bins))]
ax[0].set_title('All Scores', fontweight='bold')
# Now, adjust the color range to highlight segments that exceeded the limit.
sns.heatmap(data=data_to_plot,
vmin=np.nanmin(limits), # bads in input data have NaN limits
cmap='Reds', cbar_kws=dict(label='Score'), ax=ax[1])
[ax[1].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
for x in range(1, len(bins))]
ax[1].set_title('Scores > Limit', fontweight='bold')
# The figure title should not overlap with the subplots.
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
"""
Explanation: We called ~mne.preprocessing.find_bad_channels_maxwell with the optional
keyword argument return_scores=True, causing the function to return a
dictionary of all data related to the scoring used to classify channels as
noisy or flat. This information can be used to produce diagnostic figures.
In the following, we will generate such visualizations for
the automated detection of noisy gradiometer channels.
End of explanation
"""
raw.info['bads'] += ['MEG 2313'] # from manual inspection
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>You can use the very same code as above to produce figures for
*flat* channel detection. Simply replace the word "noisy" with
"flat", and replace ``vmin=np.nanmin(limits)`` with
``vmax=np.nanmax(limits)``.</p></div>
You can see the un-altered scores for each channel and time segment in the
left subplots, and thresholded scores – those which exceeded a certain limit
of noisiness – in the right subplots. While the right subplot is entirely
white for the magnetometers, we can see a horizontal line extending all the
way from left to right for the gradiometers. This line corresponds to channel
MEG 2443, which was reported as an auto-detected noisy channel in the step
above. But we can also see another channel exceeding the limits, apparently
in a more transient fashion. It was therefore not detected as bad, because
the number of segments in which it exceeded the limits was less than the
default of 5 that MNE-Python uses.
<div class="alert alert-info"><h4>Note</h4><p>You can request a different number of segments that must be
found to be problematic before
`~mne.preprocessing.find_bad_channels_maxwell` reports them as bad.
To do this, pass the keyword argument ``min_count`` to the
function.</p></div>
Obviously, this algorithm is not perfect. Specifically, on closer inspection
of the raw data after looking at the diagnostic plots above, it becomes clear
that the channel that exceeded the "noise" limits in some segments without
qualifying as "bad" in fact contains some flux jumps. There were just not
enough flux jumps in the recording for our automated procedure to report
the channel as bad. So it can still be useful to manually inspect and mark
bad channels. The channel in question is MEG 2313. Let's mark it as bad:
End of explanation
"""
raw_sss = mne.preprocessing.maxwell_filter(
raw, cross_talk=crosstalk_file, calibration=fine_cal_file, verbose=True)
"""
Explanation: After that, performing SSS and Maxwell filtering is done with a
single call to :func:~mne.preprocessing.maxwell_filter, with the crosstalk
and fine calibration filenames provided (if available):
End of explanation
"""
raw.pick(['meg']).plot(duration=2, butterfly=True)
raw_sss.pick(['meg']).plot(duration=2, butterfly=True)
"""
Explanation: To see the effect, we can plot the data before and after SSS / Maxwell
filtering.
End of explanation
"""
head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',
'test_move_anon_raw.pos')
head_pos = mne.chpi.read_head_pos(head_pos_file)
mne.viz.plot_head_positions(head_pos, mode='traces')
"""
Explanation: Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation <tut-bad-channels>.
The heartbeat artifact has also been substantially reduced.
The :func:~mne.preprocessing.maxwell_filter function has parameters
int_order and ext_order for setting the order of the spherical
harmonic expansion of the interior and exterior components; the default
values are appropriate for most use cases. Additional parameters include
coord_frame and origin for controlling the coordinate frame ("head"
or "meg") and the origin of the sphere; the defaults are appropriate for most
studies that include digitization of the scalp surface / electrodes. See the
documentation of :func:~mne.preprocessing.maxwell_filter for details.
Spatiotemporal SSS (tSSS)
An assumption of SSS is that the measurement volume (the spherical shell
where the sensors are physically located) is free of electromagnetic sources.
The thickness of this source-free measurement shell should be 4-8 cm for SSS
to perform optimally. In practice, there may be sources falling within that
measurement volume; these can often be mitigated by using Spatiotemporal
Signal Space Separation (tSSS) :footcite:TauluSimola2006.
tSSS works by looking for temporal
correlation between components of the internal and external subspaces, and
projecting out any components that are common to the internal and external
subspaces. The projection is done in an analogous way to
SSP <tut-artifact-ssp>, except that the noise vector is computed
across time points instead of across sensors.
To use tSSS in MNE-Python, pass a time (in seconds) to the parameter
st_duration of :func:~mne.preprocessing.maxwell_filter. This will
determine the "chunk duration" over which to compute the temporal projection.
The chunk duration effectively acts as a high-pass filter with a cutoff
frequency of $\frac{1}{\mathtt{st\_duration}}~\mathrm{Hz}$; this
effective high-pass has an important consequence:
In general, larger values of st_duration are better (provided that your
computer has sufficient memory) because larger values of st_duration
will have a smaller effect on the signal.
If the chunk duration does not evenly divide your data length, the final
(shorter) chunk will be added to the prior chunk before filtering, leading
to slightly different effective filtering for the combined chunk (the
effective cutoff frequency differing at most by a factor of 2). If you need
to ensure identical processing of all analyzed chunks, either:
choose a chunk duration that evenly divides your data length (only
recommended if analyzing a single subject or run), or
include at least 2 * st_duration of post-experiment recording time at
the end of the :class:~mne.io.Raw object, so that the data you intend to
further analyze is guaranteed not to be in the final or penultimate chunks.
Additional parameters affecting tSSS include st_correlation (to set the
correlation value above which correlated internal and external components
will be projected out) and st_only (to apply only the temporal projection
without also performing SSS and Maxwell filtering). See the docstring of
:func:~mne.preprocessing.maxwell_filter for details.
Movement compensation
If you have information about subject head position relative to the sensors
(i.e., continuous head position indicator coils, or :term:cHPI), SSS
can take that into account when projecting sensor data onto the internal
subspace. Head position data can be computed using
:func:mne.chpi.compute_chpi_locs and :func:mne.chpi.compute_head_pos,
or loaded with the :func:mne.chpi.read_head_pos function. The
example data <sample-dataset> doesn't include cHPI, so here we'll
load a :file:.pos file used for testing, just to demonstrate:
End of explanation
"""
|
dlsun/symbulate
|
labs/Lab 3 - Discrete Distributions.ipynb
|
mit
|
from symbulate import *
%matplotlib inline
"""
Explanation: Symbulate Lab 3 - Discrete Distributions
This Jupyter notebook provides a template for you to fill in. Read the notebook from start to finish, completing the parts as indicated. To run a cell, make sure the cell is highlighted by clicking on it, then press SHIFT + ENTER on your keyboard. (Alternatively, you can click the "play" button in the toolbar above.)
In this lab you will use the Symbulate package. You should have completed Section 2 of the "Getting Started Tutorial" and read Sections 1-4 and parts of Section 5 of the documentation (you can ignore parts about continuous random variables for now). A few specific links to the documentation are provided below, but it will probably make more sense if you read the documentation from start to finish. You should use Symbulate commands whenever possible. If you find yourself writing long blocks of Python code, you are probably doing something wrong. For example, you should not need to write any for loops.
Remember to run the next cell first.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: Part I: Binomial and Hypergeometric distributions
Shuffle a standard deck of 52 cards (13 hearts, and 39 other cards) and draw 5. Consider the number of hearts drawn.
Problem 1
First suppose the draws are made with replacement, and let $X$ represent the number of hearts among the 5 cards drawn.
a)
Define a probability space P in which an outcome corresponds to an ordered sequence of draws with replacement. (Hint: you only need to consider whether a card is a heart or not. Let 1 represent heart, and 0 not a heart. See the examples for BoxModel; use the probs argument like in In[6:], or a dictionary-like input like in In[7:].)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: b)
Define a RV $X$ on the probability space P which counts the number of hearts. (Hint: what simple function will count the number of 1s in a sequence of 0/1s?)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: c)
Simulate 10000 values of $X$, store the values in a variable x, and summarize its approximate distribution in a table.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: d)
Display the approximate distribution of $X$ in a plot. Overlay the true probability mass function on the plot. (Hint.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: e)
Use the simulation results to estimate $P(X=3)$. Enter the appropriate Symbulate commands below; don't just use the above table.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: f)
Use the .pdf() method to calculate the exact value of $P(X=3)$. (Hint: what is the name of the distribution of $X$ in this case?) Compare the approximation from the previous part with the exact value; recall that a relative frequency based on $N$ repetitions of a simulation is likely to be within $1/\sqrt{N}$ of the true probability.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: g)
Use the simulation results to estimate $E(X)$. Compare the approximate expected value with the theoretical expected value. (A mean based on $N$ repetitions of a simulation is likely to be within $2SD(X)/\sqrt{N}$ of the true expected value.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: Problem 2
Now suppose the draws are made without replacement, and let $Y$ represent the number of hearts among the 5 cards drawn.
a)
Define a probability space Q in which an outcome corresponds to an ordered sequence of draws without replacement. (Hint: As in problem 1, you only need to consider whether a card is a heart or not, but now it is necessary to specify the actual number of cards of each type.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: b)
Define a RV $Y$ on the probability space Q which counts the number of hearts.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: c)
Simulate 10000 values of $Y$, store the values in a variable y, and summarize its approximate distribution in a table.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: d)
Display the approximate distribution of $Y$ in a plot. Overlay the true probability mass function on the plot. (Hint. Also, see Handout 11.)
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: e)
Use the simulation results to estimate $P(Y=3)$. Enter the appropriate Symbulate commands below; don't just use the above table.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: f)
Use the .pdf() method to calculate the exact value of $P(Y=3)$. (Hint: See Handout 11. What is the name of the distribution of $Y$ in this case?) Compare the approximation from the previous part with the exact value; recall that a relative frequency based on $N$ repetitions of a simulation is likely to be within $1/\sqrt{N}$ of the true probability.
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: g)
Use the simulation results to estimate $E(Y)$. Compare the approximate expected value with the theoretical expected value. (A mean based on $N$ repetitions of a simulation is likely to be within $2SD(Y)/\sqrt{N}$ of the true expected value.) Also compare the expected value of $Y$ (without replacement) and $X$ (with replacement); are these values within the margin of error of each other?
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: h)
Compare your results from Problems 1 and 2. How does the distribution of the number of hearts drawn change between with and without replacement? Are the expected values the same? (Nothing to respond, just think about it.)
Part II: Poisson approximation of the Binomial
When $n$ is "large" and $p$ is "small", a Binomial($n$, $p$) distribution is well approximated by a Poisson($np$) distribution. This part illustrates this fact.
Let $X$ have a Binomial distribution with $n$ trials and probability of success on each trial $p=\lambda /n$, where $\lambda$ is a constant. When $n$ is large, the number of trials is large but the probability of success on any single trial is small. Note that the expected value of $X$ is $n(\lambda/n) = \lambda$, which does not depend on $n$.
We will assume $\lambda = 3$.
a)
Let $n=10$.
Define a RV $X$ which has a Binomial($n$, $3/n$) distribution. (Hint, also refer to Example 2.7 in the Symbulate tutorial.)
Simulate 10000 values of $X$ and plot the approximate distribution.
Overlay the Poisson(3) probability mass function.
Does a Poisson(3) distribution seem like a good approximation of a Binomial(10, 3/10) distribution?
End of explanation
"""
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: b)
Let $n=100$.
Define a RV $X$ which has a Binomial($n$, $3/n$) distribution. (Hint, also refer to Example 2.7 in the Symbulate tutorial.)
Simulate 10000 values of $X$ and plot the approximate distribution.
Overlay the Poisson(3) probability mass function.
Does a Poisson(3) distribution seem like a good approximation of a Binomial(100, 3/100) distribution?
End of explanation
"""
n = 6
labels = list(range(n))
def number_matches(x):
count = 0
for i in range(0, n, 1):
if x[i] == labels[i]:
count += 1
return count
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: Part III: Poisson approximation in the matching problem
Consider the matching babies problem again. (Last time, I promise!) There are $n$ mothers and $n$ babies, and one baby is returned to each mother completely at random. Let $X$ represent the number of babies that are returned to the correct mother.
Recall that in HW1 you used an applet to run simulations for different values of $n$. You should have observed
Regardless of the value of $n$, the expected value of $X$ is 1.
Aside from the smallest values of $n$, the probability of at least one match was about 0.63.
See HW1 solutions in PL for a refresher.
We will investigate these two properties further in this part. To put this problem in the context of what we have been discussing this week:
Each time a baby is returned to a mother can be considered a trial.
Each trial results in success (the baby is returned to the correct mother) or failure (not).
There are a fixed number of trials, $n$.
So far, the conditions for the Binomial situation are satisfied. But does $X$ have a Binomial distribution?
a)
What is the probability that any particular mother receives the correct baby? Is the probability of success the same for each trial?
TYPE YOUR EXPLANATION HERE.
b)
Are the trials independent? Does $X$ have a Binomial distribution?
TYPE YOUR EXPLANATION HERE.
c)
In Part II you saw how Poisson distributions can sometimes approximate Binomial distributions. But Poisson approximations are valid much more generally. In particular, unless $n$ is really small, the number of matches $X$ in the matching problem has an approximate Poisson distribution with mean 1.
Explain why $E(X)=1$ regardless of $n$. You don't need to give a proof, but do think of a reasonable explanation. (Hint: consider part a) of Part III. Also consider your comparison of Binomial and Hypergeometric from Part I, and the means in particular; what happens here is similar.)
TYPE YOUR EXPLANATION HERE.
d)
Now you will use simulation to approximate the distribution of $X$ when $n=6$.
Label the babies $0, 1, \ldots, n-1$ (the code labels = list(range(n)) below does this).
Define an appropriate probability space P in which an outcome corresponds to the ordered shuffling of the babies.
Define a RV $X$ on the probability space P through an appropriate function. You can use the number_matches function below.
Simulate 10000 values of $X$ and display the approximate distribution in a plot.
Overlay the Poisson(1) probability mass function.
Optional: use the simulation results to approximate $P(X\ge 1)$ and $E(X)$. This is optional because you already did it in HW1 using the applet, but make sure you know how to do it in Symbulate.
Does a Poisson(1) distribution seem like a good approximation to the distribution of $X$ when $n=6$?
End of explanation
"""
n = 6 # BE SURE TO CHANGE THIS VALUE
labels = list(range(n))
def number_matches(x):
count = 0
for i in range(0, n, 1):
if x[i] == labels[i]:
count += 1
return count
# Type all of your code for this problem in this cell.
# Feel free to add additional cells for scratch work, but they will not be graded.
"""
Explanation: e)
Pick another value of $n\ge 6$ and repeat part d).
Does a Poisson(1) distribution seem like a good approximation to the distribution of $X$ for this value of $n$?
End of explanation
"""
|
zegnus/self-driving-car-machine-learning
|
p13-final-project/ros/src/tl_detector/light_classification/scripts/visualize_bosch.ipynb
|
mit
|
import os, yaml
import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.io.json import json_normalize
import tensorflow as tf
%matplotlib inline
"""
Explanation: Visualize the Bosch Small Traffic Lights Dataset
The Bosch small traffic lights dataset can be downloaded from here: https://hci.iwr.uni-heidelberg.de/node/6132
The dataset used in this notebook is the RGB dataset
1. Import necessary modules
End of explanation
"""
#Define path to the dataset and annotation filenames
DATA_FOLDER = os.path.join('.', 'data', 'bosch')
TRAIN_DATA_FOLDER = os.path.join(DATA_FOLDER, 'dataset_train_rgb')
TEST_DATA_FOLDER = os.path.join(DATA_FOLDER, 'dataset_test_rgb')
TRAIN_IMAGE_FOLDER = os.path.join(TRAIN_DATA_FOLDER, 'rgb', 'train')
TEST_IMAGE_FOLDER = os.path.join(TEST_DATA_FOLDER, 'rgb', 'test')
TRAIN_ANNOTATIONS_FILE = os.path.join(TRAIN_DATA_FOLDER, 'train.yaml')
TEST_ANNOTATIONS_FILE = os.path.join(TEST_DATA_FOLDER, 'test.yaml')
#Read in all the image files
train_image_files = glob.glob(os.path.join(TRAIN_IMAGE_FOLDER,'**','*.png'), recursive=True)
test_image_files = glob.glob(os.path.join(TEST_IMAGE_FOLDER,'*.png'), recursive=True)
#Read in all the annotations
train_annotations = yaml.load(open(TRAIN_ANNOTATIONS_FILE, 'rb').read(), Loader=yaml.SafeLoader)
test_annotations = yaml.load(open(TEST_ANNOTATIONS_FILE, 'rb').read(), Loader=yaml.SafeLoader)
assert(len(train_image_files) == len(train_annotations)), "Number of training annotations does not match training images!"
assert(len(test_image_files) == len(test_annotations)), "Number of test annotations does not match test images!"
"""
Explanation: 2. Load the dataset
End of explanation
"""
#Summarize the data
n_train_samples = len(train_annotations)
n_test_samples = len(test_annotations)
sample_train = train_annotations[10]
sample_test = test_annotations[10]
print("Number of training examples: {:d}".format(n_train_samples))
print("Number of test examples: {:d}\n".format(n_test_samples))
print('The annotation files are a {} of {} with the following keys: \n{}\n'
.format(type(train_annotations).__name__,
type(sample_train).__name__,
sample_train.keys()))
print('The boxes key has values that are a {} of {} with keys: \n{}\n'
.format(type(sample_train['boxes']).__name__,
type(sample_train['boxes'][0]).__name__,
sample_train['boxes'][0].keys()))
print('The path key in the training dataset has the following format: \n{}\n'.format(sample_train['path']))
print('The path key in the test dataset has the following format: \n{}\n'.format(sample_test['path']))
#Load the data into dataframes to get the unique labels and instances of each label
train_df = pd.io.json.json_normalize(train_annotations)
test_df = pd.io.json.json_normalize(test_annotations)
trainIdx = train_df.set_index(['path']).boxes.apply(pd.Series).stack().index
testIdx = test_df.set_index(['path']).boxes.apply(pd.Series).stack().index
train_df = pd.DataFrame(train_df.set_index(['path'])
.boxes.apply(pd.Series).stack().values.tolist(),index=trainIdx).reset_index().drop('level_1',1)
test_df = pd.DataFrame(test_df.set_index(['path'])
.boxes.apply(pd.Series).stack().values.tolist(),index=testIdx).reset_index().drop('level_1',1)
print('The training annotations have the following class distributions :\n{}\n'.format(train_df.label.value_counts()))
print('The test annotations have the following class distribution:\n{}\n'.format(test_df.label.value_counts()))
plt.figure(figsize = (22,8))
plt.subplot(1,2,1)
pd.value_counts(test_df['label']).plot(kind='barh', color=['g', 'r', 'k', 'y'])
plt.title('Test annotations class distribution')
plt.subplot(1,2,2)
pd.value_counts(train_df['label']).plot(kind='barh')
plt.title('Training annotations class distribution')
plt.show()
train_df.groupby(['occluded', 'label'])['label'].count().unstack('occluded').plot(kind='barh', stacked=True, figsize=(10,5) )
plt.title('Training annotation class distribution')
test_df.groupby(['occluded', 'label'])['label'].count().unstack('occluded').plot(kind='barh', stacked=True, figsize=(10,5))
plt.title('Test annotation class distribution')
plt.show()
"""
Explanation: 3. Explore the data
End of explanation
"""
|
agushman/coursera
|
src/cours_2/week_5/task_3.ipynb
|
mit
|
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=0)
def write_answer(data, file_name):
with open(file_name, 'w') as fout:
fout.write(str(data))
"""
Explanation: Programming assignment: 1NN versus RandomForest
This assignment uses the digits dataset from sklearn.datasets. Hold out the last 25% of the objects for quality control by splitting X and y into X_train, y_train and X_test, y_test.
The goal of the assignment is to implement the simplest metric classifier, the one-nearest-neighbor method, and to compare the quality of your 1NN implementation against sklearn's RandomForestClassifier with 1000 trees.
End of explanation
"""
def euclidean_metric(a, b):
return sum((a - b)**2)
lst = list()
for i in range(X_test.shape[0]):
    # distances from the i-th test object to every training object
    tmp = list()
    for j in range(X_train.shape[0]):
        tmp.append((euclidean_metric(X_test[i, :], X_train[j, :]), y_train[j]))
    # predict the label of the nearest training object and record whether it is correct
    lst.append(y_test[i] == min(tmp, key=lambda t: t[0])[1])
trues, falses = 0.0, 0.0
for el in lst:
if el == True:
trues += 1
elif el == False:
falses +=1
print('accuracy is ', (trues / (trues + falses)) * 100.0, '%')
write_answer(data = (falses / (falses + trues)), file_name = 'ans_1.txt')
"""
Explanation: Task 1
Implement the one-nearest-neighbor method with the Euclidean metric for this classification problem yourself. You do not need to take the square root of the sum of squared deviations, since the square root is a monotonic transformation and does not affect the result of the algorithm.
No additional work with the features is needed in this assignment; we will have plenty of time for that in other courses. Your implementation can be organized as follows: for each object to be classified, build a list of pairs (distance to a point in the training set, class label at that point), then sort this list (by default it is sorted first by the first element of the pair and then by the second), and take the first element (the one with the smallest distance).
Sorting an array of length N requires on the order of N log N comparisons (more precisely, it runs in O(N log N)). Think about how the resulting running time could easily be improved. Besides the simple way of finding the nearest object in just N comparisons, you can try to work out how to partition the feature space and build a data structure that allows fast neighbor searches for each point. The algorithm parameter of sklearn's KNeighborsClassifier controls how nearest neighbors are searched for; if you already have some background in algorithms and data structures, you may find it interesting to look at the ball tree and kd tree data structures.
The fraction of errors made by 1NN on the test set is the answer to Task 1.
End of explanation
"""
from sklearn import ensemble
clf = ensemble.RandomForestClassifier(n_estimators = 1000).fit(X_train, y_train)
lst = [ y == a for y, a in zip(y_test, clf.predict(X_test)) ]
trues, falses = 0.0, 0.0
for el in lst:
if el == True:
trues += 1
elif el == False:
falses += 1
print('accuracy is ', (trues / (trues + falses)) * 100.0, '%')
write_answer(data = (falses / (falses + trues)), file_name = 'ans_2.txt')
"""
Explanation: Task 2
Now train RandomForestClassifier(n_estimators=1000) from sklearn on the training set. Make predictions on the test set and estimate the classification error rate on it. This fraction is the answer to Task 2. Note how the quality of the random forest compares with the quality of arguably one of the simplest methods, 1NN. This difference is a peculiarity of this particular dataset, but always remember that such a situation can occur, and do not forget about simple methods.
End of explanation
"""
|
QuantStack/quantstack-talks
|
2018-11-14-PyParis-widgets/notebooks/3.ipyleaflet.ipynb
|
bsd-3-clause
|
from ipyleaflet import Map, basemaps, basemap_to_tiles
center = (52.204793, 360.121558)
m = Map(
layers=(basemap_to_tiles(basemaps.NASAGIBS.ModisTerraTrueColorCR, "2018-11-12"), ),
center=center,
zoom=4
)
m
"""
Explanation: <center><img src="src/ipyleaflet.svg" width="50%"></center>
Repository: https://github.com/jupyter-widgets/ipyleaflet
Installation:
conda install -c conda-forge ipyleaflet
Base map
End of explanation
"""
from ipyleaflet import Marker, Icon
icon = Icon(icon_url='https://leafletjs.com/examples/custom-icons/leaf-red.png', icon_size=[38, 95], icon_anchor=[22,94])
mark = Marker(location=center, icon=icon, rotation_origin='22px 94px')
m.add_layer(mark)
import time
for _ in range(40):
mark.rotation_angle += 15
time.sleep(0.1)
"""
Explanation: Layers
Marker
End of explanation
"""
from sidecar import Sidecar
from IPython.display import display
sc = Sidecar(title='Map widget')
with sc:
display(m)
"""
Explanation: <center><img src="src/jupyterlab-sidecar.svg" width="50%"></center>
Repository: https://github.com/jupyter-widgets/jupyterlab-sidecar
Installation:
pip install jupyterlab_sidecar
End of explanation
"""
from ipywidgets import Button, IntSlider, link
from ipyleaflet import Heatmap
from random import gauss
import time
center = (37.09, -103.66)
zoom = 5
def create_random_data(length):
"Return a list of some random lat/lon/value triples."
return [[gauss(center[0], 2),
gauss(center[1], 4),
gauss(700, 300)] for i in range(length)]
m.center = center
m.zoom = zoom
heat = Heatmap(locations=create_random_data(1000), radius=20, blur=10)
m.add_layer(heat)
def generate(_):
heat.locations = create_random_data(1000)
button = Button(description='Generate data', button_style='success')
button.on_click(generate)
button
slider = IntSlider(min=10, max=30, value=heat.radius)
link((slider, 'value'), (heat, 'radius'))
slider
"""
Explanation: Heatmap layer
End of explanation
"""
from ipyleaflet import Velocity
import xarray as xr
center = (0, 0)
zoom = 4
m2 = Map(center=center, zoom=zoom, interpolation='nearest', basemap=basemaps.CartoDB.DarkMatter)
sc2 = Sidecar(title='Map Velocity')
with sc2:
display(m2)
ds = xr.open_dataset('src/wind-global.nc')
display_options = {
'velocityType': 'Global Wind',
'displayPosition': 'bottomleft',
'displayEmptyString': 'No wind data'
}
wind = Velocity(data=ds,
zonal_speed='u_wind',
meridional_speed='v_wind',
latitude_dimension='lat',
longitude_dimension='lon',
velocity_scale=0.01,
max_velocity=20,
display_options=display_options)
m2.add_layer(wind)
"""
Explanation: Velocity
End of explanation
"""
from ipyleaflet import Map, basemaps, basemap_to_tiles, SplitMapControl
m = Map(center=(42.6824, 365.581), zoom=5)
right_layer = basemap_to_tiles(basemaps.NASAGIBS.ModisTerraTrueColorCR, "2017-11-11")
left_layer = basemap_to_tiles(basemaps.NASAGIBS.ModisAquaBands721CR, "2017-11-11")
control = SplitMapControl(left_layer=left_layer, right_layer=right_layer)
m.add_control(control)
m
"""
Explanation: Controls
End of explanation
"""
import numpy as np
import bqplot.pyplot as plt
from bqplot import *
from traitlets import observe
from sidecar import Sidecar
from ipywidgets import VBox, Button
from ipyleaflet import Map, Marker, Popup
axes_options = {'x': {'label': 'x'}, 'y': {'label': 'y'}}
x = np.arange(40)
y = np.cumsum(np.random.randn(2, 40), axis=1)
fig = plt.figure(animation_duration=1000)
lines = plt.plot(x=x, y=y, colors=['red', 'green'], axes_options=axes_options)
def generate(_):
lines.y = np.cumsum(np.random.randn(2, 40), axis=1)
button = Button(description='Generate data', button_style='success')
button.on_click(generate)
box_plot = VBox([fig, button])
fig
center = (52.204793, 360.121558)
m = Map(center=center, zoom=9, close_popup_on_click=False)
marker = Marker(location=(52.1, 359.9))
m.add_layer(marker)
marker.popup = box_plot
sc = Sidecar(title='Map and bqplot')
with sc:
display(m)
"""
Explanation: Plays well with other widget libraries
End of explanation
"""
from ipywidgets import Widget
Widget.close_all()
"""
Explanation: Clean
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/art_and_science_of_ml/labs/export_data_from_bq_to_gcs.ipynb
|
apache-2.0
|
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%pip install google-cloud-bigquery==1.25.0
"""
Explanation: Exporting data from BigQuery to Google Cloud Storage
In this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.
End of explanation
"""
# Importing necessary tensorflow library and printing the TF version.
import tensorflow as tf
print("Tensorflow version: ",tf.__version__)
import os
from google.cloud import bigquery
"""
Explanation: Please ignore any incompatibility warnings and errors.
Restart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart).
End of explanation
"""
# Change with your own bucket and project below:
BUCKET = "<BUCKET>"
PROJECT = "<PROJECT>"
OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET)
os.environ['BUCKET'] = BUCKET
os.environ['OUTDIR'] = OUTDIR
os.environ['PROJECT'] = PROJECT
"""
Explanation: Change the following cell as necessary:
End of explanation
"""
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset)
print("Dataset created")
except:
print("Dataset already exists")
"""
Explanation: Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell:
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
Explanation: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
Explanation: Make the validation dataset be 1/10 the size of the training dataset.
End of explanation
"""
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
"""
Explanation: Export the tables as CSV files
End of explanation
"""
|
arcyfelix/Courses
|
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
|
apache-2.0
|
import numpy as np
#Creating sample array
arr = np.arange(0, 11)
#Show
arr
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
<center>Copyright Pierian Data 2017</center>
<center>For more information, visit us at www.pieriandata.com</center>
NumPy Indexing and Selection
In this lecture we will discuss how to select elements or groups of elements from an array.
End of explanation
"""
#Get a value at an index
arr[8]
#Get values in a range
arr[1:5]
#Get values in a range
arr[0:5]
"""
Explanation: Bracket Indexing and Selection
The simplest way to pick one or some elements of an array looks very similar to python lists:
End of explanation
"""
#Setting a value with index range (Broadcasting)
arr[0:5] = 100
#Show
arr
# Reset array, we'll see why I had to reset in a moment
arr = np.arange(0, 11)
#Show
arr
#Important notes on Slices
slice_of_arr = arr[0:6]
#Show slice
slice_of_arr
#Change Slice
slice_of_arr[:] = 99
#Show Slice again
slice_of_arr
"""
Explanation: Broadcasting
Numpy arrays differ from a normal Python list because of their ability to broadcast:
End of explanation
"""
arr
"""
Explanation: Now note the changes also occur in our original array!
End of explanation
"""
#To get a copy, need to be explicit
arr_copy = arr.copy()
arr_copy
"""
Explanation: Data is not copied, it's a view of the original array! This avoids memory problems!
End of explanation
"""
arr_2d = np.array(([5, 10, 15], [20, 25, 30], [35, 40, 45]))
#Show
arr_2d
#Indexing row
arr_2d[1]
# Format is arr_2d[row][col] or arr_2d[row,col]
# Getting individual element value
arr_2d[1][0]
# Getting individual element value
arr_2d[1,0]
# 2D array slicing
#Shape (2,2) from top right corner
arr_2d[:2,1:]
#Shape bottom row
arr_2d[2]
#Shape bottom row
arr_2d[2,:]
"""
Explanation: Indexing a 2D array (matrices)
The general format is arr_2d[row][col] or arr_2d[row,col]. I recommend usually using the comma notation for clarity.
End of explanation
"""
arr = np.arange(1, 11)
arr
arr > 4
bool_arr = arr > 4
bool_arr
arr[bool_arr]
arr[arr > 2]
x = 2
arr[arr > x]
"""
Explanation: More Indexing Help
Indexing a 2d matrix can be a bit confusing at first, especially when you start to add in step size. Try a Google image search for "NumPy indexing" to find useful images, like this one:
<img src= 'http://memory.osu.edu/classes/python/_images/numpy_indexing.png' width=500/>
Conditional Selection
This is a very fundamental concept that will directly translate to pandas later on, make sure you understand this part!
Let's briefly go over how to use brackets for selection based off of comparison operators.
End of explanation
"""
|
NYUDataBootcamp/Projects
|
MBA_S16/Ahmad-Shah-NBA Contract Analysis.ipynb
|
mit
|
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
"""
Explanation: NBA Contract Year Performance Analysis
May 2016
Written by Amar Shah (ads691@stern.nyu.edu) and Yasser Ahmad (ya715@stern.nyu.edu ) at NYU Stern with help from Professor David Backus
Background
Amar Shah is a recent graduate of the NYU Stern School of Business, where he specialized in strategy and corporate finance. Before attending Stern, Amar worked at Groupon in their Financial Planning and Analysis group in Chicago. Upon graduating from Stern, Amar will be joining Amazon as a Senior Financial Analyst in their International Finance Retail organization. Amazon is a strong believer in the power of data, so the ability to break down, analyze, and present data will be very important, and Python will come in handy.
Yasser Ahmad is a recent graduate of the NYU Stern School of Business, where he specialized in strategy, finance, and management. Mr. Ahmad possesses strong analytical skills as well as strong interpersonal skills, the quintessential combination of EQ + IQ. Prior to NYU Stern, Mr. Ahmad worked as a management consultant, advising clients in over 30 countries on their most pressing business problems. Mr. Ahmad will be returning to management consulting this summer.
Abstract
The NBA contract year performance phenomenon is one that is talked about more and more amongst NBA general managers, analysts, and fans. Essentially they are speaking about the incentives a player has to statistically improve their level of play prior to receiving a contract extension and then the moral hazard that exists thereafter. A player may exert extra effort to show that they are an elite player or at least better than what their historical averages present them to be capable of. This will get the attention of NBA general managers, and the player hopes that the associated payday will be based primarily on their most recent performance. Once the contract has been signed, the player is essentially guaranteed the agreed upon yearly salary for the agreed upon time frame. The theory, then, is that because the payout is locked in, the player has no incentive to play any harder, and the player's statistics across the board may drop as a result. We wanted to look into this to see if it is a routine occurrence and, if so, in what areas it stands out, so that the party giving the contract can make a well-informed decision.
Methodology
Step 1:
We first isolated 2013 to identify which NBA players were in a contract year that year. The following website had this data:
http://www.spotrac.com/nba/free-agents/2013/
This data was not readily available in a downloadable format and therefore we had to use import.io to web scrape the data into a .xlsx file. This file includes player name, position, age, type of free agent, previous and new team, contract length, contract value, and average yearly contract value. This will allow us to analyze the effects through many different slices of the data.
Step 2:
Next we had to merge the data from above into one that contains statistics for these players for three years prior and three years after the contract was enacted. For this we used the .csv files found on http://www.basketball-reference.com/leagues/ to compile this data into a .xlsx file for the following years: 10-11, 11-12, 12-13, 13-14, 14-15, 15-16. This meant using the following links:
http://www.basketball-reference.com/leagues/NBA_2016_per_game.html,
http://www.basketball-reference.com/leagues/NBA_2015_per_game.html,
http://www.basketball-reference.com/leagues/NBA_2014_per_game.html,
http://www.basketball-reference.com/leagues/NBA_2013_per_game.html,
http://www.basketball-reference.com/leagues/NBA_2012_per_game.html,
http://www.basketball-reference.com/leagues/NBA_2011_per_game.html
Step 3:
We then stored these two .xlsx files on a github depository (addresses shown in code below) and we will do a merge based off the player values to join both sets of data.
We now have all the data needed to start analyzing and producing graphs for our project.
While we understand this limits the automated functionality of this code, we found it difficult to come across readily downloadable files to pull all this for us.
Step 4:
Lastly we will try to isolate this behavior across several different metrics to showcase any trends we see that lead us to believe the contract year phenomenon is present. The metrics include points per game, usage rate (the percentage of plays the player is actively involved in), and PER (player efficiency rating, a weighted measure across multiple categories that gives a well-rounded statistic for a player). We will also show a histogram of all players across those three metrics to see if there is a lack of talent that results in the overpaying of some athletes (a supply issue that forces up the price).
Import Packages
End of explanation
"""
url1 = 'https://github.com/amars16/NBA_Contract/blob/master/Contract_Data.xlsx?raw=true'
url2= 'https://github.com/amars16/NBA_Contract/blob/master/Player_Stats.xlsx?raw=true'
c2013 = pd.read_excel(url1,'2013')
p10_11 = pd.read_excel(url2,'10-11')
p11_12 = pd.read_excel(url2,'11-12')
p12_13 = pd.read_excel(url2,'12-13')
p13_14 = pd.read_excel(url2,'13-14')
p14_15 = pd.read_excel(url2,'14-15')
p15_16 = pd.read_excel(url2,'15-16')
c2013_a = pd.merge(c2013, p10_11, how='left', on='PLAYER')
c2013_b = pd.merge(c2013, p11_12, how='left', on='PLAYER')
c2013_c = pd.merge(c2013, p12_13, how='left', on='PLAYER')
c2013_d = pd.merge(c2013, p13_14, how='left', on='PLAYER')
c2013_e = pd.merge(c2013, p14_15, how='left', on='PLAYER')
c2013_f = pd.merge(c2013, p15_16, how='left', on='PLAYER')
"""
Explanation: Import and Merge Data
End of explanation
"""
c_PTS = pd.concat([c2013_a['PTS'].rename('10-11'),c2013_b['PTS'].rename('11-12'),c2013_c['PTS'].rename('12-13'),c2013_d['PTS'].rename('13-14'),c2013_e['PTS'].rename('14-15'),c2013_f['PTS'].rename('15-16')],axis=1).T
c_USG = pd.concat([c2013_a['USG%'].rename('10-11'),c2013_b['USG%'].rename('11-12'),c2013_c['USG%'].rename('12-13'),c2013_d['USG%'].rename('13-14'),c2013_e['USG%'].rename('14-15'),c2013_f['USG%'].rename('15-16')],axis=1).T
c_PER = pd.concat([c2013_a['PER'].rename('10-11'),c2013_b['PER'].rename('11-12'),c2013_c['PER'].rename('12-13'),c2013_d['PER'].rename('13-14'),c2013_e['PER'].rename('14-15'),c2013_f['PER'].rename('15-16')],axis=1).T
c_PTS_C = pd.concat([c2013_a[(c2013_a.POS=='C')]['PTS'].rename('10-11'),c2013_b[(c2013_b.POS=='C')]['PTS'].rename('11-12'),c2013_c[(c2013_c.POS=='C')]['PTS'].rename('12-13'),c2013_d[(c2013_d.POS=='C')]['PTS'].rename('13-14'),c2013_e[(c2013_e.POS=='C')]['PTS'].rename('14-15'),c2013_f[(c2013_f.POS=='C')]['PTS'].rename('15-16')],axis=1).T
c_PTS_F = pd.concat([pd.concat([c2013_a[(c2013_a.POS==('SF'))],c2013_a[(c2013_a.POS==('PF'))]])['PTS'].rename('10-11'),pd.concat([c2013_b[(c2013_b.POS==('SF'))],c2013_b[(c2013_b.POS==('PF'))]])['PTS'].rename('11-12'),pd.concat([c2013_c[(c2013_c.POS==('SF'))],c2013_c[(c2013_c.POS==('PF'))]])['PTS'].rename('12-13'),pd.concat([c2013_d[(c2013_d.POS==('SF'))],c2013_d[(c2013_d.POS==('PF'))]])['PTS'].rename('13-14'),pd.concat([c2013_e[(c2013_e.POS==('SF'))],c2013_e[(c2013_e.POS==('PF'))]])['PTS'].rename('14-15'),pd.concat([c2013_f[(c2013_f.POS==('SF'))],c2013_f[(c2013_f.POS==('PF'))]])['PTS'].rename('15-16')],axis=1).T
c_PTS_G = pd.concat([pd.concat([c2013_a[(c2013_a.POS==('SG'))],c2013_a[(c2013_a.POS==('PG'))]])['PTS'].rename('10-11'),pd.concat([c2013_b[(c2013_b.POS==('SG'))],c2013_b[(c2013_b.POS==('PG'))]])['PTS'].rename('11-12'),pd.concat([c2013_c[(c2013_c.POS==('SG'))],c2013_c[(c2013_c.POS==('PG'))]])['PTS'].rename('12-13'),pd.concat([c2013_d[(c2013_d.POS==('SG'))],c2013_d[(c2013_d.POS==('PG'))]])['PTS'].rename('13-14'),pd.concat([c2013_e[(c2013_e.POS==('SG'))],c2013_e[(c2013_e.POS==('PG'))]])['PTS'].rename('14-15'),pd.concat([c2013_f[(c2013_f.POS==('SG'))],c2013_f[(c2013_f.POS==('PG'))]])['PTS'].rename('15-16')],axis=1).T
c_USG_C = pd.concat([c2013_a[(c2013_a.POS=='C')]['USG%'].rename('10-11'),c2013_b[(c2013_b.POS=='C')]['USG%'].rename('11-12'),c2013_c[(c2013_c.POS=='C')]['USG%'].rename('12-13'),c2013_d[(c2013_d.POS=='C')]['USG%'].rename('13-14'),c2013_e[(c2013_e.POS=='C')]['USG%'].rename('14-15'),c2013_f[(c2013_f.POS=='C')]['USG%'].rename('15-16')],axis=1).T
c_USG_F = pd.concat([pd.concat([c2013_a[(c2013_a.POS==('SF'))],c2013_a[(c2013_a.POS==('PF'))]])['USG%'].rename('10-11'),pd.concat([c2013_b[(c2013_b.POS==('SF'))],c2013_b[(c2013_b.POS==('PF'))]])['USG%'].rename('11-12'),pd.concat([c2013_c[(c2013_c.POS==('SF'))],c2013_c[(c2013_c.POS==('PF'))]])['USG%'].rename('12-13'),pd.concat([c2013_d[(c2013_d.POS==('SF'))],c2013_d[(c2013_d.POS==('PF'))]])['USG%'].rename('13-14'),pd.concat([c2013_e[(c2013_e.POS==('SF'))],c2013_e[(c2013_e.POS==('PF'))]])['USG%'].rename('14-15'),pd.concat([c2013_f[(c2013_f.POS==('SF'))],c2013_f[(c2013_f.POS==('PF'))]])['USG%'].rename('15-16')],axis=1).T
c_USG_G = pd.concat([pd.concat([c2013_a[(c2013_a.POS==('SG'))],c2013_a[(c2013_a.POS==('PG'))]])['USG%'].rename('10-11'),pd.concat([c2013_b[(c2013_b.POS==('SG'))],c2013_b[(c2013_b.POS==('PG'))]])['USG%'].rename('11-12'),pd.concat([c2013_c[(c2013_c.POS==('SG'))],c2013_c[(c2013_c.POS==('PG'))]])['USG%'].rename('12-13'),pd.concat([c2013_d[(c2013_d.POS==('SG'))],c2013_d[(c2013_d.POS==('PG'))]])['USG%'].rename('13-14'),pd.concat([c2013_e[(c2013_e.POS==('SG'))],c2013_e[(c2013_e.POS==('PG'))]])['USG%'].rename('14-15'),pd.concat([c2013_f[(c2013_f.POS==('SG'))],c2013_f[(c2013_f.POS==('PG'))]])['USG%'].rename('15-16')],axis=1).T
c_PER_C = pd.concat([c2013_a[(c2013_a.POS=='C')]['PER'].rename('10-11'),c2013_b[(c2013_b.POS=='C')]['PER'].rename('11-12'),c2013_c[(c2013_c.POS=='C')]['PER'].rename('12-13'),c2013_d[(c2013_d.POS=='C')]['PER'].rename('13-14'),c2013_e[(c2013_e.POS=='C')]['PER'].rename('14-15'),c2013_f[(c2013_f.POS=='C')]['PER'].rename('15-16')],axis=1).T
c_PER_F = pd.concat([pd.concat([c2013_a[(c2013_a.POS==('SF'))],c2013_a[(c2013_a.POS==('PF'))]])['PER'].rename('10-11'),pd.concat([c2013_b[(c2013_b.POS==('SF'))],c2013_b[(c2013_b.POS==('PF'))]])['PER'].rename('11-12'),pd.concat([c2013_c[(c2013_c.POS==('SF'))],c2013_c[(c2013_c.POS==('PF'))]])['PER'].rename('12-13'),pd.concat([c2013_d[(c2013_d.POS==('SF'))],c2013_d[(c2013_d.POS==('PF'))]])['PER'].rename('13-14'),pd.concat([c2013_e[(c2013_e.POS==('SF'))],c2013_e[(c2013_e.POS==('PF'))]])['PER'].rename('14-15'),pd.concat([c2013_f[(c2013_f.POS==('SF'))],c2013_f[(c2013_f.POS==('PF'))]])['PER'].rename('15-16')],axis=1).T
c_PER_G = pd.concat([pd.concat([c2013_a[(c2013_a.POS==('SG'))],c2013_a[(c2013_a.POS==('PG'))]])['PER'].rename('10-11'),pd.concat([c2013_b[(c2013_b.POS==('SG'))],c2013_b[(c2013_b.POS==('PG'))]])['PER'].rename('11-12'),pd.concat([c2013_c[(c2013_c.POS==('SG'))],c2013_c[(c2013_c.POS==('PG'))]])['PER'].rename('12-13'),pd.concat([c2013_d[(c2013_d.POS==('SG'))],c2013_d[(c2013_d.POS==('PG'))]])['PER'].rename('13-14'),pd.concat([c2013_e[(c2013_e.POS==('SG'))],c2013_e[(c2013_e.POS==('PG'))]])['PER'].rename('14-15'),pd.concat([c2013_f[(c2013_f.POS==('SG'))],c2013_f[(c2013_f.POS==('PG'))]])['PER'].rename('15-16')],axis=1).T
"""
Explanation: Reshape Tables and Breakout by Position
End of explanation
"""
p12_13[p12_13.Pos ==('C')]['PER'].plot(kind='hist',color='blue', title='2013 PER')
pd.concat([p12_13[(p12_13.Pos==('SF'))],p12_13[(p12_13.Pos==('PF'))]])['PER'].plot(kind='hist',color='green')
pd.concat([p12_13[(p12_13.Pos==('PG'))],p12_13[(p12_13.Pos==('SG'))]])['PER'].plot(kind='hist',color='red')
"""
Explanation: Dispersion of Player Efficieny Rating by Position
The chart below attempts to show the relative supply of talented players by position, with red being guards, blue being centers, and green being forwards. What the chart shows us is that great guards are relatively harder to come by than centers and forwards (assuming anything above 15 is considered great). What this means is that a general manager may choose to pay a guard more per unit of PER than the other posiitons simply based off the lack of relative supply of talented players at this position.
End of explanation
"""
plt.style.use('fivethirtyeight')
c_PTS_C.plot(legend=False, subplots = False, title = "2013 PPG by Centers")
plt.axvline(2.5,ls='--')
plt.style.use('fivethirtyeight')
c_PTS_F.plot(legend=False, subplots = False, title = "2013 PPG by Forwards")
plt.axvline(2.5,ls='--')
plt.style.use('fivethirtyeight')
c_PTS_G.plot(legend=False, subplots = False, title = "2013 PPG by Guards")
plt.axvline(2.5,ls='--')
"""
Explanation: Points per Game by Position
The charts below provide the following insights. In general, a player's points-per-game production decreases sharply in the years after signing a new contract, and this drop is most pronounced at the center position. The only position where players were able to significantly improve their point production was guard. We also notice that, among guards, point production improves materially in the contract year.
End of explanation
"""
plt.style.use('fivethirtyeight')
c_USG_C.plot(legend=False, subplots = False, title = "2013 Usage Rate for Centers")
plt.axvline(2.5,ls='--')
plt.style.use('fivethirtyeight')
c_USG_F.plot(legend=False, subplots = False, title = "2013 Usage Rate for Forwards")
plt.axvline(2.5,ls='--')
plt.style.use('fivethirtyeight')
c_USG_G.plot(legend=False, subplots = False, title = "2013 Usage Rate for Guards")
plt.axvline(2.5,ls='--')
"""
Explanation: Usage Rate by Position
The usage rate charts below provide the following insights. For players at the center position, usage rate increases in the year after they sign a new contract. This makes sense: teams sign players that they expect to be an integral part of the team, so there is pressure on the coach to give newly signed players playing time, justifying the signing to the fan base.
What is surprising is that among players at the forward position, usage rate generally declines a little in the year after they sign a new contract. The point guard position is a mixed bag; most point guards see increased usage rates, while a few see a dip.
End of explanation
"""
plt.style.use('fivethirtyeight')
c_PER_C.plot(legend=False, subplots = False, title = "2013 PER for Centers")
plt.axvline(2.5,ls='--')
plt.style.use('fivethirtyeight')
c_PER_F.plot(legend=False, subplots = False, title = "2013 PER for Forwards")
plt.axvline(2.5,ls='--')
plt.style.use('fivethirtyeight')
c_PER_G.plot(legend=False, subplots = False, title = "2013 PER for Guards")
plt.axvline(2.5,ls='--')
"""
Explanation: Player Efficiency Rating by Position
The most important statistical metric is the Player Efficiency Rating (PER). PER incorporates multiple statistical categories: points, assists, rebounds, etc. This provides coaches and general managers with a more accurate picture of a player's contribution.
The graphs below provide the following insights. For centers and forwards, PER generally declines in the contract year. This is a surprising finding, as one would expect players to increase their value to a team in the contract year. However, players may be gaming the system by focusing on a single statistic such as rebounds or blocks; while those numbers may look appealing, they may not provide the greatest value to a team. In the years following the new contract, a center's and a forward's production declines as measured by PER.
Again, guards are a mixed bag. Most guards see an increase in PER in the contract year. In the years following the contract year, some guards see a further increase in PER while others see a decrease. After the three-year point, most guards see a decline in PER.
End of explanation
"""
|
r1rajiv92/data-512-a1
|
hcds-a1-data-curation.ipynb
|
mit
|
import requests
import pandas
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/r1rajiv92', 'From' : 'rajiv92@uw.edu'}
yearMonthCombinations = { '2015' : [ 7, 8, 9, 10, 11, 12],
'2016' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2017' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9] }
for accessType in [ 'desktop', 'mobile-web', 'mobile-app' ]:
for year in range(2015, 2018):
for month in yearMonthCombinations[str(year)]:
if int(month / 10) == 0:
startMonth = ''.join( [ '0', str(month) ] )
startParam = ''.join([ str(year), startMonth, '0100' ])
if int((month+1) / 10) == 0:
endParam = ''.join([ str(year), '0', str(month+1), '0100' ])
else:
endParam = ''.join([ str(year), str(month+1), '0100' ])
else:
startMonth = str(month)
startParam = ''.join([ str(year), startMonth, '0100' ])
endParam = ''.join([ str(year), str(month+1), '0100' ])
if month + 1 == 13:
endParam = ''.join( [str(year+1), '01', '0100'])
params = {'project' : 'en.wikipedia.org',
'access' : accessType,
'agent' : 'user',
'granularity' : 'monthly',
'start' : startParam,
'end' : endParam
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
for result in response['items']:
result['year'] = str(year)
result['month'] = startMonth
if 'PageViewDataFrame' in locals():
PageViewDataFrame = pandas.concat([ PageViewDataFrame, pandas.DataFrame.from_dict(response['items']) ])
else:
PageViewDataFrame = pandas.DataFrame.from_dict(response['items'])
"""
Explanation: English Wikipedia page views, 2008 - 2017
For this assignment, your job is to analyze traffic on English Wikipedia over time, and then document your process and the resulting dataset and visualization according to best practices for open research that were outlined for you in class.
PageView API
Here, I query the Pageviews API once per month and access type, and convert the results into a dataframe of the required format.
We don't have mobile-site data between 2008 and 2014 for pageCounts.
End of explanation
"""
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/r1rajiv92', 'From' : 'rajiv92@uw.edu'}
yearMonthCombinations = { '2008' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2009' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2010' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2011' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2012' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2013' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2014' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2015' : [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'2016' : [ 1, 2, 3, 4, 5, 6, 7]
}
for accessType in [ 'desktop-site', 'mobile-site' ]:
for year in range(2008, 2017):
for month in yearMonthCombinations[str(year)]:
if int(month / 10) == 0:
startMonth = ''.join( [ '0', str(month) ] )
startParam = ''.join([ str(year), startMonth, '0100' ])
if int((month+1) / 10) == 0:
endParam = ''.join([ str(year), '0', str(month+1), '0100' ])
else:
endParam = ''.join([ str(year), str(month+1), '0100' ])
else:
startMonth = str(month)
startParam = ''.join([ str(year), startMonth, '0100' ])
endParam = ''.join([ str(year), str(month+1), '0100' ])
if month + 1 == 13:
endParam = ''.join( [str(year+1), '01', '0100'])
params = {'project' : 'en.wikipedia.org',
'access-site' : accessType,
'granularity' : 'monthly',
'start' : startParam,
'end' : endParam
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
if 'items' in response.keys():
for result in response['items']:
result['year'] = str(year)
result['month'] = startMonth
else:
print('Page Count Data Missing for', accessType, 'on', month, year )
continue
if 'PageCountDataFrame' in locals():
PageCountDataFrame = pandas.concat([ PageCountDataFrame, pandas.DataFrame.from_dict(response['items']) ])
else:
PageCountDataFrame = pandas.DataFrame.from_dict(response['items'])
"""
Explanation: PageCount API
Here, I have queried the APIs multiple times and also converted the results into a dataframe of the required format.
End of explanation
"""
PageViewDataFrame.loc[PageViewDataFrame.access != 'desktop', 'access'] = 'mobile'
PageCountDataFrame['access'] = "desktop"
PageCountDataFrame.loc[PageCountDataFrame['access-site'] != 'desktop-site', 'access'] = 'mobile'
pageCounts = pandas.DataFrame( PageCountDataFrame.groupby(['year', 'month', 'access'])['count'].sum().reset_index() )
pageViews = pandas.DataFrame( PageViewDataFrame.groupby(['year', 'month', 'access'])['views'].sum().reset_index() )
"""
Explanation: Summing up across mobile-app and mobile-site for PageViews into a single access type called 'mobile'. Also grouping by year, month and access type to get the results.
End of explanation
"""
finalCSVDataFrame = pandas.DataFrame( columns = ['year', 'month', 'pagecount_all_views', 'pagecount_desktop_views', 'pagecount_mobile_views',
'pageview_all_views', 'pageview_desktop_views', 'pageview_mobile_views' ] )
for year in range( 2008, 2018):
for month in range(1,13):
if year == 2017 and month > 9:
continue
else:
            if month < 10:
                monthString = ''.join(['0', str(month)])
            else:
                monthString = str(month)
pagecount_desktop_views = pageCounts[(pageCounts['year'] == str(year)) & (pageCounts['month'] == monthString)
& (pageCounts['access'] == 'desktop' )]
if len(pagecount_desktop_views) != 0:
pagecount_desktop_views = int(pagecount_desktop_views['count'])
else:
pagecount_desktop_views = 0
pagecount_mobile_views = pageCounts[(pageCounts['year'] == str(year)) & (pageCounts['month'] == monthString)
& (pageCounts['access'] == 'mobile' )]
if len(pagecount_mobile_views) != 0:
pagecount_mobile_views = int(pagecount_mobile_views['count'])
else:
pagecount_mobile_views = 0
pagecount_all_views = pagecount_desktop_views + pagecount_mobile_views
pageview_desktop_views = pageViews[(pageViews['year'] == str(year)) & (pageViews['month'] == monthString)
& (pageViews['access'] == 'desktop' )]
if len(pageview_desktop_views) != 0:
pageview_desktop_views = int(pageview_desktop_views['views'])
else:
pageview_desktop_views = 0
pageview_mobile_views = pageViews[(pageViews['year'] == str(year)) & (pageViews['month'] == monthString)
& (pageViews['access'] == 'mobile' )]
if len(pageview_mobile_views) != 0:
pageview_mobile_views = int(pageview_mobile_views['views'])
else:
pageview_mobile_views = 0
pageview_all_views = pageview_desktop_views + pageview_mobile_views
finalCSVDataFrame = finalCSVDataFrame.append( {'year': int(year),
'month': int(month),
'pagecount_all_views': int(pagecount_all_views),
'pagecount_desktop_views': int(pagecount_desktop_views),
'pagecount_mobile_views': int(pagecount_mobile_views),
'pageview_all_views': int(pageview_all_views),
'pageview_desktop_views': int(pageview_desktop_views),
'pageview_mobile_views': int(pageview_mobile_views)
}, ignore_index=True )
finalCSVDataFrame
finalCSVDataFrame.to_csv('finalCSV.csv')
"""
Explanation: Generating the final CSV data frame used for the visualization.
End of explanation
"""
import matplotlib.pyplot as plt
dates = pandas.date_range('2008-01', '2017-10',freq='M')
plt.plot(dates, finalCSVDataFrame["pagecount_all_views"]/1000000, label = " All PageCounts",
color = "black", linewidth = 0.5)
plt.plot(dates, finalCSVDataFrame["pagecount_desktop_views"]/1000000, label = "Desktop PageCounts",
color = "blue", linewidth = 0.5)
plt.plot(dates, finalCSVDataFrame["pagecount_mobile_views"]/1000000, label = "Mobile PageCounts",
color = "brown", linewidth = 0.5)
plt.plot(dates, finalCSVDataFrame["pageview_all_views"]/1000000, label = " All PageViews",
color = "black", linewidth = 2)
plt.plot(dates, finalCSVDataFrame["pageview_desktop_views"]/1000000, label = "Desktop PageViews",
color = "blue", linewidth = 2)
plt.plot(dates, finalCSVDataFrame["pageview_mobile_views"]/1000000, label = "Mobile PageViews",
color = "brown", linewidth = 2)
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 15
fig_size[1] = 10
plt.xlabel("Dates")
plt.ylabel("PageViews - millions")
plt.title("Pageviews on English Wikipedia from July 2015 to Sept 2017")
plt.legend(loc=2)
plt.show()
"""
Explanation: Plotting the time series Visualization
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/td1a/td1a_cenonce_session2.ipynb
|
mit
|
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.1 - Variables, loops, tests
Repeating code, and executing one part of it rather than another.
End of explanation
"""
i = 3 # entier = type numérique (type int)
r = 3.3 # réel = type numérique (type float)
s = "exemple" # chaîne de caractères = type str (exemple n'est pas une variable)
s = 'exemple' # " et ' peuvent être utilisées pour définir une chaîne de caractères
sl = """ exemple sur
plusieurs lignes""" # on peut définir une chaîne sur plusieurs lignes avec """ ou '''
n = None # None signifie que la variable existe mais qu'elle ne contient rien
# elle est souvent utilisée pour signifier qu'il n'y a pas de résultat
# car... une erreur s'est produite, il n'y a pas de résultat
# (racine carrée de -1 par exemple)
i,r,s,n, sl # avec les notebooks, le dernier print n'est pas nécessaire, il suffit d'écrire
# i,r,s,n
v = "anything" # affectation
print ( v ) # affichage
v1, v2 = 5, 6 # double affectation
v1,v2
"""
Explanation: Part 1: variables, int, float, str, list
An algorithm manipulates data. That data is not known at the time the algorithm is written. Variables are used to name the data so that the algorithm can be written. Most of the time we proceed in the following order:
Write the algorithm.
Assign values to the variables.
Run the algorithm.
A few examples to try with variables:
End of explanation
"""
x = 3
print(x)
y = 5 * x
y
x,y = 4,5
s = "addition"
"{3} de {0} et {1} donne : {0} + {1} = {2}".format (x,y,x+y,s)
"""
Explanation: By default, the notebook displays the result of the last line of the cell. To display several results, use the print function.
End of explanation
"""
for prenom in [ "xavier", "sloane"] :
print ("Monsieur {0}, vous avez gagné...".format(prenom))
"""
Explanation: The last expression assembles different values into a single string. This is very handy when the same assembly has to be repeated many times. The mechanism is similar to a form letter: a text with holes that is filled in with different values each time.
End of explanation
"""
print ( type ( v ) ) # affiche le type d'une variable
print ( isinstance (v, str) ) # pour déterminer si v est de type str
"""
Explanation: The type of a variable:
End of explanation
"""
c = (4,5) # couple de valeurs (ou tuple)
l = [ 4, 5, 6.5] # listes de valeurs ou tableaux
x = l [0] # obtention du premier élément de la liste l
y = c [1] # obtention du second élément
le = [ ] # un tableau vide
c,l,x,y,le
l = [ 4, 5 ]
l += [ 6 ] # ajouter un élément
l.append ( 7 ) # ajouter un élément
l.insert (1, 8) # insérer un élément en seconde position
print(l)
del l [0] # supprimer un élément
del l [0:2] # supprimer les deux premiers éléments
l
"""
Explanation: Arrays or lists (list):
End of explanation
"""
l = [ 4, 5, 6 ]
print ( len(l) ) # affiche la longueur du tableau
print ( max(l) ) # affiche le plus grand élément
s = l * 3 # création de la liste [ 4, 5, 6, 4, 5, 6, 4, 5, 6 ]
t = s [ 4:7 ] # extraction de la sous-liste de la position 4 à 7 exclu
s [4:7] = [ 4 ] # remplacement de cette liste par [ 4 ]
s
"""
Explanation: Length of a list and other operations:
End of explanation
"""
l1 = [ 0, 1 ,2 ]
l2 = l1
l2[0] = -1
l1,l2
"""
Explanation: Mutable and immutable types (see also Qu'est-ce qu'un type immuable ou immutable ?): a list is a mutable type. This means that, by default, the statement list1=list2 does not copy the list; it gives it a second name that can be used at the same time as the first.
End of explanation
"""
l1 = [ 0, 1 ,2 ]
l2 = list(l1)
l2[0] = -1
l1,l2
"""
Explanation: Both lists appear to have been modified. In fact, there is only one list. To create a copy, you must explicitly ask for one.
End of explanation
"""
v = 2
if v == 2 :
print ("v est égal à 2")
else :
print ("v n'est pas égal à 2")
"""
Explanation: This is a very important point of the language that must not be forgotten. The same convention is found in most interpreted languages, because making a copy slows down execution.
Part 2: Tests
Tests make it possible to make a choice: depending on the value of a condition, either one sequence of instructions or another is executed.
End of explanation
"""
v = 2
if v == 2 :
print ("v est égal à 2")
"""
Explanation: The else clause is not mandatory:
End of explanation
"""
v = 2
if v == 2 :
print ("v est égal à 2")
elif v > 2 :
print ("v est supérieur à 2")
else :
print ("v est inférieur à 2")
"""
Explanation: Several chained tests:
End of explanation
"""
for i in range (0, 10) : # on répète 10 fois
print ("dedans",i) # l'affichage de i
# ici, on est dans la boucle
# ici, on n'est plus dans la boucle
"dehors",i # on ne passe par 10
"""
Explanation: Part 3: loops
Loops repeat the same sequence of instructions a finite or an infinite number of times.
A few examples to try with loops:
End of explanation
"""
i = 0
while i < 10 :
print (i)
i += 1
"""
Explanation: The while loop:
End of explanation
"""
for i in range (0, 10) :
if i == 2 :
continue # on passe directement au suivant
print (i)
if i > 5 :
break # interruption définitive
"""
Explanation: Interrupting a loop:
End of explanation
"""
l = [ 5, 3, 5, 7 ]
for i in range (0, len(l)) :
print ("élément ",i, "=", l [ i ] )
l = [ 5, 3, 5, 7 ]
for v in l :
print ("élément ", v )
l = [ 5, 3, 5, 7 ]
for i,v in enumerate(l) :
print ("élément ",i, "=", v )
"""
Explanation: Iterating over a list: observe the differences between the three notations
End of explanation
"""
l = [ 4, 3, 0, 2, 1 ]
i = 0
while l[i] != 0 :
i = l[i]
print (i) # que vaut l[i] à la fin ?
"""
Explanation: What does the following program do?
End of explanation
"""
l = [ ]
for i in range (10) :
l.append( i*2+1)
l
"""
Explanation: You can play with cards to untangle the cryptic side of this program: La programmation avec les cartes.
Part 4: compact lists (list comprehensions), sets
Rather than writing:
End of explanation
"""
l = [ i*2+1 for i in range(10) ]
l
"""
Explanation: One can write instead:
End of explanation
"""
l = [ i*2 for i in range(0,10) ]
l # qu'affiche l ?
l = [ i*2 for i in range(0,10) if i%2==0 ]
l # qu'affiche l ?
"""
Explanation: A few examples to try:
End of explanation
"""
l = [ "a","b","c", "a", 9,4,5,6,7,4,5,9.0]
s = set(l)
s
"""
Explanation: Sets (set) are like lists in which every element is unique. If an int and a float are equal, only the first one is kept.
End of explanation
"""
l = [ 3, 6, 2 , 7, 9 ]
x = 7
# ......
print ( position )
"""
Explanation: Part 5: non-dichotomic search (exercise)
We want to write a few instructions that find the position of the number x = 7 in the list l. Complete the following program using a loop and a test.
End of explanation
"""
l = sorted( [ 4, 7, -1,3, 9, 5, -5 ] )
# dichotomic (binary) search
# the position returned corresponds to the element's position in the sorted array
"""
Explanation: Part 6: Dichotomic (binary) search
Dichotomic search looks for an element e in a sorted array l. We look for its position:
Start by comparing e with the element located at the middle of the array, at index m; if they are equal, we have found it,
if e is smaller, we know it lies between indices 0 and m-1,
if e is larger, we know it lies between index m+1 and the end of the array.
With a single comparison we have already eliminated half of the array, in which we know e cannot be. The same reasoning is applied to the other half to keep reducing the part of the array that still has to be searched.
End of explanation
"""
|
steinam/teacher
|
jup_notebooks/data-science-ipython-notebooks-master/aws/aws.ipynb
|
mit
|
!ssh -i key.pem ubuntu@ipaddress
"""
Explanation: This notebook was prepared by Donne Martin. Source and license info is on GitHub.
Amazon Web Services (AWS)
SSH to EC2
Boto
S3cmd
s3-parallel-put
S3DistCp
Redshift
Kinesis
Lambda
<h2 id="ssh-to-ec2">SSH to EC2</h2>
Connect to an Ubuntu EC2 instance through SSH with the given key:
End of explanation
"""
!ssh -i key.pem ec2-user@ipaddress
"""
Explanation: Connect to an Amazon Linux EC2 instance through SSH with the given key:
End of explanation
"""
!pip install Boto
"""
Explanation: Boto
Boto is the official AWS SDK for Python.
Install Boto:
End of explanation
"""
[Credentials]
aws_access_key_id = YOURACCESSKEY
aws_secret_access_key = YOURSECRETKEY
"""
Explanation: Configure boto by creating a ~/.boto file with the following:
End of explanation
"""
import boto
s3 = boto.connect_s3()
"""
Explanation: Work with S3:
End of explanation
"""
import boto.ec2
ec2 = boto.ec2.connect_to_region('us-east-1')
"""
Explanation: Work with EC2:
End of explanation
"""
import boto
import time
s3 = boto.connect_s3()
# Create a new bucket. Buckets must have a globally unique name (not just
# unique to your account).
bucket = s3.create_bucket('boto-demo-%s' % int(time.time()))
# Create a new key/value pair.
key = bucket.new_key('mykey')
key.set_contents_from_string("Hello World!")
# Sleep to ensure the data is eventually there.
# This is often referred to as "S3 eventual consistency".
time.sleep(2)
# Retrieve the contents of ``mykey``.
print(key.get_contents_as_string())
# Delete the key.
key.delete()
# Delete the bucket.
bucket.delete()
"""
Explanation: Create a bucket and put an object in that bucket:
End of explanation
"""
!sudo apt-get install s3cmd
"""
Explanation: Each service supports a different set of commands. Refer to the following for more details:
* AWS Docs
* Boto Docs
<h2 id="s3cmd">S3cmd</h2>
Before I discovered S3cmd, I had been using the S3 console to do basic operations and boto to do more of the heavy lifting. However, sometimes I just want to hack away at a command line to do my work.
I've found S3cmd to be a great command line tool for interacting with S3 on AWS. S3cmd is written in Python, is open source, and is free even for commercial use. It offers more advanced features than those found in the AWS CLI.
Install s3cmd:
End of explanation
"""
!s3cmd --configure
"""
Explanation: Running the following command will prompt you to enter your AWS access and AWS secret keys. To follow security best practices, make sure you are using an IAM account as opposed to using the root account.
I also suggest enabling GPG encryption which will encrypt your data at rest, and enabling HTTPS to encrypt your data in transit. Note this might impact performance.
End of explanation
"""
# List all buckets
!s3cmd ls
# List the contents of the bucket
!s3cmd ls s3://my-bucket-name
# Upload a file into the bucket (private)
!s3cmd put myfile.txt s3://my-bucket-name/myfile.txt
# Upload a file into the bucket (public)
!s3cmd put --acl-public --guess-mime-type myfile.txt s3://my-bucket-name/myfile.txt
# Recursively upload a directory to s3
!s3cmd put --recursive my-local-folder-path/ s3://my-bucket-name/mydir/
# Download a file
!s3cmd get s3://my-bucket-name/myfile.txt myfile.txt
# Recursively download files that start with myfile
!s3cmd --recursive get s3://my-bucket-name/myfile
# Delete a file
!s3cmd del s3://my-bucket-name/myfile.txt
# Delete a bucket
!s3cmd del --recursive s3://my-bucket-name/
# Create a bucket
!s3cmd mb s3://my-bucket-name
# List bucket disk usage (human readable)
!s3cmd du -H s3://my-bucket-name/
# Sync local (source) to s3 bucket (destination)
!s3cmd sync my-local-folder-path/ s3://my-bucket-name/
# Sync s3 bucket (source) to local (destination)
!s3cmd sync s3://my-bucket-name/ my-local-folder-path/
# Do a dry-run (do not perform actual sync, but get information about what would happen)
!s3cmd --dry-run sync s3://my-bucket-name/ my-local-folder-path/
# Apply a standard shell wildcard include to sync s3 bucket (source) to local (destination)
!s3cmd --include '2014-05-01*' sync s3://my-bucket-name/ my-local-folder-path/
"""
Explanation: Frequently used S3cmds:
End of explanation
"""
!sudo apt-get install boto
!sudo apt-get install git
"""
Explanation: <h2 id="s3-parallel-put">s3-parallel-put</h2>
s3-parallel-put is a great tool for uploading multiple files to S3 in parallel.
Install package dependencies:
End of explanation
"""
!git clone https://github.com/twpayne/s3-parallel-put.git
"""
Explanation: Clone the s3-parallel-put repo:
End of explanation
"""
!export AWS_ACCESS_KEY_ID=XXX
!export AWS_SECRET_ACCESS_KEY=XXX
"""
Explanation: Setup AWS keys for s3-parallel-put:
End of explanation
"""
!s3-parallel-put --bucket=bucket --prefix=PREFIX SOURCE
"""
Explanation: Sample usage:
End of explanation
"""
!s3-parallel-put --bucket=bucket --host=s3.amazonaws.com --put=stupid --dry-run --prefix=prefix/ ./
"""
Explanation: Dry run of putting files in the current directory on S3 with the given S3 prefix, do not check first if they exist:
End of explanation
"""
!rvm --default ruby-1.8.7-p374
"""
Explanation: <h2 id="s3distcp">S3DistCp</h2>
S3DistCp is an extension of DistCp that is optimized to work with Amazon S3. S3DistCp is useful for combining smaller files and aggregating them together, taking in a pattern and a target file to combine smaller input files into larger ones. S3DistCp can also be used to transfer large volumes of data from S3 to your Hadoop cluster.
To run S3DistCp with the EMR command line, ensure you are using the proper version of Ruby:
End of explanation
"""
!./elastic-mapreduce --create --instance-group master --instance-count 1 \
--instance-type m1.small --instance-group core --instance-count 4 \
--instance-type m1.small --jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
--args "--src,s3://my-bucket-source/,--groupBy,.*([0-9]{4}-01).*,\
--dest,s3://my-bucket-dest/,--targetSize,1024"
"""
Explanation: The EMR command line below executes the following:
* Create a master node and slave nodes of type m1.small
* Runs S3DistCp on the source bucket location and concatenates files that match the date regular expression, resulting in files that are roughly 1024 MB or 1 GB
* Places the results in the destination bucket
End of explanation
"""
--outputCodec,lzo
"""
Explanation: For further optimization, compression can be helpful to save on AWS storage and bandwidth costs, to speed up the S3 to/from EMR transfer, and to reduce disk I/O. Note that compressed files are not easy to split for Hadoop. For example, Hadoop uses a single mapper per GZIP file, as it does not know about file boundaries.
What type of compression should you use?
Time sensitive job: Snappy or LZO
Large amounts of data: GZIP
General purpose: GZIP, as it’s supported by most platforms
You can specify the compression codec (gzip, lzo, snappy, or none) to use for copied files with S3DistCp with --outputCodec. If no value is specified, files are copied with no compression change. The code below sets the compression to lzo:
End of explanation
"""
copy table_name from 's3://source/part'
credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX'
csv;
"""
Explanation: <h2 id="redshift">Redshift</h2>
Copy values from the given S3 location containing CSV files to a Redshift cluster:
End of explanation
"""
copy table_name from 's3://source/part'
credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX'
csv delimiter '\t';
"""
Explanation: Copy values from the given location containing TSV files to a Redshift cluster:
End of explanation
"""
select * from stl_load_errors;
"""
Explanation: View Redshift errors:
End of explanation
"""
VACUUM FULL;
"""
Explanation: Vacuum Redshift in full:
End of explanation
"""
analyze compression table_name;
"""
Explanation: Analyze the compression of a table:
End of explanation
"""
cancel 18764;
"""
Explanation: Cancel the query with the specified id:
End of explanation
"""
abort;
"""
Explanation: The CANCEL command will not abort a transaction. To abort or roll back a transaction, you must use the ABORT or ROLLBACK command. To cancel a query associated with a transaction, first cancel the query then abort the transaction.
If the query that you canceled is associated with a transaction, use the ABORT or ROLLBACK command to cancel the transaction and discard any changes made to the data:
End of explanation
"""
CREATE TABLE part (
p_partkey integer not null sortkey distkey,
p_name varchar(22) not null,
p_mfgr varchar(6) not null,
p_category varchar(7) not null,
p_brand1 varchar(9) not null,
p_color varchar(11) not null,
p_type varchar(25) not null,
p_size integer not null,
p_container varchar(10) not null
);
CREATE TABLE supplier (
s_suppkey integer not null sortkey,
s_name varchar(25) not null,
s_address varchar(25) not null,
s_city varchar(10) not null,
s_nation varchar(15) not null,
s_region varchar(12) not null,
s_phone varchar(15) not null)
diststyle all;
CREATE TABLE customer (
c_custkey integer not null sortkey,
c_name varchar(25) not null,
c_address varchar(25) not null,
c_city varchar(10) not null,
c_nation varchar(15) not null,
c_region varchar(12) not null,
c_phone varchar(15) not null,
c_mktsegment varchar(10) not null)
diststyle all;
CREATE TABLE dwdate (
d_datekey integer not null sortkey,
d_date varchar(19) not null,
d_dayofweek varchar(10) not null,
d_month varchar(10) not null,
d_year integer not null,
d_yearmonthnum integer not null,
d_yearmonth varchar(8) not null,
d_daynuminweek integer not null,
d_daynuminmonth integer not null,
d_daynuminyear integer not null,
d_monthnuminyear integer not null,
d_weeknuminyear integer not null,
d_sellingseason varchar(13) not null,
d_lastdayinweekfl varchar(1) not null,
d_lastdayinmonthfl varchar(1) not null,
d_holidayfl varchar(1) not null,
d_weekdayfl varchar(1) not null)
diststyle all;
CREATE TABLE lineorder (
lo_orderkey integer not null,
lo_linenumber integer not null,
lo_custkey integer not null,
lo_partkey integer not null distkey,
lo_suppkey integer not null,
lo_orderdate integer not null sortkey,
lo_orderpriority varchar(15) not null,
lo_shippriority varchar(1) not null,
lo_quantity integer not null,
lo_extendedprice integer not null,
lo_ordertotalprice integer not null,
lo_discount integer not null,
lo_revenue integer not null,
lo_supplycost integer not null,
lo_tax integer not null,
lo_commitdate integer not null,
lo_shipmode varchar(10) not null
);
"""
Explanation: Reference table creation and setup:
End of explanation
"""
!aws kinesis create-stream --stream-name Foo --shard-count 1 --profile adminuser
"""
Explanation: | Table name | Sort Key | Distribution Style |
|------------|--------------|--------------------|
| LINEORDER | lo_orderdate | lo_partkey |
| PART | p_partkey | p_partkey |
| CUSTOMER | c_custkey | ALL |
| SUPPLIER | s_suppkey | ALL |
| DWDATE | d_datekey | ALL |
Sort Keys
When you create a table, you can specify one or more columns as the sort key. Amazon Redshift stores your data on disk in sorted order according to the sort key. How your data is sorted has an important effect on disk I/O, columnar compression, and query performance.
Choose sort keys based on these best practices:
If recent data is queried most frequently, specify the timestamp column as the leading column for the sort key.
If you do frequent range filtering or equality filtering on one column, specify that column as the sort key.
If you frequently join a (dimension) table, specify the join column as the sort key.
Distribution Styles
When you create a table, you designate one of three distribution styles: KEY, ALL, or EVEN.
KEY distribution
The rows are distributed according to the values in one column. The leader node will attempt to place matching values on the same node slice. If you distribute a pair of tables on the joining keys, the leader node collocates the rows on the slices according to the values in the joining columns so that matching values from the common columns are physically stored together.
ALL distribution
A copy of the entire table is distributed to every node. Where EVEN distribution or KEY distribution place only a portion of a table's rows on each node, ALL distribution ensures that every row is collocated for every join that the table participates in.
EVEN distribution
The rows are distributed across the slices in a round-robin fashion, regardless of the values in any particular column. EVEN distribution is appropriate when a table does not participate in joins or when there is not a clear choice between KEY distribution and ALL distribution. EVEN distribution is the default distribution style.
<h2 id="kinesis">Kinesis</h2>
Create a stream:
End of explanation
"""
!aws kinesis list-streams --profile adminuser
"""
Explanation: List all streams:
End of explanation
"""
!aws kinesis describe-stream --stream-name Foo --profile adminuser
"""
Explanation: Get info about the stream:
End of explanation
"""
!aws kinesis put-record --stream-name Foo --data "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4=" --partition-key shardId-000000000000 --region us-east-1 --profile adminuser
"""
Explanation: Put a record to the stream:
End of explanation
"""
!SHARD_ITERATOR=$(aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name Foo --query 'ShardIterator' --profile adminuser)
aws kinesis get-records --shard-iterator $SHARD_ITERATOR
"""
Explanation: Get records from a given shard:
End of explanation
"""
!aws kinesis delete-stream --stream-name Foo --profile adminuser
"""
Explanation: Delete a stream:
End of explanation
"""
!aws lambda list-functions \
--region us-east-1 \
--max-items 10
"""
Explanation: <h2 id="lambda">Lambda</h2>
List lambda functions:
End of explanation
"""
!aws lambda upload-function \
--region us-east-1 \
--function-name foo \
--function-zip file-path/foo.zip \
--role IAM-role-ARN \
--mode event \
--handler foo.handler \
--runtime nodejs \
--debug
"""
Explanation: Upload a lambda function:
End of explanation
"""
!aws lambda invoke-async \
--function-name foo \
--region us-east-1 \
--invoke-args foo.txt \
--debug
"""
Explanation: Invoke a lambda function:
End of explanation
"""
!aws lambda update-function-configuration \
--function-name foo \
--region us-east-1 \
--timeout timeout-in-seconds
"""
Explanation: Update a function:
End of explanation
"""
!aws lambda get-function-configuration \
--function-name foo \
--region us-east-1 \
--debug
"""
Explanation: Return metadata for a specific function:
End of explanation
"""
!aws lambda get-function \
--function-name foo \
--region us-east-1 \
--debug
"""
Explanation: Return metadata for a specific function along with a presigned URL that you can use to download the function's .zip file that you uploaded:
End of explanation
"""
!aws lambda add-event-source \
--region us-east-1 \
--function-name ProcessKinesisRecords \
--role invocation-role-arn \
--event-source kinesis-stream-arn \
--batch-size 100
"""
Explanation: Add an event source:
End of explanation
"""
!aws lambda add-permission \
--function-name CreateThumbnail \
--region us-west-2 \
--statement-id some-unique-id \
--action "lambda:InvokeFunction" \
--principal s3.amazonaws.com \
--source-arn arn:aws:s3:::sourcebucket \
--source-account bucket-owner-account-id
"""
Explanation: Add permissions:
End of explanation
"""
!aws lambda get-policy \
--function-name function-name
"""
Explanation: Check policy permissions:
End of explanation
"""
!aws lambda delete-function \
--function-name foo \
--region us-east-1 \
--debug
"""
Explanation: Delete a lambda function:
End of explanation
"""
|
pastas/pasta
|
examples/notebooks/07_non_linear_recharge.ipynb
|
mit
|
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions(numba=True)
ps.set_log_level("INFO")
"""
Explanation: Non-linear recharge models
R.A. Collenteur, University of Graz
This notebook explains the use of the RechargeModel stress model to simulate the combined effect of precipitation and potential evaporation on the groundwater levels. For the computation of the groundwater recharge, three recharge models are currently available:
Linear (Berendrecht et al., 2003; von Asmuth et al., 2008)
Berendrecht (Berendrecht et al., 2006)
FlexModel (Collenteur et al., 2021)
The first model is a simple linear function of precipitation and potential evaporation, while the latter two simulate a non-linear response of recharge to precipitation using soil-water balance concepts. Detailed descriptions of these models can be found in the articles listed in the References at the end of this notebook.
<div class="alert alert-info">
<b>Tip</b>
To run this notebook and the related non-linear recharge models, it is strongly recommended to install Numba (http://numba.pydata.org). This Just-In-Time (JIT) compiler compiles the computationally intensive part of the recharge calculation, making the non-linear model as fast as the Linear recharge model.
</div>
End of explanation
"""
head = pd.read_csv("../data/B32C0639001.csv", parse_dates=['date'],
index_col='date', squeeze=True)
# Make this millimeters per day
evap = ps.read_knmi("../data/etmgeg_260.txt", variables="EV24").series * 1e3
rain = ps.read_knmi("../data/etmgeg_260.txt", variables="RH").series * 1e3
ps.plots.series(head, [evap, rain], figsize=(10,6),
labels=["Head [m]", "Evap [mm/d]", "Rain [mm/d]"]);
"""
Explanation: Read Input data
Input data handling is similar to that of other stress models. The only thing to check is that precipitation and evaporation are provided in mm/day. This is necessary because the parameters of the non-linear recharge models are defined with millimeters as the length unit and days as the time unit. It is possible to use other units, but this would require manually setting the initial values and parameter boundaries of the recharge models.
End of explanation
"""
ml = ps.Model(head)
# Select a recharge model
rch = ps.rch.FlexModel()
#rch = ps.rch.Berendrecht()
#rch = ps.rch.Linear()
rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Gamma, name="rch")
ml.add_stressmodel(rm)
ml.solve(noise=True, tmin="1990", report="basic")
ml.plots.results(figsize=(10,6));
"""
Explanation: Make a basic model
The normal workflow may be used to create and calibrate the model.
1. Create a Pastas Model instance
2. Choose a recharge model. All recharge models can be accessed through the recharge subpackage (ps.rch).
3. Create a RechargeModel object and add it to the model
4. Solve and visualize the model
End of explanation
"""
recharge = ml.get_stress("rch").resample("A").sum()
ax = recharge.plot.bar(figsize=(10,3))
ax.set_xticklabels(recharge.index.year)
plt.ylabel("Recharge [mm/year]");
"""
Explanation: Analyze the estimated recharge flux
After the parameter estimation we can take a look at the recharge flux computed by the model. The flux is easy to obtain using the get_stress method of the model object, which automatically provides the optimal parameter values that were just estimated. After this, we can for example look at the yearly recharge flux estimated by the Pastas model.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cmcc/cmip6/models/cmcc-esm2-sr5/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-sr5', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-ESM2-SR5
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embeded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
efoley/deep-learning
|
sentiment-rnn/Sentiment RNN.ipynb
|
mit
|
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
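To make that concrete, here is a rough sketch of the tensor shapes that flow through the network (the specific sizes are taken from the hyperparameters set later in this notebook: batch_size=100, seq_len=200, embed_size=300, lstm_size=256):
# inputs_         -> [100, 200]       integer word ids
# embed           -> [100, 200, 300]  embedding vectors, one per word
# outputs         -> [100, 200, 256]  LSTM output at every time step
# outputs[:, -1]  -> [100, 256]       only the last time step is kept
# predictions     -> [100, 1]         sigmoid output used for the cost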
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
# Create your dictionary that maps vocab words to integers here
vocab_to_int = {w:i+1 for i,w in enumerate(list(set(words)))}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [[vocab_to_int[w] for w in review.split()] for review in reviews]
#reviews[0].split()
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
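One common way to build such a mapping (a sketch only, not necessarily what the solution cell above does) is to order the vocabulary by word frequency so the most frequent word gets integer 1:
from collections import Counter
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# Start the integers at 1; 0 is reserved for the padding we add later
vocab_to_int = {word: ii for ii, word in enumerate(sorted_vocab, 1)}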
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = [int(l=='positive') for l in labels.split('\n')]
#labels
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out the review with 0 length, keeping reviews and labels aligned
non_zero_idx = [ii for ii, r in enumerate(reviews_ints) if len(r) > 0]
labels = np.array([labels[ii] for ii in non_zero_idx])
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
seq_len = 200
# Left-pad short reviews with zeros and truncate long ones to seq_len steps
features = np.array([np.r_[np.zeros(max(0, seq_len - len(r))), r[:seq_len]] for r in reviews_ints])
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
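For example, one explicit way to do the left-padding and truncation (a sketch; the cell above already does the same thing with a more compact NumPy expression) is:
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    # The negative slice left-pads short reviews; [:seq_len] truncates long ones
    features[i, -len(row):] = np.array(row)[:seq_len]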
End of explanation
"""
features[:10,:100]
"""
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
"""
split_frac = 0.8
print("features.shape={}".format(features.shape))
print("labels.shape={}".format(labels.shape))
#print()
val_split_idx = int(round(split_frac * len(labels)))
test_split_idx = int(round((split_frac+(1-split_frac)/2) * len(labels)))
train_x, val_x, test_x = np.split(features, [val_split_idx, test_split_idx])
train_y, val_y, test_y = np.split(labels , [val_split_idx, test_split_idx])
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 100
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units and the input is a [batch_size, seq_len] batch of word ids, the function will return a tensor with size [batch_size, seq_len, 200].
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
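Note: in some newer TensorFlow 1.x releases, reusing the same wrapped cell object via [drop] * lstm_layers can raise a variable-reuse error when lstm_layers > 1. If you run into that, one common workaround (a sketch, reusing the lstm_size, lstm_layers, and keep_prob defined above) is to build a fresh cell per layer:
def build_cell(lstm_size, keep_prob):
    # One independent LSTM cell wrapped with dropout
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

cell = tf.contrib.rnn.MultiRNNCell(
    [build_cell(lstm_size, keep_prob) for _ in range(lstm_layers)])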
End of explanation
"""
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
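As a quick sanity check (assuming train_x, train_y, and batch_size are defined as elsewhere in this notebook), the generator yields arrays whose first dimension equals the batch size:
```python
example_x, example_y = next(get_batches(train_x, train_y, batch_size))
print(example_x.shape, example_y.shape)  # first dimension of each is batch_size
```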
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
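For example, one way to create the directory from the notebook itself (a small convenience, not part of the original code):
```python
import os
if not os.path.exists('checkpoints'):
    os.makedirs('checkpoints')
```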
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
mmadsen/experiment-seriation-classification
|
analysis/sc-1-3/sc-1-seriation-feature-engineering.ipynb
|
apache-2.0
|
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
%matplotlib inline
plt.style.use("fivethirtyeight")
sns.set()
all_graphs = pickle.load(open("train-cont-graphs.pkl",'r'))
all_labels = pickle.load(open("train-cont-labels.pkl",'r'))
"""
Explanation: SC-1 Feature Engineering and Classification
The first attempt at seriation classification (https://github.com/mmadsen/experiment-seriation-classification/blob/master/analysis/sc-1/sc-1-seriation-classification-analysis.ipynb) was a partial (but encouraging) success, achieving basically 80% accuracy in correctly labeling which of two regional metapopulation models an IDSS seriation solution is derived from. The "classifier" used was simply k-Nearest Neighbors, with an optimal value of k=3.
The sole "feature" used in classification was the Euclidean distance between sorted Laplacian eigenvalue spectra for each graph, as described in the following lab note: http://goo.gl/HYvyoM
Since I used a single feature in that first analysis, to do better than 80% we're going to have to add more features. One promising approach is to use more of the information in the Laplacian spectrum itself, instead of reducing it to a single distance metric. To that, we can then add other graph theoretic features, keeping in mind that some of them may be highly collinear with information already contained in the Laplacian.
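For reference, a minimal sketch of that single-feature baseline, the Euclidean distance between the sorted Laplacian spectra of two NetworkX graphs (the function name and truncation rule here are illustrative, not taken from the earlier notebook):
```python
def spectral_distance(g1, g2):
    s1 = sorted(nx.spectrum.laplacian_spectrum(g1, weight=None), reverse=True)
    s2 = sorted(nx.spectrum.laplacian_spectrum(g2, weight=None), reverse=True)
    n = min(len(s1), len(s2))  # compare only a common number of eigenvalues
    return np.linalg.norm(np.asarray(s1[:n]) - np.asarray(s2[:n]))
```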
End of explanation
"""
def train_test_split(graph_list, label_list, test_fraction=0.20):
"""
Randomly splits a set of graphs and labels into training and testing data sets. We need a custom function
because the dataset isn't a numeric matrix, but a list of NetworkX Graph objects.
"""
    # random_integers is inclusive on both ends, so cap the index at len - 1
    rand_ix = np.random.random_integers(0, len(graph_list) - 1, size=int(len(graph_list) * test_fraction))
print "random indices: %s" % rand_ix
test_graphs = []
test_labels = []
train_graphs = []
train_labels = []
# first copy the chosen test values, without deleting anything since that would alter the indices
for ix in rand_ix:
test_graphs.append(graph_list[ix])
test_labels.append(label_list[ix])
# now copy the indices that are NOT in the test index list
for ix in range(0, len(graph_list)):
if ix in rand_ix:
continue
train_graphs.append(graph_list[ix])
train_labels.append(label_list[ix])
return (train_graphs, train_labels, test_graphs, test_labels)
"""
Explanation: The strategy, unlike our first attempt, requires a real train/test split in the dataset because we're going to fit an actual model (although a true LOO cross validation is still of course possible). But we need a train_test_split function which is able to deal with lists of NetworkX objects.
End of explanation
"""
train_graphs, train_labels, test_graphs, test_labels = train_test_split(all_graphs, all_labels, test_fraction=0.10)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
def graphs_to_eigenvalue_matrix(graph_list, num_eigenvalues = None):
"""
Given a list of NetworkX graphs, returns a numeric matrix where rows represent graphs,
and columns represent the reverse sorted eigenvalues of the Laplacian matrix for each graph,
possibly trimmed to only use the num_eigenvalues largest values. If num_eigenvalues is
unspecified, all eigenvalues are used.
"""
# peek at the first graph and see how many eigenvalues there are
tg = graph_list[0]
n = len(nx.spectrum.laplacian_spectrum(tg, weight=None))
# we either use all of the eigenvalues, or we use the smaller of
# the requested number or the actual number (if it is smaller than requested)
if num_eigenvalues is None:
ev_used = n
else:
ev_used = min(n, num_eigenvalues)
print "(debug) eigenvalues - test graph: %s num_eigenvalues: %s ev_used: %s" % (n, num_eigenvalues, ev_used)
data_mat = np.zeros((len(graph_list),ev_used))
#print "data matrix shape: ", data_mat.shape
for ix in range(0, len(graph_list)):
spectrum = sorted(nx.spectrum.laplacian_spectrum(graph_list[ix], weight=None), reverse=True)
data_mat[ix,:] = spectrum[0:ev_used]
return data_mat
"""
Explanation: Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
End of explanation
"""
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
train_matrix = graphs_to_eigenvalue_matrix(train_graphs, num_eigenvalues=20)
test_matrix = graphs_to_eigenvalue_matrix(test_graphs, num_eigenvalues=20)
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
"""
Explanation: First Classifier
We're going to be using a gradient boosted classifier, which has some of the best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we got 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
End of explanation
"""
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01],
'clf__n_estimators': [10,25,50,100,250,500]
}
grid_search = GridSearchCV(pipeline, params, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
"""
Explanation: Definite improvement over just using the eigenvalue distance, as expected.
I did a run with all 30 eigenvalues and got the same answer as using just the 20 largest eigenvalues, presumably because the smallest 10 are very close to zero and do not vary enough between classes to be useful. But clearly, tuning this hyperparameter will be useful on the margins.
The next step, of course, is to perform a cross validation of the hyperparameters, and write an sklearn-compliant object that makes it easy to cross-validate automatically over the graph objects in various ways, since it would be good to do random splits of the graphs, not just splits of the numeric data matrix.
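As a stopgap before writing that object, one could simply repeat the random graph-level split by hand and average the test accuracy; a rough sketch using the helpers defined above (the number of trials is arbitrary):
```python
scores = []
for trial in range(5):
    tr_g, tr_l, te_g, te_l = train_test_split(all_graphs, all_labels, test_fraction=0.10)
    tr_m = graphs_to_eigenvalue_matrix(tr_g, num_eigenvalues=20)
    te_m = graphs_to_eigenvalue_matrix(te_g, num_eigenvalues=20)
    model = GradientBoostingClassifier(n_estimators=250).fit(tr_m, tr_l)
    scores.append(accuracy_score(te_l, model.predict(te_m)))
print "mean accuracy over random graph splits: %0.3f" % np.mean(scores)
```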
A second strategy will be to see if augmenting the eigenvalue features with various graph theoretic properties helps at all. Some features, such as the mean degree of the graph, are likely to be highly redundant, since that information is fully captured by the eigenvalues of the Laplacian matrix (specifically, in its diagonal). So the trick will be to find some graph metrics which may not be fully captured by the eigenvalues. That will require more thought, but some of the work I did for the semantic Axelrod paper on graph orbits and symmetries might be useful here.
Finding Optimal Hyperparameters
End of explanation
"""
|
pushpajnc/models
|
predicting-house-prices/housing-project-V1.ipynb
|
mit
|
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.cross_validation import ShuffleSplit
from IPython.display import display
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MDEV']
print prices.size
features = data.drop('MDEV', axis = 1)
display(data.head())
# Success
print "This housing dataset has {} data points with {} variables each.".format(*data.shape)
"""
Explanation: Predicting Housing Prices
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts.
End of explanation
"""
# Minimum price of the data
minimum_price = np.min(prices)
# Maximum price of the data
maximum_price = np.max(prices)
# Mean price of the data
mean_price = np.mean(prices)
# Median price of the data
median_price = np.median(prices)
# Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
"""
Explanation: Data Exploration
We will separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MDEV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
End of explanation
"""
import matplotlib.pyplot as plt
display(features.head())
plt.plot(features.LSTAT, prices, 'o')
plt.title('LSTAT vs PRICES')
plt.xlabel('LSTAT')
plt.ylabel('PRICES')
plt.plot(features.RM, prices, 'o')
plt.title('RM vs PRICES')
plt.xlabel('RM')
plt.ylabel('PRICES')
plt.plot(features.PTRATIO, prices, 'o')
plt.title('PTRATIO vs PRICES')
plt.xlabel('PTRATIO')
plt.ylabel('PRICES')
"""
Explanation: Feature Observation
'RM' is the average number of rooms among homes in the neighborhood.
'LSTAT' is the percentage of all Boston homeowners who have a greater net worth than homeowners in the neighborhood.
'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
As RM increases, MDEV will increase. As LSTAT increases, MDEV will decrease. An increase in PTRATIO will lead to a decrease in MDEV.
For a given neighborhood, price per square foot will be more or less constant. Therefore, as the number of rooms will increase in a given neighborhood, the prices of the houses will increase.
An increase in LSTAT signifies a decrease in the net worth of homeowners in a given neighborhood compared to the rest of the Boston residents. Therefore, prices of the houses in a given neighborhood will decrease as LSTAT increases.
PTRATIO is the number of students per teacher. A lower PTRATIO signifies a good school and a well-to-do neighborhood, as the school is able to afford more teachers for a given number of students. Therefore, a decrease in PTRATIO will lead to an increase in MDEV.
End of explanation
"""
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
"""
Explanation: Developing a Model
Defining a Performance Metric
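For reference, the coefficient of determination computed below is
$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, $$
where $y_i$ are the true values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the true values; $R^2 = 1$ indicates a perfect fit, while $R^2 = 0$ means the model does no better than always predicting the mean.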
End of explanation
"""
# Import 'train_test_split'
from sklearn import cross_validation
# Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = (cross_validation.train_test_split(features, prices, test_size=0.2, random_state=0))
# Success
print "Training and testing split was successful."
"""
Explanation: Implementation: Shuffle and Split Data
End of explanation
"""
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
"""
Explanation: Benefit of splitting the data set into Training and Testing sets
Without testing our model, we would not know if the model is suffering from high bias or high variance, i.e., we won't know if the model is very simple or very complex. By very simple, I mean having fewer features than needed or having fewer nonlinear terms. By very complex or high variance, I mean having more features than needed or having more nonlinear terms in the model. Without test data, we would go on making the model more and more complex just to decrease the training error.
Analyzing Model Performance
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
End of explanation
"""
vs.ModelComplexity(X_train, y_train)
"""
Explanation: Learning the Data
As shown in the plots above, the score of the training curve decreases as more training points are added, while the score of the testing curve increases as more training points are included. From the plots it is clear that adding more training points won't help much, because both the training and testing scores have already flattened out at around 250 to 300 training points.
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
End of explanation
"""
# Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': [1,2,3,4,5,6,7,8,9,10]}
# Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# Create the grid search object
grid = GridSearchCV(estimator=regressor, param_grid=params, scoring=scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
"""
Explanation: Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, it suffers from high bias. At a maximum depth of 10, it suffers from high variance. At a maximum depth of 1, both the training score and the validation score are small. As we increase the maximum depth, both scores increase. However, as we increase the maximum depth beyond 4, the training score keeps slowly increasing whereas the validation score starts to decrease. This means that the model is overfitting the training data and decreasing the training error; however, the overfitted model generalizes poorly to the validation data, lowering the validation score.
Best-Guess Optimal Model
At maximum depth = 4, the model best generalizes to unseen data. At Max depth of 4, the validation score is the highest, ie, the generalization error in this model is the lowest.
Evaluating Model Performance
Grid Search
We will now use grid search which is basically a parameter sweep for hyperparameter optimization. Hyperparameters are the parameters that are not optimized with the machine learning algorithms such as the penalty value (lambda) in L2 (ridge regression) and L1 (Lasso) regularization. To obtain the optimized lambda, we choose a range of lambda values (a minimum and maximum lambda) and the objective function is optimized at regular intervals in this range. By doing so, one would get training error and CV (or test) error as a function of lambda. The value of lambda at which CV error comes out to be minimum, is the winner!
Cross-Validation
Now we will perform k-fold cross-validation. For that we divide the actual training set into k subsets, and each time we take one of these k subsets as the test set while the remaining k-1 subsets form the "training set". We repeat this procedure k times, taking a different subset as the test set each time. A benefit of k-fold CV is that it does not reduce the training set size, since different chunks of training and test data are used at each fold. Each data point gets to be in the test set exactly once and in the training set k-1 times, so it matters much less how the data gets divided.
Without a cross-validation set, we could run into the problem of overfitting the model. We could choose the hyperparameter value for which the training error is minimized, but that value could correspond to high variance in the model. Therefore we need to minimize the CV error with respect to the hyperparameter.
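For illustration only, the same scoring could be run with a k-fold splitter; this sketch assumes KFold and cross_val_score from the old sklearn.cross_validation module used throughout this notebook, while the fit_model function below sticks with ShuffleSplit as in the original project:
```python
from sklearn.cross_validation import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer

folds = KFold(len(X_train), n_folds=5, shuffle=True, random_state=0)
cv_scores = cross_val_score(DecisionTreeRegressor(max_depth=4), X_train, y_train,
                            scoring=make_scorer(performance_metric), cv=folds)
print "Mean R^2 across 5 folds: {:.3f}".format(np.mean(cv_scores))
```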
Fitting a Model
End of explanation
"""
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
"""
Explanation: Making Predictions
Once the model has been trained on a given set of data, it can be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. We can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Optimal Model
End of explanation
"""
# Produce a matrix for client data
client_data = [[5, 34, 15], # Client 1
[4, 55, 22], # Client 2
[8, 7, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
"""
Explanation: Predicting Selling Prices
Let's say we have collected the following information from three of our clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Household net worth (income) | Top 34th percent | Bottom 45th percent | Top 7th percent |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
The questions are: what price should we recommend each client sell his/her home at, and do these prices seem reasonable given the values for the respective features?
End of explanation
"""
import matplotlib.pyplot as plt
plt.hist(prices, bins = 30)
for price in reg.predict(client_data):
plt.axvline(price, c = 'r', lw = 3)
from sklearn.neighbors import NearestNeighbors
num_neighbors=5
def nearest_neighbor_price(x):
def find_nearest_neighbor_indexes(x, X): # x is your vector and X is the data set.
neigh = NearestNeighbors( num_neighbors )
neigh.fit(X)
distance, indexes = neigh.kneighbors( x )
return indexes
indexes = find_nearest_neighbor_indexes(x, features)
sum_prices = []
for i in indexes:
sum_prices.append(prices[i])
neighbor_avg = np.mean(sum_prices)
return neighbor_avg
print nearest_neighbor_price( [4, 55, 22])
index = 0
for i in client_data:
val=nearest_neighbor_price(i)
index += 1
print "The predicted {} nearest neighbors price for home {} is: ${:,.2f}".format(num_neighbors,index, val)
"""
Explanation: Clients 1, 2, and 3 will be recommended to sell their houses at \$324,240.00, \$189,123.53, and \$942,666.67, respectively. Looking at the training data statistics above, the median (~\$439K) and mean (\$454,342) are very close to each other, i.e., house prices are more or less normally distributed. This means that 68\% of house prices lie between \$289,171 and \$619,514, and 95 percent lie between \$124,000 and \$784,685. Hence Client 3's house is at the right tail of the distribution. Since the training data contains too few samples in this price range, the error bar on this predicted value will be much higher than the error bar on Client 1's predicted value. The price of this house should be above the average price. A model with more features than just the 3 features listed above could do better for this house. The new features could be the number of bathrooms, lot size, type of flooring, etc.
Client 2's predicted value will have a smaller error bar than Client 3's but a larger one than Client 1's. The price of Client 2's house would be below the average price of a house.
End of explanation
"""
vs.PredictTrials(features, prices, fit_model, client_data)
"""
Explanation: Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. We will run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
End of explanation
"""
|
dsacademybr/PythonFundamentos
|
Cap06/Notebooks/DSA-Python-Cap06-08-Retornando Dados do MongoDB.ipynb
|
gpl-3.0
|
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 6</font>
Download: http://github.com/dsacademybr
End of explanation
"""
# Import the PyMongo module
import pymongo
# Create the connection to MongoDB (in this case, the default connection)
client_con = pymongo.MongoClient()
# List the available databases
# client_con.database_names()
client_con.list_database_names()
# Define the db object
db = client_con.cadastrodb
# List the available collections
# db.collection_names()
db.list_collection_names()
# Create a collection
db.create_collection("mycollection")
# List the available collections
# db.collection_names()
db.list_collection_names()
# Insert a document into the newly created collection
db.mycollection.insert_one({
'titulo': 'MongoDB com Python',
'descricao': 'MongoDB é um Banco de Dados NoSQL',
'by': 'Data Science Academy',
'url': 'http://www.datascienceacademy.com.br',
'tags': ['mongodb', 'database', 'NoSQL'],
'likes': 100
})
# Return the document we just created
db.mycollection.find_one()
# Prepare a document
doc1 = {"Nome":"Donald","sobrenome":"Trump","twitter":"@POTUS"}
# Insert a document
db.mycollection.insert_one(doc1)
# Prepare another document
doc2 = {"Site":"http://www.datascienceacademy.com.br",
        "facebook":"facebook.com/dsacademybr"}
# Insert the document
db.mycollection.insert_one(doc2)
# Return the documents in the collection
for rec in db.mycollection.find():
print(rec)
# Connect to a collection
col = db["mycollection"]
type(col)
# Count the documents in a collection
# col.count()
col.estimated_document_count()
# Find a single document
redoc = col.find_one()
redoc
"""
Explanation: Returning Data from MongoDB with PyMongo
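As a further illustration (a hypothetical query, reusing the col object connected above), documents can also be retrieved by field value:
```python
# Find only the documents whose 'by' field matches a given value
for rec in col.find({"by": "Data Science Academy"}):
    print(rec)
```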
End of explanation
"""
|
stephenpardy/PythonNotebooks
|
astro/IntroIllustrisNotebook.ipynb
|
gpl-2.0
|
!pip install astropy
import numpy as np
import matplotlib.pyplot as plt
import h5py
import astropy.table as atpy
import requests
import os
%matplotlib inline
#input your own api key; your key is listed here after login: http://www.illustris-project.org/data/
apikey = ""  # paste your own API key here (see the link above)
def get(path, params=None):
# make HTTP GET request to path
headers = {"api-key":str(apikey)}
r = requests.get(path, params=params, headers=headers)
# raise exception if response code is not HTTP SUCCESS (200)
r.raise_for_status()
if r.headers['content-type'] == 'application/json':
return r.json() # parse json responses automatically
if 'content-disposition' in r.headers:
filename = r.headers['content-disposition'].split("filename=")[1]
with open(filename, 'wb') as f:
f.write(r.content)
return filename # return the filename string
return r
#simulation
Isim=1
#snapshot number, snapNum=135 -> z=0
snapNum = 135
#cosmology
H0=70.
h=H0/100.
"""
Explanation: Illustris web API details and example scripts available at: http://www.illustris-project.org/data/docs/api/
End of explanation
"""
#url to snapshot detail
url='http://www.illustris-project.org/api/Illustris-'+str(Isim)+'/snapshots/'+str(snapNum)
print(url)
#access this info from web
metadata=get(url)
#available headers
print(metadata.keys())
print('redshift at snapshot='+str(snapNum)+':',metadata['redshift'])
print('number of subhalos at snapshot='+str(snapNum)+':', metadata['num_groups_subfind'])
print('number of FOF groups at snapshot='+str(snapNum)+':', metadata['num_groups_fof'])
"""
Explanation: 1. Meta data: how many groups and subhalos exist at a given snapshot?
End of explanation
"""
#ID of FOF group we'll use for the example
groupID=1000
#weblink to group info
url='http://www.illustris-project.org/api/Illustris-'+str(Isim)+'/snapshots/'+str(snapNum)+'/halos/'+str(groupID)+'/info.json'
print(url)
#access this info
group=get(url)['Group']
print(group.keys())
#M200 mass
print('group M200:',group['Group_M_Crit200'])
"""
Explanation: 2. Halo catalogs: information about a specific FOF group
FOF halos data specifications: http://www.illustris-project.org/data/docs/specifications/#sec2a
End of explanation
"""
#print out other properties of this group here
#
#subhaloID of central subhalo of FOF group
bcgID=group['GroupFirstSub']
print('ID of central galaxy:',bcgID)
#subhaloIDs of all subhalos of group
Nsubs=group['GroupNsubs']
subhaloIDlist=np.arange(bcgID,bcgID+Nsubs)
print('number of subhalos in group:',Nsubs)
print('subhalo ID list:',subhaloIDlist)
#to check:
url='http://www.illustris-project.org/api/Illustris-'+str(Isim)+'/snapshots/135/halos/'+str(groupID)+'/'
print(url)
"""
Explanation: Interactive break:
print out other properties of this group
End of explanation
"""
#let's access the information for the central subhalo of our group
subhaloID=bcgID
print(subhaloID)
#weblink to subhalo info
url='http://www.illustris-project.org/api/Illustris-'+str(Isim)+'/snapshots/'+str(snapNum)+'/subhalos/'+str(subhaloID)+'/info.json'
print(url)
subhalo=get(url)['Subhalo']
print(subhalo.keys())
#Total mass of subhalo
print('subhalo mass:',subhalo['SubhaloMass'])
"""
Explanation: 3. Subhalo catalog: information about a specific subhalo
Subhalos data specifications: http://www.illustris-project.org/data/docs/specifications/#sec2b
End of explanation
"""
#print out other properties of this subhalo here
#
"""
Explanation: Interactive break:
print out other properties of this subhalo
End of explanation
"""
#let's look at the MPB merger tree for our central subhalo
subhaloID=bcgID
#MPB merger tree for bcg -- NB: MPB specified in url
url='http://www.illustris-project.org/api/Illustris-'+str(Isim)+'/snapshots/'+str(snapNum)+'/subhalos/'+str(subhaloID)+'/sublink/mpb.hdf5'
mpb_filename=get(url)
#put tree data into another data structure: astropy tables
tree=atpy.Table()
with h5py.File(mpb_filename) as ft:
for key in ft.keys():
tree.add_column(atpy.Column(name=str(key), data=np.array(ft[str(key)])))
#remove tree file - hdf5 files remain in working directory otherwise
if os.path.isfile('./sublink_mpb_'+str(subhaloID)+'.hdf5')==True:
os.remove('./sublink_mpb_'+str(subhaloID)+'.hdf5')
print(tree.columns)
#print out some columns
print(tree['SnapNum','SubfindID','SubhaloGrNr','SubhaloMass'])
#plot mass evolution of subhalo
fig1=plt.figure(1,(5,5))
fig1.clf()
ax=fig1.add_subplot(1,1,1)
plt.plot(tree['SnapNum'],tree['SubhaloMass'],'b')
plt.gca().invert_xaxis()
plt.xlabel('SnapNum')
plt.ylabel('SubhaloMass')
"""
Explanation: 4. Merger trees
Merger trees data specifications: http://www.illustris-project.org/data/docs/specifications/#sec3a
End of explanation
"""
#plot other tree properties here
#
"""
Explanation: Q:
How can the above axis labels be improved?
Interactive break:
plot the evolution of other properties from the merger tree
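One possible answer to the question above (a sketch, not the official solution): convert SubhaloMass from the catalog's code units of $10^{10} M_{\odot}/h$ and give both axes descriptive labels.
```python
fig2 = plt.figure(2, (5, 5))
ax2 = fig2.add_subplot(1, 1, 1)
ax2.plot(tree['SnapNum'], tree['SubhaloMass'] * 1e10 / h, 'b')
ax2.invert_xaxis()
ax2.set_yscale('log')
ax2.set_xlabel('Snapshot number')
ax2.set_ylabel(r'Subhalo mass [$M_\odot$]')
```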
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/structured/labs/5b_deploy_keras_ai_platform_babyweight.ipynb
|
apache-2.0
|
import os
"""
Explanation: LAB 5b: Deploy and predict with Keras model on Cloud AI Platform.
Learning Objectives
Setup up the environment
Deploy trained Keras model to Cloud AI Platform
Online predict from model on Cloud AI Platform
Batch predict from model on Cloud AI Platform
Introduction
In this notebook, we'll deploy our Keras model to Cloud AI Platform and create predictions.
We will set up the environment, deploy a trained Keras model to Cloud AI Platform, make online predictions from the deployed model, and make batch predictions from the deployed model.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Import necessary libraries.
End of explanation
"""
%%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# Change these to try this notebook out
PROJECT = "cloud-training-demos" # TODO: Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # TODO: Replace with your REGION
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
%%bash
gcloud config set compute/region $REGION
gcloud config set ai_platform/region global
"""
Explanation: Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
"""
!gsutil cp -r ../babyweight gs://<bucket-name> # TODO: Replace with your bucket-name
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION}
"""
Explanation: Check our trained model files
Let's check the directory structure of our outputs of our trained model in folder we exported the model to in our last lab. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
End of explanation
"""
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=# TODO: Add GCS path to saved_model.pb file.
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.1 \
--python-version=3.7
"""
Explanation: Lab Task #2: Deploy trained model.
Deploying the trained model to act as a REST web service is a simple gcloud call. Complete the #TODO by providing the location of the saved_model.pb file to the Cloud AI Platform model deployment service. The deployment will take a few minutes.
End of explanation
"""
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = # TODO: Add model name
MODEL_VERSION = # TODO: Add model version
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39
},
# TODO: Create another instance
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
"""
Explanation: Lab Task #3: Use model to make online prediction.
Complete __#TODO__s for both the Python and gcloud Shell API methods of calling our deployed model on Cloud AI Platform for online prediction.
Python API
We can use the Python API to send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances.
End of explanation
"""
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
"""
Explanation: The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different).
gcloud shell API
Instead we could use the gcloud shell API. Create a newline delimited JSON file with one instance per line and submit using gcloud.
End of explanation
"""
%%bash
gcloud ai-platform predict \
--model=babyweight \
--json-instances=inputs.json \
--version=# TODO: Add model version
"""
Explanation: Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version.
End of explanation
"""
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT \
--region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight \
--version=# TODO: Add model version
"""
Explanation: Lab Task #4: Use model to make batch prediction.
Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction. Complete the __#TODO__s so we can call our deployed model on Cloud AI Platform for batch prediction.
__NOTE__: If you get any internal error after running the job, please wait a few minutes and re-run the cell below.
End of explanation
"""
|
NAU-CFL/Python_Learning_Source
|
06_Functions_Lecture.ipynb
|
mit
|
def average(n1, n2, n3): # Function Header
# Function Body
res = (n1+n2+n3)/3.0
return res
num1 = 10
num2 = 25
num3 = 16
print(average(num1, num2, num3))
average(100, 90, 29)
average(1.2, 6.7, 8)
def power(n1, n2):
return (n1 ** n2)
print(power(2, 3))
2**3
"""
Explanation: Functions
Sometimes we have to write long and complex solutions to our problems, and without the help of functions the task would be really complicated. In order to manage the complexity of a large problem, we have to break it down into smaller subproblems.
That's exactly what functions do for us. The large program is divided into manageable pieces called program routines; here we focus particularly on functions.
We have been using some built-in functions such as range and len, but now we will implement our own functions to make things less complex.
A routine is a named group of instructions performing some task.
A routine can be invoked (called) as many times as needed.
A function is Python's version of a program routine.
Some functions are designed to return a value, while others are designed for other purposes.
```Python
def average(n1, n2, n3): # Function Header
    # Function Body
    res = (n1+n2+n3)/3.0
    return res
```
def -> Keyword for functions (define)
average -> identifier, which is the function's name
(n1, n2, n3) -> list of identifiers called formal parameters or simply parameters
When we call a function, we pass in actual arguments, which replace the parameters.
```Python
num1 = 10
num2 = 25
num3 = 16
print(average(num1, num2, num3))
```
End of explanation
"""
val = power(2,3)
print(val)
def displayWelcome():
print('This program will do this: ')
displayWelcome()
"""
Explanation: A value-returning function is a program routine called for its return value, like we used in the examples above.
```Python
def power(n1, n2):
    return n1 ** n2
```
A non-value-returning function is called not for a returned value, but for its side effects.
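A quick way to see the difference (using the displayWelcome function defined above): a non-value-returning function still returns None implicitly.
```Python
result = displayWelcome()
print(result)  # the welcome message is printed, then None
```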
End of explanation
"""
|
drphilmarshall/StatisticalMethods
|
tutorials/Week3/Metropolis.ipynb
|
gpl-2.0
|
import numpy as np
import statsmodels.api as sm
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import scipy.stats
%matplotlib inline
class SolutionMissingError(Exception):
def __init__(self):
Exception.__init__(self,"You need to complete the solution for this code to work!")
def REPLACE_WITH_YOUR_SOLUTION():
raise SolutionMissingError
REMOVE_THIS_LINE = REPLACE_WITH_YOUR_SOLUTION
"""
Explanation: Week 3 Tutorial
The Metropolis Sampler
This notebook is for playing with the Metropolis algorithm in the context of fitting a linear model.
As usual, there is some code provided below, which you will need to complete in places (or write your own from scratch, alternatively).
Reminder!
After pulling down the tutorial notebook, immediately make a copy. Then do not modify the original. Do your work in the copy. This will prevent the possibility of git conflicts should the version-controlled file change at any point in the future. (The same exhortation applies to homeworks.)
Preliminaries
Import things
End of explanation
"""
true_a = np.pi
true_b = np.sqrt(2.0)
try:
exec(open('Solution/mock_data.py').read())
except IOError:
ndata = 50
x = np.random.normal(np.exp(1.0), 1.0, ndata)
y = REPLACE_WITH_YOUR_SOLUTION()
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.plot(x, y, 'o');
"""
Explanation: First, we'll generate a mock data set to fit. You can choose different values below, but otherwise let's use some easily recognizable numbers.
$x \sim \mathrm{Normal}(e, 1)$
$y \sim \mathrm{Normal}(\pi+\sqrt{2} ~ x, 1)$
For simplicity, we'll take the $x$ values to be measured precisely, and the $y$ values to have (known) unit error bars, accounting for the scatter in $y$ above. Hence the model parameters to be fit are just $a$ and $b$, the intercept and slope of the linear model.
End of explanation
"""
class ExactPosterior:
def __init__(self, x, y, a0, b0):
# Here's the linear algebra done manually
##X = np.matrix(np.vstack([np.ones(len(x)), x]).T)
##Y = np.matrix(y).T
##self.invcov = X.T * X
##self.covariance = np.linalg.inv(self.invcov)
##self.mean = self.covariance * X.T * Y
# It's more easily generalizable to use a library instead
model = sm.OLS(y, sm.add_constant(x))
ols = model.fit()
self.mean = np.matrix(ols.params).T
self.covariance = ols.normalized_cov_params
self.invcov = np.linalg.inv(self.covariance)
self.a_array = np.arange(0.0, 6.0, 0.02)
self.b_array = np.arange(0.0, 3.25, 0.02)
self.P_of_a = np.array([self.marg_a(a) for a in self.a_array])
self.P_of_b = np.array([self.marg_b(b) for b in self.b_array])
self.P_of_ab = np.array([[self.lnpost(a,b) for a in self.a_array] for b in self.b_array])
self.P_of_ab = np.exp(self.P_of_ab)
self.renorm = 1.0/np.sum(self.P_of_ab)
self.P_of_ab = self.P_of_ab * self.renorm
self.levels = scipy.stats.chi2.cdf(np.arange(4,1,-1)**2, 1) # confidence levels corresponding to contours below
self.contourLevels = self.renorm*np.exp(self.lnpost(a0,b0)-0.5*scipy.stats.chi2.ppf(self.levels, 2))
def lnpost(self, a, b): # the 2D posterior
z = self.mean - np.matrix([[a],[b]])
return -0.5 * (z.T * self.invcov * z)[0,0]
def marg_a(self, a): # marginal posterior of a
return scipy.stats.norm.pdf(a, self.mean[0,0], np.sqrt(self.covariance[0,0]))
def marg_b(self, b): # marginal posterior of b
return scipy.stats.norm.pdf(b, self.mean[1,0], np.sqrt(self.covariance[1,1]))
exact = ExactPosterior(x, y, true_a, true_b)
"""
Explanation: It will be convenient to compare what we get from MCMC with the exact solution, which is easy to calculate in this case (assuming uniform priors). Here is a class that packages that up. Note that if you changed the model that the data are drawn from, the plot ranges below may need to be updated to reflect that.
End of explanation
"""
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.plot(exact.a_array, exact.P_of_a); plt.xlabel('a');
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.plot(exact.b_array, exact.P_of_b); plt.xlabel('b');
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='blue', levels=exact.contourLevels);
plt.plot(true_a, true_b, 'o', color='red'); plt.xlabel('a'); plt.ylabel('b');
"""
Explanation: Demo some plots of the exact posterior distribution
End of explanation
"""
def lnPrior(params):
return 0.0
"""
Explanation: Coding the model
Use uniform priors, for ease of comparison to the exact solution.
End of explanation
"""
try:
exec(open('Solution/lnLike.py').read())
except IOError:
REMOVE_THIS_LINE()
def lnLike(params, x, y):
a = params[0]
b = params[1]
REPLACE_WITH_YOUR_SOLUTION()
"""
Explanation: Define a likelihood function.
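If you get stuck, here is one possible completion (a sketch, not the provided solution file): with unit Gaussian errors on $y$, the log-likelihood is, up to an additive constant, $-\frac{1}{2}\sum_i (y_i - a - b x_i)^2$.
```python
def lnLike(params, x, y):
    a, b = params[0], params[1]
    return -0.5 * np.sum((y - a - b*x)**2)
```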
End of explanation
"""
try:
exec(open('Solution/lnPost.py').read())
except IOError:
REMOVE_THIS_LINE()
def lnPost(params, x, y):
REPLACE_WITH_YOUR_SOLUTION()
"""
Explanation: Package up a log-posterior function.
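Again as a sketch (one possible completion): with the uniform prior above, the log-posterior is just the sum of the log-prior and the log-likelihood.
```python
def lnPost(params, x, y):
    return lnPrior(params) + lnLike(params, x, y)
```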
End of explanation
"""
def propose(params, width):
return params + width * np.random.randn(params.shape[0])
"""
Explanation: Coding the sampler
Improve as you see fit!
Let's use a simple Gaussian proposal distribution, for lack of any better ideas. The width of the distribution might as well be an option.
End of explanation
"""
try:
exec(open('Solution/step.py').read())
except IOError:
REMOVE_THIS_LINE()
def step(current_params, current_lnP, width=1.0):
trial_params = REPLACE_WITH_YOUR_SOLUTION()
trial_lnP = REPLACE_WITH_YOUR_SOLUTION()
if REPLACE_WITH_YOUR_SOLUTION():
return (trial_params, trial_lnP)
else:
return (current_params, current_lnP)
"""
Explanation: Next, we need a function to propose a step and decide whether to accept or reject it.
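Recall the Metropolis acceptance rule for a symmetric proposal: accept the trial state with probability
$$ \alpha = \min\left(1, \frac{P(\theta_{\rm trial})}{P(\theta_{\rm current})}\right), $$
which in log space means drawing $u \sim \mathrm{Uniform}(0,1)$ and accepting whenever $\ln u < \ln P_{\rm trial} - \ln P_{\rm current}$.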
End of explanation
"""
# choose an intial state and evaluate the posterior there
params = -5.0 + np.random.rand(2) * 10.0
lnP = lnPost(params, x, y)
# set up an array to hold the chain
try:
exec(open('Solution/Nsamples.py').read())
except IOError:
Nsamples = REPLACE_WITH_YOUR_SOLUTION()
samples = np.zeros((Nsamples, 2))
# run the sampler
for i in range(Nsamples):
params, lnP = step(params, lnP)
samples[i,:] = params
"""
Explanation: And away we go
Here we set a random initial value on [-5,5] for each parameter, then run the sampler.
Start with Nsamples set to something small (like 100) to verify that the code works as expected. You'll probably need to use a larger value to get convergence, though.
End of explanation
"""
plt.rcParams['figure.figsize'] = (7.0, 7.0)
plt.plot(samples[:,0], samples[:,1]);
plt.plot(samples[0,0], samples[0,1], 'ro');
plt.xlabel('a'); plt.ylabel('b');
"""
Explanation: Visualize the chain in two dimensions
End of explanation
"""
plt.rcParams['figure.figsize'] = (12.0, 3.0)
plt.plot(samples[:,0], 'o', ms=1.0); plt.ylabel('a');
plt.rcParams['figure.figsize'] = (12.0, 3.0)
plt.plot(samples[:,1], 'o', ms=1.0); plt.ylabel('b');
"""
Explanation: Look at the traces of each parameter (vs time).
End of explanation
"""
samples = samples[np.arange(int(0.5*Nsamples),Nsamples),:]
"""
Explanation: On the basis of these diagnostics, we should identify a burn-in period to throw away hereafter. We could also thin the remaining chain to reduce the number of highly correlated and therefore redundant points.
Here I've removed the first half of the chain, but you should change this to whatever makes sense.
End of explanation
"""
plt.rcParams['figure.figsize'] = (7.0, 7.0)
plt.plot(samples[:,0], samples[:,1]);
plt.xlabel('a'); plt.ylabel('b');
plt.rcParams['figure.figsize'] = (12.0, 3.0)
plt.plot(samples[:,0], 'o', ms=1.0); plt.ylabel('a');
plt.rcParams['figure.figsize'] = (12.0, 3.0)
plt.plot(samples[:,1], 'o', ms=1.0); plt.ylabel('b');
"""
Explanation: Repeat earlier plots, "zoomed in" on the remaining samples
End of explanation
"""
plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.hist(samples[:,0], 20, normed=True, color='cyan');
plt.plot(exact.a_array, exact.P_of_a, color='red');
plt.xlabel('a');
plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.hist(samples[:,1], 20, normed=True, color='cyan');
plt.plot(exact.b_array, exact.P_of_b, color='red');
plt.xlabel('b');
plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.plot(samples[:,0], samples[:,1], 'o', ms=1.0);
plt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='red', levels=exact.contourLevels);
plt.xlabel('a'); plt.ylabel('b');
"""
Explanation: Compare the marginal and joint posterior distributions to the exact solution.
End of explanation
"""
|
d00d/quantNotebooks
|
Notebooks/quantopian_research_public/notebooks/lectures/Random_Variables/notebook.ipynb
|
unlicense
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.stats as stats
from statsmodels.stats import stattools
from __future__ import division
"""
Explanation: Discrete and Continuous Random Variables
by Maxwell Margenot
Revisions by Delaney Granizo Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
A random variable is a variable that takes on values according to chance. When discussing random variables, we typically describe them in terms of probability distributions, that is, the probability that each value can come out of the random variable. The classic example of this is a die, which can produce the values 1-6 with uniform probability.
We typically separate random variables into two different classes:
Discrete random variables
Continuous random variables
How each of these is handled varies, but the principles underlying them remain the same. We can easily see how modeling random variables can come in handy when dealing with finance; financial assets are often expressed as moving according to deterministic and random patterns, with the random patterns being expressed with random variables. To do this we would 'sample' from the random variable at each timestep, then move the financial instrument by that amount. This analysis is used because much of the motion in assets is unexplained using deterministic models.
Each random variable follows a probability distribution, a function which describes it. The probability distribution assigns probabilities to all possible values of a random variable. For a given random variable $X$, we express the probability that $X$ is equal to a value $x$ as $P(X = x)$. For discrete random variables, we can express $p(x) = P(X = x)$ in shorthand. This is also known as the probability mass function (PMF). For continuous random variables we cannot use a PMF, as we will cover later, so we must use a probability density function (PDF). Probability distributions form the basis for the Black-Scholes and binomial pricing models as well as the CAPM. An understanding of them is also necessary in order to perform Monte Carlo simulations.
For each probability distribution function, we also have a cumulative distribution function (CDF). This is defined as $P(X \leq x)$, the probability that the random variable is less than or equal to a particular value. The shorthand for the CDF is $F(x) = P(X \leq x)$. In order to find $F(x)$ in the discrete case, we sum up the values of the PMF for all outcomes less than or equal to $x$. In the continuous case, we use calculus to integrate the PDF over all values up to $x$.
End of explanation
"""
class DiscreteRandomVariable:
def __init__(self, a=0, b=1):
self.variableType = ""
self.low = a
self.high = b
return
def draw(self, numberOfSamples):
samples = np.random.random_integers(self.low, self.high, numberOfSamples)
return samples
"""
Explanation: Discrete Random Variables
A discrete random variable is one with a countable number of outcomes. Each of these outcomes has a separate probability associated with it. Consider a coin flip or a die roll, some of the most basic uniformly distributed random variables. For the coin flip, there are two possible outcomes, either heads or tails, each with a $1/2$ probability of occurring. Discrete random variables do not always have equal weights for all outcomes. The basic unit of a discrete random variable is its probability mass function (PMF), another name for the probability function $p(x)$. The PMF, or probability function, gives a probability, a mass, to each point in the domain of the probability distribution. A probability function has two main properties:
$0 \leq p(x) \leq 1$ because all probabilities are in the interval $[0, 1]$
The sum of all probabilities $p(x)$ over all values of X is equal to $1$. The total weights for all values of the random variable must add to $1$.
Here we will consider some examples of the most prevalent discrete probability distributions.
End of explanation
"""
DieRolls = DiscreteRandomVariable(1, 6)
plt.hist(DieRolls.draw(10), bins = [1,2,3,4,5,6,7], align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurences')
plt.legend(['Die Rolls']);
"""
Explanation: Uniform Distribution
The most basic type of probability distribution is the uniform distribution. With a discrete uniform distribution, equal weight is assigned to all outcomes. Take the example of rolling a die. It has six faces, numbered $1$ through $6$, each equally likely to occur with a $1/6$ chance. With this, we know the PMF must be $p(x) = 1/6$ for all values of our uniform random variable $X$.
End of explanation
"""
plt.hist(DieRolls.draw(10000), bins = [1,2,3,4,5,6,7], align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurences')
plt.legend(['Die Rolls']);
"""
Explanation: Each time we roll the die, we have an equal chance of getting each face. In the short run this looks uneven, but if we take many samples it is apparent that each face is occurring the same percentage of rolls.
End of explanation
"""
class BinomialRandomVariable(DiscreteRandomVariable):
def __init__(self, numberOfTrials = 10, probabilityOfSuccess = 0.5):
self.variableType = "Binomial"
self.numberOfTrials = numberOfTrials
self.probabilityOfSuccess = probabilityOfSuccess
return
def draw(self, numberOfSamples):
samples = np.random.binomial(self.numberOfTrials, self.probabilityOfSuccess, numberOfSamples)
return samples
"""
Explanation: So with a die roll, we can easily see illustrated that the $p(x) = 1/6$ for all values of the random variable $X$. Let's look at the possibilities for all values of both the probability function and the cumulative distribution function:
Value: $X = x$ | PMF: $p(x) = P(X = x)$ | CDF: $F(x) = P(X \leq x)$ |
--- | --- | --- |
1 | $1/6$ | $1/6$
2 | $1/6$ | $1/3$
3 | $1/6$ | $1/2$
4 | $1/6$ | $2/3$
5 | $1/6$ | $5/6$
6 | $1/6$ | $1$
Using this table we can easily see that the probability function satisfies the necessary conditions. Each value of the probability function is in the interval $[0,1]$, satisfying the first condition. The second condition is satisfied because all values of $p(x)$ sum to $1$, as evidenced in the cumulative distribution function. This demonstrates two properties of the cumulative distribution function:
The CDF is between $0$ and $1$ for all $x$. This parallels the value of the probability distribution function.
The CDF is nondecreasing in $x$. This means that as $x$ increases, the CDF either increases or remains constant.
When attempting to sample other probability distributions, we can use compositions of the uniform distribution with certain functions in order to get the appropriate samples. However, this method can be tremendously inefficient. As such, we will instead use the built-in NumPy functions for each distribution to simplify matters.
Binomial Distribution
A binomial distribution is used to describe successes and failures. This can be very useful in an investment context as many of our choices tend to be binary like this. When we take a single success/failure trial, we call it a Bernoulli trial. With the Bernoulli random variable, we have two possible outcomes:
$$p(1) = P(Y = 1) = p \ \ \ \ \ \ \
p(0) = P(Y = 0) = 1-p$$
We consider $Y$ taking on a value of $1$ to be a success, so the probability of a success occurring in a single trial is $p$.
A binomial distribution takes a set of $n$ Bernoulli trials. As such, we can have somewhere between $0$ and $n$ successes. Each trial has the same probability of success, $p$, and all of the trials are independent of each other. We can describe the entire binomial random variable using only $n$ and $p$, signified by the notation $X$ ~ $B(n, p)$. This states that $X$ is a binomial random variable with parameters $n$ and $p$.
In order to define the probability function of a binomial random variable, we must be able to choose some number of successes out of the total number of trials. This idea lends itself easily to the combination idea in combinatorics. A combination describes all possible ways of selecting items out of a collection such that order does not matter. For example, if we have $6$ pairs of socks and we want to choose $2$ of them, we would write the total number of combinations possible as $\binom{6}{2}$. This is expanded as:
$$
\binom{6}{2} = \frac{6!}{4! \ 2!} = 15
$$
Where $!$ denotes factorial and $n! = (n)(n-1)(n-2)\ldots (1)$. In order to write the formula for a combination more generally, we write:
$$
\binom{n}{x} = \frac{n!}{(n-x)! \ x!}
$$
We use this notation in order to choose successes with our binomial random variable. The combination serves the purpose of computing how many different ways we can reach the same result. The resulting probability function is:
$$
p(x) = P(X = x) = \binom{n}{x}p^x(1-p)^{n-x} = \frac{n!}{(n-x)! \ x!} p^x(1-p)^{n-x}
$$
where $X$ is a binomial random variable distributed as $B(n, p)$.
End of explanation
"""
StockProbabilities = BinomialRandomVariable(5, 0.50)
plt.hist(StockProbabilities.draw(50), bins = [0, 1, 2, 3, 4, 5, 6], align = 'left')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Up Moves']);
"""
Explanation: Take the example of a stock price moving up or down, each with probability $p = 0.5$. We can consider a move up, or $U$, to be a success and a move down, or $D$ to be a failure. With this, we can analyze the probability of each event using a binomial random variable. We will also consider an $n$-value of $5$ for $5$ observations of the stock price over time. The following table shows the probability of each event:
Number of Up moves, $x$ | Ways of reaching $x$ Up moves $\binom{n}{x}$ | Independent Trials with $p = 0.50$ | $p(x)$ Value | CDF: $F(x) = P(X \leq x)$ |
--- | --- | --- | --- | --- |
$0$ | $1$ | $0.50^0 (1 - 0.50)^5 = 0.03125$ | $0.03125$ | $0.03125$
$1$ | $5$ | $0.50^1 (1 - 0.50)^4 = 0.03125$ | $0.15625$ | $0.18750$
$2$ | $10$ | $0.50^2 (1 - 0.50)^3 = 0.03125$ | $0.31250$ | $0.50000$
$3$ | $10$ | $0.50^3 (1 - 0.50)^2 = 0.03125$ | $0.31250$ | $0.81250$
$4$ | $5$ | $0.50^4 (1 - 0.50)^1 = 0.03125$ | $0.15625$ | $0.96875$
$5$ | $1$ | $0.50^5 (1 - 0.50)^0 = 0.03125$ | $0.03125$ | $1.00000$
Here we see that in the particular case where $p = 0.50$, the binomial distribution is symmetric. Because we have an equal probability for both an upward and a downward move, the only differentiating factor between probabilities ends up being the combination aspect of the probability function, which is itself symmetric. If we were to slightly modify the value of $p$ we would end up with an asymmetric distribution.
Now we will draw some samples for the parameters above, where $X$ ~ $B(5, 0.50)$:
End of explanation
"""
plt.hist(StockProbabilities.draw(10000), bins = [0, 1, 2, 3, 4, 5, 6], align = 'left')
plt.xlabel('Value')
plt.ylabel('Occurrences');
"""
Explanation: Again, as in all cases of sampling, the more samples that you take, the more consistent your resulting distribution looks:
End of explanation
"""
StockProbabilities = BinomialRandomVariable(5, 0.25)
plt.hist(StockProbabilities.draw(10000), bins = [0, 1, 2, 3, 4, 5, 6], align = 'left')
plt.xlabel('Value')
plt.ylabel('Occurrences');
"""
Explanation: Say that we changed our parameters so that $p = 0.25$. This makes it so that $P(X = 0) = 0.23730$, skewing our distribution much more towards lower values. We can see this easily in the following graph:
End of explanation
"""
class ContinuousRandomVariable:
def __init__(self, a = 0, b = 1):
self.variableType = ""
self.low = a
self.high = b
return
def draw(self, numberOfSamples):
samples = np.random.uniform(self.low, self.high, numberOfSamples)
return samples
"""
Explanation: Changing the value of $p$ from $0.50$ to $0.25$ clearly makes our distribution asymmetric. We can extend this idea of stock price moving with a binomial random variable into a framework that we call the Binomial Model of Stock Price Movement. This is used as one of the foundations for option pricing. In the Binomial Model, it is assumed that for any given time period a stock price can move up or down by a value determined by the up or down probabilities. This turns the stock price into a function of a binomial random variable, the magnitude of upward or downward movement, and the initial stock price. We can vary these parameters in order to approximate different stock price distributions.
Continuous Random Variables
Continuous random variables differ from discrete random variables in that continuous ones can take on infinitely many outcomes. They cannot be counted or described as a list. As such, it means very little when we assign individual probabilities to outcomes. Because there are infinitely many outcomes, the probability of hitting any individual outcome is $0$.
We can resolve this issue by instead taking probabilities across ranges of outcomes. This is managed by using calculus, though in order to use our sampling techniques here we do not actually have to use any. With a continuous random variable, $P(X = 0)$ is meaningless. Instead we would look for something more like $P(-1 < X < 1)$. For continuous random variables, rather than using a PMF, we define a probability density function (PDF), $f_X(x)$, such that we can say:
$$P(a < X < b) = \int_a^b f_X(x)dx$$
Similar to our requirement for discrete distributions that all probabilities add to $1$, here we require that:
$f_X(x) \geq 0$ for all values of $X$
$P(-\infty < X < \infty) = \int_{-\infty}^{\infty} f_X(x) dx = 1$
It is worth noting that because the probability at an individual point with a continuous distribution is $0$, the probability contributed by the endpoints of a range is $0$. Hence, $P(a \leq X \leq b) = P(a < X \leq b) = P(a \leq X < b) = P(a < X < b)$. If we integrate the PDF across all possibilities, over the total possible range, the value should be $1$.
End of explanation
"""
a = 0.0
b = 8.0
x = np.linspace(a, b, 100)
y = [1/(b-a) for i in x]
plt.plot(x, y)
plt.xlabel('Value')
plt.ylabel('Probability');
"""
Explanation: Uniform Distribution
The uniform distribution can also be defined within the framework of a continuous random variable. We take $a$ and $b$ to be constant, where $b$ is the highest possible value and $a$ is the lowest possible value that the outcome can obtain. Then the PDF of a uniform random variable is:
$$f(x) = \begin{cases}\frac{1}{b - a} & \text{for $a < x < b$} \\ 0 & \text{otherwise}\end{cases}$$
Since this function is defined on a continuous interval, the PDF covers all values between $a$ and $b$. Here we have a plot of the PDF (feel free to vary the values of $a$ and $b$):
End of explanation
"""
y = [(i - a)/(b - a) for i in x]
plt.plot(x, y)
plt.xlabel('Value')
plt.ylabel('Probability');
"""
Explanation: As before in the discrete uniform case, the continuous uniform distribution PDF is constant for all values the variable can take on. The only difference here is that we cannot take the probability for any individual point. The CDF, which we get from integrating the PDF is:
$$ F(x) = \begin{cases} 0 & \text{for $x \leq a$} \\ \frac{x - a}{b - a} & \text{for $a < x < b$} \\ 1 & \text{for $x \geq b$}\end{cases}$$
And is plotted on the same interval as the PDF as:
End of explanation
"""
class NormalRandomVariable(ContinuousRandomVariable):
def __init__(self, mean = 0, variance = 1):
ContinuousRandomVariable.__init__(self)
self.variableType = "Normal"
self.mean = mean
self.standardDeviation = np.sqrt(variance)
return
def draw(self, numberOfSamples):
samples = np.random.normal(self.mean, self.standardDeviation, numberOfSamples)
return samples
"""
Explanation: Normal Distribution
The normal distribution is a very common and important distribution in statistics. Many important tests and methods in statistics, and by extension, finance, are based on the assumption of normality. A large part of this is due to the results of the Central Limit Theorem (CLT), which states that the sum (or mean) of a large enough number of independent, identically distributed random variables is approximately normally distributed. The convenience of the normal distribution finds its way into certain algorithmic trading strategies as well. For example, as covered in the pairs trading notebook, we can search for stock pairs that are cointegrated, and bet on the direction the spread between them will change based on a normal distribution.
End of explanation
"""
mu_1 = 0
mu_2 = 0
sigma_1 = 1
sigma_2 = 2
x = np.linspace(-8, 8, 200)
y = (1/(sigma_1 * np.sqrt(2 * 3.14159))) * np.exp(-(x - mu_1)*(x - mu_1) / (2 * sigma_1 * sigma_1))
z = (1/(sigma_2 * np.sqrt(2 * 3.14159))) * np.exp(-(x - mu_2)*(x - mu_2) / (2 * sigma_2 * sigma_2))
plt.plot(x, y, x, z)
plt.xlabel('Value')
plt.ylabel('Probability');
"""
Explanation: When describing a normal random variable we only need to know its mean ($\mu$) and variance ($\sigma^2$, where $\sigma$ is the standard deviation). We denote a random variable $X$ as a normal random variable by saying $X$ ~ $N(\mu, \sigma^2)$. In modern portfolio theory, stock returns are generally assumed to follow a normal distribution. One major characteristic of a normal random variable is that a linear combination of two or more normal random variables is another normal random variable. This is useful for considering mean returns and variance of a portfolio of multiple stocks. Up until this point, we have only considered single variable, or univariate, probability distributions. When we want to describe multiple random variables at once, as in the case of observing multiple stocks, we can instead look at a multivariate distribution. A multivariate normal distribution is described entirely by the means of each variable, their variances, and the distinct correlations between each and every pair of variables. This is important when determining characteristics of portfolios because the variance of the overall portfolio depends on both the variances of its securities and the correlations between them.
The PDF of a normal random variable is:
$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x - \mu)^2}{2\sigma^2}}
$$
And is defined for $-\infty < x < \infty$. When we have $\mu = 0$ and $\sigma = 1$, we call this the standard normal distribution.
End of explanation
"""
n = 50
p = 0.25
X = BinomialRandomVariable(n, p)
X_samples = X.draw(10000)
Z_samples = (X_samples - n * p) / np.sqrt(n * p * (1 - p))
plt.hist(X_samples, bins = range(0, n + 2), align = 'left')
plt.xlabel('Value')
plt.ylabel('Probability');
plt.hist(Z_samples, bins=20)
plt.xlabel('Value')
plt.ylabel('Probability');
"""
Explanation: By changing the mean and standard deviation of the normal distribution, we can change the depth and width of the bell curve. With a larger standard deviation, the values of the distribution are less concentrated around the mean.
Rather than using the normal distribution to model stock prices, we use it to model returns. Stock prices cannot go below $0$ while the normal distribution can take on all values on the real line, making it better suited to returns. Given the mean and variance of a normal distribution, we can make the following statements:
Around $68\%$ of all observations fall within one standard deviation of the mean ($\mu \pm \sigma$)
Around $95\%$ of all observations fall within two standard deviations of the mean ($\mu \pm 2\sigma$)
Around $99.7\%$ of all observations fall within three standard deviations of the mean ($\mu \pm 3\sigma$)
These values are important for understanding confidence intervals as they relate to the normal distribution. When considering the mean and variance of a sample distribution, we like to look at different confidence intervals around the mean.
Using the central limit theorem, we can standardize different random variables so that they become normal random variables. A very common tool in statistics is a standard normal probability table, used for looking up the values of the standard normal CDF for given values of $x$. By changing random variables into a standard normal we can simply check these tables for probability values. We standardize a random variable $X$ by subtracting the mean and dividing by the standard deviation, resulting in the standard normal random variable $Z$.
$$
Z = \frac{X - \mu}{\sigma}
$$
Let's look at the case where $X$ ~ $B(n, p)$ is a binomial random variable. In the case of a binomial random variable, the mean is $\mu = np$ and the variance is $\sigma^2 = np(1 - p)$.
End of explanation
"""
Y_initial = 100
X = NormalRandomVariable(0, 1)
Y_returns = X.draw(100) # generate 100 daily returns
Y = pd.Series(np.cumsum(Y_returns), name = 'Y') + Y_initial
Y.plot()
plt.xlabel('Time')
plt.ylabel('Value');
"""
Explanation: The idea that we can standardize random variables is very important. By changing a random variable to a distribution that we are more familiar with, the standard normal distribution, we can easily answer any probability questions that we have about the original variable. This is dependent, however, on having a large enough sample size.
Let's assume that stock returns are normally distributed. Say that $Y$ is the price of a stock. We will simulate its returns and plot it.
End of explanation
"""
Z_initial = 50
Z_returns = X.draw(100)
Z = pd.Series(np.cumsum(Z_returns), name = 'Z') + Z_initial
Z.plot()
plt.xlabel('Time')
plt.ylabel('Value');
"""
Explanation: Say that we have some other stock, $Z$, and that we have a portfolio of $Y$ and $Z$, called $W$.
End of explanation
"""
Y_quantity = 20
Z_quantity = 50
Y_weight = Y_quantity/(Y_quantity + Z_quantity)
Z_weight = 1 - Y_weight
W_initial = Y_weight * Y_initial + Z_weight * Z_initial
W_returns = Y_weight * Y_returns + Z_weight * Z_returns
W = pd.Series(np.cumsum(W_returns), name = 'Portfolio') + W_initial
W.plot()
plt.xlabel('Time')
plt.ylabel('Value');
pd.concat([Y, Z, W], axis = 1).plot()
plt.xlabel('Time')
plt.ylabel('Value');
"""
Explanation: We construct $W$ by taking a weighted average of $Y$ and $Z$ based on their quantity.
End of explanation
"""
plt.hist(W_returns);
plt.xlabel('Return')
plt.ylabel('Occurrences');
"""
Explanation: Note how the returns of our portfolio, $W$, are also normally distributed
End of explanation
"""
start = '2015-01-01'
end = '2016-01-01'
prices = get_pricing('TSLA', fields=['price'], start_date=start, end_date=end)
# Take the daily returns
returns = prices.pct_change()[1:]
#Set a cutoff
cutoff = 0.01
# Get the p-value of the JB test
_, p_value, skewness, kurtosis = stattools.jarque_bera(returns)
print "The JB test p-value is: ", p_value
print "We reject the hypothesis that the data are normally distributed ", p_value < cutoff
print "The skewness of the returns is: ", skewness
print "The kurtosis of the returns is: ", kurtosis
plt.hist(returns.price, bins = 20)
plt.xlabel('Value')
plt.ylabel('Occurrences');
"""
Explanation: The normal distribution is very widely utilized in finance especially in risk and portfolio theory. Extensive literature can be found utilizing the normal distribution for purposes ranging from risk analysis to stock price modeling.
Fitting a Distribution
Now we will attempt to fit a probability distribution to the returns of a stock. We will take the returns of Tesla and try to fit a normal distribution to them. The first thing to check is whether the returns actually exhibit properties of a normal distribution. For this purpose, we will use the Jarque-Bera test, which indicates non-normality if the p-value is below a cutoff.
End of explanation
"""
# Take the sample mean and standard deviation of the returns
sample_mean = np.mean(returns.price)
sample_std_dev = np.std(returns.price)
"""
Explanation: The low p-value of the JB test leads us to reject the null hypothesis that the returns are normally distributed. This is due to the high kurtosis (normal distributions have a kurtosis of $3$).
We will proceed from here assuming that the returns are normally distributed so that we can go through the steps of fitting a distribution. Next we calculate the sample mean and standard deviation of the series.
End of explanation
"""
x = np.linspace(-(sample_mean + 4 * sample_std_dev), (sample_mean + 4 * sample_std_dev), len(returns))
sample_distribution = ((1/(sample_std_dev * np.sqrt(2 * np.pi))) *
                       np.exp(-(x - sample_mean)*(x - sample_mean) / (2 * sample_std_dev * sample_std_dev)))
plt.hist(returns.price, bins = 20, normed = True);
plt.plot(x, sample_distribution)
plt.xlabel('Value')
plt.ylabel('Occurrences');
"""
Explanation: Now let's see how a theoretical normal curve fits against the actual values.
End of explanation
"""
|
GoogleCloudPlatform/ml-design-patterns
|
03_problem_representation/neutral.ipynb
|
apache-2.0
|
import numpy as np
import pandas as pd
def create_synthetic_dataset(N, shuffle):
# random array
prescription = np.full(N, fill_value='acetominophen', dtype='U20')
prescription[:N//2] = 'ibuprofen'
np.random.shuffle(prescription)
# neutral class
p_neutral = np.full(N, fill_value='Neutral', dtype='U20')
# 10% is patients with history of liver disease
jaundice = np.zeros(N, dtype=bool)
jaundice[0:N//10] = True
prescription[0:N//10] = 'ibuprofen'
p_neutral[0:N//10] = 'ibuprofen'
# 10% is patients with history of stomach problems
ulcers = np.zeros(N, dtype=bool)
ulcers[(9*N)//10:] = True
prescription[(9*N)//10:] = 'acetominophen'
p_neutral[(9*N)//10:] = 'acetominophen'
df = pd.DataFrame.from_dict({
'jaundice': jaundice,
'ulcers': ulcers,
'prescription': prescription,
'prescription_with_neutral': p_neutral
})
if shuffle:
return df.sample(frac=1).reset_index(drop=True)
else:
return df
create_synthetic_dataset(10, False)
df = create_synthetic_dataset(1000, shuffle=True)
from sklearn import linear_model
for label in ['prescription', 'prescription_with_neutral']:
ntrain = 8*len(df)//10 # 80% of data for training
lm = linear_model.LogisticRegression()
lm = lm.fit(df.loc[:ntrain-1, ['jaundice', 'ulcers']], df[label][:ntrain])
acc = lm.score(df.loc[ntrain:, ['jaundice', 'ulcers']], df[label][ntrain:])
print('label={} accuracy={}'.format(label, acc))
"""
Explanation: Neutral Class Design Pattern
This notebook demonstrates on a synthetic dataset that creating a separate Neutral class can be helpful.
It then carries the idea over to a real-world problem.
On synthetic dataset
Patients with a history of jaundice will be assumed to be at risk of liver damage and prescribed ibuprofen while patients with a history of stomach ulcers will be prescribed acetaminophen. The remaining patients will be arbitrarily assigned to either category.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL mlpatterns.neutral_2classes
OPTIONS(model_type='logistic_reg', input_label_cols=['health']) AS
SELECT
IF(apgar_1min >= 9, 'Healthy', 'NeedsAttention') AS health,
plurality,
mother_age,
gestation_weeks,
ever_born
FROM `bigquery-public-data.samples.natality`
WHERE apgar_1min <= 10
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL mlpatterns.neutral_2classes)
"""
Explanation: On the Natality data
Let's do this on real data.
A baby with an Apgar score of 10 is healthy and one with an Apgar score of <= 7 requires some medical attention.
What about babies with scores of 8-9? They are neither perfectly healthy, nor do they need serious medical intervention.
Let's see how the model does with a 2-class model and with a 3-class model that includes a Neutral class.
First, without the Neutral class
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL mlpatterns.neutral_3classes
OPTIONS(model_type='logistic_reg', input_label_cols=['health']) AS
SELECT
IF(apgar_1min = 10, 'Healthy', IF(apgar_1min >= 8, 'Neutral', 'NeedsAttention')) AS health,
plurality,
mother_age,
gestation_weeks,
ever_born
FROM `bigquery-public-data.samples.natality`
WHERE apgar_1min <= 10
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL mlpatterns.neutral_3classes)
"""
Explanation: With 3 classes (including a neutral class)
End of explanation
"""
|
pligor/predicting-future-product-prices
|
02_preprocessing/exploration03-price_history_standardization.ipynb
|
agpl-3.0
|
stds_threshold = std*3
stds_threshold
min(df_norm_prices.iloc[0])
"""
Explanation: Trust only up to three standard deviations.
This is expected: a difference of roughly 75 euros from the original price is about the largest move we would normally see as customers.
End of explanation
"""
keep_inds = [ii for ii in range(len(df_norm_prices))
if max(df_norm_prices.iloc[ii]) < stds_threshold and
-stds_threshold < min(df_norm_prices.iloc[ii])
]
len(keep_inds)
"""
Explanation: This trims the series according to their minimum and maximum values.
End of explanation
"""
keep_inds = [ii for ii in range(len(df_norm_prices))
if np.all(
np.absolute(df_norm_prices.iloc[ii][1:] -
np.roll(df_norm_prices.iloc[ii], 1)[1:]) < stds_threshold
)
]
len(keep_inds)
"""
Explanation: Another idea is to trim according to the rate of change between consecutive days.
End of explanation
"""
df_seq_start_trimmed = df_with_seq_start.iloc[keep_inds]
df_seq_start_trimmed.shape
flatvals2 = df_norm_prices.iloc[keep_inds].values.flatten()
len(flatvals2)
plt.figure(figsize=(15,6))
sns.distplot(flatvals2)
plt.show()
csv_path = "../price_history_03_seq_start_suddens_trimmed.csv"
df_seq_start_trimmed.to_csv(csv_path, encoding='utf-8', quoting=csv.QUOTE_ALL)
"""
Explanation: And indeed, series changing by more than 75 euros per day are outliers.
TODO: later we could filter these when choosing windows directly, because for now we discard entire time series.
End of explanation
"""
|
j-coll/opencga
|
opencga-client/src/main/python/notebooks/pyopencga_basic_notebook_003-variants.ipynb
|
apache-2.0
|
# Initialize PYTHONPATH for pyopencga
import sys
import os
from pprint import pprint
cwd = os.getcwd()
print("current_dir: ...."+cwd[-10:])
base_modules_dir = os.path.dirname(cwd)
print("base_modules_dir: ...."+base_modules_dir[-10:])
sys.path.append(base_modules_dir)
from pyopencga.opencga_config import ConfigClient
from pyopencga.opencga_client import OpenCGAClient
import json
"""
Explanation: pyOpenCGA basic variant and interpretation usage
[NOTE] The server methods used by pyopencga client are defined in the following swagger URL:
- http://bioinfodev.hpc.cam.ac.uk/opencga-test/webservices
[NOTE] Current implemented methods are registered at the following spreadsheet:
- https://docs.google.com/spreadsheets/d/1QpU9yl3UTneqwRqFX_WAqCiCfZBk5eU-4E3K-WVvuoc/edit?usp=sharing
Loading pyOpenCGA
End of explanation
"""
## Reading user config/credentials to connect to server
user_config_json = "./__user_config.json"
with open(user_config_json,"r") as f:
user_credentials = json.loads(f.read())
print('User: {}***'.format(user_credentials["user"][:3]))
user = user_credentials["user"]
passwd = user_credentials["pwd"]
"""
Explanation: Setting credentials for LogIn
Credentials
Please add the credentials for the opencga login into a file in JSON format and read them from there.
i.e.:
file: __user_config.json
file_content: {"user":"xxx","pwd":"yyy"}
End of explanation
"""
## Creating ConfigClient
host = 'http://bioinfodev.hpc.cam.ac.uk/opencga-test'
cc = ConfigClient()
config_dict = cc.get_basic_config_dict(host)
print("Config information:\n",config_dict)
"""
Explanation: Creating ConfigClient for server connection configuration
End of explanation
"""
oc = OpenCGAClient(configuration=config_dict,
user=user,
pwd=passwd)
## Getting the session id / token
token = oc.session_id
print("Session token:\n{}...".format(token[:10]))
oc = OpenCGAClient(configuration=config_dict,
session_id=token)
"""
Explanation: LogIn with user credentials
End of explanation
"""
|
RaoUmer/lightning-example-notebooks
|
images/image-poly.ipynb
|
mit
|
from lightning import Lightning
from sklearn import datasets
"""
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> Image polygon plots in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
"""
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
"""
Explanation: Connect to server
End of explanation
"""
imgs = datasets.load_sample_images().images
viz = lgn.imagepoly(imgs[0])
viz
"""
Explanation: <hr> Region drawing
The image-poly visualization lets you draw polygonal regions on images and then query them in the same notebook!
<br>
Try drawing a region on the image. Hold command to pan, and option to edit the region.
<br>
Note that we assign the visualization to an output variable, so that we can query it later, but we must print that output to get the image to show.
End of explanation
"""
p = viz.polygons()
lgn.imagepoly(imgs[0], polygons=p)
"""
Explanation: Draw some regions on the image above. Then check the value of viz.coords.
End of explanation
"""
|
sanabasangare/data-visualization
|
fin_MPT.ipynb
|
mit
|
import numpy as np
import pandas as pd
from pandas_datareader import data as web
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
import warnings; warnings.simplefilter('ignore')
"""
Explanation: Modern Portfolio Theory (MPT) analysis with python
Modern portfolio theory (MPT), also known as Mean-Variance Portfolio Theory (MVP), is a mathematical framework introduced by Harry Markowitz in a 1952 essay, work for which he was later awarded a Nobel Prize in economics. Wikipedia entry
Necessary Imports
Import the required modules/packages.
End of explanation
"""
symbols = ['AAPL', 'AMZN', 'GOOG', 'IBM', 'MSFT'] # stock symbols
data = pd.DataFrame() # empty DataFrame
for sym in symbols:
data[sym] = web.DataReader(sym, data_source='google')['Close']
"""
Explanation: Retrieving Stock Price Data
Here, I'm retrieving stock price data to build a portfolio of tech companies.
End of explanation
"""
data.columns
"""
Explanation: Print the columns in the Dataframe
End of explanation
"""
data.tail() # the final five rows
"""
Explanation: Display the final five rows of the DataFrame
End of explanation
"""
(data / data.ix[0] * 100).plot(figsize=(20, 10));
"""
Explanation: A graphical comparison of the time series data, normalized to a starting value of 100.
End of explanation
"""
log_rets = np.log(data / data.shift(1))
"""
Explanation: Portfolio Returns
To calculate a portfolio return, let's compute the annualized returns of the stocks based on the log returns for the respective time series.
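Concretely, for a price series $P_t$ the daily log return is
$$r_t = \ln\left(\frac{P_t}{P_{t-1}}\right)$$
and annualizing follows the common convention of multiplying the mean daily log return by 252 trading days.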
vectorized calculation of the log returns
End of explanation
"""
rets = log_rets.mean() * 252
rets
"""
Explanation: Annualized average log returns
End of explanation
"""
weights = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
"""
Explanation: A portfolio can be represented by (normalized) weights for the individual stocks; here we use an equal weighting scheme
The equal weightings
End of explanation
"""
np.dot(weights, rets)
"""
Explanation: portfolio return (equal weights)
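With weight vector $w$ and annualized mean returns $\mu$, the expected portfolio return is the weighted sum
$$\mu_p = \sum_i w_i \mu_i = w^T \mu,$$
which is exactly what the np.dot call computes.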
End of explanation
"""
log_rets.cov() * 252
"""
Explanation: Portfolio Variance
The annualized covariance matrix can be calculated in Python like this:
End of explanation
"""
pvar = np.dot(weights.T, np.dot(log_rets.cov() * 252, weights))
pvar
"""
Explanation: Calculating the portfolio variance with Numpy
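With the annualized covariance matrix $\Sigma$ and weight vector $w$, the portfolio variance is the quadratic form
$$\sigma_p^2 = w^T \Sigma w,$$
which is what the nested np.dot expression evaluates.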
End of explanation
"""
pvol = pvar ** 0.5
pvol
"""
Explanation: The portfolio volatility, in this case, is:
End of explanation
"""
# random numbers
weights = np.random.random(5)
weights /= np.sum(weights)
# generated portfolio composition
weights
"""
Explanation: Random Portfolio Compositions
First, generate a random portfolio composition before calculating the portfolio return and variance.
End of explanation
"""
np.dot(weights, rets)
"""
Explanation: portfolio return (random weights)
End of explanation
"""
np.dot(weights.T, np.dot(log_rets.cov() * 252, weights))
"""
Explanation: portfolio variance (random weights)
End of explanation
"""
%%time
prets = []
pvols = []
for p in xrange(5000):
weights = np.random.random(5)
weights /= np.sum(weights)
prets.append(np.sum(log_rets.mean() * weights) * 252)
pvols.append(np.sqrt(np.dot(weights.T,
np.dot(log_rets.cov() * 252, weights))))
prets = np.array(prets)
pvols = np.array(pvols)
portfolio = pd.DataFrame({'return': prets, 'volatility': pvols})
"""
Explanation: A Monte Carlo simulation over random weight vectors is used to collect the resulting portfolio returns and volatilities.
End of explanation
"""
portfolio.plot(x='volatility', y='return', kind='scatter', figsize=(12, 8));
"""
Explanation: The results allow for an insightful visualization that can show the area of the minimum variance portfolio as well as the efficient frontier.
End of explanation
"""
|
thunder-project/thunder-docs
|
tutorials/registration.ipynb
|
mit
|
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
from showit import image, tile
sns.set_style('darkgrid')
sns.set_context('notebook')
import thunder as td
"""
Explanation: Image registration
A common problem when working with collections of images is registering or aligning them, relative to a reference. The thunder-registration package implements a set of registration algorithms all exposed through a common API. These algorithms support parallelization through Spark, but can also be run locally on numpy arrays. Here, we generate example data for performing registration, apply a registration algorithm, and validate the results.
Setup imports
End of explanation
"""
data = td.images.fromexample('mouse')
data
"""
Explanation: Generating data
We will use a toy example dataset to test registration algorithms. These data do not actually have any motion, so to test the algorithms, we will induce fake motion. First we'll load and inspect the data.
End of explanation
"""
from numpy import random
from scipy.ndimage.filters import gaussian_filter
t = 20
dx = gaussian_filter(random.randn(t), 1.5) * 10
dy = gaussian_filter(random.randn(t), 1.5) * 10
plt.plot(dx);
plt.plot(dy);
"""
Explanation: There are 500 images (corresponding to 500 time points), and the data are two-dimensional, so we'll want to generate 500 random shifts in x and y. We'll use smoothing functions from scipy to make sure the drift varies slowly over time, which will be easier to look at.
End of explanation
"""
from scipy.ndimage import shift
shifted = data.map(lambda (k, v): shift(v, (dx[k], dy[k]), mode='nearest', order=0), with_keys=True)
"""
Explanation: Now let's use these drifts to shift the data. We'll use the map method on our data, which applies an arbitrary function to each record; in this case, the function shifts each image by an amount given by the corresponding entry in our list of shifts.
End of explanation
"""
im1 = data[0].toarray()
im2 = shifted[0].toarray()
tile([im1, im2, im1-im2], clim=[(0,300), (0,300), (-300,300)], grid=(1,3), size=14);
"""
Explanation: Look at the first entry of both the original images and the shifted images, and their difference
End of explanation
"""
tile([data.mean(), shifted.mean()], size=14);
"""
Explanation: It's also useful to look at the mean of the raw images and the shifted images, the mean of the shifted images should be much more blurry!
End of explanation
"""
from registration import CrossCorr
algorithm = CrossCorr()
"""
Explanation: Registration
To run registration, first we create a registration method by importing the algorithm CrossCorr
End of explanation
"""
reference = shifted.mean().toarray()
image(reference);
"""
Explanation: This method computes a cross-correlation between every image and a reference. First, we'll compute a reference using the mean of the images.
End of explanation
"""
model = algorithm.fit(shifted, reference=reference)
"""
Explanation: Now we take the registration algorithm and fit it to the shifted data, returning a fitted RegistrationModel
End of explanation
"""
model
"""
Explanation: Inspect the model
End of explanation
"""
model.transformations[(0,)]
"""
Explanation: The model is a dictionary mapping tuple indices to transformations. You can inspect them:
End of explanation
"""
clrs = sns.color_palette('deep')
plt.plot(model.toarray()[:,0], color=clrs[0])
plt.plot(dx, '--', color=clrs[0])
plt.plot(model.toarray()[:,1], color=clrs[1])
plt.plot(dy, '--', color=clrs[1]);
"""
Explanation: You can also convert the full collection of transformations into an array, which is useful for plotting. Here we'll plot the estimated transformations relative to the ground truth (as dashed lines), they should be fairly similar.
End of explanation
"""
reference = data.mean().toarray()
model = algorithm.fit(shifted, reference=reference)
"""
Explanation: Note that, while following a similar pattern as the ground truth, the estimates are not correct. That's because we didn't use the true reference to estimate the displacements, but rather the mean of the displaced data, which biases the estimated displacements. To see that we get the exact displacements back, let's compute a reference from the original, unshifted data.
End of explanation
"""
plt.plot(model.toarray()[:,0], color=clrs[0])
plt.plot(dx, '--', color=clrs[0])
plt.plot(model.toarray()[:,1], color=clrs[1])
plt.plot(dy, '--', color=clrs[1]);
"""
Explanation: Now the estimates should be exact (up to rounding error)! But note that this is sort of cheating, because in general we don't know the reference exactly.
End of explanation
"""
corrected = model.transform(shifted)
"""
Explanation: We can now use our model to transform a set of images, which applies the estimated transformations. The API design makes it easy to apply the transformations to the dataset we used to estimate the transformations, or a different one. We'll use the model we just estimated, which used the true reference, because it will be easy to see that it did the right thing.
End of explanation
"""
im1 = data[0].toarray()
im2 = corrected[0].toarray()
tile([im1, im2, im1-im2], clim=[(0,300), (0,300), (-300,300)], grid=(1,3), size=14);
"""
Explanation: Let's again look at the first image from the original and corrected, and their difference. Whereas before they were different, now they should be the same, except for minor differences near the boundaries (where the image has been replaced with its nearest neighbors).
End of explanation
"""
tile([shifted.mean(), corrected.mean()], size=14);
"""
Explanation: As a final check on the registration, we can compare the mean of the shifted data and the mean of the registered data. The latter should be much sharper.
End of explanation
"""
|
basp/aya
|
.ipynb_checkpoints/noise_old-checkpoint.ipynb
|
mit
|
v0 = 2
v1 = 5
plt.plot([0, 1], [v0, v1], '--')
t = 1.0 / 3
vt = noise.lerp(v0, v1, t)
plt.plot(t, vt, 'ro')
"""
Explanation: linear interpolation
We need a function ${f}$ that, given values ${v_0}$ and ${v_1}$ and some parameter ${t}$ where $0 \le {t} \le 1$, returns an interpolated value between ${v_0}$ and ${v_1}$.
The best way to start is with linear interpolation and that's what the lerp function does.
Let's assume we have two values ${v_0}$ and ${v_1}$:
End of explanation
"""
x = np.linspace(0, 1.0)
y1 = noise.ss3(x)
y2 = noise.ss5(x)
plt.plot(x, y1, label="smooth")
plt.plot(x, y2, label="smoother")
plt.legend(loc=2)
"""
Explanation: smoothstep
End of explanation
"""
class Vector:
def __init__(self, *components):
self.components = np.array(components)
def mag(self):
return np.sqrt(sum(self.components**2))
def __len__(self):
return len(self.components)
def __iter__(self):
for c in self.components:
yield c
"""
Explanation: vectors
End of explanation
"""
np.random.ranf((2,3,2,2)) # seed in n-dimensions
"""
Explanation: seeding
End of explanation
"""
c4 = noise.Field(d=(8,8,8,8), seed = 5)
"""
Explanation: noise field
For instance, to create a hypercube of noise we could do something like this:
End of explanation
"""
q = np.arange(0, 8)
x = [c4(x, 0, 0, 0) for x in q]
y = [c4(0, y, 0, 0) for y in q]
plt.plot(q, x, 'bo')
plt.plot(q, y, 'ro')
"""
Explanation: We can plot any course through this field, for example:
End of explanation
"""
# a one-dimensional noise field of 8 samples
c1 = noise.Field(d=(8,))
x = np.linspace(0, 7, 8)
y = [c1(x) for x in x]
# this will use matplotlib interpolation and not ours
plt.plot(x, y)
"""
Explanation: We could render a graph but that would be like cheating. We would be using the matplotlib linear interpolation instead of our own:
End of explanation
"""
x = np.linspace(0, 1.0, 100)
y = noise.ss3(x)
plt.plot(x, y)
"""
Explanation: We can do better though by using one of the smoothstep functions. Instead of calculating ${v_t}$ directly we can do some tricks on ${t}$ to modify the outcome.
For convenience let's start with the ss3 function and plot it so we know what it looks like:
End of explanation
"""
samples = 32
gen = noise.Field(d=(samples,))
def noise1(x, curve = lambda x: x):
xi = int(x)
xmin = xi % samples
xmax = 0 if xmin == (samples - 1) else xmin + 1
t = x - xi
return noise.lerp(gen(xmin), gen(xmax), curve(t))
x = np.linspace(0, 10, 100)
y1 = [noise1(x) for x in x]
y2 = [noise1(x, noise.ss5) for x in x]
plt.plot(x, y1, '--')
plt.plot(x, y2)
"""
Explanation: Now we set up a noise field and define a helper function noise1 in order to get our coherent noise.
End of explanation
"""
x = np.linspace(0, 4, 100)
y1 = [perlin.noise2d(x, 0) for x in x]
y2 = [0.5 * perlin.noise2d(x * 2, 0) for x in x]
y3 = [0.25 * perlin.noise2d(x * 4, 0) for x in x]
y4 = [0.125 * perlin.noise2d(x * 8, 0) for x in x]
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.plot(x, y4)
x = np.linspace(0, 10, 100)
y = [perlin.fbm(x, 0) for x in x]
plt.plot(x, y)
"""
Explanation: perlin noise
End of explanation
"""
|
Condla/notebooks
|
IPythonMachineLearningIntro.ipynb
|
gpl-2.0
|
%matplotlib inline
from sklearn import datasets
from sklearn import linear_model
from sklearn import cross_validation
import matplotlib.pyplot as plt
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Introduction: Python + Machine Learning
This IPython notebook is public, can be used freely and was created only for demonstration purposes.
Please report any error or mistake you encounter and contact me, e.g., on <a href="https://plus.google.com/+StefanDunkler/">Google Plus</a>, where you can also ask questions.
What this notebook does:
* It takes the widely used iris dataset.
* It divides the dataset into random samples of test and training data
* It fits the data to a simple linear model
* It predicts the outcomes of the test data and calculates the score
* It iterates through the different parameters and does above things for every combination of parameters
We take the following modules from the scikit-learn package: (http://scikit-learn.org/stable/)
* We need a dataset: We take one of the data sets that comes with scikit-learn
* We need to model our data set: linear_model
* We need to cross validate our prediction: cross_validation
We need a plotting library: matplotlib is the python package for beautiful data visualization. (http://matplotlib.org/)
We need a data analysis library: Pandas (http://pandas.pydata.org/)
The following code imports all the necessary python modules for this demonstration:
End of explanation
"""
iris = datasets.load_iris()
columns = iris.feature_names
dataframe= pd.DataFrame(iris.data, columns=columns)
dataframe['name'] = (iris.target)
data = dataframe[columns][dataframe["name"] != 1]
target = dataframe["name"][dataframe["name"] != 1]
dataframe
"""
Explanation: The next part is where we load the iris dataset and prepare it in a pandas data frame. Also, we print it in order to get familiar with its shape and get a feeling for its content. We see that there are three types of irises: 0, 1, 2. In this example we want only two possible outcomes. Therefore we remove type 1 from our data set.
End of explanation
"""
def learn_it(parameter1, parameter2):
'''
Here, the scikit-learn magic happens. A simple 2 dimensional model is defined
and a fit on a training subset is performed. It is scored to the training subset,
in order to determine how well the model has performed.
Also, the slope and intercept of the line from the fit result is returned.
'''
X, X_test, y, y_test = cross_validation.train_test_split(
data[[parameter1, parameter2]],
target,
test_size=0.1,
random_state=0)
model = linear_model.LogisticRegression()
model.fit(X, y)
y_prediction = model.predict(X_test)
slope = -model.coef_[0][0]/model.coef_[0][1]
intercept = -model.intercept_/model.coef_[0][1]
score = model.score(X_test, y_test)
return (y_prediction, score, slope, intercept)
"""
Explanation: These few lines of code take care of the machine learning part:
* For reusability they are embedded in a python function.
* Below the function definition is the doc string which explains what the code does.
End of explanation
"""
def plot_it(parameter1, parameter2, slope, intercept, score):
'''
Plot the data!
'''
plt.figure(figsize=(6,4))
x_values1 = data[dataframe["name"] == 0][parameter1]
y_values1 = data[dataframe["name"] == 0][parameter2]
x_values2 = data[dataframe["name"] == 2][parameter1]
y_values2 = data[dataframe["name"] == 2][parameter2]
x_min = min(x_values1.min(), x_values2.min())
x_max = max(x_values1.max(), x_values2.max())
# y_min = min(y_values1.min(), y_values2.min())
# y_max = max(y_values1.max(), y_values2.max())
plt.plot(x_values1, y_values1, 'co') # plot data of a certain type of iris
plt.plot(x_values2, y_values2 ,'mv') # plot data of another type of iris
plt.plot([x_min, x_max], [slope*x_min+intercept, slope*x_max+intercept]) # plot the discriminating line
plt.legend([parameter1, parameter2, "model"], loc="lower right")
plt.title("Iris Discriminator: Types:" + str(0) + " and " + str(2))
facecolor = 'green' # green indicator
if score < 0.5:
facecolor = 'red' # if score is low: red indicator
plt.text(0.1, 0.9, "Score: " + str(score), horizontalalignment='center',
verticalalignment='center',
transform=plt.axes().transAxes,
bbox=dict(facecolor=facecolor, alpha=0.3))
plt.xlabel(parameter1)
plt.ylabel(parameter2)
plt.tight_layout()
plt.show()
"""
Explanation: Here is another function, that does all the plotting:
* First we prepare the data.
* Then we plot the 0 irises with cyan dots ('co').
* We plot the "2" irises with magenta triangles ('mv').
* We plot the line from the linear fit.
End of explanation
"""
def do_it(): # main function
for parameter1 in columns:
for parameter2 in columns:
(y_prediction, score, slope, intercept) = learn_it(parameter1, parameter2)
plot_it(parameter1, parameter2, slope, intercept, score)
# actually do it:
do_it()
"""
Explanation: In the following function, we iterate over all different combinations that 4 parameters allow in two dimensions and call the learn_it function above to apply the model and plot the data afterwards.
Then we call the function and see the (honestly not as beautifully tweaked as I wanted) plots visualizing the results.
End of explanation
"""
|
pycrystem/pycrystem
|
doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb
|
gpl-3.0
|
%matplotlib inline
import numpy as np
import diffpy.structure
import pyxem as pxm
import hyperspy.api as hs
accelarating_voltage = 200 # kV
camera_length = 0.2 # m
diffraction_calibration = 0.032 # px / Angstrom
"""
Explanation: Phase/Orientation Mapping
This tutorial demonstrates how to achieve phase and orientation mapping via scanning electron diffraction using both pattern and vector matching.
The data was acquired from a GaAs nanowire displaying polymorphism between zinc blende and wurtzite structures.
This functionality has been checked to run in pyxem-0.13.0 (Feb 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues
<a href='#loa'> Load & Inspect Data</a>
<a href='#pre'> Pre-processing</a>
<a href='#tem'> Template matching</a>
<a href='#tema'> [Build Template Library]</a>
<a href='#temb'>[Indexing]</a>
<a href='#vec'> Vector Matching</a>
<a href='#veca'> [Build Vector Library]</a>
<a href='#vecb'>[Indexing Vectors]</a>
Import pyxem and other required libraries
End of explanation
"""
dp = hs.load('./data/02/polymorphic_nanowire.hdf5')
dp
"""
Explanation: <a id='loa'></a>
1. Loading and Inspection
Load the demo data
End of explanation
"""
dp.data = dp.data.astype('float64')
dp.data *= 1 / dp.data.max()
"""
Explanation: Set data type, scale intensity range and set calibration
End of explanation
"""
dp.metadata
"""
Explanation: Inspect metadata
End of explanation
"""
roi = hs.roi.CircleROI(cx=72, cy=72, r_inner=0, r=2)
dp.plot_integrated_intensity(roi=roi, cmap='viridis')
"""
Explanation: Plot an interactive virtual image to inspect data
End of explanation
"""
scale_x = 0.995
scale_y = 1.031
offset_x = 0.631
offset_y = -0.351
dp.apply_affine_transformation(np.array([[scale_x, 0, offset_x],
[0, scale_y, offset_y],
[0, 0, 1]]))
"""
Explanation: <a id='pre'></a>
2. Pre-processing
Apply affine transformation to correct for off axis camera geometry
End of explanation
"""
from pyxem.utils.expt_utils import investigate_dog_background_removal_interactive
dp_test_area = dp.inav[0, 0]
gauss_stddev_maxs = np.arange(2, 12, 0.2) # min, max, step
gauss_stddev_mins = np.arange(1, 4, 0.2) # min, max, step
investigate_dog_background_removal_interactive(dp_test_area,
gauss_stddev_maxs,
gauss_stddev_mins)
"""
Explanation: Perform difference of gaussian background subtraction with various parameters on one selected diffraction pattern and plot to identify good parameters
End of explanation
"""
dp = dp.subtract_diffraction_background('difference of gaussians',
min_sigma=2, max_sigma=8,
lazy_result=False)
"""
Explanation: Remove background using difference of gaussians method with parameters identified above
End of explanation
"""
dp.data -= dp.data.min()
dp.data *= 1 / dp.data.max()
"""
Explanation: Perform further adjustments to the data ranges
End of explanation
"""
dp = pxm.signals.ElectronDiffraction2D(dp) #this is needed because of a bug in the code
dp.set_diffraction_calibration(diffraction_calibration)
dp.set_scan_calibration(10)
"""
Explanation: Set diffraction calibration and scan calibration
End of explanation
"""
from diffsims.libraries.structure_library import StructureLibrary
from diffsims.generators.diffraction_generator import DiffractionGenerator
from diffsims.generators.library_generator import DiffractionLibraryGenerator
from diffsims.generators.zap_map_generator import get_rotation_from_z_to_direction
from diffsims.generators.rotation_list_generators import get_grid_around_beam_direction
from pyxem.generators.indexation_generator import TemplateIndexationGenerator
"""
Explanation: <a id='tem'></a>
3. Pattern Matching
Pattern matching generates a database of simulated diffraction patterns and then compares all simulated patterns against each experimental pattern to find the best match
Import generators required for simulation and indexation
End of explanation
"""
structure_zb = diffpy.structure.loadStructure('./data/02/GaAs_mp-2534_conventional_standard.cif')
structure_wz = diffpy.structure.loadStructure('./data/02/GaAs_mp-8883_conventional_standard.cif')
"""
Explanation: 3.1. Define Library of Structures & Orientations
Define the crystal phases to be included in the simulated library
End of explanation
"""
za110c = get_rotation_from_z_to_direction(structure_zb, [1,1,0])
rot_list_cubic = get_grid_around_beam_direction(beam_rotation=za110c, resolution=1, angular_range=(0,180))
za110h = get_rotation_from_z_to_direction(structure_wz, [1,1,0])
rot_list_hex = get_grid_around_beam_direction(beam_rotation=za110h, resolution=1, angular_range=(0,180))
"""
Explanation: Create a basic rotations list.
End of explanation
"""
struc_lib = StructureLibrary(['ZB','WZ'],
[structure_zb,structure_wz],
[rot_list_cubic,rot_list_hex])
"""
Explanation: Construct a StructureLibrary defining crystal structures and orientations for which diffraction will be simulated
End of explanation
"""
diff_gen = DiffractionGenerator(accelerating_voltage=accelarating_voltage)
"""
Explanation: <a id='temb'></a>
3.2. Simulate Diffraction for all Structures & Orientations
Define a diffsims DiffractionGenerator with diffraction simulation parameters
End of explanation
"""
lib_gen = DiffractionLibraryGenerator(diff_gen)
"""
Explanation: Initialize a diffsims DiffractionLibraryGenerator
End of explanation
"""
target_pattern_dimension_pixels = dp.axes_manager.signal_shape[0]
half_size = target_pattern_dimension_pixels // 2
reciprocal_radius = diffraction_calibration*(half_size - 1)
diff_lib = lib_gen.get_diffraction_library(struc_lib,
calibration=diffraction_calibration,
reciprocal_radius=reciprocal_radius,
half_shape=(half_size, half_size),
max_excitation_error=1/10,
with_direct_beam=False)
"""
Explanation: Calculate a library of diffraction patterns for all phases and unique orientations
End of explanation
"""
#diff_lib.pickle_library('./GaAs_cubic_hex.pickle')
"""
Explanation: Optionally, save the library for later use.
End of explanation
"""
#from diffsims.libraries.diffraction_library import load_DiffractionLibrary
#diff_lib = load_DiffractionLibrary('./GaAs_cubic_hex.pickle', safety=True)
"""
Explanation: If saved, the library can be loaded as follows
End of explanation
"""
indexer = TemplateIndexationGenerator(dp, diff_lib)
indexation_results = indexer.correlate(n_largest=3)
"""
Explanation: <a id='temb'></a>
3.3. Pattern Matching Indexation
Initialize TemplateIndexationGenerator with the experimental data and diffraction library and perform correlation, returning the n_largest matches with highest correlation.
<div class="alert alert-block alert-warning"><b>Note:</b> This workflow has been changed from previous version, make sure you have pyxem 0.13.0 or later installed</div>
End of explanation
"""
if False:
indexation_results.plot_best_matching_results_on_signal(dp, diff_lib)
"""
Explanation: Check the solutions via plotting (can be slow, so we don't run by default)
End of explanation
"""
crystal_map = indexation_results.to_crystal_map()
"""
Explanation: Get crystallographic map from indexation results
End of explanation
"""
from matplotlib import pyplot as plt
from orix import plot
fig, ax = plt.subplots(subplot_kw=dict(projection="plot_map"))
im = ax.plot_map(crystal_map)
"""
Explanation: crystal_map is now a CrystalMap object, which comes from orix, see their documentation for details. Below we lift their code to plot a phase map
End of explanation
"""
from diffsims.generators.library_generator import VectorLibraryGenerator
from diffsims.libraries.structure_library import StructureLibrary
from diffsims.libraries.vector_library import load_VectorLibrary
from pyxem.generators.indexation_generator import VectorIndexationGenerator
from pyxem.generators.subpixelrefinement_generator import SubpixelrefinementGenerator
from pyxem.signals.diffraction_vectors import DiffractionVectors
"""
Explanation: <a id='vec'></a>
4. Vector Matching
<div class="alert alert-block alert-danger"><b>Note:</b> This workflow is less well developed than the template matching one, and may well be broken</div>
Vector matching generates a database of vector pairs (magnitudes and inter-vector angles) and then compares all theoretical values against each measured diffraction vector pair to find the best match
Import generators required for simulation and indexation
End of explanation
"""
structure_zb = diffpy.structure.loadStructure('./data/02/GaAs_mp-2534_conventional_standard.cif')
structure_wz = diffpy.structure.loadStructure('./data/02/GaAs_mp-8883_conventional_standard.cif')
structure_library = StructureLibrary(['ZB', 'WZ'],
[structure_zb, structure_wz],
[[], []])
"""
Explanation: <a id='veca'></a>
4.1. Define Library of Structures
Define crystal structure for which to determine theoretical vector pairs
End of explanation
"""
vlib_gen = VectorLibraryGenerator(structure_library)
"""
Explanation: Initialize VectorLibraryGenerator with structures to be considered
End of explanation
"""
reciprocal_radius = diffraction_calibration*(half_size - 1)/2
reciprocal_radius
vec_lib = vlib_gen.get_vector_library(reciprocal_radius)
"""
Explanation: Determine VectorLibrary with all vectors within given reciprocal radius
End of explanation
"""
#vec_lib.pickle_library('./GaAs_cubic_hex_vectors.pickle')
#vec_lib = load_VectorLibrary('./GaAs_cubic_hex_vectors.pickle',safety=True)
"""
Explanation: Optionally, save the library for later use
End of explanation
"""
dp.find_peaks(interactive=False)
"""
Explanation: 4.2. Find Diffraction Peaks
Tune peak finding parameters interactively
End of explanation
"""
peaks = dp.find_peaks(method='difference_of_gaussian',
min_sigma=0.005,
max_sigma=5.0,
sigma_ratio=2.0,
threshold=0.06,
overlap=0.8,
interactive=False)
"""
Explanation: Perform peak finding on the data with parameters from above
End of explanation
"""
peaks = DiffractionVectors(peaks).T
"""
Explanation: coaxing peaks back into a DiffractionVectors
End of explanation
"""
peaks = peaks.inav[:2,:2]
peaks.calculate_cartesian_coordinates?
peaks.calculate_cartesian_coordinates(accelerating_voltage=accelarating_voltage,
camera_length=camera_length)
"""
Explanation: peaks now contains the 2D positions of the diffraction spots on the detector. The vector matching method works in 3D coordinates, which are found by projecting the detector positions back onto the Ewald sphere. Because the methods that follow are slow, we constrain ourselves to looking at a smaller subset of the data
End of explanation
"""
#indexation_generator = VectorIndexationGenerator(peaks, vec_lib)
#indexation_results = indexation_generator.index_vectors(mag_tol=3*diffraction_calibration,
# angle_tol=4, # degree
# index_error_tol=0.2,
# n_peaks_to_index=7,
# n_best=5,
# show_progressbar=True)
#indexation_results.data
"""
Explanation: <a id='vecb'></a>
4.3. Vector Matching Indexation
Initialize VectorIndexationGenerator with the experimental data and vector library and perform indexation using n_peaks_to_index and returning the n_best indexation results.
<div class="alert alert-block alert-danger"><b>Alert: This code no longer works on this example, and may even be completely broken. Caution is advised.</b> </div>
End of explanation
"""
#refined_results = indexation_generator.refine_n_best_orientations(indexation_results,
# accelarating_voltage=accelarating_voltage,
# camera_length=camera_length,
# index_error_tol=0.2,
# vary_angles=True,
# vary_scale=True,
#                                               method="leastsq")
"""
Explanation: Refine all crystal orientations for improved phase reliability and orientation reliability maps.
End of explanation
"""
#crystal_map = refined_results.get_crystallographic_map()
"""
Explanation: Get crystallographic map from optimized indexation results.
End of explanation
"""
#crystal_map?
"""
Explanation: See the objections documentation for further details
End of explanation
"""
|
JasonSanchez/w261
|
exams/w261mt/Midterm MRjob code.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import pylab
size = 1000
x = np.random.uniform(-40, 40, size)
y = x * 1.0 - 4 + np.random.normal(0,5,size)
data = zip(range(size),y,x)
#data = np.concatenate((y, x), axis=1)
np.savetxt('LinearRegression.csv',data,'%i,%f,%f')
data[:10]
"""
Explanation: DATASCI W261: Machine Learning at Scale
Version 1: One MapReduce Stage (join data at the first reducer)
Data Generation
Data Information:
+ Size: 1000 points
+ True model: y = 1.0 * x - 4
+ Noise: normally distributed with mean = 0 and standard deviation = 5
End of explanation
"""
pylab.plot(x, y,'*')
pylab.show()
"""
Explanation: Data Visualization
End of explanation
"""
%%writefile linearRegressionXSquare.py
#Version 1: One MapReduce Stage (join data at the first reducer)
from mrjob.job import MRJob
class MRMatrixX2(MRJob):
#Emit all the data need to caculate cell i,j in result matrix
def mapper(self, _, line):
v = line.split(',')
# add 1s to calculate intercept
v.append('1.0')
for i in range(len(v)-2):
for j in range(len(v)-2):
yield (j,i),(int(v[0]),float(v[i+2]))
yield (i,j),(int(v[0]),float(v[i+2]))
# Sum up the product for cell i,j
def reducer(self, key, values):
idxdict = {}
s = 0.0
for idx, value in values:
if str(idx) in idxdict:
s = s + value * idxdict[str(idx)]
else:
idxdict[str(idx)] = value
yield key,s
if __name__ == '__main__':
MRMatrixX2.run()
%%writefile linearRegressionXy.py
from mrjob.job import MRJob
class MRMatrixXY(MRJob):
def mapper(self, _, line):
v = line.split(',')
# product of y*xi
for i in range(len(v)-2):
yield i, float(v[1])*float(v[i+2])
# To calculate Intercept
yield i+1, float(v[1])
# Sum up the products
def reducer(self, key, values):
yield key,sum(values)
if __name__ == '__main__':
MRMatrixXY.run()
"""
Explanation: MrJob class code
The solution of linear model $$ \textbf{Y} = \textbf{X}\theta $$ is:
$$ \hat{\theta} = (\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\textbf{y} $$
If $\textbf{X}^T\textbf{X}$ is denoted by $A$, and $\textbf{X}^T\textbf{y}$ is denoted by $b$, then
$$ \hat{\theta} = A^{-1}b $$
There are two MrJob classes to calculate intermediate results:
+ linearRegressionXSquare.py calculates $A = \textbf{X}^T\textbf{X}$
+ linearRegressionXy.py calculates $b = \textbf{X}^T\textbf{y}$
End of explanation
"""
from numpy import linalg,array,empty
from linearRegressionXSquare import MRMatrixX2
from linearRegressionXy import MRMatrixXY
mr_job1 = MRMatrixX2(args=['LinearRegression.csv'])
mr_job2 = MRMatrixXY(args=['LinearRegression.csv'])
X_Square = []
X_Y = []
# Calculate XT*X Covariance Matrix
print "Matrix XT*X:"
with mr_job1.make_runner() as runner:
# Run MrJob MatrixMultiplication Job
runner.run()
# Extract the output, i.e. ship data to the driver (be careful if the data you ship is too big)
for line in runner.stream_output():
key,value = mr_job1.parse_output_line(line)
X_Square.append((key,value))
print key, value
print " "
# Calculate XT*Y
print "Vector XT*Y:"
with mr_job2.make_runner() as runner:
runner.run()
for line in runner.stream_output():
key,value = mr_job2.parse_output_line(line)
X_Y.append((key,value))
print key, value
print " "
#Local Processing the output from two MrJob
n = len(X_Y)
if(n*n!=len(X_Square)):
print 'Error!'
else:
XX = empty(shape=[n,n])
for v in X_Square:
XX[v[0][0],v[0][1]] = v[1]
XY = empty(shape=[n,1])
for v in X_Y:
XY[v[0],0] = v[1]
print XX
print
print XY
theta = linalg.solve(XX,XY)
print "Coefficients:",theta[0,0],',',theta[1,0]
"""
Explanation: Driver:
The driver runs the two MrJob classes to get $\textbf{X}^T\textbf{X}$ and $\textbf{X}^T\textbf{y}$, then solves $\textbf{X}^T\textbf{X}\,\hat{\theta} = \textbf{X}^T\textbf{y}$ locally with numpy.linalg.solve (no explicit inverse is formed).
End of explanation
"""
%%writefile MrJobBatchGDUpdate_LinearRegression.py
from mrjob.job import MRJob
# This MrJob calculates the gradient of the entire training set
# Mapper: calculate partial gradient for each example
#
class MrJobBatchGDUpdate_LinearRegression(MRJob):
# run before the mapper processes any input
def read_weightsfile(self):
# Read weights file
with open('weights.txt', 'r') as f:
self.weights = [float(v) for v in f.readline().split(',')]
# Initialize the gradient for this iteration
self.partial_Gradient = [0]*len(self.weights)
self.partial_count = 0
# Calculate partial gradient for each example
def partial_gradient(self, _, line):
D = (map(float,line.split(',')))
# y_hat is the predicted value given current weights
y_hat = self.weights[0]+self.weights[1]*D[1]
# Update the partial gradient vector with the gradient from the current example
self.partial_Gradient = [self.partial_Gradient[0]+ D[0]-y_hat, self.partial_Gradient[1]+(D[0]-y_hat)*D[1]]
self.partial_count = self.partial_count + 1
#yield None, (D[0]-y_hat,(D[0]-y_hat)*D[1],1)
# Finally emit in-memory partial gradient and partial count
def partial_gradient_emit(self):
yield None, (self.partial_Gradient,self.partial_count)
# Accumulate partial gradient from mapper and emit total gradient
# Output: key = None, Value = gradient vector
def gradient_accumulater(self, _, partial_Gradient_Record):
total_gradient = [0]*2
total_count = 0
for partial_Gradient,partial_count in partial_Gradient_Record:
total_count = total_count + partial_count
total_gradient[0] = total_gradient[0] + partial_Gradient[0]
total_gradient[1] = total_gradient[1] + partial_Gradient[1]
yield None, [v/total_count for v in total_gradient]
def steps(self):
return [self.mr(mapper_init=self.read_weightsfile,
mapper=self.partial_gradient,
mapper_final=self.partial_gradient_emit,
reducer=self.gradient_accumulater)]
if __name__ == '__main__':
MrJobBatchGDUpdate_LinearRegression.run()
from numpy import random, array
from MrJobBatchGDUpdate_LinearRegression import MrJobBatchGDUpdate_LinearRegression
learning_rate = 0.05
stop_criteria = 0.000005
# Generate random values as initial weights
weights = array([random.uniform(-3,3),random.uniform(-3,3)])
# Write the weights to the files
with open('weights.txt', 'w+') as f:
f.writelines(','.join(str(j) for j in weights))
# Update weights iteratively
i = 0
while(1):
# create a mrjob instance for batch gradient descent update over all data
mr_job = MrJobBatchGDUpdate_LinearRegression(args=['--file', 'weights.txt', 'LinearRegression.csv'])
print "iteration ="+str(i)+" weights =",weights
# Save weights from previous iteration
weights_old = weights
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access to the output
for line in runner.stream_output():
# value is the gradient value
key,value = mr_job.parse_output_line(line)
# Update weights
weights = weights - learning_rate*array(value)
i = i + 1
if i>100: break
# Write the updated weights to file
with open('weights.txt', 'w+') as f:
f.writelines(','.join(str(j) for j in weights))
# Stop if the weights have converged
if(sum((weights_old-weights)**2)<stop_criteria):
break
print "Final weights\n"
print weights
"""
Explanation: Gradient descent - doesn't work
End of explanation
"""
|
flohorovicic/pynoddy
|
docs/notebooks/Feature-Analysis.ipynb
|
gpl-2.0
|
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import numpy as np
%matplotlib inline
"""
Explanation: Analysis of classification results
Objective: read back in the classification results and compare to original model
End of explanation
"""
import pynoddy.output
reload(pynoddy.output)
output_name = "feature_out"
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x',
colorbar = True, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
"""
Explanation: Load original model:
End of explanation
"""
# f_set1 = open("../../sandbox/jack/features_lowres-5 with class ID.csv").readlines()
f_set1 = open(r"/Users/flow/Documents/01_work/01_own_docs/02_paper_drafts/jack/classification_result_100Iter.csv").readlines()
f_set1[0]
# initialise classification results array
cf1 = np.empty_like(nout.block)
# iterate through results and append
for f in f_set1:
fl = f.rstrip().split(",")
cf1[int(fl[0]),int(fl[1]),int(fl[2])] = int(fl[-1])
nout.plot_section('x', data = cf1,
colorbar = True, title="", layer_labels = range(5),
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
# compare to original model:
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
nout.plot_section('x', ax = ax1,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
nout.plot_section('x', data = cf1,ax = ax2,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
"""
Explanation: Load sample classification results
The implemented classification method does not return a single best-fit model, but an ensemble of probable models (as it is MCMC sampling from the posterior). As a first test, we therefore import single models and check the misclassification rate, defined as:
$$\mbox{MCR} = \frac{\mbox{Number of misclassified voxels}}{\mbox{Total number of voxels}}$$
End of explanation
"""
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(cf1[15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
print np.unique(nout.block)
print np.unique(cf1)
# define id mapping from cluster results to original:
# id_mapping = {2:1, 3:2, 4:5, 5:3, 1:4}
# remapping for result 4:
# id_mapping = {4:5, 3:4, 1:3, 5:2, 2:1}
# remapping for result 5:
id_mapping = {3:5, 1:4, 2:3, 4:2, 5:1}
"""
Explanation: The classification results do not necessarily use the same ids as the units in the initial model, and that is the case here as well, so we re-sort:
End of explanation
"""
def re_map(id_val):
return id_mapping[id_val]
re_map_vect = np.vectorize(re_map)
cf1_remap = re_map_vect(cf1)
# compare to original model:
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
nout.plot_section('x', ax = ax1,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
nout.plot_section('x', data = cf1_remap, ax = ax2,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
feature_diff = (nout.block != cf1_remap)
nout.plot_section('x', data = feature_diff,
colorbar = False, title="Difference between real and matched model",
cmap = 'YlOrRd')
# Calculate the misclassification:
np.sum(feature_diff) / float(nout.n_total)
# Export misclassification to VTK:
misclass = feature_diff.astype('int')
nout.export_to_vtk(vtk_filename = "misclass", data=misclass)
"""
Explanation: Now remap results and compare again:
Note: create a vectorised function to enable a direct re-mapping of the entire array while keeping the structure!
End of explanation
"""
def calc_misclassification(nout, filename):
"""Calculate misclassification for classification results data stored in file
**Arguments**:
- *nout* = NoddyOutput: original model (Noddy object)
- *filename* = filename (with path): file with classification results
"""
f_set1 = open(filename).readlines()
# initialise classification results array
cf1 = np.empty_like(nout.block)
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
cf1[int(fl[0]),int(fl[1]),int(fl[2])] = int(fl[6])
# remap ids
cf1_remap = re_map_vect(cf1)
# determine differences in class ids:
feature_diff = (nout.block != cf1_remap)
# Calculate the misclassification:
misclass = np.sum(feature_diff) / float(nout.n_total)
return misclass
# filename = r"../../sandbox/jack/features_lowres-4 with class ID.csv"
# calc_misclassification(nout, filename)
"""
Explanation: Combined analysis in a single function
Note: the function assumes correct EOL characters in the data file (check/adjust with vi: %s/\r/\r/g)
Problem: the remapping is unfortunately not identical!
End of explanation
"""
# f_set1 = open("../../sandbox/jack/features_lowres-6 with class ID and Prob.csv").readlines()
f_set1 = open("../../sandbox/jack/features_lowres-8 with Prob (weak Beta).csv").readlines()
f_set1[0]
# initialise classification results array
cf1 = np.empty_like(nout.block)
# Initialise probability array
probs = np.empty((5, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(5):
probs[i2,i,j,k] = float(fl[i2+6])
"""
Explanation: Determine validity of uncertainty estimate
In addition to single model realisations, an estimate of model uncertainty is calculated (this is actually one of the main "selling points" of the paper). So we will now check whether the correct model lies within the estimated model uncertainty bounds (i.e. whether all voxel values from the original model have a non-zero probability in the estimated model)!
First step: load estimated class probabilities:
End of explanation
"""
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(probs[4,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# Note: map now ids from original model to probability fields in results:
prob_mapping = {4:0, 5:1, 3:2, 1:3, 2:4}
# Check membership for each class in original model
for i in range(1,6):
tmp = np.ones_like(nout.block) * (nout.block==i)
# test if voxels have non-zero probability by checking conjunction with zero-prob voxels
prob_zero = probs[prob_mapping[i],:,:,:] == 0
misidentified = np.sum(tmp * prob_zero)
print i, misidentified
prob_zero = probs[prob_mapping[1],:,:,:] == 0
"""
Explanation: We now need to perform the remapping similar to before, but now for the probability fields:
End of explanation
"""
# f_set1 = open("../../sandbox/jack/features_lowres-7 with 151 realizations.csv").readlines()
f_set1 = open(r"/Users/flow/Documents/01_work/01_own_docs/02_paper_drafts/jack/classification_result_100Iter.csv").readlines()
# Initialise results array
all_results = np.empty((96, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(96):
try:
all_results[i2,i,j,k] = float(fl[i2+5])
except IndexError:
print i2, i, j, k
"""
Explanation: Determination of misclassification statistics
Next step: use multiple results from one chain to determine misclassification statistics.
End of explanation
"""
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[5,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# mapping from results to original:
id_mapping = {2:5, 1:4, 3:3, 5:2, 4:1}
def re_map(id_val):
return id_mapping[id_val]
re_map_vect = np.vectorize(re_map)
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[1:,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[85,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
"""
Explanation: First, we again need to check the assignment of the units/ class ids:
End of explanation
"""
all_misclass = np.empty(90)
for i in range(90):
# determine differences in class ids:
feature_diff = (nout.block != all_results_remap[i,:,:,:])
# Calculate the misclassification:
all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of suite lowres-7")
plt.xlabel("Model id")
plt.ylabel("MCR")
"""
Explanation: We can now determine the misclassification for all results:
End of explanation
"""
f_set1 = open("../../sandbox/jack/features_lowres-9 with 151 realizations.csv").readlines()
# Initialise results array
all_results = np.empty((151, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(151):
try:
all_results[i2,i,j,k] = float(fl[i2+6])
except IndexError:
print i2, i, j, k
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[20,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# define re-mapping
id_mapping = {2:5, 1:4, 3:3, 5:2, 4:1}
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[1:,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[115,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
all_misclass = np.empty(150)
for i in range(150):
# determine differences in class ids:
feature_diff = (nout.block != all_results_remap[i,:,:,:])
# Calculate the misclassification:
all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of suite lowres-9")
plt.xlabel("Model id")
plt.ylabel("MCR")
f_set1 = open("../../sandbox/jack/features_lowres-10 with 2000 realizations.csv").readlines()
# Initialise results array
all_results = np.empty((2000, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(2000):
try:
all_results[i2,i,j,k] = float(fl[i2+6])
except IndexError:
print i2, i, j, k
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[20,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# define re-mapping
# id_mapping = {3:5, 4:4, 2:3, 1:2, 5:1, 0:0}
id_mapping = {3:5, 1:4, 2:3, 4:2, 5:1}
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[2:,:,:,:])
np.unique(all_results[0,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[11,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
all_misclass = np.empty(94)
for i in range(94):
# determine differences in class ids:
feature_diff = (nout.block != all_results_remap[i,:,:,:])
# Calculate the misclassification:
all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of new model suite")
plt.xlabel("Model id")
plt.ylabel("MCR")
plt.hist(all_misclass[100:])
"""
Explanation: It seems that the upper thin layer vanishes after approximately 30-40 iterations. From then on, the misclassification rate is roughly constant at around 9.5 percent (which is still quite acceptable!).
Let's now compare this to classifications with another (lower) beta value (which should put more weight on the data?):
End of explanation
"""
# f_set1 = open("../../sandbox/jack/features_lowres-6 with class ID and Prob.csv").readlines()
f_set1 = open("../../sandbox/jack/features_lowres-10 with Prob (weak Beta).csv").readlines()
# initialise classification results array
cf1 = np.empty_like(nout.block)
f_set1[0]
# Initialise probability array
probs = np.empty((5, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
fl = f.rstrip().split(",")
i,j,k = int(fl[0]),int(fl[1]),int(fl[2])
# cf1[i,j,k] = int(fl[6])
for i2 in range(5):
probs[i2,i,j,k] = float(fl[i2+6])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(probs[0,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# Note: map now ids from original model to probability fields in results:
prob_mapping = {2:0, 3:1, 5:2, 4:3, 1:4}
# Check membership for each class in original model
for i in range(1,6):
tmp = np.ones_like(nout.block) * (nout.block==i)
# test if voxels have non-zero probability by checking conjunction with zero-prob voxels
prob_zero = probs[prob_mapping[i],:,:,:] == 0
misidentified = np.sum(tmp * prob_zero)
print i, misidentified
info_entropy = np.zeros_like(nout.block)
for prob in probs:
info_entropy[prob > 0] -= prob[prob > 0] * np.log2(prob[prob > 0])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(info_entropy[1,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
nout.export_to_vtk(vtk_filename = "../../sandbox/jack/info_entropy", data = info_entropy)
np.max(probs)
np.max(info_entropy)
"""
Explanation: Determine validity of estimated probability
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/mohc/cmip6/models/hadgem3-gc31-ll/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-LL
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
poolio/unrolled_gan
|
Unrolled GAN demo.ipynb
|
mit
|
%pylab inline
from collections import OrderedDict
import tensorflow as tf
ds = tf.contrib.distributions
slim = tf.contrib.slim
from keras.optimizers import Adam
try:
from moviepy.video.io.bindings import mplfig_to_npimage
import moviepy.editor as mpy
generate_movie = True
except:
print("Warning: moviepy not found.")
generate_movie = False
"""
Explanation: Unrolled generative adversarial networks on a toy dataset
This notebook demos a simple implementation of unrolled generative adversarial networks on a 2d mixture of Gaussians dataset. See the paper for a better description of the technique, experiments, results, and other good stuff. Note that the architecture and hyperparameters used in this notebook are not identical to the one in the paper.
Motivation
The GAN learning problem is to find the optimal parameters $\theta_G^*$ for a generator function $G\left( z; \theta_G\right)$ in a minimax objective,
$$\begin{align}
\theta_G^* &= \underset{\theta_G}{\text{argmin}} \underset{\theta_D}{\max} f\left(\theta_G, \theta_D\right) \\
&= \underset{\theta_G}{\text{argmin}} \;f\left(\theta_G, \theta_D^*\left(\theta_G\right)\right)\\
\theta_D^*\left(\theta_G\right) &= \underset{\theta_D}{\text{argmax}} \;f\left(\theta_G, \theta_D\right),
\end{align}$$
where the saddle objective $f$ is the standard GAN loss:
$$f\left(\theta_G, \theta_D\right) =
\mathbb{E}_{x\sim p_{data}}\left[\mathrm{log}\left(D\left(x; \theta_D\right)\right)\right] +
\mathbb{E}_{z \sim \mathcal{N}(0,I)}\left[\mathrm{log}\left(1 - D\left(G\left(z; \theta_G\right); \theta_D\right)\right)\right].
$$
In unrolled GANs, we approximate $\theta_D^*\left(\theta_G\right)$ using a few steps of gradient ascent:
$$\theta_D^*\left(\theta_G\right) \approx \hat{\theta}_D\left(\theta_G\right) \equiv \text{a few steps of SGD maximizing}\;f\left(\theta_G, \theta_D\right).$$
We can then compute the update for the generator parameters, $\theta_G$, by computing the gradient of the saddle objective with respect to $\theta_G$ and the optimized discriminator parameters, $\hat{\theta}_D$:
$$\frac{d}{d \theta_G} f\left(\theta_G, \hat{\theta}_D\left(\theta_G\right)\right).$$
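Expanding this total derivative with the chain rule makes the extra learning signal explicit (the second term is the part that ordinary GAN training, which treats the discriminator parameters as fixed, effectively ignores):
$$\frac{d}{d \theta_G} f\left(\theta_G, \hat{\theta}_D\left(\theta_G\right)\right) =
\frac{\partial f\left(\theta_G, \hat{\theta}_D\left(\theta_G\right)\right)}{\partial \theta_G} +
\frac{\partial f\left(\theta_G, \hat{\theta}_D\left(\theta_G\right)\right)}{\partial \hat{\theta}_D\left(\theta_G\right)}
\frac{d \hat{\theta}_D\left(\theta_G\right)}{d \theta_G}.$$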
Implementation details
To backpropagate through the optimization process, we need to create a symbolic computational graph that includes all the operations from the initial weights to the optimized weights. TensorFlow's built-in optimizers use custom C++ code for efficiency, and do not construct a symbolic graph that is differentiable. For this notebook, we use the optimization routines from keras to compute updates. Next, we use tf.contrib.graph_editor.graph_replace to build a copy of the graph containing the mapping from initial weights to updated weights after one optimization iteration, but replacing the initial weights with the last iteration's weights:
This yields a new graph that allows us to backprop from $\theta_D^2$ back to $\theta_D^0$. We can then plug $\theta_D^2$ into the loss function to get the final objective that the generator optimizes. Using the magic of graph_replace we can write the unrolled optimization procedure in just a few lines:
```python
# update_dict contains a dictionary mapping from variables (\theta_D^0)
# to their values after one step of optimization (\theta_D^1)
cur_update_dict = update_dict
for i in xrange(params['unrolling_steps'] - 1):
    # Compute variable updates given the previous iteration's updated variable
    cur_update_dict = graph_replace(update_dict, cur_update_dict)
# Final unrolled loss uses the parameters at the last time step
unrolled_loss = graph_replace(loss, cur_update_dict)
```
Note there are many other ways of implementing unrolled optimization that don't use graph rewriting. For example, if we created a function that takes weights as inputs and returns the updated weights, we could just iteratively call that function.
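As a rough sketch of that functional alternative (not what this notebook implements; sgd_step and gan_loss are hypothetical helpers that would have to return new tensors rather than assign variables in place, so the whole chain stays differentiable with respect to $\theta_G$):
```python
# Hypothetical functional unrolling: no graph rewriting needed because every
# step produces fresh tensors instead of mutating variables.
theta_D_unrolled = theta_D_init
for _ in range(unrolling_steps):
    theta_D_unrolled = sgd_step(theta_D_unrolled, theta_G)   # assumed helper
unrolled_loss = gan_loss(theta_G, theta_D_unrolled)          # assumed helper
```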
End of explanation
"""
_graph_replace = tf.contrib.graph_editor.graph_replace
def remove_original_op_attributes(graph):
"""Remove _original_op attribute from all operations in a graph."""
for op in graph.get_operations():
op._original_op = None
def graph_replace(*args, **kwargs):
"""Monkey patch graph_replace so that it works with TF 1.0"""
remove_original_op_attributes(tf.get_default_graph())
return _graph_replace(*args, **kwargs)
"""
Explanation: graph_replace is broken in TensorFlow 1.0 (see this issue). We get around this issue with an ugly hack that removes the problematic attribute from all ops in the graph on every call to graph_replace.
End of explanation
"""
def extract_update_dict(update_ops):
"""Extract variables and their new values from Assign and AssignAdd ops.
Args:
update_ops: list of Assign and AssignAdd ops, typically computed using Keras' opt.get_updates()
Returns:
dict mapping from variable values to their updated value
"""
name_to_var = {v.name: v for v in tf.global_variables()}
updates = OrderedDict()
for update in update_ops:
var_name = update.op.inputs[0].name
var = name_to_var[var_name]
value = update.op.inputs[1]
if update.op.type == 'Assign':
updates[var.value()] = value
elif update.op.type == 'AssignAdd':
updates[var.value()] = var + value
else:
            raise ValueError("Update op type (%s) must be of type Assign or AssignAdd" % update.op.type)
return updates
"""
Explanation: Utility functions
End of explanation
"""
def sample_mog(batch_size, n_mixture=8, std=0.01, radius=1.0):
thetas = np.linspace(0, 2 * np.pi, n_mixture)
xs, ys = radius * np.sin(thetas), radius * np.cos(thetas)
cat = ds.Categorical(tf.zeros(n_mixture))
comps = [ds.MultivariateNormalDiag([xi, yi], [std, std]) for xi, yi in zip(xs.ravel(), ys.ravel())]
data = ds.Mixture(cat, comps)
return data.sample(batch_size)
"""
Explanation: Data creation
End of explanation
"""
def generator(z, output_dim=2, n_hidden=128, n_layer=2):
with tf.variable_scope("generator"):
h = slim.stack(z, slim.fully_connected, [n_hidden] * n_layer, activation_fn=tf.nn.tanh)
x = slim.fully_connected(h, output_dim, activation_fn=None)
return x
def discriminator(x, n_hidden=128, n_layer=2, reuse=False):
with tf.variable_scope("discriminator", reuse=reuse):
h = slim.stack(x, slim.fully_connected, [n_hidden] * n_layer, activation_fn=tf.nn.tanh)
log_d = slim.fully_connected(h, 1, activation_fn=None)
return log_d
"""
Explanation: Generator and discriminator architectures
End of explanation
"""
params = dict(
batch_size=512,
disc_learning_rate=1e-4,
gen_learning_rate=1e-3,
beta1=0.5,
epsilon=1e-8,
max_iter=25000,
viz_every=5000,
z_dim=256,
x_dim=2,
unrolling_steps=5,
)
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
data = sample_mog(params['batch_size'])
noise = ds.Normal(tf.zeros(params['z_dim']),
tf.ones(params['z_dim'])).sample(params['batch_size'])
# Construct generator and discriminator nets
with slim.arg_scope([slim.fully_connected], weights_initializer=tf.orthogonal_initializer(gain=1.4)):
samples = generator(noise, output_dim=params['x_dim'])
real_score = discriminator(data)
fake_score = discriminator(samples, reuse=True)
# Saddle objective
loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=real_score, labels=tf.ones_like(real_score)) +
tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_score, labels=tf.zeros_like(fake_score)))
gen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "generator")
disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "discriminator")
# Vanilla discriminator update
d_opt = Adam(lr=params['disc_learning_rate'], beta_1=params['beta1'], epsilon=params['epsilon'])
updates = d_opt.get_updates(disc_vars, [], loss)
d_train_op = tf.group(*updates, name="d_train_op")
# Unroll optimization of the discriminator
if params['unrolling_steps'] > 0:
# Get dictionary mapping from variables to their update value after one optimization step
update_dict = extract_update_dict(updates)
cur_update_dict = update_dict
for i in xrange(params['unrolling_steps'] - 1):
# Compute variable updates given the previous iteration's updated variable
cur_update_dict = graph_replace(update_dict, cur_update_dict)
# Final unrolled loss uses the parameters at the last time step
unrolled_loss = graph_replace(loss, cur_update_dict)
else:
unrolled_loss = loss
# Optimize the generator on the unrolled loss
g_train_opt = tf.train.AdamOptimizer(params['gen_learning_rate'], beta1=params['beta1'], epsilon=params['epsilon'])
g_train_op = g_train_opt.minimize(-unrolled_loss, var_list=gen_vars)
"""
Explanation: Construct model and training ops
End of explanation
"""
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
from tqdm import tqdm
xmax = 3
fs = []
frames = []
np_samples = []
n_batches_viz = 10
viz_every = params['viz_every']
for i in tqdm(xrange(params['max_iter'])):
f, _, _ = sess.run([[loss, unrolled_loss], g_train_op, d_train_op])
fs.append(f)
if i % viz_every == 0:
np_samples.append(np.vstack([sess.run(samples) for _ in xrange(n_batches_viz)]))
xx, yy = sess.run([samples, data])
fig = figure(figsize=(5,5))
scatter(xx[:, 0], xx[:, 1], edgecolor='none')
scatter(yy[:, 0], yy[:, 1], c='g', edgecolor='none')
axis('off')
if generate_movie:
frames.append(mplfig_to_npimage(fig))
show()
"""
Explanation: Train!
End of explanation
"""
import seaborn as sns
np_samples_ = np_samples[::1]
cols = len(np_samples_)
bg_color = sns.color_palette('Greens', n_colors=256)[0]
figure(figsize=(2*cols, 2))
for i, samps in enumerate(np_samples_):
if i == 0:
ax = subplot(1,cols,1)
else:
subplot(1,cols,i+1, sharex=ax, sharey=ax)
ax2 = sns.kdeplot(samps[:, 0], samps[:, 1], shade=True, cmap='Greens', n_levels=20, clip=[[-xmax,xmax]]*2)
ax2.set_axis_bgcolor(bg_color)
xticks([]); yticks([])
title('step %d'%(i*viz_every))
ax.set_ylabel('%d unrolling steps'%params['unrolling_steps'])
gcf().tight_layout()
fs = np.array(fs)
plot(fs)
legend(('loss', 'unrolled loss'))
plot(fs[:, 0] - fs[:, 1])
legend(['optimized loss - initial loss'])
#clip = mpy.ImageSequenceClip(frames[::], fps=30)
#clip.ipython_display()
"""
Explanation: Visualize results
End of explanation
"""
|
TheKingInYellow/PySeidon
|
PySeidon_tuto_3.ipynb
|
agpl-3.0
|
%pylab inline
"""
Explanation: PySeidon - Tutorial 3: ADCP class
End of explanation
"""
from pyseidon import *
"""
Explanation: 1. PySeidon - ADCP object initialisation
Similarly to the "TideGauge class" and the "Drifter class", the "ADCP class" is a measurement-based object.
1.1. Package importation
As with any other library in Python, PySeidon has to be imported before being used. Here we will use an alternative import statement compared to the one previously presented:
End of explanation
"""
ADCP?
"""
Explanation: The star here means all. Usually this form of statement would import the entire library. In the case of PySeidon, this statement will import the following object classes: FVCOM, Station, Validation, ADCP, Tidegauge and Drifter. Only the ADCP class will be tackled in this tutorial. However, note that the architecture, design and functioning of each class are very similar.
1.2. Object definition
Python is by definition an object oriented language...and so is matlab. PySeidon is based on this notion of object, so let us define our first "ADCP" object.
Exercise 1:
- Unravel ADCP documentation with Ipython shortcuts
Answer:
End of explanation
"""
adcp=ADCP('./data4tutorial/adcp_GP_01aug2013.mat')
"""
Explanation: According to the documentation, in order to define a ADCP object, the only required input is a filename. This string input represents path to a file (e.g. testAdcp=ADCP('./path_to_matlab_file/filename') and whose file must be a matlab file (i.e. .mat).
Note that, at the current stage, the package only handle fully processed ADCP matlab data previously quality-controlled as well as formatted through "EnsembleData_FlowFile" matlab script at the mo. All the tool necessary for this processing and quality-control can be found in ./pyseidon/utilities/BP_tools.py and save_FlowFile_BPFormat.py. Additionally, a template for the ADCP file is provided in the package under data4tutorial
Exercise 2:
- define an ADCP object named adcp from the following template: ./data4tutorial/adcp_GP_01aug2013.mat
- Tip: adapt the file's path to your local machine.
Answer:
End of explanation
"""
print "(lon, lat) coordinates: ("+str(adcp.Variables.lon)+", "+str(adcp.Variables.lat)+")"
vel = adcp.Utils.velo_norm()
fI, eI, pa, pav = adcp.Utils.ebb_flood_split()
adcp.Utils.speed_histogram(time_ind=fI)
dveldz = adcp.Utils.verti_shear(time_ind=eI)
harmo = adcp.Utils.Harmonic_analysis(elevation=False, velocity=True)
print harmo
velos = adcp.Utils.Harmonic_reconstruction(harmo)
"""
Explanation: 1.3. Object attributes, functions, methods & special methods
The ADCP object possesses 3 attributes and 3 methods. They will appear by typing adcp. and pressing Tab, for instance.
An attribute is a quantity intrinsic to its object. A method is an intrinsic function which changes an attribute of its object. In contrast, a function will generate its own output:
The ADCP attributes are:
- History: history metadata that keeps track of the object changes
- Data: gathers the raw/unchanged data of the specified .mat file
- Variables: gathers the hydrodynamics related data. Note that methods will generate new fields in this attribute
The ADCP methods & functions are:
- Utils: gathers utility methods and functions for use with 2D and 3D variables
- Plots: gathers plotting methods for use with 2D and 3D variables
- dump_profile_data: dumps profile data (x,y) in a *.csv file.
2. PySeidon - Hands-on (15 mins)
Utils
Exercise 3:
- Print the (lon,lat) coordinates of the adcp object (hint: look into Variables)
- Use Utils.velo_norm to compute the velocity norm over all time steps and all vertical levels... and, as a bonus, plot the mean velocity vertical profile.
- Use the Utils.ebb_flood_split function to get the ebb and flood time indices of the time series.
- Plot the flood flow speed histogram
- Compute & Plot the ebb vertical shear
- Perform a harmonic analysis of the velocities and print out the result
- Reconstruction these velocities based on the harmonic results of the previous question
Answer:
End of explanation
"""
import numpy as np
da_vel = np.nanmean(vel,axis=1)
adcp.dump_profile_data(adcp.Variables.matlabTime[:], da_vel, title='flow_speed_time_series', xlabel='matlab time', ylabel='speed')
"""
Explanation: Save functions
Exercise 5:
- Dump depth-averaged velocity and time step data in a .csv file
- Hint: use numpy for averaging
Answer:
End of explanation
"""
|
intel-analytics/BigDL
|
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Explanation: <a href="https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2016 The BigDL Authors.
End of explanation
"""
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
"""
Explanation: Environment Preparation
Install Java 8
Run the cell on the Google Colab to install jdk 1.8.
Note: if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer).
End of explanation
"""
import sys
# Set current python version
python_version = "3.7.10"
# Install Miniconda
!wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
# Update Conda
!conda install --channel defaults conda python=$python_version --yes
!conda update --channel defaults --all --yes
# Append to the sys.path
_ = (sys.path
.append(f"/usr/local/lib/python3.7/site-packages"))
os.environ['PYTHONHOME']="/usr/local"
"""
Explanation: Install BigDL Orca
Conda is needed to prepare the Python environment for running this example.
Note: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the install guide for more details.
End of explanation
"""
# Install latest pre-release version of BigDL Orca
# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-orca[automl]
# Install xgboost
!pip install xgboost
"""
Explanation: You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca[automl].
End of explanation
"""
# load data
from sklearn.datasets import load_boston
boston = load_boston()
y = boston['target']
X = boston['data']
# split the data into train and test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
"""
Explanation: Distributed Automl for xgboost using Orca AutoXGBoost
Orca AutoXGBoost enables distributed automated hyper-parameter tuning for XGBoost, which includes AutoXGBRegressor and AutoXGBClassifier for sklearn XGBRegressor and XGBClassifier respectively. See more about xgboost scikit-learn API.
In this guide we will describe how to use Orca AutoXGBoost for automated xgboost tuning in 4 simple steps.
Step 0: Prepare dataset
We use sklearn boston house-price dataset for demonstration.
End of explanation
"""
# import necesary libraries and modules
from __future__ import print_function
import os
import argparse
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca import OrcaContext
# recommended to set it to True when running BigDL in Jupyter notebook.
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cores=6, memory="2g", init_ray_on_spark=True) # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4, init_ray_on_spark=True) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(
cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", init_ray_on_spark=True,
driver_memory="10g", driver_cores=1) # run on Hadoop YARN cluster
"""
Explanation: Step 1: Init Orca Context
End of explanation
"""
from bigdl.orca.automl import hp
search_space = {
"n_estimators": hp.grid_search([50, 100, 200]),
"max_depth": hp.choice([2, 4, 6]),
}
"""
Explanation: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster.
Step 2: Define Search space
You should define a dictionary as your hyper-parameter search space for XGBRegressor. The keys are hyper-parameter names you want to search for XGBRegressor, and you can specify how you want to sample each hyper-parameter in the values of the search space. See automl.hp for more details.
End of explanation
"""
from bigdl.orca.automl.xgboost import AutoXGBRegressor
auto_xgb_reg = AutoXGBRegressor(cpus_per_trial=2,
name="auto_xgb_classifier",
min_child_weight=3,
random_state=2)
"""
Explanation: Step 3: Automatically fit and search with Orca AutoXGBoost
We will then fit AutoXGBoost automatically on the Boston Housing dataset.
First create an AutoXGBRegressor.
You could also pass the sklearn XGBRegressor parameters to AutoXGBRegressor. Note that the XGBRegressor parameters shouldn't include the hyper-parameters in search_space or n_jobs, which is the same as cpus_per_trial.
End of explanation
"""
auto_xgb_reg.fit(data=(X_train, y_train),
validation_data=(X_test, y_test),
search_space=search_space,
n_sampling=2,
metric="rmse")
"""
Explanation: Next, use the auto xgboost regressor to fit and search for the best hyper-parameter set.
End of explanation
"""
best_model = auto_xgb_reg.get_best_model()
"""
Explanation: Step 4: Get best model and hyper parameters
You can get the best learned model from the fitted auto xgboost regressor, which is an sklearn XGBRegressor instance.
End of explanation
"""
best_config = auto_xgb_reg.get_best_config()
print(best_config)
"""
Explanation: You can also get the best hyper-parameter set.
End of explanation
"""
y_pred = best_model.predict(X_test)
from sklearn.metrics import mean_squared_error
print(mean_squared_error(y_test, y_pred))
# stop orca context when program finishes
stop_orca_context()
"""
Explanation: Then, you can use the best learned model as you want. Here, we demonstrate how to predict and evaluate on the test dataset.
End of explanation
"""
|
metpy/MetPy
|
dev/_downloads/324acb7faa1ec1d6ac5849ea2223364d/Smoothing.ipynb
|
bsd-3-clause
|
from itertools import product
import matplotlib.pyplot as plt
import numpy as np
import metpy.calc as mpcalc
"""
Explanation: Smoothing
Using MetPy's smoothing functions.
This example demonstrates the various ways that MetPy's smoothing function
can be utilized. While this example utilizes basic NumPy arrays, these
functions all work equally well with Pint Quantities or xarray DataArrays.
End of explanation
"""
np.random.seed(61461542)
size = 128
x, y = np.mgrid[:size, :size]
distance = np.sqrt((x - size / 2) ** 2 + (y - size / 2) ** 2)
raw_data = np.random.random((size, size)) * 0.3 + distance / distance.max() * 0.7
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
ax.set_title('Raw Data')
ax.imshow(raw_data, vmin=0, vmax=1)
ax.axis('off')
plt.show()
"""
Explanation: Start with a base pattern with random noise
End of explanation
"""
fig, ax = plt.subplots(3, 3, figsize=(12, 12))
for i, j in product(range(3), range(3)):
ax[i, j].axis('off')
# Gaussian Smoother
ax[0, 0].imshow(mpcalc.smooth_gaussian(raw_data, 3), vmin=0, vmax=1)
ax[0, 0].set_title('Gaussian - Low Degree')
ax[0, 1].imshow(mpcalc.smooth_gaussian(raw_data, 8), vmin=0, vmax=1)
ax[0, 1].set_title('Gaussian - High Degree')
# Rectangular Smoother
ax[0, 2].imshow(mpcalc.smooth_rectangular(raw_data, (3, 7), 2), vmin=0, vmax=1)
ax[0, 2].set_title('Rectangular - 3x7 Window\n2 Passes')
# 5-point smoother
ax[1, 0].imshow(mpcalc.smooth_n_point(raw_data, 5, 1), vmin=0, vmax=1)
ax[1, 0].set_title('5-Point - 1 Pass')
ax[1, 1].imshow(mpcalc.smooth_n_point(raw_data, 5, 4), vmin=0, vmax=1)
ax[1, 1].set_title('5-Point - 4 Passes')
# Circular Smoother
ax[1, 2].imshow(mpcalc.smooth_circular(raw_data, 2, 2), vmin=0, vmax=1)
ax[1, 2].set_title('Circular - Radius 2\n2 Passes')
# 9-point smoother
ax[2, 0].imshow(mpcalc.smooth_n_point(raw_data, 9, 1), vmin=0, vmax=1)
ax[2, 0].set_title('9-Point - 1 Pass')
ax[2, 1].imshow(mpcalc.smooth_n_point(raw_data, 9, 4), vmin=0, vmax=1)
ax[2, 1].set_title('9-Point - 4 Passes')
# Arbitrary Window Smoother
ax[2, 2].imshow(mpcalc.smooth_window(raw_data, np.diag(np.ones(5)), 2), vmin=0, vmax=1)
ax[2, 2].set_title('Custom Window (Diagonal) \n2 Passes')
plt.show()
"""
Explanation: Now, create a grid showing different smoothing options
End of explanation
"""
|
ioam/scipy-2017-holoviews-tutorial
|
solutions/07-working-with-large-datasets-with-solutions.ipynb
|
bsd-3-clause
|
import pandas as pd
import holoviews as hv
import dask.dataframe as dd
import datashader as ds
import geoviews as gv
from holoviews.operation.datashader import datashade, aggregate
hv.extension('bokeh')
"""
Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
<div style="float:right;"><h2>07. Working with large datasets</h2></div>
HoloViews supports even high-dimensional datasets easily, and the standard mechanisms discussed already work well as long as you select a small enough subset of the data to display at any one time. However, some datasets are just inherently large, even for a single frame of data, and cannot safely be transferred for display in any standard web browser. Luckily, HoloViews makes it simple for you to use the separate datashader together with any of the plotting extension libraries, including Bokeh and Matplotlib. The datashader library is designed to complement standard plotting libraries by providing faithful visualizations for very large datasets, focusing on revealing the overall distribution, not just individual data points.
Datashader uses computations accellerated using Numba, making it fast to work with datasets of millions or billions of datapoints stored in dask dataframes. Dask dataframes provide an API that is functionally equivalent to pandas, but allows working with data out of core while scaling out to many processors and even clusters. Here we will use Dask to load a large CSV file of taxi coordinates.
<div>
<img align="left" src="./assets/numba.png" width='140px'/>
<img align="left" src="./assets/dask.png" width='85px'/>
<img align="left" src="./assets/datashader.png" width='158px'/>
</div>
How does datashader work?
<img src="./assets/datashader_pipeline.png" width="80%"/>
Tools like Bokeh map Data (left) directly into an HTML/JavaScript Plot (right)
datashader instead renders Data into a plot-sized Aggregate array, from which an Image can be constructed then embedded into a Bokeh Plot
Only the fixed-sized Image needs to be sent to the browser, allowing millions or billions of datapoints to be used
Every step automatically adjusts to the data, but can be customized
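For reference, the same Data → Aggregate → Image pipeline can also be driven through datashader's own API directly; a minimal sketch, assuming a pandas or dask dataframe df with the dropoff coordinate columns used later in this notebook:
```python
import datashader as ds
import datashader.transfer_functions as tf

cvs = ds.Canvas(plot_width=600, plot_height=500)      # plot-sized grid
agg = cvs.points(df, 'dropoff_x', 'dropoff_y')        # Data -> Aggregate (counts per pixel)
img = tf.shade(agg, cmap=['lightblue', 'darkblue'])   # Aggregate -> fixed-size Image
```
HoloViews' datashade operation wraps these steps and re-runs them automatically whenever the plot ranges change.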
When not to use datashader
Plotting less than 1e5 or 1e6 data points
When every datapoint matters; standard Bokeh will render all of them
For full interactivity (hover tools) with every datapoint
When to use datashader
Actual big data; when Bokeh/Matplotlib have trouble
When the distribution matters more than individual points
When you find yourself sampling or binning to better understand the distribution
End of explanation
"""
ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'])
ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour
# If your machine is low on RAM (<8GB) don't persist (though everything will be much slower)
ddf = ddf.persist()
print('%s Rows' % len(ddf))
print('Columns:', list(ddf.columns))
"""
Explanation: Load the data
As a first step we will load a large dataset using dask. If you have followed the setup instructions you will have downloaded a large CSV containing 12 million taxi trips. Let's load this data using dask to create a dataframe ddf:
End of explanation
"""
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
"""
Explanation: Create a dataset
In previous sections we have already seen how to declare a set of Points from a pandas DataFrame. Here we do the same for a Dask dataframe passed in with the desired key dimensions:
End of explanation
"""
%opts RGB [width=600 height=500 bgcolor="black"]
datashade(points)
"""
Explanation: We could now simply type points, and Bokeh will attempt to display this data as a standard Bokeh plot. Before doing that, however, remember that we have 12 million rows of data, and no current plotting program will handle this well! Instead of letting Bokeh see this data, let's convert it to something far more tractable using the datashader operation. This operation will aggregate the data on a 2D grid, apply shading to assign pixel colors to each bin in this grid, and build an RGB Element (just a fixed-sized image) we can safely display:
End of explanation
"""
datashade.streams
# Exercise: Plot the taxi pickup locations ('pickup_x' and 'pickup_y' columns)
# Warning: Don't try to display hv.Points() directly; it's too big! Use datashade() for any display
# Optional: Change the cmap on the datashade operation to inferno
from datashader.colors import inferno
points = hv.Points(ddf, kdims=['pickup_x', 'pickup_y'])
datashade(points, cmap=inferno)
"""
Explanation: If you zoom in you will note that the plot rerenders depending on the zoom level, which allows the full dataset to be explored interactively even though only an image of it is ever sent to the browser. The way this works is that datashade is a dynamic operation that also declares some linked streams. These linked streams are automatically instantiated and dynamically supply the plot size, x_range, and y_range from the Bokeh plot to the operation based on your current viewport as you zoom or pan:
End of explanation
"""
%opts RGB [xaxis=None yaxis=None]
import geoviews as gv
from bokeh.models import WMTSTileSource
url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
wmts = WMTSTileSource(url=url)
gv.WMTS(wmts) * datashade(points)
%opts RGB [xaxis=None yaxis=None]
# Exercise: Overlay the taxi pickup data on top of the Wikipedia tile source
wiki_url = 'https://maps.wikimedia.org/osm-intl/{Z}/{X}/{Y}@2x.png'
wmts = WMTSTileSource(url=wiki_url)
gv.WMTS(wmts) * datashade(points)
"""
Explanation: Adding a tile source
Using the GeoViews (geographic) extension for HoloViews, we can display a map in the background. Just declare a Bokeh WMTSTileSource and pass it to the gv.WMTS Element, then we can overlay it:
End of explanation
"""
selected = points.select(total_amount=(None, 1000))
selected.data = selected.data.persist()
gv.WMTS(wmts) * datashade(selected, aggregator=ds.mean('total_amount'))
# Exercise: Use the ds.min or ds.max aggregator to visualize ``tip_amount`` by dropoff location
# Optional: Eliminate outliers by using select
selected = points.select(tip_amount=(None, 1000))
selected.data = selected.data.persist()
gv.WMTS(wmts) * datashade(selected, aggregator=ds.max('tip_amount')) # Try using ds.min
"""
Explanation: Aggregating with a variable
So far we have simply been counting taxi dropoffs, but our dataset is much richer than that. We have information about a number of variables including the total cost of a taxi ride, the total_amount. Datashader provides a number of aggregator functions, which you can supply to the datashade operation. Here use the ds.mean aggregator to compute the average cost of a trip at a dropoff location:
End of explanation
"""
%opts Image [width=600 height=500 logz=True xaxis=None yaxis=None]
taxi_ds = hv.Dataset(ddf)
grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour'], dynamic=True)
aggregate(grouped).redim.values(hour=range(24))
%%opts Image [width=300 height=200 xaxis=None yaxis=None]
# Exercise: Facet the trips in the morning hours as an NdLayout using aggregate(grouped.layout())
# Hint: You can reuse the existing grouped variable or select a subset before using the .to method
taxi_ds = hv.Dataset(ddf).select(hour=(2, 8))
taxi_ds.data = taxi_ds.data.persist()
grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour'])
aggregate(grouped.layout()).cols(3)
"""
Explanation: Grouping by a variable
Because datashading happens only just before visualization, you can use any of the techniques shown in previous sections to select, filter, or group your data before visualizing it, such as grouping it by the hour of day:
End of explanation
"""
%%opts QuadMesh [width=800 height=400 tools=['hover']] (alpha=0 hover_line_alpha=1 hover_fill_alpha=0)
hover_info = aggregate(points, width=40, height=20, streams=[hv.streams.RangeXY]).map(hv.QuadMesh, hv.Image)
gv.WMTS(wmts) * datashade(points) * hover_info
"""
Explanation: Additional features
The actual points are never given directly to Bokeh, and so the normal Bokeh hover (and other) tools will not normally be useful with Datashader output. However, we can easily verlay an invisible QuadMesh to reveal information on hover, providing information about values in a local area while still only ever sending a fixed-size array to the browser to avoid issues with large data.
End of explanation
"""
|
karlstroetmann/Artificial-Intelligence
|
Python/5 Linear Regression/Simple-Linear-Regression-with-SciKit-Learn.ipynb
|
gpl-2.0
|
import pandas as pd
"""
Explanation: Simple Linear Regression with SciKit-Learn
We import the module pandas. This module implements so-called <em style="color:blue;">data frames</em> and is more convenient than the module csv when reading a <tt>csv</tt> file.
End of explanation
"""
cars = pd.read_csv('cars.csv')
cars
"""
Explanation: The data we want to read is contained in the <tt>csv</tt> file 'cars.csv'.
End of explanation
"""
import numpy as np
X = np.array(cars['displacement'])
Y = np.array(cars['mpg'])
"""
Explanation: The variable cars contains a so-called data frame.
We want to convert the columns containing mpg and displacement into NumPy arrays.
End of explanation
"""
X = 0.0163871 * X
"""
Explanation: We convert <em style="color:blue;">cubic inches</em> into <em style="color:blue;">litres</em>.
End of explanation
"""
X = np.reshape(X, (len(X), 1))
X
"""
Explanation: In order to use SciKit-Learn we have to reshape the array X into a matrix.
End of explanation
"""
Y = 1.60934 / 3.78541 * Y
"""
Explanation: We convert <em style="color:blue;">miles per gallon</em> into <em style="color:blue;">kilometer per litre</em>.
End of explanation
"""
Y = 100 / Y
"""
Explanation: We convert <em style="color:blue;">kilometer per litre</em> into <em style="color:blue;">litre per 100 kilometer</em>.
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.figure(figsize=(12, 10))
sns.set(style='whitegrid')
plt.scatter(X, Y, c='b', s=4) # 'b' is blue color
plt.xlabel('engine displacement in litres')
plt.ylabel('litre per 100 km')
plt.title('Fuel Consumption vs Engine Displacement')
plt.show()
"""
Explanation: We plot fuel consumption versus engine displacement.
End of explanation
"""
import sklearn.linear_model as lm
"""
Explanation: We import the linear_model from SciKit-Learn:
End of explanation
"""
model = lm.LinearRegression()
"""
Explanation: We create a <em style="color:blue;">linear model</em>.
End of explanation
"""
M = model.fit(X, Y)
"""
Explanation: We train this model using the data we have.
End of explanation
"""
ϑ0 = M.intercept_
ϑ0
ϑ1 = M.coef_[0]
ϑ1
"""
Explanation: The model M represents a linear relationship between X and Y of the form
$$ \texttt{Y} = \vartheta_0 + \vartheta_1 \cdot \texttt{X} $$
We extract the coefficients $\vartheta_0$ and $\vartheta_1$.
End of explanation
"""
model.score(X, Y)
"""
Explanation: Let's check the quality of our linear model. The coefficient of determination $R^2$ is computed by the function score.
End of explanation
"""
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: The values for $\vartheta_0$, $\vartheta_1$, and $R^2$ are, no surprise there, the same values that we had already computed with the notebook Simple-Linear-Regression.ipynb. We plot the data together with the regression line.
The next line is needed to suppress a deprecation warning from one of the libraries.
End of explanation
"""
xMax = max(X) + 0.2
plt.figure(figsize=(12, 10))
sns.set(style='whitegrid')
plt.scatter(X, Y, c='b', s=4)
plt.plot([0, xMax], [ϑ0, ϑ0 + ϑ1 * xMax], c='r')
plt.xlabel('engine displacement in litres')
plt.ylabel('fuel consumption in litres per 100 km')
plt.title('Fuel Consumption vs. Engine Displacement')
plt.show()
"""
Explanation: Let's plot the regression line with the data.
End of explanation
"""
|
jamesdj/tobit
|
tobit.ipynb
|
mit
|
# Imports assumed by this cell (not shown in this excerpt); the TobitModel import path
# is a guess based on this repository's layout.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from tobit import TobitModel

rs = np.random.RandomState(seed=10)
ns = 100
nf = 10
x, y_orig, coef = make_regression(n_samples=ns, n_features=nf, coef=True, noise=0.0, random_state=rs)
x = pd.DataFrame(x)
y = pd.Series(y_orig)
n_quantiles = 3 # two-thirds of the data is truncated
quantile = 100/float(n_quantiles)
lower = np.percentile(y, quantile)
upper = np.percentile(y, (n_quantiles - 1) * quantile)
left = y < lower
right = y > upper
cens = pd.Series(np.zeros((ns,)))
cens[left] = -1
cens[right] = 1
y = y.clip(upper=upper, lower=lower)
hist = plt.hist(y)
tr = TobitModel()
result = tr.fit(x, y, cens, verbose=False)
fig, ax = plt.subplots()
ind = np.arange(len(coef))
width = 0.25
rects1 = ax.bar(ind, coef, width, color='g', label='True')
rects2 = ax.bar(ind + width, tr.coef_, width, color='r', label='Tobit')
rects3 = ax.bar(ind + (2 * width), tr.ols_coef_, width, color='b', label='OLS')
plt.ylabel("Coefficient")
plt.xlabel("Index of regressor")
plt.title("Tobit vs. OLS on censored data")
leg = plt.legend(loc=(0.22, 0.65))
"""
Explanation: Recovers true coefficients on artificial censored regression data
End of explanation
"""
data_file = 'tobit_data.txt'
df = pd.read_table(data_file, sep=' ')
df.loc[df.gender=='male', 'gender'] = 1
df.loc[df.gender=='female', 'gender'] = 0
df.loc[df.children=='yes', 'children'] = 1
df.loc[df.children=='no', 'children'] = 0
df = df.astype(float)
df.head()
y = df.affairs
x = df.drop(['affairs', 'gender', 'education', 'children'], axis=1)
cens = pd.Series(np.zeros((len(y),)))
cens[y==0] = -1
cens.value_counts()
tr = TobitModel()
tr = tr.fit(x, y, cens, verbose=False)
tr.coef_
"""
Explanation: Note that the truncation values do not have to be the same for e.g. all left-censored observations, or all right-censored observations, as in this example. However, the model does assume that the errors will be normally-distributed.
Comparison to R censReg package result on AER data
Commands in R for Tobit analysis of Affairs data:
install.packages('censReg')
library(censReg)
install.packages('AER')
data('Affairs', package='AER')
write.table(Affairs, 'tobit_data.txt', quote=FALSE, row.names=FALSE)
estResult <- censReg( affairs ~ age + yearsmarried + religiousness +occupation + rating, data = Affairs)
summary(estResult)
Python analysis of same data
End of explanation
"""
|
cucs-numpde/class
|
Fundamentals.ipynb
|
bsd-2-clause
|
%matplotlib notebook
import numpy
from matplotlib import pyplot
pyplot.style.use('ggplot')
def u_n(n):
x = numpy.linspace(0,1,n)
return x, 1 + x + x**2/2 + x**3/6
for n in (40, 20, 10):
x, y = u_n(n)
pyplot.plot(x, y, 'o', label='$u_{%d}(x)$' % n)
pyplot.plot(x, numpy.exp(x), label='$\exp(x)$')
pyplot.legend(loc='upper left')
"""
Explanation: Jupyter notebooks
This is a Jupyter notebook using Python. You can install Jupyter locally to edit and interact with this notebook.
Approximation of functions
Numerical analysis is the study of algorithms for the problems of continuous mathematics. -- L. N. Trefethen
This course is concerned with the approximation of functions $u(x)$ for $x$ in some domain $\Omega \subset \mathbb R^d$ (where the dimension $d \le 4$ for most problems that we will consider). We will demand that $u$ satisfy some equations in its interior and on the boundary $\partial \Omega$ of our domain. We will usually be solving well-posed problems, for which there is a unique exact solution $u_*(x)$. While such exact solutions are rarely available in applications, we can *manufacture* exact solutions to test our numerical schemes.
Cost and convergence
There is typically no known finite representation of the exact solution $u_*$ so we will use discrete approximations $u_n$ where $n$ is some measure of the cost. Given an approximation $u_n(x)$, we need a measure of discretization error.
End of explanation
"""
pyplot.figure()
for n in (10, 20, 40, 80):
x, y = u_n(n)
pyplot.plot(n, numpy.linalg.norm(y - numpy.exp(x)), 'o')
"""
Explanation: How accurate is this approximation?
The $u_{10}(x)$ line here is visually on top of the $u_{80}(x)$ line so we might reasonably say that both should be about the same accuracy. Let's see what happens when we compare the norm of the vector $\exp(x)$ with the approximations $u_n(x)$.
End of explanation
"""
pyplot.figure()
for n in (10, 20, 40, 80, 160, 320):
x, y = u_n(n)
pyplot.semilogx(n, numpy.linalg.norm(y - numpy.exp(x))/numpy.sqrt(n), 'o')
pyplot.title('Scaled like trapezoid rule integration of $L^2((0,1))$')
"""
Explanation: Which norm?
The norm of a vector $y$ of length $n$ is
$$ \lVert y \rVert = \sqrt{y^T y} = \sqrt{\sum_{i=1}^n |y_i|^2} .$$
Clearly this depends on the length $n$ of the vector. How can we measure norms of functions?
$$ \lVert u(x) \rVert_{L^2(\Omega)} = \sqrt{\langle u, u \rangle} = \sqrt{\int_{\Omega} u^2} .$$
Definition: norm
A norm on a vector space $V$ is a functional $\lVert \cdot \rVert : V \to \mathbb R$ satisfying all of the following
$\lVert \alpha y \rVert = |\alpha| \lVert y \rVert $
$\lVert x + y \rVert \le \lVert x \rVert + \lVert y \rVert $ "triangle inequality"
$\lVert y \rVert \ge 0$
$\lVert y \rVert = 0$ if and only if $y = \mathbb 0$.
If the last condition is not met, we call it a seminorm.
Examples of norms and seminorms
$L^2$ norm
$\max$ (or $L^\infty$) norm
$\lVert u \rVert = |u(x_i)|$ seminorm
$\lVert u \rVert = |\int_{\Gamma} u|$ seminorm
Computing the $L^2(\Omega)$ norm
Can this norm be computed exactly for arbitrary functions?
If we have a discrete function $u_n(x_i)|_{i=1}^n$, what would we need to compute
$$ \lVert u_n - u_* \rVert_{L^2(\Omega)} $$
accurately?
Discuss examples of catastrophic failure.
End of explanation
"""
def matmult1(A, x):
"""Entries of y are dot products of rows of A with x"""
y = numpy.zeros_like(A[:,0])
for i in range(len(A)):
row = A[i,:]
for j in range(len(row)):
y[i] += row[j] * x[j]
return y
A = numpy.array([[1,2],[3,5],[7,11]])
x = numpy.array([10,20])
matmult1(A, x)
def matmult2(A, x):
"""Same idea, but more compactly"""
y = numpy.zeros_like(A[:,0])
for i,row in enumerate(A):
y[i] = row.dot(x)
return y
matmult2(A, x)
def matmult3(A, x):
"""y is a linear expansion of the columns of A"""
y = numpy.zeros_like(A[:,0])
for j,col in enumerate(A.T):
y += col * x[j]
return y
matmult3(A, x)
# We will use this version
A.dot(x)
"""
Explanation: Convergence
We will usually seek methods that are convergent in a norm,
$$ \lim_{n\to \infty} \lVert u_n - u_* \rVert = 0. $$
For scientific and engineering purposes, it might be sufficient for the method to be convergent in a seminorm. For example, lift and drag coefficients for an airfoil might be sufficient.
We will be interested in the rate of convergence as $n$ is increased. For an approximation problem (PDE or otherwise) in $d$ dimensions, $\Omega \subset \mathbb R^d$, we will say that a method converges at order $p$ in the norm $ \lVert \cdot \rVert $ if
$$ \lVert u_n - u_*\rVert \le C n^{-p/d} $$
for some constant $C$ that is independent of resolution. This bound is often expressed in terms of a nominal grid spacing
$$ h = n^{-1/d} $$
in which case we have
$$ \lVert u_h - u_* \rVert \le C h^p .$$
Differential equations
Consider the boundary value problem
$$ \begin{gather} -\frac{d^2 u}{dx^2} = f(x) \quad x \in \Omega = (-1,1) \\
u(-1) = a \quad \frac{du}{dx}(1) = b . \end{gather} $$
$f(x)$ is the "forcing" term and we have a Dirichlet boundary condition at the left end of the domain and a Neumann condition on the right end. We need to choose
* how to represent $u(x)$, including evaluating it on the boundary,
* how to compute derivatives of $u$,
* in what sense to ask for the differential equation to be satisfied,
* where to evaluate $f(x)$ or integrals thereof,
* how to enforce boundary conditions.
Finite difference, finite volume, and finite element methods provide related but distinct frameworks for making these choices.
Linear Algebra
You have all seen basic linear algebra before, but this will summarize some different ways of thinking about the fundamental operations.
Linear algebra is the study of linear transformations on vectors, which represent points in a finite dimensional space. The matrix-vector product $y = A x$ is a linear combination of the columns of $A$. The familiar definition,
$$ y_i = \sum_j A_{i,j} x_j $$
can also be viewed as
$$ y = \Bigg[ A_{:,0} \Bigg| A_{:,1} \Bigg| \dotsm \Bigg] \begin{bmatrix} x_0 \\ x_1 \\ \vdots \end{bmatrix}
= \Bigg[ A_{:,0} \Bigg] x_0 + \Bigg[ A_{:,1} \Bigg] x_1 + \dotsb . $$
The notation $A_{i,j}$ corresponds to the Python syntax A[i,j] and the colon : means the entire range (row or column). So $A_{:,j}$ is the $j$th column and $A_{i,:}$ is the $i$th row. The corresponding Python syntax is A[:,j] and A[i,:].
End of explanation
"""
B = numpy.array([[2, 3],[0, 4]])
print(B)
print(B.dot(B.T), B.T.dot(B))
Binv = numpy.linalg.inv(B)
Binv.dot(B), B.dot(Binv)
"""
Explanation: Some common terminology
The range of $A$ is the space spanned by its columns. This definition coincides with the range of a function $f(x)$ when $f(x) = A x$.
The nullspace of $A$ is the space of vectors $x$ such that $A x = 0$.
The rank of $A$ is the dimension of its range.
A matrix has full rank if the nullspace of either $A$ or $A^T$ is empty (only the 0 vector). Equivalently, if all the columns of $A$ (or $A^T$) are linearly independent.
A nonsingular (or invertible) matrix is a square matrix of full rank. We call the inverse $A^{-1}$ and it satisfies $A^{-1} A = A A^{-1} = I$.
$\DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\null}{null} $
If $A \in \mathbb{R}^{m\times m}$, which of these doesn't belong?
1. $A$ has an inverse $A^{-1}$
2. $\rank (A) = m$
3. $\null(A) = \{0\}$
4. $A A^T = A^T A$
5. $\det(A) \ne 0$
6. $A x = 0$ implies that $x = 0$
When we write $x = A^{-1} y$, we mean that $x$ is the unique vector such that $A x = y$.
(It is rare that we explicitly compute a matrix $A^{-1}$, though it's not as "bad" as people may have told you.)
A vector $y$ is equivalent to $\sum_i e_i y_i$ where $e_i$ are columns of the identity.
Meanwhile, $x = A^{-1} y$ means that we are expressing that same vector $y$ in the basis of the columns of $A$, i.e., $\sum_i A_{:,i} x_i$.
End of explanation
"""
# Make some polynomials
x = numpy.linspace(-1,1)
A = numpy.vander(x, 4)
q0 = A.dot(numpy.array([0,0,0,.5])) # .5
q1 = A.dot(numpy.array([0,0,1,0])) # x
q2 = A.dot(numpy.array([0,1,0,0])) # x^2
pyplot.figure()
pyplot.plot(x, numpy.array([q0, q1, q2]).T)
x
# Inner products of even and odd functions
q0 = q0 / numpy.linalg.norm(q0)
q1.dot(q0), q2.dot(q0), q2.dot(q1)
q0
# What is the constant component of q2?
pyplot.figure()
pyplot.plot(x, q2.dot(q0)*q0)
# Let's project that away so that q2 is orthogonal to q0
q2 = q2 - q2.dot(q0)*q0
Q = numpy.array([q0, q1, q2]).T
print(Q.T.dot(Q))
pyplot.figure()
pyplot.plot(x, Q)
"""
Explanation: Inner products and orthogonality
The inner product
$$ x^T y = \sum_i x_i y_i $$
of vectors (or columns of a matrix) tell us about their magnitude and about the angle.
The norm is induced by the inner product,
$$ \lVert x \rVert = \sqrt{x^T x} $$
and the angle $\theta$ is defined by
$$ \cos \theta = \frac{x^T y}{\lVert x \rVert \, \lVert y \rVert} . $$
Inner products are bilinear, which means that they satisfy some convenient algebraic properties
$$ \begin{split}
(x + y)^T z &= x^T z + y^T z \\
x^T (y + z) &= x^T y + x^T z \\
(\alpha x)^T (\beta y) &= \alpha \beta x^T y \\
\end{split} . $$
The pairwise inner products between two sets of vectors can be expressed by collecting the sets as columns in matrices and writing $A = X^T Y$ where $A_{i,j} = x_i^T y_j$.
It follows from this definition that
$$ (X^T Y)^T = Y^T X .$$
Orthogonal matrices
If $x^T y = 0$ then we say $x$ and $y$ are orthogonal (or "$x$ is orthogonal to $y$").
A vector is said to be normalized if $\lVert x \rVert = 1$.
If $x$ is orthogonal to $y$ and $\lVert x \rVert = \lVert y \rVert = 1$ then we say $x$ and $y$ are orthonormal.
A matrix with orthonormal columns is said to be an orthogonal matrix.
We typically use $Q$ or $U$ and $V$ for matrices that are known/constructed to be orthogonal.
Orthogonal matrices are always full rank -- the columns are linearly independent.
The inverse of a square orthogonal matrix is its transpose:
$$ Q^T Q = Q Q^T = I . $$
Orthogonal matrices are a powerful building block for robust numerical algorithms.
End of explanation
"""
def gram_schmidt_naive(X):
Q = numpy.zeros_like(X)
R = numpy.zeros((len(X.T),len(X.T)))
for i in range(len(Q.T)):
v = X[:,i].copy()
for j in range(i):
r = v.dot(Q[:,j])
R[j,i] = r
v -= r * Q[:,j] # "modified Gram-Schmidt" - remove each component before next dot product
R[i,i] = numpy.linalg.norm(v)
Q[:,i] = v / R[i,i]
return Q, R
x = numpy.linspace(-1,1,50)
k = 6
A = numpy.vander(x, k, increasing=True)
Q, R = gram_schmidt_naive(A)
print(numpy.linalg.norm(Q.T.dot(Q) - numpy.eye(k)))
print(numpy.linalg.norm(Q.dot(R)-A))
pyplot.figure()
pyplot.plot(x, Q)
A.shape
Q, R = gram_schmidt_naive(numpy.vander(x, 4, increasing=True))
pyplot.figure()
pyplot.plot(x, Q)
"""
Explanation: Gram-Schmidt Orthogonalization
Given a collection of vectors (columns of a matrix), we can find an orthogonal basis by applying the above procedure one column at a time.
End of explanation
"""
eps = numpy.float32(1)
while numpy.float32(1) + eps > 1:
eps /= numpy.float64(2)
eps_machine = 2*eps # We call this "machine epsilon"
print(eps_machine)
format((.2 - 1/3) + 2/15, '.20f')
format(.1, '.20f')
"""
Explanation: Theorem: all full-rank $m\times n$ matrices ($m \ge n$) have a unique $Q R$ factorization with $R_{j,j} > 0$.
Absolute condition number
Consider a function $f: X \to Y$ and define the absolute condition number
$$ \hat\kappa = \lim_{\delta \to 0} \max_{|\delta x| < \delta} \frac{|f(x + \delta x) - f(x)|}{|\delta x|} = \max_{\delta x} \frac{|\delta f|}{|\delta x|}. $$
If $f$ is differentiable, then $\hat\kappa = |f'(x)|$.
Floating point arithmetic
Floating point arithmetic $x \circledast y := \text{float}(x * y)$ is exact within a relative accuracy $\epsilon_{\text{machine}}$. Formally,
$$ x \circledast y = (x * y) (1 + \epsilon) $$
for some $|\epsilon| \le \epsilon_{\text{machine}}$.
End of explanation
"""
numpy.log(1 + 1e-10) - numpy.log1p(1e-10)
"""
Explanation: Relative condition number
Given the relative nature of floating point arithmetic, it is more useful to discuss relative condition number,
$$ \kappa = \max_{\delta x} \frac{|\delta f|/|f|}{|\delta x|/|x|}
= \max_{\delta x} \Big[ \frac{|\delta f|/|\delta x|}{|f| / |x|} \Big] $$
or, if $f$ is differentiable,
$$ \kappa = \max_{\delta x} |f'(x)| \frac{|x|}{|f|} . $$
How does a condition number get big?
Take-home message
The relative accuracy of the best-case algorithm will not be reliably better than $\epsilon_{\text{machine}}$ times the condition number.
$$ \max_{\delta x} \frac{|\delta f|}{|f|} \ge \kappa \cdot \epsilon_{\text{machine}} $$
End of explanation
"""
x1 = numpy.array([-0.9, 0.1, 0.5, 0.8]) # points where we know values
y = numpy.array([1, 2.4, -0.2, 1.3]) # values at those points
pyplot.figure()
pyplot.plot(x1, y, '*')
B = numpy.vander(x1, 4) # Vandermonde matrix at the known points
Q, R = gram_schmidt_naive(B)
p = numpy.linalg.solve(R, Q.T.dot(y)) # Compute the polynomial coefficients
print(p)
pyplot.plot(x, numpy.vander(x,4).dot(p)) # Plot the polynomial evaluated at all points
print('B =', B, '\np =', p)
m = 20
V = numpy.vander(numpy.linspace(-1,1,m), increasing=False)
Q, R = gram_schmidt_naive(V)
def qr_test(qr, V):
Q, R = qr(V)
m = len(Q.T)
print(qr.__name__,
numpy.linalg.norm(Q.dot(R) - V),
numpy.linalg.norm(Q.T.dot(Q) - numpy.eye(m)))
qr_test(gram_schmidt_naive, V)
qr_test(numpy.linalg.qr, V)
def gram_schmidt_classical(X):
Q = numpy.zeros_like(X)
R = numpy.zeros((len(X.T),len(X.T)))
for i in range(len(Q.T)):
v = X[:,i].copy()
R[:i,i] = Q[:,:i].T.dot(v)
v -= Q[:,:i].dot(R[:i,i])
R[i,i] = numpy.linalg.norm(v)
Q[:,i] = v / R[i,i]
return Q, R
qr_test(gram_schmidt_classical, V[:,:15])
# Q, R = numpy.linalg.qr(V)
#print(Q[:,0])
"""
Explanation: Stability
We use the notation $\tilde f(x)$ to mean a numerical algorithm for approximating $f(x)$. Additionally, $\tilde x = x (1 + \epsilon)$ is some "good" approximation of the exact input $x$.
(Forward) Stability
"nearly the right answer to nearly the right question"
$$ \frac{\lvert \tilde f(x) - f(\tilde x) \rvert}{| f(\tilde x) |} \in O(\epsilon_{\text{machine}}) $$
for some $\tilde x$ that is close to $x$
Backward Stability
"exactly the right answer to nearly the right question"
$$ \tilde f(x) = f(\tilde x) $$
for some $\tilde x$ that is close to $x$
Every backward stable algorithm is stable.
Not every stable algorithm is backward stable.
Example: $\tilde f(x) = \text{float}(x) + 1$
The algorithm computes
$$\tilde f(x) = \text{float}(x) \oplus 1 = x(1+\epsilon_1) + 1 = (x + 1 + x\epsilon_1)(1 + \epsilon_2) $$
and we can express any $\tilde x = x(1 + \epsilon_3)$.
To see if the algorithm is stable, we compute
$$ \frac{\tilde f(x) - f(\tilde x)}{|f(\tilde x)|} = \frac{(x + 1 + x\epsilon_1)(1 + \epsilon_2) - [x(1+ \epsilon_3) + 1]}{\tilde x + 1} = \frac{(x + 1)\epsilon_2 + x(\epsilon_1 - \epsilon_3) + O(\epsilon^2)}{x + 1 + x\epsilon_3} . $$
If we can choose $\epsilon_3$ to make this small, then the method will be (forward) stable, and if we can make this expression exactly zero, then we'll have backward stability.
Trying for the latter, we solve for $\epsilon_3$ by setting the numerator equal to zero,
$$ \epsilon_3 = \frac{x + 1}{x}\epsilon_2 + \epsilon_1 + O(\epsilon^2)/x $$
which is small so long as $|x| \gg 0$, but the first term blows up as $x \to 0$.
In other words, the fact that $\epsilon_2$ can produce a large error relative to the input causes this algorithm to not be backward stable.
In contrast, this $x\to 0$ case is not a problem for forward stability because $\epsilon_3 = \epsilon_1$ yields error on the order of $\epsilon_2$.
Example: $\tilde f(x,y) = \text{float}(x) \oplus \text{float}(y)$
Now we are interested in
$$ \frac{\tilde f(x,y) - f(\tilde x,\tilde y)}{f(\tilde x,\tilde y)} $$
and we can vary both $\tilde x$ and $\tilde y$. If we choose $y=1$, then the ability to vary $\tilde y$ is powerful enough to ensure backward stability.
Accuracy of backward stable algorithms (Theorem)
A backward stable algorithm for computing $f(x)$ has relative accuracy
$$ \left\lvert \frac{\tilde f(x) - f(x)}{f(x)} \right\rvert \in O(\kappa(f) \epsilon_{\text{machine}}) . $$
This is a rewording of a statement made earlier -- backward stability is the best case.
Orthogonal polynomials
We used x = numpy.linspace(-1,1) which uses $m=50$ points by default. The number 50 is arbitrary and as we use more points, our columns become better approximations of continuous functions and the vector inner product becomes an integral (up to scaling):
$$ \frac 2 m \sum_{i=1}^m p_i q_i \approx \int_{-1}^1 p(x) q(x) . $$
When we orthogonalize the monomials using this inner product, we get the Legendre Polynomials (up to scaling). These polynomials have important applications in physics and engineering, as well as playing an important role in approximation (which we will go into in more detail).
Solving equations using QR
To solve
$$ A x = b $$
we can compute $A = QR$ and then
$$ x = R^{-1} Q^T b . $$
This also works for non-square systems!
End of explanation
"""
def gram_schmidt_modified(X):
Q = X.copy()
R = numpy.zeros((len(X.T), len(X.T)))
for i in range(len(Q.T)):
R[i,i] = numpy.linalg.norm(Q[:,i])
Q[:,i] /= R[i,i]
R[i,i+1:] = Q[:,i+1:].T.dot(Q[:,i])
Q[:,i+1:] -= numpy.outer(Q[:,i], R[i,i+1:])
return Q, R
qr_test(gram_schmidt_modified, V)
"""
Explanation: Classical Gram-Schmidt is highly parallel, but unstable, as evidenced by the lack of orthogonality in $Q$.
Right-looking algorithms
The implementations above have been "left-looking"; when working on column $i$, we compare it only to columns to the left (i.e., $j < i$). We can reorder the algorithm to look to the right by projecting $q_i$ out of all columns $j > i$. This algorithm is stable while being just as parallel as gram_schmidt_classical.
End of explanation
"""
def householder_Q_times(V, x):
"""Apply orthogonal matrix represented as list of Householder reflectors"""
y = x.copy()
for i in reversed(range(len(V))):
y[i:] -= 2 * V[i] * V[i].dot(y[i:])
return y
def qr_householder1(A):
"Compute QR factorization using naive Householder reflection"
m, n = A.shape
R = A.copy()
V = []
for i in range(n):
x = R[i:,i]
v = -x
v[0] += numpy.linalg.norm(x)
v = v/numpy.linalg.norm(v) # Normalized reflector plane
R[i:,i:] -= 2 * numpy.outer(v, v.dot(R[i:,i:]))
V.append(v) # Storing reflectors is equivalent to storing orthogonal matrix
Q = numpy.eye(m, n)
for i in range(n):
Q[:,i] = householder_Q_times(V, Q[:,i])
return Q, numpy.triu(R[:n,:])
qr_test(qr_householder1, numpy.array([[1.,2],[3,4],[5,6]]))
qr_test(qr_householder1, V)
qr_test(numpy.linalg.qr, V)
"""
Explanation: Householder triangularization
Gram-Schmidt methods perform triangular transformations to build an orthogonal matrix. As we have seen, $X = QR$ is satisfied accurately, but $Q$ may not be orthogonal when $X$ is ill-conditioned. Householder triangularization instead applies a sequence of orthogonal transformations to build a triangular matrix.
$$ \underbrace{Q_{n-1} \dotsb Q_0}_{Q^T} A = R $$
The structure of the algorithm is
$$ \underbrace{\begin{bmatrix} * & * & * \\ * & * & * \\ * & * & * \\ * & * & * \\ * & * & * \\ \end{bmatrix}}_{A} \to
\underbrace{\begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & * & * \\ 0 & * & * \\ 0 & * & * \\ \end{bmatrix}}_{Q_0 A} \to
\underbrace{\begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & * \\ 0 & 0 & * \\ \end{bmatrix}}_{Q_1 Q_0 A} \to
\underbrace{\begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}}_{Q_2 Q_1 Q_0 A}
$$
where the elementary orthogonal matrices $Q_i$ chosen to introduce zeros below the diagonal in the $i$th column of $R$.
Each of these transformations will have the form
$$Q_i = \begin{bmatrix} I_i & 0 \\ 0 & F \end{bmatrix}$$
where $F$ is a "reflection" that achieves
$$ F x = \begin{bmatrix} \lVert x \rVert \\ 0 \\ 0 \\ \vdots \end{bmatrix} $$
where $x$ is the column of $R$ from the diagonal down.
This transformation is a reflection across a plane with normal $v = Fx - x = \lVert x \rVert e_1 - x$.
The reflection, as depicted in Trefethen and Bau (1999), can be written $F = I - 2 \frac{v v^T}{v^T v}$.
End of explanation
"""
qr_test(qr_householder1, numpy.eye(1))
qr_test(qr_householder1, numpy.eye(3,2))
"""
Explanation: Choice of two projections
It turns out our implementation has a nasty deficiency.
End of explanation
"""
qr_test(qr_householder1, numpy.array([[1.,1], [2e-8,1]]))
print(qr_householder1(numpy.array([[1.,1], [2e-8,1]])))
"""
Explanation: Inside qr_householder1, we have the lines
x = R[i:,i]
v = -x
v[0] += numpy.linalg.norm(x)
v = v/numpy.linalg.norm(v) # Normalized reflector plane
What happens when $$x = \begin{bmatrix}1 \\ 0 \end{bmatrix}$$
(i.e., the column of $R$ is already upper triangular)?
We are trying to define a reflector plane (via its normal vector) from the zero vector,
$$v = \lVert x \rVert e_0 - x .$$
When we try to normalize this vector, we divide zero by zero and the algorithm breaks down (nan). Maybe we just need to test for this special case and "skip ahead" when no reflection is needed? And if so, how would we define $Q$?
End of explanation
"""
def qr_householder2(A):
"Compute QR factorization using Householder reflection"
m, n = A.shape
R = A.copy()
V = []
for i in range(n):
v = R[i:,i].copy()
v[0] += numpy.copysign(numpy.linalg.norm(v), v[0]) # Choose the further of the two reflections
v = v/numpy.linalg.norm(v) # Normalized reflector plane
R[i:,i:] -= 2 * numpy.outer(v, v.dot(R[i:,i:]))
V.append(v) # Storing reflectors is equivalent to storing orthogonal matrix
Q = numpy.eye(m, n)
for i in range(n):
Q[:,i] = householder_Q_times(V, Q[:,i])
return Q, numpy.triu(R[:n,:])
qr_test(qr_householder2, numpy.eye(3,2))
qr_test(qr_householder2, numpy.array([[1.,1], [1e-8,1]]))
print(qr_householder2(numpy.array([[1.,1], [1e-8,1]])))
qr_test(qr_householder2, V)
"""
Explanation: The error $QR - A$ is still $10^{-8}$ for this very well-conditioned matrix so something else must be at play here.
End of explanation
"""
def R_solve(R, b):
"""Solve Rx = b using back substitution."""
x = b.copy()
m = len(b)
for i in reversed(range(m)):
x[i] -= R[i,i+1:].dot(x[i+1:])
x[i] /= R[i,i]
return x
x = numpy.linspace(-1,1,15)
A = numpy.vander(x, 4)
print(A.shape)
Q, R = numpy.linalg.qr(A)
b = Q.T.dot(A.dot(numpy.array([1,2,3,4])))
numpy.linalg.norm(R_solve(R, b) - numpy.linalg.solve(R, b))
R_solve(R, b)
"""
Explanation: We now have a usable implementation of Householder QR. There are some further concerns for factoring rank-deficient matrices. We will visit the concept of pivoting later, in the context of LU and Cholesky factorization.
Condition number of a matrix
We may have informally referred to a matrix as "ill-conditioned" when the columns are nearly linearly dependent, but let's make this concept more precise. Recall the definition of (relative) condition number from the Rootfinding notes,
$$ \kappa = \max_{\delta x} \frac{|\delta f|/|f|}{|\delta x|/|x|} . $$
We understood this definition for scalar problems, but it also makes sense when the inputs and/or outputs are vectors (or matrices, etc.) and absolute value is replaced by vector (or matrix) norms. Let's consider the case of matrix-vector multiplication, for which $f(x) = A x$.
$$ \kappa(A) = \max_{\delta x} \frac{\lVert A (x+\delta x) - A x \rVert/\lVert A x \rVert}{\lVert \delta x\rVert/\lVert x \rVert}
= \max_{\delta x} \frac{\lVert A \delta x \rVert}{\lVert \delta x \rVert} \, \frac{\lVert x \rVert}{\lVert A x \rVert} = \lVert A \rVert \frac{\lVert x \rVert}{\lVert A x \rVert} . $$
There are two problems here:
I wrote $\kappa(A)$ but my formula depends on $x$.
What is that $\lVert A \rVert$ beastie?
Stack push: Matrix norms
Vector norms are built into the linear space (and defined in term of the inner product). Matrix norms are induced by vector norms, according to
$$ \lVert A \rVert = \max_{x \ne 0} \frac{\lVert A x \rVert}{\lVert x \rVert} . $$
This equation makes sense for non-square matrices -- the vector norms of the input and output spaces may differ.
Due to linearity, all that matters is direction of $x$, so it could equivalently be written
$$ \lVert A \rVert = \max_{\lVert x \rVert = 1} \lVert A x \rVert . $$
Stack pop
Now we understand the formula for condition number, but it depends on $x$. Consider the matrix
$$ A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} . $$
What is the norm of this matrix?
What is the condition number when $x = [1,0]^T$?
What is the condition number when $x = [0,1]^T$?
The condition number of matrix-vector multiplication depends on the vector. The condition number of the matrix is the worst case (maximum) of the condition number for any vector, i.e.,
$$ \kappa(A) = \max_{x \ne 0} \lVert A \rVert \frac{\lVert x \rVert}{\lVert A x \rVert} .$$
If $A$ is invertible, then we can rephrase as
$$ \kappa(A) = \max_{x \ne 0} \lVert A \rVert \frac{\lVert A^{-1} (A x) \rVert}{\lVert A x \rVert} =
\max_{A x \ne 0} \lVert A \rVert \frac{\lVert A^{-1} (A x) \rVert}{\lVert A x \rVert} = \lVert A \rVert \lVert A^{-1} \rVert . $$
Evidently multiplying by a matrix is just as ill-conditioned of an operation as solving a linear system using that matrix.
End of explanation
"""
# Test accuracy of solver for an ill-conditioned square matrix
x = numpy.linspace(-1,1,19)
A = numpy.vander(x)
print('cond(A) = ',numpy.linalg.cond(A))
Q, R = numpy.linalg.qr(A)
print('cond(R^{-1} Q^T A) =', numpy.linalg.cond(numpy.linalg.solve(R, Q.T.dot(A))))
L = numpy.linalg.cholesky(A.T.dot(A))
print('cond(L^{-T} L^{-1} A^T A) =', numpy.linalg.cond(numpy.linalg.solve(L.T, numpy.linalg.solve(L, A.T.dot(A)))))
"""
Explanation: Cost of Householder factorization
The dominant cost comes from the line
Python
R[i:,i:] -= 2 * numpy.outer(v, v.dot(R[i:,i:]))
where R[i:,i:] is an $(m-i)\times(n-i)$ matrix.
This line performs $2(m-i)(n-i)$ operations in v.dot(R[i:,i:]), another $(m-i)(n-i)$ in the "outer" product and again in subtraction. As written, multiplication by 2 would be another $(m-i)(n-i)$ operations, but is only $m-i$ operations if we rewrite as
Python
w = 2*v
R[i:,i:] -= numpy.outer(w, v.dot(R[i:,i:]))
in which case the leading order cost is $4(m-i)(n-i)$. To compute the total cost, we need to sum over all columns $i$,
$$\begin{split} \sum_{i=1}^n 4(m-i)(n-i) &= 4 \Big[ \sum_{i=1}^n (m-n)(n-i) + \sum_{i=1}^n (n-i)^2 \Big] \\
&= 4 (m-n) \sum_{i=1}^n i + 4 \sum_{i=1}^n i^2 \\
&\approx 2 (m-n) n^2 + 4 n^3/3 \\
&= 2 m n^2 - \frac 2 3 n^3 .
\end{split}$$
Recall that Gram-Schmidt QR cost $2 m n^2$, so Householder costs about the same when $m \gg n$ and is markedly less expensive when $m \approx n$.
Least squares and the normal equations
A least squares problem takes the form: given an $m\times n$ matrix $A$ ($m \ge n$), find $x$ such that
$$ \lVert Ax - b \rVert $$
is minimized. If $A$ is square and full rank, then this minimizer will satisfy $A x - b = 0$, but that is not the case in general because $b$ is not in the range of $A$.
The residual $A x - b$ must be orthogonal to the range of $A$.
Is this the same as saying $A^T (A x - b) = 0$?
If $QR = A$, is it the same as $Q^T (A x - b) = 0$?
In HW2, we showed that $QQ^T$ is an orthogonal projector onto the range of $Q$. If $QR = A$,
$$ QQ^T (A x - b) = QQ^T(Q R x - b) = Q (Q^T Q) R x - QQ^T b = QR x - QQ^T b = A x - QQ^T b . $$
So if $b$ is in the range of $A$, we can solve $A x = b$. If not, we need only orthogonally project $b$ into the range of $A$.
Solution by QR (Householder)
Solve $R x = Q^T b$.
QR factorization costs $2 m n^2 - \frac 2 3 n^3$ operations and is done once per matrix $A$.
Computing $Q^T b$ costs $4 (m-n)n + 2 n^2 = 4 mn - 2n^2$ (using the elementary reflectors, which are stable and lower storage than naive storage of $Q$).
Solving with $R$ costs $n^2$ operations. Total cost per right hand side is thus $4 m n - n^2$.
This method is stable and accurate.
Solution by Cholesky
The mathematically equivalent form $(A^T A) x = A^T b$ are called the normal equations. The solution process involves factoring the symmetric and positive definite $n\times n$ matrix $A^T A$.
Computing $A^T A$ costs $m n^2$ flops, exploiting symmetry.
Factoring $A^T A = R^T R$ costs $\frac 1 3 n^3$ flops. The total factorization cost is thus $m n^2 + \frac 1 3 n^3$.
Computing $A^T b$ costs $2 m n$.
Solving with $R^T$ costs $n^2$.
Solving with $R$ costs $n^2$. Total cost per right hand side is thus $2 m n + 2 n^2$.
The product $A^T A$ is ill-conditioned: $\kappa(A^T A) = \kappa(A)^2$ and can reduce the accuracy of a least squares solution.
Solution by Singular Value Decomposition
Next, we will discuss a factorization
$$ U \Sigma V^T = A $$
where $U$ and $V$ have orthonormal columns and $\Sigma$ is diagonal with nonnegative entries.
The entries of $\Sigma$ are called singular values and this decomposition is the singular value decomposition (SVD).
It may remind you of an eigenvalue decomposition $X \Lambda X^{-1} = A$, but
* the SVD exists for all matrices (including non-square and deficient matrices)
* $U,V$ have orthogonal columns (while $X$ can be arbitrarily ill-conditioned).
Indeed, if a matrix is symmetric and positive definite (all positive eigenvalues), then $U=V$ and $\Sigma = \Lambda$.
Computing an SVD requires a somewhat complicated iterative algorithm, but a crude estimate of the cost is $2 m n^2 + 11 n^3$. Note that this is similar to the cost of $QR$ when $m \gg n$, but much more expensive for square matrices.
Solving with the SVD involves
* Compute $U^T b$ at a cost of $2 m n$.
* Solve with the diagonal $n\times n$ matrix $\Sigma$ at a cost of $n$.
* Apply $V$ at a cost of $2 m n$. The total cost per right hand side is thus $4 m n$.
Pseudoinverse
An alternative is to explicitly form the $n\times m$ pseudoinverse $A^\dagger = R^{-1} Q^T$ (at a cost of $mn^2$) at which point each right hand side costs $2 mn$. Why might we do this?
Lots of right hand sides
Real-time solution
End of explanation
"""
|
kvr777/deep-learning
|
tv-script-generation/.ipynb_checkpoints/dlnd_tv_script_generation-checkpoint.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (10, 20)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
from collections import Counter
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab, 1)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
table = ['.=||Period||' ,
',=||Comma||' ,
'"=||QuotationMark||',
';=||Semicolon||',
'!=||ExclamationMark||',
'?=||QuestionMark||',
'(=||LeftParentheses||',
')=||RightParentheses||',
'--=||Dash||',
'\n=||Return||']
# a = [ 'abc=lalalla', 'appa=kdkdkdkd', 'kkakaka=oeoeoeo']
token_lookup = dict(s.split('=',1) for s in table)
return token_lookup
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
return None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return None, None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
jdhp-docs/python-notebooks
|
python_collections_en.ipynb
|
mit
|
import collections
"""
Explanation: Import directives
End of explanation
"""
d = collections.OrderedDict()
d["2"] = 2
d["3"] = 3
d["1"] = 1
print(d)
print(type(d.keys()))
print(list(d.keys()))
print(type(d.values()))
print(list(d.values()))
for k, v in d.items():
print(k, v)
"""
Explanation: Ordered dictionaries
See https://docs.python.org/3/library/collections.html#collections.OrderedDict
End of explanation
"""
|
oroszl/mezo
|
1D.ipynb
|
gpl-3.0
|
%pylab inline
from ipywidgets import *
"""
Explanation: Exploring 1D scattering problems on a lattice
First let us load matplotlib and numpy by evoking pylab and also let us import interactive widgets from ipywidgets. This is a quick and easy way to set up a simple environment for numerical calculations.
End of explanation
"""
class lead1D:
'A class for simple 1D leads'
def __init__(self,eps0=0,gamma=-1,**kwargs):
        r'We assume real hopping \gamma and on-site \epsilon_0 parameters!'
self.eps0=eps0
self.gamma=gamma
return
def Ek(self,k):
'Spectrum as a function of k'
return self.eps0+2*self.gamma*cos(k)
def kE(self,E,**kwargs):
'''
Spectrum as a function of E.
        If the keyword a=True is given then
it gives back two k values,
one positive and one negative.
'''
a = kwargs.get('a',False)
k=arccos((E-self.eps0)/(2*self.gamma))
if a:
return array([-k,k])
else:
return k
def vE(self,E=0,**kwargs):
'''
Group velocity as a function of E.
        If the keyword a=True is given then
it gives back two v values,
one positive and one negative.
'''
a = kwargs.get('a',False)
k=self.kE(E)
v= -2*self.gamma*sin(k)
if a:
return array([-v,v])
else:
return v
def sgf(self,E=0):
'''
        Surface Green's function of a semi-infinite 1D lead.
'''
return exp(1.0j *self.kE(E))/self.gamma
def sgfk(self,k=pi/2):
'''
        Surface Green's function of a semi-infinite 1D lead in terms of k.
'''
return exp(1.0j*k)/self.gamma
def vk(self,k=pi/2):
'''
Group velocity in terms of k
'''
return -2*self.gamma*sin(k)
"""
Explanation: Now let us write a small class implementing 1D semi-infinite leads. The class below is just a simple collection of the analytic formulae for the spectrum, the group velocity and the surface Green's function. Most functions have meaningful default parameters (e.g. hopping $\gamma=-1$, on-site potential $\varepsilon_0=0$). The functions are written keeping in mind that for a generic scattering problem the energy, not the wavenumber, is the important variable.
End of explanation
"""
def Smat_tunnel(E,alpha):
#Definition of the leads
L1=lead1D()
L2=lead1D()
    E=E+0.0000001j # In order to make stuff meaningful
# outside of the band we add a tiny
# imaginary part to the energy
#Green's function of decoupled leads
g0= array([[L1.sgf(E=E),0 ],
[0 ,L2.sgf(E=E)]])
#Potential coupling the leads
V= array([[0 ,alpha],
[alpha,0]])
#Dyson's equation
G=inv(inv(g0)-V)
#is the channel open?
#since both sides have the same
#structure they are open or closed
#at the same time
isopen=int(imag(L1.kE(E))<0.001)
#vector of the sqrt of the velocities
vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
#Scattering matrix from Fisher-Lee relations
return matrix(1.0j*G*(vs*vs.T)-eye(2))*isopen
"""
Explanation: Let us investigate the following simple scattering setups:
For each system we shall write a small function that generates the scattering matrix of the problem as a function of the energy $E$ and other relevant parameters. We start with the tunnel junction, where the only parameter is $\alpha$, the hopping matrix element coupling the two leads.
End of explanation
"""
energy_range=linspace(-4,4,1000) # this will be the plotted energy range
figsize(8,6) # setting the figure size
fts=20 # the default font is a bit too small so we make it larger
#using the interact decorator we can have an interactive plot
@interact(alpha=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\alpha$'))
def tunnel(alpha=-1):
'''
    This function draws a picture of the transmission and reflection coefficients
    of a 1D tunnel junction
'''
TR=[] # we shall collect the values to be plotted in these variables
REF=[]
for ene in energy_range: # energy scan
SS=Smat_tunnel(ene,alpha) # obtain S-matrix
TR.append(abs(SS[0,1])**2) # extract transmission coeff.
REF.append(abs(SS[0,0])**2) # extract reflection coeff.
TR=array(TR)
REF=array(REF)
# make a pretty plot
plot(energy_range,TR,label='T',linewidth=3)
plot(energy_range,REF,label='R',linewidth=2)
plot(energy_range,REF+TR,label='T+R',linewidth=2)
plot(energy_range,zeros_like(energy_range),'k-')
ylim(-0.2,1.8);
xticks(fontsize=fts)
yticks(fontsize=fts)
xlabel('Energy',fontsize=fts);
legend(fontsize=fts);
grid();
title('Transmission for a tunnel junction',fontsize=fts)
"""
Explanation: Now we write a small script to generate a figure interactively as a function of $\alpha$, so we can explore the parameter space. We have also included an extra factor on top of the Fisher-Lee relations that takes into account whether the channel is open or not.
End of explanation
"""
def Smat_BW(E,t1,t2,eps1):
#Definition of the leads
L1=lead1D()
L2=lead1D()
    E=E+0.000000001j # In order to make the expressions meaningful
# outside of the band we add a tiny
# imaginary part to the energy
#Green's function of decoupled system
#Note that the Green's function of a
#decoupled single site is just the reciprocal
#of (E-eps1) !!
g0= array([[L1.sgf(E=E),0 ,0],
[0 ,L2.sgf(E=E),0],
[0 ,0 ,1/(E-eps1)]])
#Potential coupling the leads
V= array([[0 ,0 ,t1],
[0 ,0 ,t2],
[t1,t2,0 ]])
#Dyson's equation
G=inv(inv(g0)-V)
#is the channel open?
isopen=int(imag(L1.kE(E))<0.001)
#vector of the sqrt of the velocities
vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
#Scattering matrix from Fisher-Lee relations
#Note that we only need the matrix elements
#of the Green's functions on the "surface"
#that is only the upper 2x2 part!
return matrix(1.0j*G[:2,:2]*(vs*vs.T)-eye(2))*isopen
"""
Explanation: Similarly to the tunnel junction we start with a function that generates the scattering matrix. We have to be careful since now the Green's function of the decoupled system is a $3\times3$ object.
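Before plotting, a quick check (a sketch that reuses Smat_BW defined above): for symmetric coupling, $t_1=t_2$, the on-resonance transmission should reach (approximately) one, while for strongly asymmetric coupling the peak stays well below one:
python
Ts_sym  = array([abs(Smat_BW(ene, -0.5, -0.5, 0.0)[0, 1])**2 for ene in linspace(-1.9, 1.9, 401)])
Ts_asym = array([abs(Smat_BW(ene, -0.5, -0.1, 0.0)[0, 1])**2 for ene in linspace(-1.9, 1.9, 401)])
print('peak T (symmetric): ', Ts_sym.max())   # expected close to 1
print('peak T (asymmetric):', Ts_asym.max())  # expected well below 1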
End of explanation
"""
energy_range=linspace(-4,4,1000)
figsize(8,6)
fts=20
@interact(t1=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_L$'),
t2=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_R$'),
eps1=FloatSlider(min=-2,max=2,step=0.1,value=0,description=r'$\varepsilon_1$'))
def BW(t1=-1,t2=-1,eps1=0):
TR=[]
REF=[]
for ene in energy_range:
SS=Smat_BW(ene,t1,t2,eps1)
TR.append(abs(SS[0,1])**2)
REF.append(abs(SS[0,0])**2)
TR=array(TR)
REF=array(REF)
plot(energy_range,TR,label='T',linewidth=3)
plot(energy_range,REF,label='R',linewidth=2)
plot(energy_range,REF+TR,label='T+R',linewidth=1)
plot(energy_range,zeros_like(energy_range),'k-')
ylim(-0.2,1.8);
xticks(fontsize=fts)
yticks(fontsize=fts)
xlabel('Energy',fontsize=fts);
legend(fontsize=fts);
grid();
title('Transmission for a resonant level',fontsize=fts)
"""
Explanation: Now we can again write a small script for a nice interactive plot
End of explanation
"""
def Smat_step(E,V0):
#Definition of the leads
L1=lead1D()
L2=lead1D(eps0=V0)
    E=E+0.0001j # In order to make the expressions meaningful
# outside of the band we add a tiny
# imaginary part to the energy
#Green's function of decoupled leads
g0= array([[L1.sgf(E=E),0 ],
[0 ,L2.sgf(E=E)]])
#Potential coupling the leads
V= array([[0 ,-1],
[-1,0]])
#Dyson's equation
G=inv(inv(g0)-V)
#is the channel open?
isopen=array([[float(imag(L1.kE(E))<0.001)],[float(imag(L2.kE(E))<0.001)]])
#vector of the sqrt of the velocities
vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
#Scattering matrix from Fisher-Lee relations
return matrix((1.0j*G*(vs*vs.T)-eye(2))*isopen*isopen.T)
energy_range=linspace(-4,4,1000)
figsize(8,6)
fts=20
@interact(V0=FloatSlider(min=-2,max=2,step=0.1,value=0,description=r'$V_0$'))
def step(V0):
TR=[]
REF=[]
for ene in energy_range:
SS=Smat_step(ene,V0)
TR.append(abs(SS[0,1])**2)
REF.append(abs(SS[0,0])**2)
TR=array(TR)
REF=array(REF)
plot(energy_range,TR,label='T',linewidth=3)
plot(energy_range,REF,label='R',linewidth=2)
plot(energy_range,REF+TR,label='T+R',linewidth=1)
plot(energy_range,zeros_like(energy_range),'k-')
plot()
ylim(-0.2,1.8);
xticks(fontsize=fts)
yticks(fontsize=fts)
xlabel('Energy',fontsize=fts);
legend(fontsize=fts);
grid();
title('Transmission for a potential step',fontsize=fts)
"""
Explanation: Exercises
Using the above two examples, write code that explores the Fano resonance of a resonant level coupled sideways to a lead!
Using the above examples, write code that explores the Aharonov-Bohm effect!
Some more examples
A simple potential step. The on-site potential in the right lead is shifted by $V_0$. Now, in general, the number of open channels in the left and in the right lead is not the same. If the right channel is closed then we can only have reflection.
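A small numerical check of this (a sketch reusing Smat_step): as long as both channels are open, flux conservation requires $T+R\approx 1$; below the bottom of the shifted right band only reflection survives.
python
V0 = 1.0
S_open   = Smat_step( 0.0, V0)   # both bands are open at E = 0
S_closed = Smat_step(-1.5, V0)   # inside the left band but below the right band [V0-2, V0+2]
print(abs(S_open[0, 0])**2 + abs(S_open[0, 1])**2)     # ~1: T + R is conserved
print(abs(S_closed[0, 1])**2, abs(S_closed[0, 0])**2)  # ~0 and ~1: total reflection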
End of explanation
"""
def Smat_FP(E,t1,t2,N):
# Definition of the leads
L1=lead1D()
L2=lead1D()
    E=E+0.000000001j # In order to make the expressions meaningful
# outside of the band we add a tiny
# imaginary part to the energy
# Green's function of decoupled system
# leads
g0L= array([[L1.sgf(E=E),0 ],
[0 ,L2.sgf(E=E)]])
# the quantum dot
g0S= inv(E*eye(N)-(-eye(N,N,-1)-eye(N,N,1)))
Z=zeros((len(g0L[0,:]),len(g0S[:,0])))
    # the decoupled full Green's function is built up by stacking
    # the lead and dot blocks into a block-diagonal matrix
g0=vstack((hstack((g0L,Z )),
hstack((Z.T,g0S))))
v=zeros_like(Z)
v[0,0]=t1
v[-1,-1]=t2
#Potential coupling the leads
V=vstack((hstack((zeros_like(g0L),v )),
hstack((v.T ,zeros_like(g0S))) ))
#Dyson's equation
G=inv(inv(g0)-V)
#is the channel open?
isopen=array([[float(imag(L1.kE(E))<0.001)],[float(imag(L2.kE(E))<0.001)]])
#vector of the sqrt of the velocities
vs=sqrt(array([[L1.vE(E=E)],[L2.vE(E=E)]]))
#Scattering matrix from Fisher-Lee relations
return matrix((1.0j*G[0:2,0:2]*(vs*vs.T)-eye(2))*isopen*isopen.T)
energy_range=linspace(-4,4,1000)
figsize(8,6)
fts=20
@interact(t1=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_L$'),
t2=FloatSlider(min=-2,max=2,step=0.1,value=-1,description=r'$\gamma_R$'),
N=IntSlider(min=1,max=10,value=1,description=r'$N$'))
def FP(t1=-1,t2=-1,N=1):
TR=[]
REF=[]
for ene in energy_range:
SS=Smat_FP(ene,t1,t2,N)
TR.append(abs(SS[0,1])**2)
REF.append(abs(SS[0,0])**2)
TR=array(TR)
REF=array(REF)
plot(energy_range,TR,label='T',linewidth=3)
plot(energy_range,REF,label='R',linewidth=2)
plot(energy_range,REF+TR,label='T+R',linewidth=1)
plot(energy_range,zeros_like(energy_range),'k-')
ylim(-0.2,1.8);
xticks(fontsize=fts)
yticks(fontsize=fts)
xlabel('Energy',fontsize=fts);
legend(fontsize=fts);
grid();
title('Transmission for a Fabry-Perot resonator',fontsize=fts)
"""
Explanation: This is a simple model of a Fabry-Perot resonator realised by two tunnel barriers. This example also illustrates how we can have a larger scattering region.
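A rough consistency check of the resonance counting (a sketch; it reuses Smat_FP and the energy_range defined above, with example couplings of my own choosing): each site of the central region contributes one quasi-bound level, so for moderate barrier transparency we expect roughly $N$ transmission resonances inside the band.
python
N = 5
T = array([abs(Smat_FP(ene, -0.7, -0.7, N)[0, 1])**2 for ene in energy_range])
# count local maxima of the transmission rising above T = 0.5
n_peaks = ((T[1:-1] > T[:-2]) & (T[1:-1] > T[2:]) & (T[1:-1] > 0.5)).sum()
print(n_peaks, 'resonances found -- roughly one per site of the central region (N =', N, ')')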
End of explanation
"""
|
bosscha/alma-calibrator
|
notebooks/2mass/11_PCA_combine_test_matchagain.ipynb
|
gpl-2.0
|
#obj = ["3C 454.3", 343.49062, 16.14821, 1.0]
#obj = ["PKS J0006-0623", 1.55789, -6.39315, 1.0]
obj = ["M87", 187.705930, 12.391123, 1.0]
#### name, ra, dec, radius of cone
obj_name = obj[0]
obj_ra = obj[1]
obj_dec = obj[2]
cone_radius = obj[3]
obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs")
data_2mass = Irsa.query_region(obj_coord, catalog="fp_psc", radius=cone_radius * u.deg)
data_wise = Irsa.query_region(obj_coord, catalog="allwise_p3as_psd", radius=cone_radius * u.deg)
__data_galex = Vizier.query_region(obj_coord, catalog='II/335', radius=cone_radius * u.deg)
data_galex = __data_galex[0]
num_2mass = len(data_2mass)
num_wise = len(data_wise)
num_galex = len(data_galex)
print("Number of object in (2MASS, WISE, GALEX): ", num_2mass, num_wise, num_galex)
"""
Explanation: Get the data
2MASS => J, H, K, angular resolution ~4"
WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0"
GALEX imaging => Five imaging surveys in a Far UV band (1350-1750Å) and Near UV band (1750-2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map.
End of explanation
"""
# use only coordinate columns
ra_2mass = data_2mass['ra']
dec_2mass = data_2mass['dec']
c_2mass = coordinates.SkyCoord(ra=ra_2mass, dec=dec_2mass, unit=(u.deg, u.deg), frame="icrs")
ra_wise = data_wise['ra']
dec_wise = data_wise['dec']
c_wise = coordinates.SkyCoord(ra=ra_wise, dec=dec_wise, unit=(u.deg, u.deg), frame="icrs")
ra_galex = data_galex['RAJ2000']
dec_galex = data_galex['DEJ2000']
c_galex = coordinates.SkyCoord(ra=ra_galex, dec=dec_galex, unit=(u.deg, u.deg), frame="icrs")
####
sep_min = 1.0 * u.arcsec # minimum separation in arcsec
# Only 2MASS and WISE matching
#
idx_2mass, idx_wise, d2d, d3d = c_wise.search_around_sky(c_2mass, sep_min)
# select only the nearest one if there are more in the search region (minimum separation parameter)!
print("Only 2MASS and WISE: ", len(idx_2mass))
"""
Explanation: Matching coordinates
End of explanation
"""
# from matching of 2 cats (2MASS and WISE) coordinate
w1 = data_wise[idx_wise]['w1mpro']
j = data_2mass[idx_2mass]['j_m']
w1j = w1-j
# match between WISE and 2MASS
data_wise_matchwith_2mass = data_wise[idx_wise] # WISE dataset
cutw1j = -1.7
galaxy = data_wise_matchwith_2mass[w1j < cutw1j] # https://academic.oup.com/mnras/article/448/2/1305/1055284
print("Number of galaxy from cut W1-J:", len(galaxy))
w1j_galaxy = w1j[w1j<cutw1j]
w1_galaxy = w1[w1j<cutw1j]
plt.scatter(w1j, w1, marker='o', color='blue')
plt.scatter(w1j_galaxy, w1_galaxy, marker='.', color="red")
plt.axvline(x=cutw1j) # https://academic.oup.com/mnras/article/448/2/1305/1055284
"""
Explanation: Plot $W_1-J$ vs $W_1$
End of explanation
"""
# GALEX
###
# coord of object in 2mass which match wise (first objet/nearest in sep_min region)
c_2mass_matchwith_wise = c_2mass[idx_2mass]
c_wise_matchwith_2mass = c_wise[idx_wise]
#Check with 2mass cut
idx_2mass_wise_galex, idx_galex1, d2d, d3d = c_galex.search_around_sky(c_2mass_matchwith_wise, sep_min)
num_galex1 = len(idx_galex1)
#Check with wise cut
idx_wise_2mass_galex, idx_galex2, d2d, d3d = c_galex.search_around_sky(c_wise_matchwith_2mass, sep_min)
num_galex2 = len(idx_galex2)
print("Number of GALEX match in 2MASS cut (with WISE): ", num_galex1)
print("Number of GALEX match in WISE cut (with 2MASS): ", num_galex2)
# diff/average
print("Confusion level: ", abs(num_galex1 - num_galex2)/np.mean([num_galex1, num_galex2])*100, "%")
"""
Explanation: W1-J < -1.7 => galaxy
W1-J > -1.7 => stars
End of explanation
"""
# Choose which one is smaller!
if num_galex1 < num_galex2:
select_from_galex = idx_galex1
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# 2MASS from GALEX_selected
_idx_galex1, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(c_selected_galex, sep_min)
match_2mass = data_2mass[_idx_2mass]
# WISE from 2MASS_selected
_ra_match_2mass = match_2mass['ra']
_dec_match_2mass = match_2mass['dec']
_c_match_2mass = coordinates.SkyCoord(ra=_ra_match_2mass, dec=_dec_match_2mass, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_wise, d2d, d3d = c_wise.search_around_sky(_c_match_2mass, sep_min)
match_wise = data_wise[_idx_wise]
else:
select_from_galex = idx_galex2
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# WISE from GALEX_selected
_idx_galex1, _idx_wise, d2d, d3d = c_wise.search_around_sky(c_selected_galex, sep_min)
match_wise = data_wise[_idx_wise]
# 2MASS from WISE_selected
_ra_match_wise = match_wise['ra']
_dec_match_wise = match_wise['dec']
_c_match_wise = coordinates.SkyCoord(ra=_ra_match_wise, dec=_dec_match_wise, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(_c_match_wise, sep_min)
match_2mass = data_2mass[_idx_2mass]
print("Number of match in GALEX: ", len(match_galex))
print("Number of match in 2MASS: ", len(match_2mass))
print("Number of match in WISE : ", len(match_wise))
"""
Explanation: Filter all catalogs
End of explanation
"""
joindata = np.array([match_2mass['j_m'],
match_2mass['j_m']-match_2mass['h_m'],
match_2mass['j_m']-match_2mass['k_m'],
match_2mass['j_m']-match_wise['w1mpro'],
match_2mass['j_m']-match_wise['w2mpro'],
match_2mass['j_m']-match_wise['w3mpro'],
match_2mass['j_m']-match_wise['w4mpro'],
match_2mass['j_m']-match_galex['NUVmag']])
joindata = joindata.T
"""
Explanation: Collect relevant data
End of explanation
"""
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
X = scale(joindata)
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color='blue')
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker="*", color="red", )
"""
Explanation: Analysis
PCA
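A useful companion number (a minimal sketch that reuses the pca object fitted in the cell above) is how much of the total variance the two plotted components actually capture:
python
print(pca.explained_variance_ratio_)        # variance fraction captured by each component
print(pca.explained_variance_ratio_.sum())  # total fraction captured by the 2D projection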
End of explanation
"""
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
X = joindata #scale(joindata)
db = DBSCAN(eps=1, min_samples=3).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
#print(labels)
"""
Explanation: DBSCAN
End of explanation
"""
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
## J vs J-W1
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=8)
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.plot(X[i,3], X[i,0], marker="X", markerfacecolor='red', markeredgecolor='none', markersize=8)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
"""
Explanation: Plot $J-W_1$ vs $J$
End of explanation
"""
from sklearn.manifold import TSNE
X = joindata #scale(joindata)
X_r = TSNE(n_components=2).fit_transform(X)
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color="blue")
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker='*', color="red")
"""
Explanation: t-SNE
End of explanation
"""
|
UltronAI/Deep-Learning
|
CS231n/assignment3/StyleTransfer-TensorFlow.ipynb
|
mit
|
%load_ext autoreload
%autoreload 2
import os
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread, imresize
# Helper functions to deal with image preprocessing
from cs231n.image_utils import load_image, preprocess_image, deprocess_image
%matplotlib inline
def get_session():
"""Create a session that dynamically allocates memory."""
# See: https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
return session
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Older versions of scipy.misc.imresize yield different results
# from newer versions, so we check to make sure scipy is up to date.
def check_scipy():
import scipy
vnum = int(scipy.__version__.split('.')[1])
assert vnum >= 16, "You must install SciPy >= 0.16.0 to complete this notebook."
check_scipy()
"""
Explanation: Style Transfer
In this notebook we will implement the style transfer technique from "Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2016).
The general idea is to take two images, and produce a new image that reflects the content of one but the artistic "style" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself.
The deep network we use as a feature extractor is SqueezeNet, a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency.
Here's an example of the images you'll be able to produce by the end of this notebook:
Setup
End of explanation
"""
from cs231n.classifiers.squeezenet import SqueezeNet
import tensorflow as tf
tf.reset_default_graph() # remove all existing variables in the graph
sess = get_session() # start a new Session
# Load pretrained SqueezeNet model
SAVE_PATH = 'cs231n/datasets/squeezenet.ckpt'
if not os.path.exists(SAVE_PATH):
raise ValueError("You need to download SqueezeNet!")
model = SqueezeNet(save_path=SAVE_PATH, sess=sess)
# Load data for testing
content_img_test = preprocess_image(load_image('styles/tubingen.jpg', size=192))[None]
style_img_test = preprocess_image(load_image('styles/starry_night.jpg', size=192))[None]
answers = np.load('style-transfer-checks-tf.npz')
"""
Explanation: Load the pretrained SqueezeNet model. This model has been ported from PyTorch, see cs231n/classifiers/squeezenet.py for the model architecture.
To use SqueezeNet, you will need to first download the weights by changing into the cs231n/datasets directory and running get_squeezenet_tf.sh. Note that if you ran get_assignment3_data.sh then SqueezeNet will already be downloaded.
End of explanation
"""
def content_loss(content_weight, content_current, content_original):
"""
Compute the content loss for style transfer.
Inputs:
- content_weight: scalar constant we multiply the content_loss by.
- content_current: features of the current image, Tensor with shape [1, height, width, channels]
    - content_original: features of the content image, Tensor with shape [1, height, width, channels]
Returns:
- scalar content loss
"""
pass
"""
Explanation: Computing Loss
We're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below.
Content loss
We can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent not on the parameters of the model, but instead on the pixel values of our original image.
Let's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\ell$), that has feature maps $A^\ell \in \mathbb{R}^{1 \times C_\ell \times H_\ell \times W_\ell}$. $C_\ell$ is the number of filters/channels in layer $\ell$, $H_\ell$ and $W_\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\ell \in \mathbb{R}^{N_\ell \times M_\ell}$ be the feature map for the current image and $P^\ell \in \mathbb{R}^{N_\ell \times M_\ell}$ be the feature map for the content source image where $M_\ell=H_\ell\times W_\ell$ is the number of elements in each feature map. Each row of $F^\ell$ or $P^\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. Finally, let $w_c$ be the weight of the content loss term in the loss function.
Then the content loss is given by:
$L_c = w_c \times \sum_{i,j} (F_{ij}^{\ell} - P_{ij}^{\ell})^2$
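For reference, one possible vectorized implementation of the stub above (a sketch only, written against the TF1-style graph API used in this notebook):
python
def content_loss(content_weight, content_current, content_original):
    return content_weight * tf.reduce_sum(tf.square(content_current - content_original))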
End of explanation
"""
def content_loss_test(correct):
content_layer = 3
content_weight = 6e-2
c_feats = sess.run(model.extract_features()[content_layer], {model.image: content_img_test})
bad_img = tf.zeros(content_img_test.shape)
feats = model.extract_features(bad_img)[content_layer]
student_output = sess.run(content_loss(content_weight, c_feats, feats))
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
content_loss_test(answers['cl_out'])
"""
Explanation: Test your content loss. You should see errors less than 0.001.
End of explanation
"""
def gram_matrix(features, normalize=True):
"""
Compute the Gram matrix from features.
Inputs:
- features: Tensor of shape (1, H, W, C) giving features for
a single image.
- normalize: optional, whether to normalize the Gram matrix
If True, divide the Gram matrix by the number of neurons (H * W * C)
Returns:
- gram: Tensor of shape (C, C) giving the (optionally normalized)
Gram matrices for the input image.
"""
pass
"""
Explanation: Style loss
Now we can tackle the style loss. For a given layer $\ell$, the style loss is defined as follows:
First, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.
Given a feature map $F^\ell$ of shape $(1, C_\ell, M_\ell)$, the Gram matrix has shape $(1, C_\ell, C_\ell)$ and its elements are given by:
$$G_{ij}^\ell = \sum_k F^{\ell}_{ik} F^{\ell}_{jk}$$
Assuming $G^\ell$ is the Gram matrix from the feature map of the current image, $A^\ell$ is the Gram Matrix from the feature map of the source style image, and $w_\ell$ a scalar weight term, then the style loss for the layer $\ell$ is simply the weighted Euclidean distance between the two Gram matrices:
$$L_s^\ell = w_\ell \sum_{i, j} \left(G^\ell_{ij} - A^\ell_{ij}\right)^2$$
In practice we usually compute the style loss at a set of layers $\mathcal{L}$ rather than just a single layer $\ell$; then the total style loss is the sum of style losses at each layer:
$$L_s = \sum_{\ell \in \mathcal{L}} L_s^\ell$$
Begin by implementing the Gram matrix computation below:
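For reference, one possible implementation (a sketch; it reshapes the feature map so that each row is one spatial position, then correlates the channels):
python
def gram_matrix(features, normalize=True):
    _, H, W, C = tf.unstack(tf.shape(features))
    F = tf.reshape(features, (H * W, C))        # each row is one spatial position
    gram = tf.matmul(F, F, transpose_a=True)    # (C, C) channel correlations
    if normalize:
        gram = gram / tf.cast(H * W * C, tf.float32)
    return gram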
End of explanation
"""
def gram_matrix_test(correct):
gram = gram_matrix(model.extract_features()[5])
student_output = sess.run(gram, {model.image: style_img_test})
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
gram_matrix_test(answers['gm_out'])
"""
Explanation: Test your Gram matrix code. You should see errors less than 0.001.
End of explanation
"""
def style_loss(feats, style_layers, style_targets, style_weights):
"""
Computes the style loss at a set of layers.
Inputs:
- feats: list of the features at every layer of the current image, as produced by
the extract_features function.
- style_layers: List of layer indices into feats giving the layers to include in the
style loss.
- style_targets: List of the same length as style_layers, where style_targets[i] is
a Tensor giving the Gram matrix the source style image computed at
layer style_layers[i].
- style_weights: List of the same length as style_layers, where style_weights[i]
is a scalar giving the weight for the style loss at layer style_layers[i].
Returns:
    - style_loss: A Tensor containing the scalar style loss.
"""
# Hint: you can do this with one for loop over the style layers, and should
# not be very much code (~5 lines). You will need to use your gram_matrix function.
pass
"""
Explanation: Next, implement the style loss:
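One way to fill in the stub (a sketch that relies on the gram_matrix helper defined earlier):
python
def style_loss(feats, style_layers, style_targets, style_weights):
    loss = tf.constant(0.0)
    for i, layer in enumerate(style_layers):
        gram = gram_matrix(feats[layer])
        loss += style_weights[i] * tf.reduce_sum(tf.square(gram - style_targets[i]))
    return loss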
End of explanation
"""
def style_loss_test(correct):
style_layers = [1, 4, 6, 7]
style_weights = [300000, 1000, 15, 3]
feats = model.extract_features()
style_target_vars = []
for idx in style_layers:
style_target_vars.append(gram_matrix(feats[idx]))
style_targets = sess.run(style_target_vars,
{model.image: style_img_test})
s_loss = style_loss(feats, style_layers, style_targets, style_weights)
student_output = sess.run(s_loss, {model.image: content_img_test})
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
style_loss_test(answers['sl_out'])
"""
Explanation: Test your style loss implementation. The error should be less than 0.001.
End of explanation
"""
def tv_loss(img, tv_weight):
"""
Compute total variation loss.
Inputs:
- img: Tensor of shape (1, H, W, 3) holding an input image.
- tv_weight: Scalar giving the weight w_t to use for the TV loss.
Returns:
- loss: Tensor holding a scalar giving the total variation loss
for img weighted by tv_weight.
"""
# Your implementation should be vectorized and not require any loops!
pass
"""
Explanation: Total-variation regularization
It turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or "total variation" in the pixel values.
You can compute the "total variation" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regularization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$:
$L_{tv} = w_t \times \sum_{c=1}^3\sum_{i=1}^{H-1} \sum_{j=1}^{W-1} \left( (x_{i,j+1, c} - x_{i,j,c})^2 + (x_{i+1, j,c} - x_{i,j,c})^2 \right)$
In the next cell, fill in the definition for the TV loss term. To receive full credit, your implementation should not have any loops.
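A fully vectorized version could look like this (a sketch; it simply shifts the image by one pixel in each direction and sums the squared differences):
python
def tv_loss(img, tv_weight):
    dh = img[:, 1:, :, :] - img[:, :-1, :, :]   # vertical neighbor differences
    dw = img[:, :, 1:, :] - img[:, :, :-1, :]   # horizontal neighbor differences
    return tv_weight * (tf.reduce_sum(tf.square(dh)) + tf.reduce_sum(tf.square(dw)))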
End of explanation
"""
def tv_loss_test(correct):
tv_weight = 2e-2
t_loss = tv_loss(model.image, tv_weight)
student_output = sess.run(t_loss, {model.image: content_img_test})
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
tv_loss_test(answers['tv_out'])
"""
Explanation: Test your TV loss implementation. Error should be less than 0.001.
End of explanation
"""
def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,
style_layers, style_weights, tv_weight, init_random = False):
"""Run style transfer!
Inputs:
- content_image: filename of content image
- style_image: filename of style image
- image_size: size of smallest image dimension (used for content loss and generated image)
- style_size: size of smallest style image dimension
- content_layer: layer to use for content loss
- content_weight: weighting on content loss
- style_layers: list of layers to use for style loss
- style_weights: list of weights to use for each layer in style_layers
- tv_weight: weight of total variation regularization term
- init_random: initialize the starting image to uniform random noise
"""
# Extract features from the content image
content_img = preprocess_image(load_image(content_image, size=image_size))
feats = model.extract_features(model.image)
content_target = sess.run(feats[content_layer],
{model.image: content_img[None]})
# Extract features from the style image
style_img = preprocess_image(load_image(style_image, size=style_size))
style_feat_vars = [feats[idx] for idx in style_layers]
style_target_vars = []
# Compute list of TensorFlow Gram matrices
for style_feat_var in style_feat_vars:
style_target_vars.append(gram_matrix(style_feat_var))
# Compute list of NumPy Gram matrices by evaluating the TensorFlow graph on the style image
style_targets = sess.run(style_target_vars, {model.image: style_img[None]})
# Initialize generated image to content image
if init_random:
img_var = tf.Variable(tf.random_uniform(content_img[None].shape, 0, 1), name="image")
else:
img_var = tf.Variable(content_img[None], name="image")
# Extract features on generated image
feats = model.extract_features(img_var)
# Compute loss
c_loss = content_loss(content_weight, feats[content_layer], content_target)
s_loss = style_loss(feats, style_layers, style_targets, style_weights)
t_loss = tv_loss(img_var, tv_weight)
loss = c_loss + s_loss + t_loss
# Set up optimization hyperparameters
initial_lr = 3.0
decayed_lr = 0.1
decay_lr_at = 180
max_iter = 200
# Create and initialize the Adam optimizer
lr_var = tf.Variable(initial_lr, name="lr")
# Create train_op that updates the generated image when run
with tf.variable_scope("optimizer") as opt_scope:
train_op = tf.train.AdamOptimizer(lr_var).minimize(loss, var_list=[img_var])
# Initialize the generated image and optimization variables
opt_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=opt_scope.name)
sess.run(tf.variables_initializer([lr_var, img_var] + opt_vars))
# Create an op that will clamp the image values when run
clamp_image_op = tf.assign(img_var, tf.clip_by_value(img_var, -1.5, 1.5))
f, axarr = plt.subplots(1,2)
axarr[0].axis('off')
axarr[1].axis('off')
axarr[0].set_title('Content Source Img.')
axarr[1].set_title('Style Source Img.')
axarr[0].imshow(deprocess_image(content_img))
axarr[1].imshow(deprocess_image(style_img))
plt.show()
plt.figure()
    # Main optimization loop (hand-tuned iteration count and learning rate schedule)
for t in range(max_iter):
# Take an optimization step to update img_var
sess.run(train_op)
if t < decay_lr_at:
sess.run(clamp_image_op)
if t == decay_lr_at:
sess.run(tf.assign(lr_var, decayed_lr))
if t % 100 == 0:
print('Iteration {}'.format(t))
img = sess.run(img_var)
plt.imshow(deprocess_image(img[0], rescale=True))
plt.axis('off')
plt.show()
print('Iteration {}'.format(t))
img = sess.run(img_var)
plt.imshow(deprocess_image(img[0], rescale=True))
plt.axis('off')
plt.show()
"""
Explanation: Style Transfer
Lets put it all together and make some beautiful images! The style_transfer function below combines all the losses you coded up above and optimizes for an image that minimizes the total loss.
End of explanation
"""
# Composition VII + Tubingen
params1 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/composition_vii.jpg',
'image_size' : 192,
'style_size' : 512,
'content_layer' : 3,
'content_weight' : 5e-2,
'style_layers' : (1, 4, 6, 7),
'style_weights' : (20000, 500, 12, 1),
'tv_weight' : 5e-2
}
style_transfer(**params1)
# Scream + Tubingen
params2 = {
'content_image':'styles/tubingen.jpg',
'style_image':'styles/the_scream.jpg',
'image_size':192,
'style_size':224,
'content_layer':3,
'content_weight':3e-2,
'style_layers':[1, 4, 6, 7],
'style_weights':[200000, 800, 12, 1],
'tv_weight':2e-2
}
style_transfer(**params2)
# Starry Night + Tubingen
params3 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [300000, 1000, 15, 3],
'tv_weight' : 2e-2
}
style_transfer(**params3)
"""
Explanation: Generate some pretty pictures!
Try out style_transfer on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook.
The content_image is the filename of content image.
The style_image is the filename of style image.
The image_size is the size of smallest image dimension of the content image (used for content loss and generated image).
The style_size is the size of smallest style image dimension.
The content_layer specifies which layer to use for content loss.
The content_weight gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).
style_layers specifies a list of which layers to use for style loss.
style_weights specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.
tv_weight specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content.
Below the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around them and see how the resulting image changes.
End of explanation
"""
# Feature Inversion -- Starry Night + Tubingen
params_inv = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss
'tv_weight' : 2e-2,
'init_random': True # we want to initialize our image to be random
}
style_transfer(**params_inv)
"""
Explanation: Feature Inversion
The code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations).
Now, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. You're starting with total noise, but you should end up with something that looks quite a bit like your original image.
(Similarly, you could do "texture synthesis" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.)
[1] Aravindh Mahendran, Andrea Vedaldi, "Understanding Deep Image Representations by Inverting them", CVPR 2015
End of explanation
"""
|
sns-chops/multiphonon
|
tests/notebooks/getdos-multiple-Ei.ipynb
|
mit
|
# where am I now?
!pwd
# create a new working directory and change into it
workdir = '~/reduction/ARCS/getdos-multiple-Ei-demo'
!mkdir -p {workdir}
%cd {workdir}
# Data to reduce. Change the IPTS number and run numbers to suit your need
samplenxs = "/SNS/ARCS/IPTS-15398/shared/mantid_reduce/non-radC/non-radC_130p00.nxspe"
mtnxs = "/SNS/ARCS/IPTS-15398/shared/mantid_reduce/MT/MT_130p00.nxspe"
initdos = '/SNS/ARCS/IPTS-15398/shared/getdos/graphite-Ei_300-dos.h5'
"""
Explanation: Density of States Analysis Example
Given sample and empty-can data, compute phonon DOS
To use this notebook, first click jupyter menu File->Make a copy
Click the title of the copied jupyter notebook and change it to a new title
Start executing cells
Preparation
End of explanation
"""
# import tools
import os, numpy as np
from multiphonon.getdos import notebookUI
import histogram.hdf as hh, histogram as H
%matplotlib notebook
from matplotlib import pyplot as plt
# create the UI for the first time
notebookUI(samplenxs, mtnxs, initdos=initdos, load_options_path='/SNS/ARCS/IPTS-15398/shared/getdos/130meV-getdos-opts.yaml')
"""
Explanation: Run GetDOS
End of explanation
"""
ls work/
dos0= hh.load(initdos)
plt.plot(dos0.E, dos0.I, '+', label='DOS from Ei=300meV data')
dos = hh.load('work/final-dos.h5')
plt.plot(dos.E, dos.I, label='new DOS')
plt.xlim(0, 230)
plt.legend(loc='upper left')
"""
Explanation: Check output
End of explanation
"""
# if you need to run getdos again with slightly modified options
# you can start from the previous settings
notebookUI(samplenxs, mtnxs, load_options_path="./work/getdos-opts.yaml")
"""
Explanation: Refine GetDOS
End of explanation
"""
|
turbomanage/training-data-analyst
|
courses/machine_learning/cloudmle/cloudmle.ipynb
|
apache-2.0
|
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.4' # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
"""
Explanation: <h1> Scaling up ML using Cloud ML Engine </h1>
In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud MLE. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates how to package up a TensorFlow model to run it within Cloud ML.
Later in the course, we will look at ways to make a more effective machine learning model.
<h2> Environment variables for project and bucket </h2>
Note that:
<ol>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
End of explanation
"""
%%bash
PROJECT_ID=$PROJECT
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \
| python -c "import json; import sys; response = json.load(sys.stdin); \
print(response['serviceAccount'])")
echo "Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET"
gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET
gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored
gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET
"""
Explanation: Allow the Cloud ML Engine service account to read/write to the bucket containing training data.
End of explanation
"""
!find taxifare
!cat taxifare/trainer/model.py
"""
Explanation: <h2> Packaging up the code </h2>
Take your code and put into a standard Python package structure. <a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> contain the Tensorflow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>).
End of explanation
"""
%%bash
echo $PWD
rm -rf $PWD/taxi_trained
cp $PWD/../tensorflow/taxi-train.csv .
cp $PWD/../tensorflow/taxi-valid.csv .
head -1 $PWD/taxi-train.csv
head -1 $PWD/taxi-valid.csv
"""
Explanation: <h2> Find absolute paths to your data </h2>
Note the absolute paths below. /content is mapped in Datalab to where the home icon takes you
End of explanation
"""
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths="${PWD}/taxi-train*" \
--eval_data_paths=${PWD}/taxi-valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=1000 --job-dir=./tmp
%%bash
ls $PWD/taxi_trained/export/exporter/
%%writefile ./test.json
{"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2}
## local predict doesn't work with Python 3 yet
#%bash
#model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
#gcloud ai-platform local predict \
# --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
# --json-instances=./test.json
"""
Explanation: <h2> Running the Python module from the command-line </h2>
End of explanation
"""
%%bash
rm -rf taxifare.tar.gz taxi_trained
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
-- \
--train_data_paths=${PWD}/taxi-train.csv \
--eval_data_paths=${PWD}/taxi-valid.csv \
--train_steps=1000 \
--output_dir=${PWD}/taxi_trained
"""
Explanation: <h2> Running locally using gcloud </h2>
End of explanation
"""
!ls $PWD/taxi_trained
"""
Explanation: When I ran it (due to random seeds, your results will be different), the average_loss (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13.
If the above step (to stop TensorBoard) appears stalled, just move on to the next step. You don't need to wait for it to return.
End of explanation
"""
%%bash
echo $BUCKET
gsutil -m rm -rf gs://${BUCKET}/taxifare/smallinput/
gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/taxifare/smallinput/
%%bash
OUTDIR=gs://${BUCKET}/taxifare/smallinput/taxi_trained
JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://${BUCKET}/taxifare/smallinput/taxi-train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/smallinput/taxi-valid*" \
--output_dir=$OUTDIR \
--train_steps=10000
"""
Explanation: <h2> Submit training job using gcloud </h2>
First copy the training data to the cloud. Then, launch a training job.
After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress.
<b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job.
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter
%%bash
MODEL_NAME="taxifare"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter | tail -1)
echo "Run these commands one-by-one (the very first time, you'll create a model and then create a version)"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION
"""
Explanation: Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
<h2> Deploy model </h2>
Find out the actual name of the subdirectory where the model is stored and use it to deploy the model. Deploying model will take up to <b>5 minutes</b>.
End of explanation
"""
%%bash
gcloud ai-platform predict --model=taxifare --version=v1 --json-instances=./test.json
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
request_data = {'instances':
[
{
'pickuplon': -73.885262,
'pickuplat': 40.773008,
'dropofflon': -73.987232,
'dropofflat': 40.732403,
'passengers': 2,
}
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'taxifare', 'v1')
response = api.projects().predict(body=request_data, name=parent).execute()
print("response={0}".format(response))
"""
Explanation: <h2> Prediction </h2>
End of explanation
"""
%%bash
XXXXX this takes 60 minutes. if you are sure you want to run it, then remove this line.
OUTDIR=gs://${BUCKET}/taxifare/ch3/taxi_trained
JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)
CRS_BUCKET=cloud-training-demos # use the already exported data
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://${CRS_BUCKET}/taxifare/ch3/train.csv" \
--eval_data_paths="gs://${CRS_BUCKET}/taxifare/ch3/valid.csv" \
--output_dir=$OUTDIR \
--train_steps=100000
"""
Explanation: <h2> Train on larger dataset </h2>
I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this comment. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow.
Go to http://bigquery.cloud.google.com/ and type the query:
<pre>
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'nokeyindata' AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
AND ABS(HASH(pickup_datetime)) % 1000 == 1
</pre>
Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.):
<ol>
<li> Click on the "Save As Table" button and note down the name of the dataset and table.
<li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name.
<li> Click on "Export Table"
<li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the "Job History" on the left-hand-side menu)
<li> In the query above, change the final "== 1" to "== 2" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv)
<li> Download the two files, remove the header line and upload it back to GCS.
</ol>
<p/>
<p/>
<h2> Run Cloud training on 1-million row dataset </h2>
This took 60 minutes and uses as input 1-million rows. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) was stubbornly at 9.03. So, simply adding more data doesn't help.
End of explanation
"""
|
radajin/whoscored
|
recomend_position/recomend_position_1.ipynb
|
mit
|
%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import MySQLdb
from sklearn.tree import export_graphviz
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
"""
Explanation: Recommend a Better Position for Players with Multiple Positions
- Using a Decision Tree, Gaussian Naive Bayes and an Ensemble
Index
Connect DB and Make QUERY
Make a Pandas DataFrame for Each Position
Set Position Category and Concat Each DataFrame
Make Training and Test Data
Make the Classifier Models (Decision Tree, Naive Bayes, Ensemble)
Check Confusion Matrix
Check Classification Report
Recommend Position
Import Packages
End of explanation
"""
db = MySQLdb.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"football",
charset='utf8',
)
def make_query(position):
"""
parameter------------
position : M, D, F, G
return---------------
SQL_QUERY String
"""
SQL_QUERY = """
SELECT *
FROM player
"""
if position == "F":
SQL_QUERY += """
WHERE position not like "%,%" and position like "%FW%" and mins > 270
"""
if position == "M":
SQL_QUERY += """
WHERE position not like "%,%" and position like "%M%" and mins > 270
"""
if position == "D":
SQL_QUERY += """
WHERE position not like "%,%" and position like "%D%" and position not like " DMC" and mins > 270
"""
if position == "G":
SQL_QUERY += """
WHERE position not like "%,%" and position like "%G%" and mins > 270
"""
return SQL_QUERY
"""
Explanation: 1. Connect DB and Make QUERY
End of explanation
"""
# forword
SQL_QUERY = make_query("F")
forword_df = pd.read_sql(SQL_QUERY, db)
# midfilder
SQL_QUERY = make_query("M")
midfilder_df = pd.read_sql(SQL_QUERY, db)
# defencer
SQL_QUERY = make_query("D")
defencer_df = pd.read_sql(SQL_QUERY, db)
# goalkeeper
SQL_QUERY = make_query("G")
goalkeeper_df = pd.read_sql(SQL_QUERY, db)
len(forword_df), len(midfilder_df), len(defencer_df), len(goalkeeper_df)
"""
Explanation: 2. Make a Pandas DataFrame for Each Position
End of explanation
"""
forword_df["position"] = 0
forword_df
midfilder_df["position"] = 1
midfilder_df
defencer_df["position"] = 2
defencer_df
goalkeeper_df["position"] = 3
goalkeeper_df
concated_df = pd.concat([forword_df, midfilder_df, defencer_df, goalkeeper_df])
concated_df.tail()
"""
Explanation: 3. Set Position Category and Concat Each DataFrame
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(concated_df.ix[:,:-1], concated_df.ix[:,-1], test_size=0.2, random_state=1)
"""
Explanation: 4. Make Training and Test Data
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
model_entropy = DecisionTreeClassifier(criterion='entropy', max_depth=3).fit(X_train, y_train)
model_gini = DecisionTreeClassifier(criterion='gini', max_depth=3).fit(X_train, y_train)
from sklearn.naive_bayes import GaussianNB
model_gaussian = GaussianNB().fit(X_train, y_train)
from sklearn.ensemble import VotingClassifier
clf1 = DecisionTreeClassifier(criterion='entropy', max_depth=3)
clf2 = DecisionTreeClassifier(criterion='gini', max_depth=3)
clf3 = GaussianNB()
eclf = VotingClassifier(estimators=[('entropy', clf1), ('gini', clf2), ('naive', clf3)], voting='soft', weights=[2, 1, 1])
model_ensemble = eclf.fit(X_train, y_train)
"""
Explanation: 5. Make the Classifier Models (Decision Tree, Naive Bayes, Ensemble)
End of explanation
"""
cm_entropy = confusion_matrix(y_test, model_entropy.predict(X_test))
cm_gini = confusion_matrix(y_test, model_gini.predict(X_test))
cm_gaussian = confusion_matrix(y_test, model_gaussian.predict(X_test))
cm_ensemble = confusion_matrix(y_test, model_ensemble.predict(X_test))
print("entropy"+"="*12)
print(cm_entropy)
print("gini"+"="*15)
print(cm_gini)
print("gaussian"+"="*11)
print(cm_gaussian)
print("ensemble"+"="*11)
print(cm_ensemble)
"""
Explanation: 6. Check Confusion Matrix
End of explanation
"""
print("entropy"+"="*50)
print(classification_report(y_test, model_entropy.predict(X_test)))
print("gini"+"="*50)
print(classification_report(y_test, model_gini.predict(X_test)))
print("gaussian"+"="*50)
print(classification_report(y_test, model_gaussian.predict(X_test)))
print("ensemble"+"="*50)
print(classification_report(y_test, model_ensemble.predict(X_test)))
"""
Explanation: 7. Check Classification Report
End of explanation
"""
SQL_QUERY = """
SELECT
tall, weight, apps_sub, mins, goals, assists
, spg, ps_x, motm, aw, tackles, inter, fouls, clear, drb
, owng, keyp_x, fouled, off, disp, unstch, avgp, position
FROM player
WHERE position like "%,%" and mins > 270
;
"""
many_position_player_df = pd.read_sql(SQL_QUERY, db)
len(many_position_player_df)
predict_data = model_ensemble.predict(many_position_player_df.ix[:,:-1])
many_position_player_df["recomend_position"] = predict_data
# Recommendation result
# 0 : Forward, 1 : Midfielder, 2 : Defender, 3 : Goalkeeper
many_position_player_df.ix[:10,-2:]
"""
Explanation: 8. Recommend Position
End of explanation
"""
|
tbarrongh/cosc-learning-labs
|
src/notebook/03_management_interface.ipynb
|
apache-2.0
|
help('learning_lab.03_management_interface')
"""
Explanation: COSC Learning Lab
03_management_interface.py
Related Scripts:
* 03_interface_configuration.py
* 01_device_control.py
* 01_inventory_mounted.py
Table of Contents
Table of Contents
Documentation
Implementation
Execution
HTTP
Documentation
End of explanation
"""
from importlib import import_module
script = import_module('learning_lab.03_management_interface')
from inspect import getsource
print(getsource(script.main))
print(getsource(script.demonstrate))
"""
Explanation: Implementation
End of explanation
"""
run ../learning_lab/03_management_interface.py
"""
Explanation: Execution
End of explanation
"""
from basics.odl_http import http_history
from basics.http import http_history_to_html
from IPython.core.display import HTML
HTML(http_history_to_html(http_history()))
"""
Explanation: HTTP
End of explanation
"""
|
tensorflow/docs
|
site/en/guide/migrate/model_mapping.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
!pip uninstall -y -q tensorflow
# Install tf-nightly as the DeterministicRandomTestTool is available only in
# Tensorflow 2.8
!pip install -q tf-nightly
import tensorflow as tf
import tensorflow.compat.v1 as v1
import sys
import numpy as np
from contextlib import contextmanager
"""
Explanation: Use TF1.x models in TF2 workflows
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/model_mapping"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/model_mapping.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/model_mapping.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/model_mapping.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This guide provides an overview and examples of a modeling code shim that you can employ to use your existing TF1.x models in TF2 workflows such as eager execution, tf.function, and distribution strategies with minimal changes to your modeling code.
Scope of usage
The shim described in this guide is designed for TF1.x models that rely on:
1. tf.compat.v1.get_variable and tf.compat.v1.variable_scope to control variable creation and reuse, and
1. Graph-collection based APIs such as tf.compat.v1.global_variables(), tf.compat.v1.trainable_variables, tf.compat.v1.losses.get_regularization_losses(), and tf.compat.v1.get_collection() to keep track of weights and regularization losses
This includes most models built on top of tf.compat.v1.layer, tf.contrib.layers APIs, and TensorFlow-Slim.
The shim is NOT necessary for the following TF1.x models:
Stand-alone Keras models that already track all of their trainable weights and regularization losses via model.trainable_weights and model.losses respectively.
tf.Modules that already track all of their trainable weights via module.trainable_variables, and only create weights if they have not already been created.
These models are likely to work in TF2 with eager execution and tf.functions out-of-the-box.
Setup
Import TensorFlow and other dependencies.
End of explanation
"""
class DenseLayer(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
out = inputs
with tf.compat.v1.variable_scope("dense"):
# The weights are created with a `regularizer`,
# so the layer should track their regularization losses
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=tf.keras.regularizers.L2(),
initializer=tf.compat.v1.initializers.glorot_normal,
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=tf.compat.v1.initializers.zeros,
name="bias")
out = tf.linalg.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
return out
layer = DenseLayer(10)
x = tf.random.normal(shape=(8, 20))
layer(x)
"""
Explanation: The track_tf1_style_variables decorator
The key shim described in this guide is tf.compat.v1.keras.utils.track_tf1_style_variables, a decorator that you can use within methods belonging to tf.keras.layers.Layer and tf.Module to track TF1.x-style weights and capture regularization losses.
Decorating a tf.keras.layers.Layer's or tf.Module's call methods with tf.compat.v1.keras.utils.track_tf1_style_variables allows variable creation and reuse via tf.compat.v1.get_variable (and by extension tf.compat.v1.layers) to work correctly inside of the decorated method rather than always creating a new variable on each call. It will also cause the layer or module to implicitly track any weights created or accessed via get_variable inside the decorated method.
In addition to tracking the weights themselves under the standard
layer.variables/module.variables/etc. properties, if the method belongs
to a tf.keras.layers.Layer, then any regularization losses specified via the
get_variable or tf.compat.v1.layers regularizer arguments will get
tracked by the layer under the standard layer.losses property.
This tracking mechanism enables using large classes of TF1.x-style model-forward-pass code inside of Keras layers or tf.Modules in TF2 even with TF2 behaviors enabled.
Usage examples
The usage examples below demonstrate the modeling shims used to decorate tf.keras.layers.Layer methods, but except where they are specifically interacting with Keras features they are applicable when decorating tf.Module methods as well.
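For instance, a minimal sketch (an illustration assumed here, not one of the original examples) of decorating a tf.Module method with the same shim could look like this:
```python
import tensorflow as tf
class DenseModule(tf.Module):
  def __init__(self, units, name=None):
    super().__init__(name=name)
    self.units = units
  @tf.compat.v1.keras.utils.track_tf1_style_variables
  def __call__(self, inputs):
    with tf.compat.v1.variable_scope("dense_module"):
      kernel = tf.compat.v1.get_variable(
          shape=[inputs.shape[-1], self.units],
          initializer=tf.compat.v1.initializers.glorot_normal,
          name="kernel")
    return tf.linalg.matmul(inputs, kernel)
module = DenseModule(4)
module(tf.ones([2, 3]))
print(module.trainable_variables)  # the get_variable-created kernel is tracked
```
Unlike a Keras layer, a plain tf.Module has no losses property, so regularization losses would not be collected automatically in this case.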
Layer built with tf.compat.v1.get_variable
Imagine you have a layer implemented directly on top of tf.compat.v1.get_variable as follows:
python
def dense(self, inputs, units):
out = inputs
with tf.compat.v1.variable_scope("dense"):
# The weights are created with a `regularizer`,
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], units],
regularizer=tf.keras.regularizers.L2(),
initializer=tf.compat.v1.initializers.glorot_normal,
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[units,],
initializer=tf.compat.v1.initializers.zeros,
name="bias")
out = tf.linalg.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
return out
Use the shim to turn it into a layer and call it on inputs.
End of explanation
"""
layer.trainable_variables
layer.losses
"""
Explanation: Access the tracked variables and the captured regularization losses like a standard Keras layer.
End of explanation
"""
print("Resetting variables to zero:", [var.name for var in layer.trainable_variables])
for var in layer.trainable_variables:
var.assign(var * 0.0)
# Note: layer.losses is not a live view and
# will get reset only at each layer call
print("layer.losses:", layer.losses)
print("calling layer again.")
out = layer(x)
print("layer.losses: ", layer.losses)
out
"""
Explanation: To see that the weights get reused each time you call the layer, set all the weights to zero and call the layer again.
End of explanation
"""
inputs = tf.keras.Input(shape=(20))
outputs = DenseLayer(10)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
x = tf.random.normal(shape=(8, 20))
model(x)
# Access the model variables and regularization losses
model.weights
model.losses
"""
Explanation: You can use the converted layer directly in Keras functional model construction as well.
End of explanation
"""
class CompatV1LayerModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('model'):
out = tf.compat.v1.layers.conv2d(
inputs, 3, 3,
kernel_regularizer="l2")
out = tf.compat.v1.layers.flatten(out)
out = tf.compat.v1.layers.dense(
out, self.units,
kernel_regularizer="l2")
return out
layer = CompatV1LayerModel(10)
x = tf.random.normal(shape=(8, 5, 5, 5))
layer(x)
"""
Explanation: Model built with tf.compat.v1.layers
Imagine you have a layer or model implemented directly on top of tf.compat.v1.layers as follows:
python
def model(self, inputs, units):
with tf.compat.v1.variable_scope('model'):
out = tf.compat.v1.layers.conv2d(
inputs, 3, 3,
kernel_regularizer="l2")
out = tf.compat.v1.layers.flatten(out)
out = tf.compat.v1.layers.dense(
out, units,
kernel_regularizer="l2")
return out
Use the shim to turn it into a layer and call it on inputs.
End of explanation
"""
layer.trainable_variables
layer.losses
"""
Explanation: Warning: For safety reasons, make sure to put all tf.compat.v1.layers inside of a non-empty-string variable_scope. This is because tf.compat.v1.layers with auto-generated names will always auto-increment the name outside of any variable scope. This means the requested variable names will mismatch each time you call the layer/module. So, rather than reusing the already-made weights it will create a new set of variables every call.
Access the tracked variables and captured regularization losses like a standard Keras layer.
End of explanation
"""
print("Resetting variables to zero:", [var.name for var in layer.trainable_variables])
for var in layer.trainable_variables:
var.assign(var * 0.0)
out = layer(x)
print("layer.losses: ", layer.losses)
out
"""
Explanation: To see that the weights get reused each time you call the layer, set all the weights to zero and call the layer again.
End of explanation
"""
inputs = tf.keras.Input(shape=(5, 5, 5))
outputs = CompatV1LayerModel(10)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
x = tf.random.normal(shape=(8, 5, 5, 5))
model(x)
# Access the model variables and regularization losses
model.weights
model.losses
"""
Explanation: You can use the converted layer directly in Keras functional model construction as well.
End of explanation
"""
class CompatV1BatchNorm(tf.keras.layers.Layer):
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
print("Forward pass called with `training` =", training)
with v1.variable_scope('batch_norm_layer'):
      return v1.layers.batch_normalization(inputs, training=training)
print("Constructing model")
inputs = tf.keras.Input(shape=(5, 5, 5))
outputs = CompatV1BatchNorm()(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
print("Calling model in inference mode")
x = tf.random.normal(shape=(8, 5, 5, 5))
model(x, training=False)
print("Moving average variables before training: ",
{var.name: var.read_value() for var in model.non_trainable_variables})
# Notice that when running TF2 and eager execution, the batchnorm layer directly
# updates the moving averages while training without needing any extra control
# dependencies
print("calling model in training mode")
model(x, training=True)
print("Moving average variables after training: ",
{var.name: var.read_value() for var in model.non_trainable_variables})
"""
Explanation: Capture batch normalization updates and model training args
In TF1.x, you perform batch normalization like this:
```python
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
```
Note that:
1. The batch normalization moving average updates are tracked by `get_collection`, which was called separately from the layer
2. `tf.compat.v1.layers.batch_normalization` requires a `training` argument (generally called `is_training` when using TF-Slim batch normalization layers)
In TF2, due to eager execution and automatic control dependencies, the batch normalization moving average updates will be executed right away. There is no need to separately collect them from the updates collection and add them as explicit control dependencies.
Additionally, if you give your tf.keras.layers.Layer's forward pass method a training argument, Keras will be able to pass the current training phase and any nested layers to it just like it does for any other layer. See the API docs for tf.keras.Model for more information on how Keras handles the training argument.
If you are decorating tf.Module methods, you need to make sure to manually pass all training arguments as needed. However, the batch normalization moving average updates will still be applied automatically with no need for explicit control dependencies.
The following code snippets demonstrate how to embed batch normalization layers in the shim and how using it in a Keras model works (applicable to tf.keras.layers.Layer).
End of explanation
"""
class NestedModel(tf.keras.Model):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
def build_model(self):
inp = tf.keras.Input(shape=(5, 5))
dense_layer = tf.keras.layers.Dense(
10, name="dense", kernel_regularizer="l2",
kernel_initializer=tf.compat.v1.ones_initializer())
model = tf.keras.Model(inputs=inp, outputs=dense_layer(inp))
return model
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
# Get or create a nested model without assigning it as an explicit property
model = tf.compat.v1.keras.utils.get_or_create_layer(
"dense_model", self.build_model)
return model(inputs)
layer = NestedModel(10)
layer(tf.ones(shape=(5,5)))
"""
Explanation: Variable-scope based variable reuse
Any variable creations in the forward pass based on get_variable will maintain the same variable naming and reuse semantics that variable scopes have in TF1.x. This is true as long as you have at least one non-empty outer scope for any tf.compat.v1.layers with auto-generated names, as mentioned above.
Note: Naming and reuse will be scoped to within a single layer/module instance. Calls to get_variable inside one shim-decorated layer or module will not be able to refer to variables created inside of other layers or modules. You can get around this by using Python references to other variables directly if need be, rather than accessing variables via get_variable.
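As a small illustrative sketch (assumed here, not part of the original guide), TF1.x-style reuse within a single decorated call could look like this:
```python
import tensorflow as tf
class SharedWeights(tf.keras.layers.Layer):
  @tf.compat.v1.keras.utils.track_tf1_style_variables
  def call(self, inputs):
    with tf.compat.v1.variable_scope("shared"):
      w = tf.compat.v1.get_variable("w", shape=[inputs.shape[-1], 3])
    with tf.compat.v1.variable_scope("shared", reuse=True):
      w_again = tf.compat.v1.get_variable("w", shape=[inputs.shape[-1], 3])
    # Under the TF1.x reuse semantics mimicked by the shim this should be the
    # same variable object rather than a newly created one.
    print("same variable:", w is w_again)
    return tf.linalg.matmul(inputs, w)
SharedWeights()(tf.ones([2, 4]))
```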
Eager execution & tf.function
As seen above, decorated methods for tf.keras.layers.Layer and tf.Module run inside of eager execution and are also compatible with tf.function. This means you can use pdb and other interactive tools to step through your forward pass as it is running.
Warning: Although it is perfectly safe to call your shim-decorated layer/module methods from inside of a tf.function, it is not safe to put tf.functions inside of your shim-decorated methods if those tf.functions contain get_variable calls. Entering a tf.function resets variable_scopes, which means the TF1.x-style variable-scope-based variable reuse that the shim mimics will break down in this setting.
Distribution strategies
Calls to get_variable inside of @track_tf1_style_variables-decorated layer or module methods use standard tf.Variable variable creations under the hood. This means you can use them with the various distribution strategies available with tf.distribute such as MirroredStrategy and TPUStrategy.
Nesting tf.Variables, tf.Modules, tf.keras.layers & tf.keras.models in decorated calls
Decorating your layer call in tf.compat.v1.keras.utils.track_tf1_style_variables will only add automatic implicit tracking of variables created (and reused) via tf.compat.v1.get_variable. It will not capture weights directly created by tf.Variable calls, such as those used by typical Keras layers and most tf.Modules. This section describes how to handle these nested cases.
(Pre-existing usages) tf.keras.layers and tf.keras.models
For pre-existing usages of nested Keras layers and models, use tf.compat.v1.keras.utils.get_or_create_layer. This is only recommended for easing migration of existing TF1.x nested Keras usages; new code should use explicit attribute setting as described below for tf.Variables and tf.Modules.
To use tf.compat.v1.keras.utils.get_or_create_layer, wrap the code that constructs your nested model into a method, and pass it in to the method. Example:
End of explanation
"""
assert len(layer.weights) == 2
weights = {x.name: x for x in layer.variables}
assert set(weights.keys()) == {"dense/bias:0", "dense/kernel:0"}
layer.weights
"""
Explanation: This method ensures that these nested layers are correctly reused and tracked by tensorflow. Note that the @track_tf1_style_variables decorator is still required on the appropriate method. The model builder method passed into get_or_create_layer (in this case, self.build_model), should take no arguments.
Weights are tracked:
End of explanation
"""
tf.add_n(layer.losses)
"""
Explanation: And regularization loss as well:
End of explanation
"""
class NestedLayer(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def __call__(self, inputs):
out = inputs
with tf.compat.v1.variable_scope("inner_dense"):
# The weights are created with a `regularizer`,
# so the layer should track their regularization losses
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=tf.keras.regularizers.L2(),
initializer=tf.compat.v1.initializers.glorot_normal,
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=tf.compat.v1.initializers.zeros,
name="bias")
out = tf.linalg.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
return out
class WrappedDenseLayer(tf.keras.layers.Layer):
def __init__(self, units, **kwargs):
super().__init__(**kwargs)
self.units = units
# Only create the nested tf.variable/module/layer/model
# once, and then reuse it each time!
self._dense_layer = NestedLayer(self.units)
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('outer'):
outputs = tf.compat.v1.layers.dense(inputs, 3)
outputs = tf.compat.v1.layers.dense(inputs, 4)
return self._dense_layer(outputs)
layer = WrappedDenseLayer(10)
layer(tf.ones(shape=(5, 5)))
"""
Explanation: Incremental migration: tf.Variables and tf.Modules
If you need to embed tf.Variable calls or tf.Modules in your decorated methods (for example, if you are following the incremental migration to non-legacy TF2 APIs described later in this guide), you still need to explicitly track these, with the following requirements:
* Explicitly make sure that the variable/module/layer is only created once
* Explicitly attach them as instance attributes just as you would when defining a typical module or layer
* Explicitly reuse the already-created object in follow-on calls
This ensures that weights are not created new each call and are correctly reused. Additionally, this also ensures that existing weights and regularization losses get tracked.
Here is an example of how this could look:
End of explanation
"""
assert len(layer.weights) == 6
weights = {x.name: x for x in layer.variables}
assert set(weights.keys()) == {"outer/inner_dense/bias:0",
"outer/inner_dense/kernel:0",
"outer/dense/bias:0",
"outer/dense/kernel:0",
"outer/dense_1/bias:0",
"outer/dense_1/kernel:0"}
layer.trainable_weights
"""
Explanation: Note that explicit tracking of the nested module is needed even though it is decorated with the track_tf1_style_variables decorator. This is because each module/layer with decorated methods has its own variable store associated with it.
The weights are correctly tracked:
End of explanation
"""
layer.losses
"""
Explanation: As well as regularization loss:
End of explanation
"""
class CompatV1TemplateScaleByY(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def my_op(x, scalar_name):
var1 = tf.compat.v1.get_variable(scalar_name,
shape=[],
regularizer=tf.compat.v1.keras.regularizers.L2(),
initializer=tf.compat.v1.constant_initializer(1.5))
return x * var1
self.scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y')
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('layer'):
# Using a scope ensures the `scale_by_y` name will not be incremented
# for each instantiation of the layer.
return self.scale_by_y(inputs)
layer = CompatV1TemplateScaleByY()
out = layer(tf.ones(shape=(2, 3)))
print("weights:", layer.weights)
print("regularization loss:", layer.losses)
print("output:", out)
"""
Explanation: Note that if the NestedLayer were a non-Keras tf.Module instead, variables would still be tracked but regularization losses would not be automatically tracked, so you would have to explicitly track them separately.
Guidance on variable names
Explicit tf.Variable calls and Keras layers use a different layer name / variable name autogeneration mechanism than you may be used to from the combination of get_variable and variable_scopes. Although the shim will make your variable names match for variables created by get_variable even when going from TF1.x graphs to TF2 eager execution & tf.function, it cannot guarantee the same for the variable names generated for tf.Variable calls and Keras layers that you embed within your method decorators. It is even possible for multiple variables to share the same name in TF2 eager execution and tf.function.
You should take special care with this when following the sections on validating correctness and mapping TF1.x checkpoints later on in this guide.
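A tiny sketch (added here for illustration) of that last point:
```python
import tensorflow as tf
# In TF2 eager execution nothing uniquifies tf.Variable names, so two distinct
# variables can end up sharing the same name.
a = tf.Variable(1.0, name="kernel")
b = tf.Variable(2.0, name="kernel")
print(a.name, b.name)  # both print 'kernel:0'
```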
Using tf.compat.v1.make_template in the decorated method
It is highly recommended you directly use tf.compat.v1.keras.utils.track_tf1_style_variables instead of using tf.compat.v1.make_template, as it is a thinner layer on top of TF2.
Follow the guidance in this section for prior TF1.x code that was already relying on tf.compat.v1.make_template.
Because tf.compat.v1.make_template wraps code that uses get_variable, the track_tf1_style_variables decorator allows you to use these templates in layer calls and successfully track the weights and regularization losses.
However, do make sure to call make_template only once and then reuse the same template in each layer call. Otherwise, a new template will be created each time you call the layer along with a new set of variables.
For example,
End of explanation
"""
class CompatModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = tf.compat.v1.layers.conv2d(
inputs, 3, 3,
kernel_regularizer="l2")
out = tf.compat.v1.layers.flatten(out)
out = tf.compat.v1.layers.dropout(out, training=training)
out = tf.compat.v1.layers.dense(
out, self.units,
kernel_regularizer="l2")
return out
"""
Explanation: Warning: Avoid sharing the same make_template-created template across multiple layer instances as it may break the variable and regularization loss tracking mechanisms of the shim decorator. Additionally, if you plan to use the same make_template name inside of multiple layer instances then you should nest the created template's usage inside of a variable_scope. If not, the generated name for the template's variable_scope will increment with each new instance of the layer. This could alter the weight names in unexpected ways.
Incremental migration to Native TF2
As mentioned earlier, track_tf1_style_variables allows you to mix TF2-style object-oriented tf.Variable/tf.keras.layers.Layer/tf.Module usage with legacy tf.compat.v1.get_variable/tf.compat.v1.layers-style usage inside of the same decorated module/layer.
This means that after you have made your TF1.x model fully-TF2-compatible, you can write all new model components with native (non-tf.compat.v1) TF2 APIs and have them interoperate with your older code.
However, if you continue to modify your older model components, you may also choose to incrementally switch your legacy-style tf.compat.v1 usage over to the purely-native object-oriented APIs that are recommended for newly written TF2 code.
tf.compat.v1.get_variable usage can be replaced with either self.add_weight calls if you are decorating a Keras layer/model, or with tf.Variable calls if you are decorating Keras objects or tf.Modules.
Both functional-style and object-oriented tf.compat.v1.layers can generally be replaced with the equivalent tf.keras.layers layer with no argument changes required.
You may also consider splitting chunks of your model or common patterns into individual layers/modules during your incremental move to purely-native APIs; these may themselves use track_tf1_style_variables.
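For example, a minimal sketch (shapes and names are illustrative assumptions) of replacing a get_variable-based dense layer with self.add_weight in a Keras layer:
```python
import tensorflow as tf
class MigratedDense(tf.keras.layers.Layer):
  def __init__(self, units, **kwargs):
    super().__init__(**kwargs)
    self.units = units
  def build(self, input_shape):
    # add_weight takes the place of tf.compat.v1.get_variable and is tracked
    # natively by Keras, including its regularization loss.
    self.kernel = self.add_weight(
        name="kernel",
        shape=[input_shape[-1], self.units],
        initializer="glorot_normal",
        regularizer=tf.keras.regularizers.L2())
    self.bias = self.add_weight(
        name="bias", shape=[self.units], initializer="zeros")
  def call(self, inputs):
    return tf.linalg.matmul(inputs, self.kernel) + self.bias
MigratedDense(10)(tf.random.normal(shape=(8, 20)))
```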
A note on Slim and contrib.layers
A large amount of older TF 1.x code uses the Slim library, which was packaged with TF 1.x as tf.contrib.layers. Converting code using Slim to native TF 2 is more involved than converting v1.layers. In fact, it may make sense to convert your Slim code to v1.layers first, then convert to Keras. Below is some general guidance for converting Slim code.
- Ensure all arguments are explicit. Remove arg_scopes if possible. If you still need to use them, split normalizer_fn and activation_fn into their own layers (a rough sketch of this split follows below).
- Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras layers).
- Slim and v1.layers have different argument names and default values.
- Note that some arguments have different scales.
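Here is a rough sketch of the first point (the Slim call is shown only as a comment, and the padding/bias defaults are assumptions you should verify against your model):
```python
import tensorflow as tf
x = tf.random.normal(shape=(1, 32, 32, 3))
# Slim (TF1.x):
#   out = slim.conv2d(x, 64, [3, 3], normalizer_fn=slim.batch_norm,
#                     activation_fn=tf.nn.relu)
# Approximate native-Keras equivalent, with normalizer_fn and activation_fn
# split into their own layers:
out = tf.keras.layers.Conv2D(64, 3, padding="same", use_bias=False)(x)
out = tf.keras.layers.BatchNormalization()(out)
out = tf.keras.layers.ReLU()(out)
```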
Migration to Native TF2 ignoring checkpoint compatibility
The following code sample demonstrates an incremental move of a model to purely-native APIs without considering checkpoint compatibility.
End of explanation
"""
class PartiallyMigratedModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
self.conv_layer = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_layer(inputs)
out = tf.compat.v1.layers.flatten(out)
out = tf.compat.v1.layers.dropout(out, training=training)
out = tf.compat.v1.layers.dense(
out, self.units,
kernel_regularizer="l2")
return out
"""
Explanation: Next, replace the compat.v1 APIs with their native object-oriented equivalents in a piecewise manner. Start by switching the convolution layer to a Keras object created in the layer constructor.
End of explanation
"""
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
layer = CompatModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
original_output = layer(inputs)
# Grab the regularization loss as well
original_regularization_loss = tf.math.add_n(layer.losses)
print(original_regularization_loss)
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
layer = PartiallyMigratedModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
migrated_output = layer(inputs)
# Grab the regularization loss as well
migrated_regularization_loss = tf.math.add_n(layer.losses)
print(migrated_regularization_loss)
# Verify that the regularization loss and output both match
np.testing.assert_allclose(original_regularization_loss.numpy(), migrated_regularization_loss.numpy())
np.testing.assert_allclose(original_output.numpy(), migrated_output.numpy())
"""
Explanation: Use the v1.keras.utils.DeterministicRandomTestTool class to verify that this incremental change leaves the model with the same behavior as before.
End of explanation
"""
class NearlyFullyNativeModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
self.conv_layer = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.flatten_layer = tf.keras.layers.Flatten()
self.dense_layer = tf.keras.layers.Dense(
self.units,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('model'):
out = self.conv_layer(inputs)
out = self.flatten_layer(out)
out = self.dense_layer(out)
return out
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
layer = NearlyFullyNativeModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
migrated_output = layer(inputs)
# Grab the regularization loss as well
migrated_regularization_loss = tf.math.add_n(layer.losses)
print(migrated_regularization_loss)
# Verify that the regularization loss and output both match
np.testing.assert_allclose(original_regularization_loss.numpy(), migrated_regularization_loss.numpy())
np.testing.assert_allclose(original_output.numpy(), migrated_output.numpy())
"""
Explanation: You have now replaced all of the individual compat.v1.layers with native Keras layers.
End of explanation
"""
class FullyNativeModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
self.conv_layer = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.flatten_layer = tf.keras.layers.Flatten()
self.dense_layer = tf.keras.layers.Dense(
self.units,
kernel_regularizer="l2")
def call(self, inputs):
out = self.conv_layer(inputs)
out = self.flatten_layer(out)
out = self.dense_layer(out)
return out
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
layer = FullyNativeModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
migrated_output = layer(inputs)
# Grab the regularization loss as well
migrated_regularization_loss = tf.math.add_n(layer.losses)
print(migrated_regularization_loss)
# Verify that the regularization loss and output both match
np.testing.assert_allclose(original_regularization_loss.numpy(), migrated_regularization_loss.numpy())
np.testing.assert_allclose(original_output.numpy(), migrated_output.numpy())
"""
Explanation: Finally, remove both any remaining (no-longer-needed) variable_scope usage and the track_tf1_style_variables decorator itself.
You are now left with a version of the model that uses entirely native APIs.
End of explanation
"""
class FunctionalStyleCompatModel(tf.keras.layers.Layer):
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = tf.compat.v1.layers.conv2d(
inputs, 3, 3,
kernel_regularizer="l2")
out = tf.compat.v1.layers.conv2d(
out, 4, 4,
kernel_regularizer="l2")
out = tf.compat.v1.layers.conv2d(
out, 5, 5,
kernel_regularizer="l2")
return out
layer = FunctionalStyleCompatModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
"""
Explanation: Maintaining checkpoint compatibility during migration to Native TF2
The above migration process to native TF2 APIs changed both the variable names (as Keras APIs produce very different weight names) and the object-oriented paths that point to different weights in the model. The impact of these changes is that they will have broken both any existing TF1-style name-based checkpoints and any TF2-style object-oriented checkpoints.
However, in some cases, you might be able to take your original name-based checkpoint and find a mapping of the variables to their new names with approaches like the one detailed in the Reusing TF1.x checkpoints guide.
Some tips to making this feasible are as follows:
- Variables still all have a name argument you can set.
- Keras models also take a name argument, which they set as the prefix for their variables (see the short sketch after this list).
- The v1.name_scope function can be used to set variable name prefixes. This is very different from tf.variable_scope. It only affects names, and doesn't track variables and reuse.
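A short sketch (added here for illustration) of how those name arguments show up in weight names:
```python
import tensorflow as tf
v = tf.Variable(1.0, name="my_scale")
print(v.name)  # my_scale:0
d = tf.keras.layers.Dense(3, name="block1_dense")
d.build((None, 4))
print([w.name for w in d.weights])  # ['block1_dense/kernel:0', 'block1_dense/bias:0']
```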
With the above pointers in mind, the following code samples demonstrate a workflow you can adapt to your code to incrementally update part of a model while simultaneously updating checkpoints.
Note: Due to the complexity of variable naming with Keras layers, this is not guaranteed to work for all use cases.
Begin by switching functional-style tf.compat.v1.layers to their object-oriented versions.
End of explanation
"""
class OOStyleCompatModel(tf.keras.layers.Layer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_1 = tf.compat.v1.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.conv_2 = tf.compat.v1.layers.Conv2D(
4, 4,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_1(inputs)
out = self.conv_2(out)
out = tf.compat.v1.layers.conv2d(
out, 5, 5,
kernel_regularizer="l2")
return out
layer = OOStyleCompatModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
"""
Explanation: Next, assign the compat.v1.layer objects and any variables created by compat.v1.get_variable as properties of the tf.keras.layers.Layer/tf.Module object whose method is decorated with track_tf1_style_variables (note that any object-oriented TF2 style checkpoints will now save out both a path by variable name and the new object-oriented path).
End of explanation
"""
weights = {v.name: v for v in layer.weights}
assert weights['model/conv2d/kernel:0'] is layer.conv_1.kernel
assert weights['model/conv2d_1/bias:0'] is layer.conv_2.bias
"""
Explanation: Resave a loaded checkpoint at this point to save out paths both by the variable name (for compat.v1.layers) and by the object-oriented object graph.
End of explanation
"""
def record_scope(scope_name):
"""Record a variable_scope to make sure future ones get incremented."""
with tf.compat.v1.variable_scope(scope_name):
pass
class PartiallyNativeKerasLayersModel(tf.keras.layers.Layer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_1 = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.conv_2 = tf.keras.layers.Conv2D(
4, 4,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_1(inputs)
record_scope('conv2d') # Only needed if follow-on compat.v1.layers do not pass a `name` arg
out = self.conv_2(out)
record_scope('conv2d_1') # Only needed if follow-on compat.v1.layers do not pass a `name` arg
out = tf.compat.v1.layers.conv2d(
out, 5, 5,
kernel_regularizer="l2")
return out
layer = PartiallyNativeKerasLayersModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
"""
Explanation: You can now swap out the object-oriented compat.v1.layers for native Keras layers while still being able to load the recently-saved checkpoint. Ensure that you preserve variable names for the remaining compat.v1.layers by still recording the auto-generated variable_scopes of the replaced layers. These switched layers/variables will now only use the object attribute path to the variables in the checkpoint instead of the variable name path.
In general, you can replace usage of compat.v1.get_variable in variables attached to properties by:
Switching them to using tf.Variable, OR
Updating them by using tf.keras.layers.Layer.add_weight. Note that if you are not switching all layers in one go this may change auto-generated layer/variable naming for the remaining compat.v1.layers that are missing a name argument. If that is the case, you must keep the variable names for remaining compat.v1.layers the same by manually opening and closing a variable_scope corresponding to the removed compat.v1.layer's generated scope name. Otherwise the paths from existing checkpoints may conflict and checkpoint loading will behave incorrectly.
End of explanation
"""
weights = set(v.name for v in layer.weights)
assert 'model/conv2d_2/kernel:0' in weights
assert 'model/conv2d_2/bias:0' in weights
"""
Explanation: Saving a checkpoint out at this step after constructing the variables will make it contain only the currently-available object paths.
Ensure you record the scopes of the removed compat.v1.layers to preserve the auto-generated weight names for the remaining compat.v1.layers.
End of explanation
"""
class FullyNativeKerasLayersModel(tf.keras.layers.Layer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_1 = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.conv_2 = tf.keras.layers.Conv2D(
4, 4,
kernel_regularizer="l2")
self.conv_3 = tf.keras.layers.Conv2D(
5, 5,
kernel_regularizer="l2")
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_1(inputs)
out = self.conv_2(out)
out = self.conv_3(out)
return out
layer = FullyNativeKerasLayersModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
"""
Explanation: Repeat the above steps until you have replaced all the compat.v1.layers and compat.v1.get_variables in your model with fully-native equivalents.
End of explanation
"""
|
tridesclous/tridesclous
|
example/example_locust_dataset.ipynb
|
mit
|
%matplotlib inline
import time
import numpy as np
import matplotlib.pyplot as plt
import tridesclous as tdc
from tridesclous import DataIO, CatalogueConstructor, Peeler
"""
Explanation: tridesclous example with locust dataset
Here is a detailed notebook on the locust dataset recorded by Christophe Pouzat.
This dataset is a classic of ours.
It has already been analysed with several tools in R, Python or C:
* https://github.com/christophe-pouzat/PouzatDetorakisEuroScipy2014
* https://github.com/christophe-pouzat/SortingABigDataSetWithPython
* http://xtof.perso.math.cnrs.fr/locust.html
So we can compare the results.
The original dataset is here: https://zenodo.org/record/21589
But we will work on a very small subset hosted on GitHub: https://github.com/tridesclous/tridesclous_datasets/tree/master/locust
Overview
In tridesclous, spike sorting is done in several steps:
* Define the datasource and working path. (class DataIO)
* Construct a catalogue (class CatalogueConstructor) on a short chunk of data (for instance 60s)
with several sub-steps:
* signal pre-processing:
* high pass filter (optional)
* removal of common reference (optional)
* noise estimation (median/mad) on a small chunk
* normalisation = robust z-score
* peak detection
* select a subset of peaks (it is unnecessary, and often impossible, to extract them all)
* extract some waveforms
* project these waveforms into a smaller dimension (PCA, ...)
* find clusters
* auto-clean clusters with heuristic merge/split/trash
* clean manually with GUI (class CatalogueWindow) : merge/split/trash
* save centroids (median+mad + first and second derivative)
* Apply the Peeler (class Peeler) on the long-term signals, with several sub-steps:
* same signal preprocessing as before
* find peaks
* find the best cluster in catalogue for each peak
* find the intersample jitter
* remove the oversampled waveforms from the signals until there are no peaks left in the signals.
* check with GUI (class PeelerWindow)
End of explanation
"""
#download dataset
localdir, filenames, params = tdc.download_dataset(name='locust')
print(filenames)
print(params)
"""
Explanation: Download a small dataset
tridesclous provides some datasets that can be downloaded with download_dataset.
Note that this dataset contains 2 trials in 2 different files (the original contains more!).
Each file is considered as a segment; tridesclous automatically deals with this.
End of explanation
"""
#create a DataIO
import os, shutil
dirname = 'tridesclous_locust'
if os.path.exists(dirname):
    # remove it if it already exists
shutil.rmtree(dirname)
dataio = DataIO(dirname=dirname)
# feed DataIO
dataio.set_data_source(type='RawData', filenames=filenames, **params)
print(dataio)
#no need to setup the prb with dataio.set_probe_file() or dataio.download_probe()
#because it is a tetrode
"""
Explanation: DataIO = define datasource and working dir
These 2 files are in RawData format, i.e. a binary format with interleaved channels.
Our dataset contains 2 segments of 28.8 seconds each, with 4 channels. The sample rate is 15 kHz.
Note that there is only one channel_group here (0).
End of explanation
"""
cc = CatalogueConstructor(dataio=dataio)
print(cc)
"""
Explanation: CatalogueConstructor
End of explanation
"""
# global params
cc.set_global_params(chunksize=1024,mode='dense')
# pre processing filetring normalisation
cc.set_preprocessor_params(
common_ref_removal=False,
highpass_freq=300.,
lowpass_freq=5000.,
lostfront_chunksize=64)
cc.set_peak_detector_params(
peak_sign='-',
relative_threshold=6.5,
peak_span_ms=0.1)
"""
Explanation: Set some parameters
For a complete description of each parameter, see the main documentation.
End of explanation
"""
cc.estimate_signals_noise(seg_num=0, duration=15.)
print(cc.signals_medians)
print(cc.signals_mads)
"""
Explanation: Estimate the median and MAD of the noise on a small chunk of filtered signals.
This computes the median and MAD of each channel.
End of explanation
"""
t1 = time.perf_counter()
cc.run_signalprocessor(duration=60.)
t2 = time.perf_counter()
print('run_signalprocessor', t2-t1, 's')
print(cc)
"""
Explanation: Run the main loop: signal preprocessing + peak detection
End of explanation
"""
cc.clean_peaks(alien_value_threshold=60., mode='extremum_amplitude')
print(cc)
"""
Explanation: Clean peaks
This tries to detect "bad peaks": artifacts with very large amplitude values.
These peaks have to be removed early and must not be included in waveform extraction and PCA.
Strange peaks are tagged with -9 (alien).
End of explanation
"""
cc.set_waveform_extractor_params(wf_left_ms=-1.5, wf_right_ms=2.5)
cc.sample_some_peaks(mode='rand', nb_max=20000)
"""
Explanation: sample some peaks for waveforms extraction
Take some waveforms from the signals; n_left/n_right must be chosen carefully.
It is not necessary, and too intensive, to select all peaks.
There are several methods to select peaks; the simplest is to select them randomly.
Note that waveforms are not actually extracted at this point, which would be too intensive; they are extracted on-the-fly when needed.
End of explanation
"""
cc.extract_some_noise(nb_snippet=300)
"""
Explanation: Extract some noise snippets.
Here is a step to extract snippets of noise (in between real peaks).
End of explanation
"""
cc.extract_some_features(method='global_pca', n_components=5)
print(cc)
"""
Explanation: Project to smaller space
To reduce the dimension of the waveforms (n_peaks, peak_width, n_channel) we choose the global_pca method, which is appropriate for a tetrode.
It consists of flattening some_waveforms from shape (n_peaks, peak_width, n_channel) to (n_peaks, peak_width*n_channel) and then applying a standard PCA on it with sklearn.
Let's keep 5 components.
With more channels we could also do a 'by_channel_pca'.
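To make the flattening step concrete, here is a tiny standalone sketch (plain numpy/sklearn on random data, not tridesclous internals):
```python
import numpy as np
from sklearn.decomposition import PCA
n_peaks, peak_width, n_channel = 1000, 60, 4
some_waveforms = np.random.randn(n_peaks, peak_width, n_channel)
flat = some_waveforms.reshape(n_peaks, peak_width * n_channel)
features = PCA(n_components=5).fit_transform(flat)
print(features.shape)  # (1000, 5)
```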
End of explanation
"""
cc.find_clusters(method='kmeans', n_clusters=12)
print(cc)
"""
Explanation: find clusters
There are many options for clustering these features; here is a simple one, the well-known k-means method.
Unfortunately we need to choose the number of clusters in advance. Let's take 12.
Later on we will be able to refine this manually.
End of explanation
"""
%gui qt5
import pyqtgraph as pg
app = pg.mkQApp()
win = tdc.CatalogueWindow(cc)
win.show()
app.exec_()
"""
Explanation: Manual clean with CatalogueWindow (or visual check)
This opens a CatalogueWindow, where we can check, split, merge, trash, and play for as long as we are not happy.
If we are happy, we can save the catalogue.
End of explanation
"""
cc.auto_split_cluster()
cc.auto_merge_cluster()
cc.trash_low_extremum(min_extremum_amplitude=6.6)
cc.trash_small_cluster(minimum_size=10)
#order cluster by waveforms rms
cc.order_clusters(by='waveforms_rms')
#save the catalogue
cc.make_catalogue_for_peeler(inter_sample_oversampling=True)
"""
Explanation: Here is a snapshot of the CatalogueWindow
<img src="../doc/img/snapshot_cataloguewindow.png">
Auto clean of the catalogue
tridesclous offers some methods to automatically merge/trash/split clusters.
After this we can re-order the clusters and construct the catalogue for the Peeler.
End of explanation
"""
catalogue = dataio.load_catalogue(chan_grp=0)
peeler = Peeler(dataio)
peeler.change_params(catalogue=catalogue)
t1 = time.perf_counter()
peeler.run()
t2 = time.perf_counter()
print('peeler.run', t2-t1)
print()
for seg_num in range(dataio.nb_segment):
spikes = dataio.get_spikes(seg_num)
print('seg_num', seg_num, 'nb_spikes', spikes.size)
"""
Explanation: Peeler
Create and run the Peeler.
It should be pretty fast: here the computation takes 1.32 s for 2 x 28.8 s = 57.6 s of signal, which is a speed-up of about 43x over real time.
End of explanation
"""
%gui qt5
import pyqtgraph as pg
app = pg.mkQApp()
win = tdc.PeelerWindow(dataio=dataio, catalogue=catalogue)
win.show()
app.exec_()
"""
Explanation: Open PeelerWindow for visual checking
End of explanation
"""
|
quantopian/alphalens
|
alphalens/examples/intraday_factor.ipynb
|
apache-2.0
|
%pylab inline --no-import-all
import alphalens
import pandas as pd
import numpy as np
import datetime
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Alphalens: intraday factor
In this notebook we use Alphalens to analyse the performance of an intraday factor, which is computed daily, but the stocks are bought at market open and sold at market close with no overnight positions.
End of explanation
"""
sector_names = {
0 : "information_technology",
1 : "financials",
2 : "health_care",
3 : "industrials",
4 : "utilities",
5 : "real_estate",
6 : "materials",
7 : "telecommunication_services",
8 : "consumer_staples",
9 : "consumer_discretionary",
10 : "energy"
}
ticker_sector = {
"ACN" : 0, "ATVI" : 0, "ADBE" : 0, "AMD" : 0, "AKAM" : 0, "ADS" : 0, "GOOGL" : 0, "GOOG" : 0,
"APH" : 0, "ADI" : 0, "ANSS" : 0, "AAPL" : 0, "AMAT" : 0, "ADSK" : 0, "ADP" : 0, "AVGO" : 0,
"AMG" : 1, "AFL" : 1, "ALL" : 1, "AXP" : 1, "AIG" : 1, "AMP" : 1, "AON" : 1, "AJG" : 1, "AIZ" : 1, "BAC" : 1,
"BK" : 1, "BBT" : 1, "BRK.B" : 1, "BLK" : 1, "HRB" : 1, "BHF" : 1, "COF" : 1, "CBOE" : 1, "SCHW" : 1, "CB" : 1,
"ABT" : 2, "ABBV" : 2, "AET" : 2, "A" : 2, "ALXN" : 2, "ALGN" : 2, "AGN" : 2, "ABC" : 2, "AMGN" : 2, "ANTM" : 2,
"BCR" : 2, "BAX" : 2, "BDX" : 2, "BIIB" : 2, "BSX" : 2, "BMY" : 2, "CAH" : 2, "CELG" : 2, "CNC" : 2, "CERN" : 2,
"MMM" : 3, "AYI" : 3, "ALK" : 3, "ALLE" : 3, "AAL" : 3, "AME" : 3, "AOS" : 3, "ARNC" : 3, "BA" : 3, "CHRW" : 3,
"CAT" : 3, "CTAS" : 3, "CSX" : 3, "CMI" : 3, "DE" : 3, "DAL" : 3, "DOV" : 3, "ETN" : 3, "EMR" : 3, "EFX" : 3,
"AES" : 4, "LNT" : 4, "AEE" : 4, "AEP" : 4, "AWK" : 4, "CNP" : 4, "CMS" : 4, "ED" : 4, "D" : 4, "DTE" : 4,
"DUK" : 4, "EIX" : 4, "ETR" : 4, "ES" : 4, "EXC" : 4, "FE" : 4, "NEE" : 4, "NI" : 4, "NRG" : 4, "PCG" : 4,
"ARE" : 5, "AMT" : 5, "AIV" : 5, "AVB" : 5, "BXP" : 5, "CBG" : 5, "CCI" : 5, "DLR" : 5, "DRE" : 5,
"EQIX" : 5, "EQR" : 5, "ESS" : 5, "EXR" : 5, "FRT" : 5, "GGP" : 5, "HCP" : 5, "HST" : 5, "IRM" : 5, "KIM" : 5,
"APD" : 6, "ALB" : 6, "AVY" : 6, "BLL" : 6, "CF" : 6, "DWDP" : 6, "EMN" : 6, "ECL" : 6, "FMC" : 6, "FCX" : 6,
"IP" : 6, "IFF" : 6, "LYB" : 6, "MLM" : 6, "MON" : 6, "MOS" : 6, "NEM" : 6, "NUE" : 6, "PKG" : 6, "PPG" : 6,
"T" : 7, "CTL" : 7, "VZ" : 7,
"MO" : 8, "ADM" : 8, "BF.B" : 8, "CPB" : 8, "CHD" : 8, "CLX" : 8, "KO" : 8, "CL" : 8, "CAG" : 8,
"STZ" : 8, "COST" : 8, "COTY" : 8, "CVS" : 8, "DPS" : 8, "EL" : 8, "GIS" : 8, "HSY" : 8, "HRL" : 8,
"AAP" : 9, "AMZN" : 9, "APTV" : 9, "AZO" : 9, "BBY" : 9, "BWA" : 9, "KMX" : 9, "CCL" : 9,
"APC" : 10, "ANDV" : 10, "APA" : 10, "BHGE" : 10, "COG" : 10, "CHK" : 10, "CVX" : 10, "XEC" : 10, "CXO" : 10,
"COP" : 10, "DVN" : 10, "EOG" : 10, "EQT" : 10, "XOM" : 10, "HAL" : 10, "HP" : 10, "HES" : 10, "KMI" : 10
}
import pandas_datareader.data as web
tickers = list(ticker_sector.keys())
pan = web.DataReader(tickers, "google", datetime.datetime(2017, 1, 1), datetime.datetime(2017, 6, 1))
"""
Explanation: Below is a simple mapping of tickers to sectors for a small universe of large cap stocks.
End of explanation
"""
today_open = pan['Open']
today_close = pan['Close']
yesterday_close = today_close.shift(1)
factor = (today_open - yesterday_close) / yesterday_close
"""
Explanation: Our example factor ranks the stocks based on their overnight price gap (yesterday close to today open price). We'll see if the factor has some alpha or if it is pure noise.
End of explanation
"""
# Fix time as the daily data source doesn't set it
today_open.index += pd.Timedelta('9h30m')
today_close.index += pd.Timedelta('16h')
# pricing will contain both open and close
pricing = pd.concat([today_open, today_close]).sort_index()
pricing.head()
# Align factor to open price
factor.index += pd.Timedelta('9h30m')
factor = factor.stack()
factor.index = factor.index.set_names(['date', 'asset'])
factor.unstack().head()
"""
Explanation: The pricing data passed to alphalens should contain the entry price for the assets so it must reflect the next available price after a factor value was observed at a given timestamp. Those prices must not be used in the calculation of the factor values for that time. Always double check to ensure you are not introducing lookahead bias to your study.
The pricing data must also contain the exit price for the assets, for period 1 the price at the next timestamp will be used, for period 2 the price after 2 timestamps will be used and so on.
There are no restrictions/assumptions on the time frequency at which a factor should be computed, nor on the specific time a factor should be traded (trading at the open vs trading at the close vs intraday trading); it is only required that the factor and price DataFrames are properly aligned given the rules above.
In our example, we want to buy the stocks at market open, so we need the open price at the exact same timestamps as the factor values, and we want to sell the stocks at market close, so we add the close prices too; these will be used to compute period 1 forward returns as they appear just after the factor value timestamps. The returns computed by Alphalens will therefore be based on the difference between open and close asset prices.
If we had other prices we could compute other period returns, for example one hour after market open, 2 hours after, and so on. We could have added those prices right after the open prices and instructed Alphalens to compute periods 1, 2, 3... too, and not only period 1 as in this example.
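As a tiny illustration of the required alignment (made-up prices and an assumed ticker), the factor stamped at 09:30 must find an open price at exactly 09:30 and a close price at 16:00 for the period 1 return:
```python
import pandas as pd
idx = pd.to_datetime(['2017-01-03 09:30', '2017-01-03 16:00',
                      '2017-01-04 09:30', '2017-01-04 16:00'])
pricing_example = pd.DataFrame({'AAPL': [100.0, 101.0, 102.0, 104.0]}, index=idx)
factor_example = pd.Series(
    [0.01, 0.02],
    index=pd.to_datetime(['2017-01-03 09:30', '2017-01-04 09:30']))
# The period 1 forward return for 2017-01-03 would be 101/100 - 1 = 1%
```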
End of explanation
"""
non_predictive_factor_data = alphalens.utils.get_clean_factor_and_forward_returns(factor,
pricing,
periods=(1,2),
groupby=ticker_sector,
groupby_labels=sector_names)
alphalens.tears.create_returns_tear_sheet(non_predictive_factor_data)
alphalens.tears.create_event_returns_tear_sheet(non_predictive_factor_data, pricing)
"""
Explanation: Run Alphalens
Period 1 will show returns from market open to market close while period 2 will show returns from today open to tomorrow open
End of explanation
"""
|
bjackman/lisa
|
ipynb/examples/wlgen/rtapp_example.ipynb
|
apache-2.0
|
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%pylab inline
import json
import os
# Support to initialise and configure your test environment
import devlib
from env import TestEnv
# Support to configure and run RTApp based workloads
from wlgen import RTA, Periodic, Ramp, Step, Pulse
# Support for FTrace events parsing and visualization
import trappy
# Support for performance analysis of RTApp workloads
from perf_analysis import PerfAnalysis
"""
Explanation: RTA workload
The RTA or RTApp workload represents a type of workload obtained using the rt-app test application.
More details on the test application can be found at https://github.com/scheduler-tools/rt-app.
End of explanation
"""
# Setup a target configuration
my_target_conf = {
# Define the kind of target platform to use for the experiments
"platform" : 'linux', # Linux system, valid other options are:
# android - access via ADB
# linux - access via SSH
# host - direct access
# Preload settings for a specific target
"board" : 'juno', # juno - JUNO board with mainline hwmon
# Define devlib module to load
"modules" : [
'bl', # enable big.LITTLE support
'cpufreq' # enable CPUFreq support
],
# Account to access the remote target
"host" : '192.168.0.1',
"username" : 'root',
"password" : 'juno',
# Comment the following line to force rt-app calibration on your target
"rtapp-calib" : {
'0': 361, '1': 138, '2': 138, '3': 352, '4': 360, '5': 353
}
}
# Setup the required Test Environment supports
my_tests_conf = {
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
    # FTrace events and buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"cpu_frequency"
],
"buffsize" : 10240
},
}
# Initialize a test environment using
# - the provided target configuration (my_target_conf)
# - the provided test configuration (my_test_conf)
te = TestEnv(target_conf=my_target_conf, test_conf=my_tests_conf)
target = te.target
"""
Explanation: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
End of explanation
"""
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp = RTA(target, 'simple', calibration=te.calibration())
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
# 3. PERIODIC task
#
# This class defines a task which load is periodic with a configured
# period and duty-cycle.
#
# This class is a specialization of the 'pulse' class since a periodic
# load is generated as a sequence of pulse loads.
#
# Args:
        # duty_cycle_pct (int, [0-100]): the pulses load [%]
# default: 50[%]
# duration_s (float): the duration in [s] of the entire workload
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_per20': Periodic(
period_ms=100, # period
duty_cycle_pct=20, # duty cycle
duration_s=5, # duration
cpus=None, # run on all CPUS
sched={
"policy": "FIFO", # Run this task as a SCHED_FIFO task
},
delay_s=0 # start at the start of RTApp
).get(),
# 4. RAMP task
#
# This class defines a task which load is a ramp with a configured number
# of steps according to the input parameters.
#
# Args:
# start_pct (int, [0-100]): the initial load [%], (default 0[%])
# end_pct (int, [0-100]): the final load [%], (default 100[%])
# delta_pct (int, [0-100]): the load increase/decrease [%],
# default: 10[%]
        # increase if start_pct < end_pct
        # decrease if start_pct > end_pct
# time_s (float): the duration in [s] of each load step
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_rmp20_5-60': Ramp(
period_ms=100, # period
            start_pct=5,          # initial load
end_pct=65, # end load
delta_pct=20, # load % increase...
time_s=1, # ... every 1[s]
cpus="0" # run just on first CPU
).get(),
# 5. STEP task
#
# This class defines a task which load is a step with a configured
# initial and final load.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default 0[%])
# end_pct (int, [0-100]): the final load [%]
# default 100[%]
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_stp10-50': Step(
period_ms=100, # period
            start_pct=0,          # initial load
end_pct=50, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
# 6. PULSE task
#
# This class defines a task which load is a pulse with a configured
# initial and final load.
#
# The main difference with the 'step' class is that a pulse workload is
        # by definition a 'step down', i.e. the workload switches from an initial
        # load to a final one which is always lower than the initial one.
# Moreover, a pulse load does not generate a sleep phase in case of 0[%]
# load, i.e. the task ends as soon as the non null initial load has
# completed.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default: 0[%]
# end_pct (int, [0-100]): the final load [%]
# default: 100[%]
# NOTE: must be lower than start_pct value
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# NOTE: if end_pct is 0, the task end after the
# start_pct period completed
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of time to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_pls5-80': Pulse(
period_ms=100, # period
            start_pct=65,         # initial load
end_pct=5, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
},
# 7. use this folder for task logfiles
run_dir=target.working_directory
);
"""
Explanation: Workload configuration
To create an instance of an RTApp workload generator you need to provide the following:
- target: target device configuration
- name: name of workload. This is the name of the JSON configuration file reporting the generated RTApp configuration.
- calibration: CPU load calibration values, measured on each core.
An RTApp workload is defined by specifying a kind, provided below through rtapp.conf, which represents the way we want to define the behavior of each task.
The possible kinds of workloads are profile and custom. It's very important to notice that periodic is no longer considered a "kind" of workload but a "class" within the profile kind.
<br><br>
As you see below, when "kind" is "profile", the tasks generated by this workload have a profile which is defined by a sequence of phases. These phases are defined according to the following grammar:<br>
- params := {task, ...} <br>
- task := NAME : {SCLASS, PRIO, [phase, ...]}<br>
- phase := (PTIME, PERIOD, DCYCLE)<br> <br>
There are some pre-defined task classes for the profile kind:
- Step: the load of this task is a step with a configured initial and final load.
- Pulse: the load of this task is a pulse with a configured initial and final load.The main difference with the 'step' class is that a pulse workload is by definition a 'step down', i.e. the workload switches from an initial load to a final one which is always lower than the initial one. Moreover, a pulse load does not generate a sleep phase in case of 0[%] load, i.e. the task ends as soon as the non null initial load has completed.
- Ramp: the load of this task is a ramp with a configured number of steps determined by the input parameters.
- Periodic: the load of this task is periodic with a configured period and duty-cycle.<br><br>
The one below is a workload mix containing all the types of workloads described above, but each of them can also be specified separately in the RTApp parameters.
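As a minimal sketch of this grammar (the parameter values are arbitrary examples), a single task with one Periodic phase would be described as:
```python
from wlgen import Periodic
params = {
    'minimal_task': Periodic(
        period_ms=16,       # PERIOD
        duty_cycle_pct=30,  # DCYCLE
        duration_s=2,       # PTIME
    ).get(),
}
# which could then be passed to rtapp.conf(kind='profile', params=params, ...)
```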
End of explanation
"""
# Initial phase and pinning parameters
ramp = Ramp(period_ms=100, start_pct=5, end_pct=65, delta_pct=20, time_s=1, cpus="0")
# Following phases
medium_slow = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=100)
high_fast = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=10)
medium_fast = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=1)
high_slow = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=100)
#Compose the task
complex_task = ramp + medium_slow + high_fast + medium_fast + high_slow
# Configure this RTApp instance to:
# rtapp.conf(
# # 1. generate a "profile based" set of tasks
# kind='profile',
#
# # 2. define the "profile" of each task
# params={
# 'complex' : complex_task.get()
# },
#
# # 6. use this folder for task logfiles
# run_dir='/tmp'
#)
"""
Explanation: The output of the previous cell reports the main properties of the generated
tasks. For example, we see that the first task is configured to be:
- named task_per20
- executed as a SCHED_FIFO task
- generating a load which is calibrated with respect to CPU 1
- with one single "phase" which defines a periodic load for the duration of 5[s]
- that periodic load consists of 50 cycles
- each cycle has a period of 100[ms] and a duty-cycle of 20%,
which means that the task, for every cycle, will run for 20[ms] and then sleep for 80[ms]
All these properties are translated into a JSON configuration file for RTApp which you can see in Collected results below.<br>
Workload composition
Another way of specifying the phases of a task is through workload composition, described in the next cell.<br>
NOTE: We are just giving this as an example of specifying a workload, but this configuration won't be the one used for the following execution and analysis cells. You need to uncomment these lines if you want to use the composed workload.
End of explanation
"""
logging.info('#### Setup FTrace')
te.ftrace.start()
logging.info('#### Start energy sampling')
te.emeter.reset()
logging.info('#### Start RTApp execution')
rtapp.run(out_dir=te.res_dir, cgroup="")
logging.info('#### Read energy consumption: %s/energy.json', te.res_dir)
nrg_report = te.emeter.report(out_dir=te.res_dir)
logging.info('#### Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(te.res_dir, 'trace.dat')
logging.info('#### Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('#### Save platform description: %s/platform.json', te.res_dir)
(plt, plt_file) = te.platform_dump(te.res_dir)
"""
Explanation: Workload execution
End of explanation
"""
# Inspect the JSON file used to run the application
with open('{}/simple_00.json'.format(te.res_dir), 'r') as fh:
rtapp_json = json.load(fh, )
logging.info('Generated RTApp JSON file:')
print json.dumps(rtapp_json, indent=4, sort_keys=True)
# All data are produced in the output folder defined by the TestEnv module
logging.info('Content of the output folder %s', te.res_dir)
!ls -la {te.res_dir}
# Dump the energy measured for the LITTLE and big clusters
logging.info('Energy: %s', nrg_report.report_file)
print json.dumps(nrg_report.channels, indent=4, sort_keys=True)
# Dump the platform descriptor, which could be useful for further analysis
# of the generated results
logging.info('Platform description: %s', plt_file)
print json.dumps(plt, indent=4, sort_keys=True)
"""
Explanation: Collected results
End of explanation
"""
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(te.res_dir)
"""
Explanation: Trace inspection
More information on visualization and trace inspection can be found in examples/trappy.
End of explanation
"""
# Parse the RT-App generate log files to compute performance metrics
pa = PerfAnalysis(te.res_dir)
# For each task which has generated a logfile, plot its performance metrics
for task in pa.tasks():
pa.plotPerf(task, "Performance plots for task [{}] ".format(task))
"""
Explanation: RTApp task performance plots
End of explanation
"""
|
anonyXmous/CapstoneProject
|
Mini_Project_Naive_Bayes.ipynb
|
unlicense
|
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from six.moves import range
# Setup Pandas
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
# Setup Seaborn
sns.set_style("whitegrid")
sns.set_context("poster")
"""
Explanation: Basic Text Classification with Naive Bayes
In the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on Lab 10 of Harvard's CS109 class. Please feel free to go to the original lab for additional exercises and solutions.
End of explanation
"""
critics = pd.read_csv('./critics.csv')
#let's drop rows with missing quotes
critics = critics[~critics.quote.isnull()]
critics.head()
"""
Explanation: Table of Contents
Rotten Tomatoes Dataset
Explore
The Vector Space Model and a Search Engine
In Code
Naive Bayes
Multinomial Naive Bayes and Other Likelihood Functions
Picking Hyperparameters for Naive Bayes and Text Maintenance
Interpretation
Rotten Tomatoes Dataset
End of explanation
"""
n_reviews = len(critics)
n_movies = critics.rtid.unique().size
n_critics = critics.critic.unique().size
print("Number of reviews: {:d}".format(n_reviews))
print("Number of critics: {:d}".format(n_critics))
print("Number of movies: {:d}".format(n_movies))
df = critics.copy()
df['fresh'] = df.fresh == 'fresh'
grp = df.groupby('critic')
counts = grp.critic.count() # number of reviews by each critic
means = grp.fresh.mean() # average freshness for each critic
means[counts > 100].hist(bins=10, edgecolor='w', lw=1)
plt.xlabel("Average Rating per critic")
plt.ylabel("Number of Critics")
plt.yticks([0, 2, 4, 6, 8, 10]);
"""
Explanation: Explore
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
text = ['Hop on pop', 'Hop off pop', 'Hop Hop hop']
print("Original text is\n{}".format('\n'.join(text)))
vectorizer = CountVectorizer(min_df=0)
# call `fit` to build the vocabulary
vectorizer.fit(text)
# call `transform` to convert text to a bag of words
x = vectorizer.transform(text)
# CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to
# convert back to a "normal" numpy array
x = x.toarray()
print("")
print("Transformed text vector is \n{}".format(x))
# `get_feature_names` tracks which word is associated with each column of the transformed x
print("")
print("Words for each feature:")
print(vectorizer.get_feature_names())
# Notice that the bag of words treatment doesn't preserve information about the *order* of words,
# just their frequency
def make_xy(critics, vectorizer=None):
#Your code here
if vectorizer is None:
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(critics.quote)
X = X.tocsc() # some versions of sklearn return COO format
y = (critics.fresh == 'fresh').values.astype(np.int)
return X, y
X, y = make_xy(critics)
"""
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set I</h3>
<br/>
<b>Exercise/Answers:</b>
<br/>
<li> Look at the histogram above. Tell a story about the average ratings per critic.
<b> The average fresh rating per critic is around 0.6, with a minimum of about 0.35 and a maximum of about 0.81. </b>
<li> What shape does the distribution look like?
<b> The distribution looks roughly normal (bell-shaped). </b>
<li> What is interesting about the distribution? What might explain these interesting things?
<b> </b>
</div>
The Vector Space Model and a Search Engine
All the diagrams here are snipped from Introduction to Information Retrieval by Manning et. al. which is a great resource on text processing. For additional information on text mining and natural language processing, see Foundations of Statistical Natural Language Processing by Manning and Schutze.
Also check out Python packages nltk, spaCy, pattern, and their associated resources. Also see word2vec.
Let us define the vector derived from document $d$ by $\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it. Each vector has the same length and each entry "slot" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. Each vector has the same length because each document shared the same vocabulary across the full collection of documents -- this collection is called a corpus.
To define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So "hello" may be at index 5 and "world" at index 99.
Suppose we have the following corpus:
A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree. The grapes seemed ready to burst with juice, and the Fox's mouth watered as he gazed longingly at them.
Suppose we treat each sentence as a document $d$. The vocabulary (often called the lexicon) is the following:
$V = \{$ a, along, and, as, at, beautiful, branches, bunch, burst, day, fox, fox's, from, gazed, grapes, hanging, he, juice, longingly, mouth, of, one, ready, ripe, seemed, spied, the, them, to, trained, tree, vine, watered, with $\}$
Then the document
A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree
may be represented as the following sparse vector of word counts:
$$\bar V(d) = \left( 4,1,0,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,2,1,0,1,0,0,1,0,0,0,1,1,0,0 \right)$$
or more succinctly as
[(0, 4), (1, 1), (5, 1), (6, 1), (7, 1), (9, 1), (10, 1), (12, 1), (14, 1), (15, 1), (20, 2), (21, 1), (23, 1),
(26, 1), (30, 1), (31, 1)]
along with a dictionary
{
0: a, 1: along, 5: beautiful, 6: branches, 7: bunch, 9: day, 10: fox, 12: from, 14: grapes,
15: hanging, 19: mouth, 20: of, 21: one, 23: ripe, 24: seemed, 25: spied, 26: the,
30: tree, 31: vine,
}
Then, a set of documents becomes, in the usual sklearn style, a sparse matrix with rows being sparse arrays representing documents and columns representing the features/words in the vocabulary.
Notice that this representation loses the relative ordering of the terms in the document. That is "cat ate rat" and "rat ate cat" are the same. Thus, this representation is also known as the Bag-Of-Words representation.
Here is another example, from the book quoted above, although the matrix is transposed here so that documents are columns:
Such a matrix is also called a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, jealous and jealousy after stemming are the same feature. One could also make use of other "Natural Language Processing" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. We could remove "stopwords" from our vocabulary, such as common words like "the". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all depends on our application.
From the book:
The standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\bar V(d_1)$ and $\bar V(d_2)$:
$$S_{12} = \frac{\bar V(d_1) \cdot \bar V(d_2)}{|\bar V(d_1)| \times |\bar V(d_2)|}$$
There is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below.
The key idea now: to assign to each document d a score equal to the dot product:
$$\bar V(q) \cdot \bar V(d)$$
Then we can use this simple Vector Model as a Search engine.
In Code
End of explanation
"""
# your turn
# split the data set into a training and test set
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
clf = MultinomialNB()
clf.fit(X_train, y_train)
print('accuracy score on training set: ', clf.score(X_train, y_train))
print('accuracy score on test set: ', clf.score(X_test, y_test))
print('The training accuracy is much higher than the test accuracy,')
print('which suggests the default model overfits the training data.')
"""
Explanation: Naive Bayes
From Bayes' Theorem, we have that
$$P(c \vert f) = \frac{P(c \cap f)}{P(f)}$$
where $c$ represents a class or category, and $f$ represents a feature vector, such as $\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the features in the document. $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in context of Naive Bayes as
$$P(c \vert f) \propto P(f \vert c) P(c) $$
$P(c)$ is called the prior and is simply the probability of seeing class $c$. But what is $P(f \vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the likelihood and comes from the data. One of the major assumptions of the Naive Bayes model is that the features are conditionally independent given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear within that class. This is a very important distinction. Recall that if two events are independent, then:
$$P(A \cap B) = P(A) \cdot P(B)$$
Thus, conditional independence implies
$$P(f \vert c) = \prod_i P(f_i | c) $$
where $f_i$ is an individual feature (a word in this example).
To make a classification, we then choose the class $c$ such that $P(c \vert f)$ is maximal.
There is a small caveat when computing these probabilities. For floating point underflow we change the product into a sum by going into log space. This is called the LogSumExp trick. So:
$$\log P(f \vert c) = \sum_i \log P(f_i \vert c) $$
There is another caveat. What if we see a term that didn't exist in the training data? This means that $P(f_i \vert c) = 0$ for that term, and thus $P(f \vert c) = \prod_i P(f_i | c) = 0$, which doesn't help us at all. Instead of using zeros, we add a small negligible value called $\alpha$ to each count. This is called Laplace Smoothing.
$$P(f_i \vert c) = \frac{N_{ic}+\alpha}{N_c + \alpha N_i}$$
where $N_{ic}$ is the number of times feature $i$ was seen in class $c$, $N_c$ is the number of times class $c$ was seen and $N_i$ is the number of times feature $i$ was seen globally. $\alpha$ is sometimes called a regularization parameter.
Multinomial Naive Bayes and Other Likelihood Functions
Since we are modeling word counts, we are using variation of Naive Bayes called Multinomial Naive Bayes. This is because the likelihood function actually takes the form of the multinomial distribution.
$$P(f \vert c) = \frac{\left( \sum_i f_i \right)!}{\prod_i f_i!} \prod_{f_i} P(f_i \vert c)^{f_i} \propto \prod_{i} P(f_i \vert c)$$
where the nasty term out front is absorbed as a normalization constant such that probabilities sum to 1.
There are many other variations of Naive Bayes, all which depend on what type of value $f_i$ takes. If $f_i$ is continuous, we may be able to use Gaussian Naive Bayes. First compute the mean and variance for each class $c$. Then the likelihood, $P(f \vert c)$ is given as follows
$$P(f_i = v \vert c) = \frac{1}{\sqrt{2\pi \sigma^2_c}} e^{- \frac{\left( v - \mu_c \right)^2}{2 \sigma^2_c}}$$
<div class="span5 alert alert-info">
<h3>Exercise Set II</h3>
<p><b>Exercise:</b> Implement a simple Naive Bayes classifier:</p>
<ol>
<li> split the data set into a training and test set
<li> Use `scikit-learn`'s `MultinomialNB()` classifier with default parameters.
<li> train the classifier over the training set and test on the test set
<li> print the accuracy scores for both the training and the test sets
</ol>
What do you notice? Is this a good classifier? If not, why not?
<b>The accuracy on the training set is noticeably higher than on the test set,
which indicates that the default model overfits the training data; tuning the vectorizer (min_df) and the smoothing parameter alpha should improve it.
</b>
</div>
End of explanation
"""
# Your turn.
# construct the frequency of words
vectorizer = CountVectorizer(stop_words='english')
X = vectorizer.fit_transform(critics.quote)
word_freq_df = pd.DataFrame({'term': vectorizer.get_feature_names(), 'occurrences':np.asarray(X.sum(axis=0)).ravel().tolist()})
word_freq_df['frequency'] = word_freq_df['occurrences']/np.sum(word_freq_df['occurrences'])
word_freq_sorted=word_freq_df.sort_values('occurrences', ascending = False)
word_freq_sorted.reset_index(drop=True, inplace=True)
sum_words = len(word_freq_sorted)
# create the cum frequency distribution
saved_cnt=0
df=[]
for i in range(1, 100):
prev_cnt = len(word_freq_sorted[word_freq_sorted['occurrences']==i])
saved_cnt += prev_cnt
if i==1:
df=pd.DataFrame([[i, prev_cnt, prev_cnt, prev_cnt/sum_words]], columns=['x', 'freq','cumfreq', 'percent'])
else:
df2=pd.DataFrame([[i, prev_cnt, saved_cnt, saved_cnt/sum_words]], columns=['x', 'freq','cumfreq', 'percent'])
df = df.append(df2, ignore_index=True)
# create the bar graph
plt.bar(df.x, df.percent, align='center', alpha=0.5)
plt.xticks(range(0,101,10))
plt.ylabel('Percentage of words that appears less than x')
plt.xlabel('Document count of words (x)')
plt.title('Cumulative percent distribution of words that appears in the reviews')
plt.show()
"""
Explanation: Picking Hyperparameters for Naive Bayes and Text Maintenance
We need to know what value to use for $\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise.
First, let's find an appropriate value for min_df for the CountVectorizer. min_df can be either an integer or a float/decimal. If it is an integer, min_df represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum percentage of documents a word must appear in to be included in the vocabulary. From the documentation:
min_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.
<div class="span5 alert alert-info">
<h3>Exercise Set III</h3>
<p><b>ANSWERS:</b> Construct the cumulative distribution of document frequencies (df). The $x$-axis is a document count $x_i$ and the $y$-axis is the percentage of words that appear less than $x_i$ times. For example, at $x=5$, plot a point representing the percentage or number of words that appear in 5 or fewer documents.</p>
<b> Done, please see below cell </b>
<p><b>Exercise:</b> Look for the point at which the curve begins climbing steeply. This may be a good value for `min_df`. If we were interested in also picking `max_df`, we would likely pick the value where the curve starts to plateau. What value did you choose?</p>
<b>The curve climbs steeply at x = 1 and starts to plateau around x = 60,
so min_df = 1 and max_df = 60 are reasonable choices.</b>
</div>
End of explanation
"""
from sklearn.model_selection import KFold
def cv_score(clf, X, y, scorefunc):
result = 0.
nfold = 5
for train, test in KFold(nfold).split(X): # split data into train/test groups, 5 times
clf.fit(X[train], y[train]) # fit the classifier, passed is as clf.
result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data
return result / nfold # average
"""
Explanation: The parameter $\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function cv_score performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold.
End of explanation
"""
def log_likelihood(clf, x, y):
prob = clf.predict_log_proba(x)
rotten = y == 0
fresh = ~rotten
return prob[rotten, 0].sum() + prob[fresh, 1].sum()
"""
Explanation: We use the log-likelihood as the score here in scorefunc. The higher the log-likelihood, the better. Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV.
The custom scoring function scorefunc allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using roc_auc, precision, recall, or F1-score as the scoring function.
End of explanation
"""
from sklearn.model_selection import train_test_split
itrain, itest = train_test_split(range(critics.shape[0]), train_size=0.7)
mask = np.zeros(critics.shape[0], dtype=np.bool)
mask[itrain] = True  # mask selects the 70% of rows used for training
"""
Explanation: We'll cross-validate over the regularization parameter $\alpha$.
Let's set up the train and test masks first, and then we can run the cross-validation procedure.
End of explanation
"""
from sklearn.naive_bayes import MultinomialNB
#the grid of parameters to search over
alphas = [.1, 1, 5, 10, 50]
best_min_df = 1 # YOUR TURN: put your value of min_df here.
#Find the best value for alpha and min_df, and the best classifier
best_alpha = None
best_score = -np.inf
for alpha in alphas:
    vectorizer = CountVectorizer(min_df=best_min_df)
    Xthis, ythis = make_xy(critics, vectorizer)
    Xtrainthis = Xthis[mask]
    ytrainthis = ythis[mask]
    # your turn
    clf = MultinomialNB(alpha=alpha)
    score = cv_score(clf, Xtrainthis, ytrainthis, log_likelihood)
    print('cv_score for alpha =', alpha, ':', score)
    if score > best_score:
        best_score = score
        best_alpha = alpha
print("alpha: {}".format(best_alpha))
"""
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set IV</h3>
<p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p>
<b> ANSWER: The log_likelihood score sums the log of the probabilities the classifier assigns to the true labels of the held-out data, so using it as the score means we are optimizing for how much probability mass the model places on the correct classes, not just its raw accuracy. </b>
<p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\alpha$ that is too high?</p> <b>ANSWER: If alpha is too high, the smoothing term dominates the observed word counts and pushes every per-class word probability toward the same value, so the model underfits and loses its ability to discriminate between fresh and rotten reviews. </b>
<p><b>Exercise:</b> Using the skeleton code below, find the best values of the parameter `alpha`, and use the value of `min_df` you chose in the previous exercise set. Use the `cv_score` function above with the `log_likelihood` function for scoring.</p>
<b/> ANSWER: the best `alpha` is equal to 1
</div>
End of explanation
"""
vectorizer = CountVectorizer(min_df=best_min_df)
X, y = make_xy(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)
#your turn. Print the accuracy on the test and training dataset
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(ytest, clf.predict(xtest)))
print(xtest.shape)
"""
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set V: Working with the Best Parameters</h3>
<p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p>
<b/> ANSWER: Yes, it is a better classifier: it improves the accuracy on the test data from about 72% (`alpha` = 0.1) to about 74% (`alpha` = 1).
</div>
End of explanation
"""
words = np.array(vectorizer.get_feature_names())
x = np.matrix(np.identity(xtest.shape[1]), copy=False)
probs = clf.predict_log_proba(x)[:, 0]
ind = np.argsort(probs)
good_words = words[ind[:10]]
bad_words = words[ind[-10:]]
good_prob = probs[ind[:10]]
bad_prob = probs[ind[-10:]]
print("Good words\t P(fresh | word)")
for w, p in list(zip(good_words, good_prob)):
print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p)))
print("Bad words\t P(fresh | word)")
for w, p in list(zip(bad_words, bad_prob)):
print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p)))
"""
Explanation: Interpretation
What are the strongly predictive features?
We use a neat trick to identify strongly predictive features (i.e. words).
- first, create a data set such that each row has exactly one feature. This is represented by the identity matrix.
- use the trained classifier to make predictions on this matrix
- sort the rows by predicted probabilities, and pick the top and bottom $K$ rows
End of explanation
"""
x, y = make_xy(critics, vectorizer)
prob = clf.predict_proba(x)[:, 0]
predict = clf.predict(x)
bad_rotten = np.argsort(prob[y == 0])[:5]
bad_fresh = np.argsort(prob[y == 1])[-5:]
print("Mis-predicted Rotten quotes")
print('---------------------------')
for row in bad_rotten:
print(critics[y == 0].quote.iloc[row])
print("")
print("Mis-predicted Fresh quotes")
print('--------------------------')
for row in bad_fresh:
print(critics[y == 1].quote.iloc[row])
print("")
"""
Explanation: <pre>
Good words       P(fresh | word)      Bad words        P(fresh | word)
touching              0.96            sorry                 0.13
delight               0.95            plodding              0.13
delightful            0.95            dull                  0.11
brilliantly           0.94            bland                 0.11
energetic             0.94            disappointing         0.10
superb                0.94            forced                0.10
ensemble              0.93            uninspired            0.08
childhood             0.93            pointless             0.07
engrossing            0.93            unfortunately         0.07
absorbing             0.93            stupid                0.06
</pre>
<div class="span5 alert alert-info">
<h3>Exercise Set VI</h3>
<p><b>Exercise:</b> Why does this method work? What does the probability for each row in the identity matrix represent?</p>
</div>
The above exercise is an example of feature selection. There are many other feature selection methods. A list of feature selection methods available in sklearn is here. The most common feature selection technique for text mining is the chi-squared $\left( \chi^2 \right)$ method.
Prediction Errors
We can see mis-predictions as well.
End of explanation
"""
#your turn
# Predicting the Freshness for a New Review
docs_new = ['This movie is not remarkable, touching, or superb in any way']
X_new = vectorizer.transform(docs_new)
X_new = X_new.tocsc()
str = "Fresh" if clf.predict(X_new) == 1 else "Rotten"
print('"', docs_new[0], '"==> ', "", str)
"""
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set VII: Predicting the Freshness for a New Review</h3>
<br/>
<div>
<b>Exercise:</b>
<ul>
<li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'*
<li> Is the result what you'd expect? Why (not)?
<b/> The predicted result is "Fresh", which is not what I expected. The bag-of-words model treats the word "not" independently of the words it negates, so the review is mistakenly predicted as "Fresh" based on the words remarkable, touching and superb, which individually have a high probability of appearing in fresh reviews. The solution is to move the analysis to the bigram level, which pairs consecutive words, so that a phrase like "not remarkable" can be learned as a negative signal and the review would be classified as rotten.
</ul>
</div>
</div>
End of explanation
"""
# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction
# http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref
from sklearn.feature_extraction.text import TfidfVectorizer
tfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')
Xtfidf=tfidfvectorizer.fit_transform(critics.quote)
"""
Explanation: Aside: TF-IDF Weighting for Term Importance
TF-IDF stands for
Term-Frequency X Inverse Document Frequency.
In the standard CountVectorizer model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weight this term frequency by the inverse of its popularity in all documents. For example, if the word "movie" showed up in all the documents, it would not have much predictive value. It could actually be considered a stopword. By weighing its counts by 1 divided by its overall frequency, we downweight it. We can then use this TF-IDF weighted features as inputs to any classifier. TF-IDF is essentially a measure of term importance, and of how discriminative a word is in a corpus. There are a variety of nuances involved in computing TF-IDF, mainly involving where to add the smoothing term to avoid division by 0, or log of 0 errors. The formula for TF-IDF in scikit-learn differs from that of most textbooks:
$$\mbox{TF-IDF}(t, d) = \mbox{TF}(t, d)\times \mbox{IDF}(t) = n_{td} \log{\left( \frac{\vert D \vert}{\vert d : t \in d \vert} + 1 \right)}$$
where $n_{td}$ is the number of times term $t$ occurs in document $d$, $\vert D \vert$ is the number of documents, and $\vert d : t \in d \vert$ is the number of documents that contain $t$
End of explanation
"""
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
# Your turn
def make_xy_bigram(critics, bigram_vectorizer=None):
#Your code here
if bigram_vectorizer is None:
bigram_vectorizer = CountVectorizer(ngram_range=(1, 2),token_pattern=r'\b\w+\b', min_df=1)
X = bigram_vectorizer.fit_transform(critics.quote)
X = X.tocsc() # some versions of sklearn return COO format
y = (critics.fresh == 'fresh').values.astype(np.int)
return X, y
vectorizer = CountVectorizer(ngram_range=(1, 2),
token_pattern=r'\b\w+\b', min_df=1, stop_words='english')
X, y = make_xy_bigram(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)
#your turn. Print the accuracy on the test and training dataset
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
"""
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set VIII: Enrichment</h3>
<p>
There are several additional things we could try. Try some of these as exercises:
<ol>
<li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram contains 2 words, a trigram contains 3 words, and 6-gram contains 6 words. This is useful because "not good" and "so good" mean very different things. On the other hand, as n increases, the model does not scale well since the feature set becomes more sparse.
<li> Try a model besides Naive Bayes, one that would allow for interactions between words -- for example, a Random Forest classifier.
<li> Try adding supplemental features -- information about genre, director, cast, etc.
<li> Use word2vec or [Latent Dirichlet Allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) to group words into topics and use those topics for prediction.
<li> Use TF-IDF weighting instead of word counts.
</ol>
</p>
<b>Exercise:</b> Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result.
</div>
BIGRAM USING NAIVE BAYES
End of explanation
"""
import itertools
import pandas as pd
from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures
def bigram_word_feats(words, score_fn=BigramAssocMeasures.chi_sq, n=200):
bigram_finder = BigramCollocationFinder.from_words(words)
bigrams = bigram_finder.nbest(score_fn, n)
return dict([(ngram, True) for ngram in itertools.chain(words, bigrams)])
import collections
import nltk.classify.util, nltk.metrics
from nltk import precision, recall
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
pos_review = critics[critics['fresh']=='fresh']
neg_review = critics[critics['fresh']=='rotten']
negfeats = [(bigram_word_feats(row['quote'].split()),'neg') for index, row in neg_review.iterrows()]
posfeats = [(bigram_word_feats(row['quote'].split()),'pos') for index, row in pos_review.iterrows()]
negcutoff = int(len(negfeats)*.7)
poscutoff = int(len(posfeats)*.7)
trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff]
testfeats = negfeats[negcutoff:] + posfeats[poscutoff:]
classifier = NaiveBayesClassifier.train(trainfeats)
refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)
for i, (feats, label) in enumerate(testfeats):
refsets[label].add(i)
observed = classifier.classify(feats)
testsets[observed].add(i)
classifier.show_most_informative_features()
"""
Explanation: Using bigram from nltk package
End of explanation
"""
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=10, max_depth=None,
min_samples_split=2, random_state=0)
scores = cross_val_score(clf, X, y)
scores.mean()
"""
Explanation: Using RANDOM FOREST classifier instead of Naive Bayes
End of explanation
"""
# Create a random forest classifier. By convention, clf means 'classifier'
#clf = RandomForestClassifier(n_jobs=2)
# Train the classifier to take the training features and learn how they relate
# to the training y (the species)
#clf.fit(train[features], y)
critics.head()
"""
Explanation: Try adding supplemental features -- information about genre, director, cast, etc.
End of explanation
"""
from sklearn.decomposition import NMF, LatentDirichletAllocation
vectorizer = CountVectorizer(min_df=best_min_df)
X, y = make_xy(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
lda = LatentDirichletAllocation(n_topics=10, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(X)
print("\nTopics in LDA model:")
feature_names = vectorizer.get_feature_names()
print_top_words(lda, feature_names, n_top_words=20)
"""
Explanation: Use word2vec or Latent Dirichlet Allocation to group words into topics and use those topics for prediction.
End of explanation
"""
# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction
# http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref
from sklearn.feature_extraction.text import TfidfVectorizer
tfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')
Xtfidf=tfidfvectorizer.fit_transform(critics.quote)
X = Xtfidf.tocsc() # some versions of sklearn return COO format
y = (critics.fresh == 'fresh').values.astype(np.int)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)
#your turn. Print the accuracy on the test and training dataset
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
"""
Explanation: Use TF-IDF weighting instead of word counts.
End of explanation
"""
|
anonyXmous/CapstoneProject
|
Mini_Project_Clustering.ipynb
|
unlicense
|
%matplotlib inline
import pandas as pd
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
# Setup Seaborn
sns.set_style("whitegrid")
sns.set_context("poster")
"""
Explanation: Customer Segmentation using Clustering
This mini-project is based on this blog post by yhat. Please feel free to refer to the post for additional information, and solutions.
End of explanation
"""
df_offers = pd.read_excel("./WineKMC.xlsx", sheetname=0)
df_offers.columns = ["offer_id", "campaign", "varietal", "min_qty", "discount", "origin", "past_peak"]
df_offers.head()
"""
Explanation: Data
The dataset contains information on marketing newsletters/e-mail campaigns (e-mail offers sent to customers) and transaction level data from customers. The transactional data shows which offer customers responded to, and what the customer ended up buying. The data is presented as an Excel workbook containing two worksheets. Each worksheet contains a different dataset.
End of explanation
"""
df_transactions = pd.read_excel("./WineKMC.xlsx", sheetname=1)
df_transactions.columns = ["customer_name", "offer_id"]
df_transactions['n'] = 1
df_transactions.head()
"""
Explanation: We see that the first dataset contains information about each offer such as the month it is in effect and several attributes about the wine that the offer refers to: the variety, minimum quantity, discount, country of origin and whether or not it is past peak. The second dataset in the second worksheet contains transactional data -- which offer each customer responded to.
End of explanation
"""
#your turn
# merge the dataframes based on offer id
df_merged = pd.merge(df_transactions, df_offers, on='offer_id')
# create a matrix of customer name and offer id. Replace NaN values with zero and reset index to offer id rather than customer
x_cols = pd.pivot_table(df_merged, values='n', index=['customer_name'], columns=['offer_id']).fillna(0).reset_index()
# create dataframe without customer name
X = x_cols[x_cols.columns[1:]]
"""
Explanation: Data wrangling
We're trying to learn more about how our customers behave, so we can use their behavior (whether or not they purchased something based on an offer) as a way to group similar minded customers together. We can then study those groups to look for patterns and trends which can help us formulate future offers.
The first thing we need is a way to compare customers. To do this, we're going to create a matrix that contains each customer and a 0/1 indicator for whether or not they responded to a given offer.
<div class="span5 alert alert-info">
<h3>Checkup Exercise Set I</h3>
<p><b>Exercise:</b> Create a data frame where each row has the following columns (Use the pandas [`merge`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) and [`pivot_table`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html) functions for this purpose):
<ul>
<li> customer_name
<li> One column for each offer, with a 1 if the customer responded to the offer
</ul>
<p>Make sure you also deal with any weird values such as `NaN`. Read the documentation to develop your solution.</p>
</div>
End of explanation
"""
#your turn
from scipy.spatial.distance import cdist, pdist
from sklearn.cluster import KMeans
import numpy as np
# get Kmean and centroids
K = range(2, 11)
KM = [KMeans(n_clusters=k).fit(X) for k in K]
centroids = [k.cluster_centers_ for k in KM]
# compute euclidean distance
D_k = [cdist(X, mid, 'euclidean') for mid in centroids]
cIdx = [np.argmin(D,axis=1) for D in D_k]
dist = [np.min(D,axis=1) for D in D_k]
# Total within-cluster sum of squares
tss = [sum(d**2) for d in dist]
# Construct a plot showing SS for each K
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlim([1, 11])
ax.plot(K, tss, 'b*-')
ax.plot(K[6], tss[6], marker='o', markersize=12,
markeredgewidth=2, markeredgecolor='r', markerfacecolor='None')
plt.grid(True)
plt.xlabel('Number of clusters')
plt.ylabel('Within-cluster sum of squares')
plt.title('Elbow for KMeans clustering')
# set up KMeans for k = 8 clusters
cluster = KMeans(n_clusters=8)
# predict and assign to a cluster
x_cols['cluster'] = cluster.fit_predict(X)
y = x_cols.cluster.value_counts()
# index number is the cluster number
cluster = y.index.values
x_lim = np.arange(len(y))
# plot bar chart
plt.bar(x_lim, y, align='center', alpha=0.5)
plt.xticks(x_lim, cluster)
plt.ylabel('Counts')
plt.title('Number of points per cluster')
plt.show()
"""
Explanation: K-Means Clustering
Recall that in K-Means Clustering we want to maximize the distance between centroids and minimize the distance between data points and the respective centroid for the cluster they are in. True evaluation for unsupervised learning would require labeled data; however, we can use a variety of intuitive metrics to try to pick the number of clusters K. We will introduce three methods: the Elbow method, the Silhouette method and the gap statistic.
Choosing K: The Elbow Sum-of-Squares Method
The first method looks at the sum-of-squares error in each cluster against $K$. We compute the distance from each data point to the center of the cluster (centroid) to which the data point was assigned.
$$SS = \sum_k \frac{1}{2\,\vert C_k \vert} \sum_{x_i \in C_k} \sum_{x_j \in C_k} \left( x_i - x_j \right)^2 = \sum_k \sum_{x_i \in C_k} \left( x_i - \mu_k \right)^2$$
where $x_i$ is a point, $C_k$ represents cluster $k$ and $\mu_k$ is the centroid for cluster $k$. We can plot SS vs. $K$ and choose the elbow point in the plot as the best value for $K$. The elbow point is the point at which the plot starts descending much more slowly.
<div class="span5 alert alert-info">
<h3>Checkup Exercise Set II</h3>
<p><b>Exercise:</b></p>
<ul>
<li> What values of $SS$ do you believe represent better clusterings? Why?
<b> Lower values of $SS$ indicate tighter clusters, but $SS$ always decreases as the number of clusters grows, so the better clustering is one that reaches a low $SS$ with a small number of clusters (the elbow point). </b>
<li> Create a numpy matrix `x_cols` with only the columns representing the offers (i.e. the 0/1 colums)
<b> Done</b>
<li> Write code that applies the [`KMeans`](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) clustering method from scikit-learn to this matrix.
<b> Done</b>
<li> Construct a plot showing $SS$ for each $K$ and pick $K$ using this plot. For simplicity, test $2 \le K \le 10$.
<b> Done</b>
<li> Make a bar chart showing the number of points in each cluster for k-means under the best $K$.
<b> Done</b>
<li> What challenges did you experience using the Elbow method to pick $K$?
<b> 1) Selecting the range of cluster counts to consider, and 2) identifying the elbow point (here K=8), which can be subjective. </b>
</ul>
</div>
End of explanation
"""
from __future__ import print_function
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
print(__doc__)
# Generating the sample data from make_blobs
# This particular setting has one distinct cluster and 3 clusters placed close
# together.
X, y = make_blobs(n_samples=500,
n_features=2,
centers=4,
cluster_std=1,
center_box=(-10.0, 10.0),
shuffle=True,
random_state=1) # For reproducibility
range_n_clusters = [2, 3, 4, 5, 6]
for n_clusters in range_n_clusters:
# Create a subplot with 1 row and 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors)
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1],
marker='o', c="white", alpha=1, s=200)
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50)
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
# Your turn.
from sklearn.metrics import silhouette_samples, silhouette_score
df_sil=[]
for n_clusters in range(2,10):
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
# add data to the list
df_sil.append([n_clusters, silhouette_avg])
# convert into a dataframe
df_sil=pd.DataFrame(df_sil, columns=['cluster', 'avg_score'])
# index number is the cluster number
cluster = df_sil.cluster
x_lim = np.arange(len(df_sil))
y= df_sil.avg_score
# plot bar chart
plt.bar(x_lim, y, align='center', alpha=0.5)
plt.xticks(x_lim, cluster)
plt.ylabel('silhouette score')
plt.title('Silhouette score per cluster')
plt.show()
"""
Explanation: Choosing K: The Silhouette Method
There exists another method that measures how well each datapoint $x_i$ "fits" its assigned cluster and also how poorly it fits into other clusters. This is a different way of looking at the same objective. Denote $a_{x_i}$ as the average distance from $x_i$ to all other points within its own cluster $k$. The lower the value, the better. On the other hand $b_{x_i}$ is the minimum average distance from $x_i$ to points in a different cluster, minimized over clusters. That is, compute separately for each cluster the average distance from $x_i$ to the points within that cluster, and then take the minimum. The silhouette $s(x_i)$ is defined as
$$s(x_i) = \frac{b_{x_i} - a_{x_i}}{\max{\left( a_{x_i}, b_{x_i}\right)}}$$
The silhouette score is computed on every datapoint in every cluster. The silhouette score ranges from -1 (a poor clustering) to +1 (a very dense clustering) with 0 denoting the situation where clusters overlap. Some criteria for the silhouette coefficient is provided in the table below.
<pre>
| Range | Interpretation |
|-------------|-----------------------------------------------|
| 0.71 - 1.0 | A strong structure has been found. |
| 0.51 - 0.7 | A reasonable structure has been found. |
| 0.26 - 0.5 | The structure is weak and could be artificial.|
| < 0.25 | No substantial structure has been found. |
</pre>
Source: http://www.stat.berkeley.edu/~spector/s133/Clus.html
Fortunately, scikit-learn provides a function to compute this for us (phew!) called sklearn.metrics.silhouette_score. Take a look at this article on picking $K$ in scikit-learn, as it will help you in the next exercise set.
<div class="span5 alert alert-info">
<h3>Checkup Exercise Set III</h3>
<p><b>Exercise:</b> Using the documentation for the `silhouette_score` function above, construct a series of silhouette plots like the ones in the article linked above.</p>
<p><b>Exercise:</b> Compute the average silhouette score for each $K$ and plot it. What $K$ does the plot suggest we should choose? Does it differ from what we found using the Elbow method?</p>
<b>Based on the silhouette method, the value of K with the maximum score is K=5. This differs from the elbow method, where the SS curve only starts to stabilize around K=8. </b>
</div>
End of explanation
"""
#your turn
from sklearn.decomposition import PCA
cluster = KMeans(n_clusters=5)
# use only the 0/1 offer columns as features (exclude customer_name and any
# previously added cluster/PCA columns)
offer_cols = [c for c in x_cols.columns if c not in ('customer_name', 'cluster', 'x', 'y')]
x_cols['cluster'] = cluster.fit_predict(x_cols[offer_cols])
pca = PCA(n_components=2)
components = pca.fit_transform(x_cols[offer_cols])
x_cols['x'] = components[:, 0]
x_cols['y'] = components[:, 1]
customer_clusters = x_cols[['customer_name', 'cluster', 'x', 'y']]
df = pd.merge(df_transactions, customer_clusters)
df = pd.merge(df_offers, df)
sns.lmplot('x', 'y',
data=df,
fit_reg=False,
hue="cluster",
scatter_kws={"marker": "D",
"s": 100})
plt.title('Scatter plot of clustered data')
df['is_4'] = df.cluster==4
print(df.groupby("is_4")[['min_qty', 'discount']].mean())
df.groupby("is_4").varietal.value_counts()
"""
Explanation: Choosing $K$: The Gap Statistic
There is one last method worth covering for picking $K$, the so-called Gap statistic. The computation for the gap statistic builds on the sum-of-squares established in the Elbow method discussion, and compares it to the sum-of-squares of a "null distribution," that is, a random set of points with no clustering. The estimate for the optimal number of clusters $K$ is the value for which $\log{SS}$ falls the farthest below that of the reference distribution:
$$G_k = E_n^*{\log SS_k} - \log SS_k$$
In other words a good clustering yields a much larger difference between the reference distribution and the clustered data. The reference distribution is a Monte Carlo (randomization) procedure that constructs $B$ random distributions of points within the bounding box (limits) of the original data and then applies K-means to this synthetic distribution of data points. $E_n^*{\log SS_k}$ is just the average $\log SS_k$ over all $B$ replicates. We then compute the standard deviation $\sigma_{SS}$ of the values of $\log SS_k$ computed from the $B$ replicates of the reference distribution and compute
$$s_k = \sqrt{1+1/B}\sigma_{SS}$$
Finally, we choose $K=k$ such that $G_k \geq G_{k+1} - s_{k+1}$.
Aside: Choosing $K$ when we Have Labels
Unsupervised learning expects that we do not have the labels. In some situations, we may wish to cluster data that is labeled. Computing the optimal number of clusters is much easier if we have access to labels. There are several methods available. We will not go into the math or details since it is rare to have access to the labels, but we provide the names and references of these measures.
Adjusted Rand Index
Mutual Information
V-Measure
Fowlkes–Mallows index
See this article for more information about these metrics.
Visualizing Clusters using PCA
How do we visualize clusters? If we only had two features, we could likely plot the data as is. But we have 100 data points each containing 32 features (dimensions). Principal Component Analysis (PCA) will help us reduce the dimensionality of our data from 32 to something lower. For a visualization on the coordinate plane, we will use 2 dimensions. In this exercise, we're going to use it to transform our multi-dimensional dataset into a 2 dimensional dataset.
This is only one use of PCA for dimension reduction. We can also use PCA when we want to perform regression but we have a set of highly correlated variables. PCA untangles these correlations into a smaller number of features/predictors all of which are orthogonal (not correlated). PCA is also used to reduce a large set of variables into a much smaller one.
<div class="span5 alert alert-info">
<h3>Checkup Exercise Set IV</h3>
<p><b>Exercise:</b> Use PCA to plot your clusters:</p>
<ul>
<li> Use scikit-learn's [`PCA`](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) function to reduce the dimensionality of your clustering data to 2 components
<li> Create a data frame with the following fields:
<ul>
<li> customer name
<li> cluster id the customer belongs to
<li> the two PCA components (label them `x` and `y`)
</ul>
<li> Plot a scatterplot of the `x` vs `y` columns
<li> Color-code points differently based on cluster ID
<li> How do the clusters look?
<b> In the 2-D PCA projection the clusters appear fairly compact and mostly separated, with some overlap between neighboring clusters.</b>
<li> Based on what you see, what seems to be the best value for $K$? Moreover, which method of choosing $K$ seems to have produced the optimal result visually?
<b> Based on the scatter plot, the best value for K is 5. The PCA method seems to have produced the optimal result visually because the points are grouped closely compared to the scatter plots of the other methods.</b>
</ul>
<p><b>Exercise:</b> Now look at both the original raw data about the offers and transactions and look at the fitted clusters. Tell a story about the clusters in context of the original data. For example, do the clusters correspond to wine variants or something else interesting?</p>
<b> Cluster 4 tends to buy in bulk: that segment has an average minimum quantity of 82, compared to 45 for customers outside cluster 4. Also, cluster 4 corresponds mostly to buyers of Champagne.</b>
</div>
End of explanation
"""
#your turn
# Initialize a new PCA model with a default number of components.
from sklearn.decomposition import PCA
# Do the rest on your own :)
from sklearn import decomposition
# Fit PCA on the customer/offer matrix itself (note: `X` was reassigned to the
# make_blobs sample in the silhouette example above, so rebuild the feature matrix here)
offer_features = x_cols[[c for c in x_cols.columns if c not in ('customer_name', 'cluster', 'x', 'y')]]
pca = PCA()
pca.fit(offer_features)
explained_var = pca.explained_variance_
K = range(1, len(explained_var) + 1)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(K, explained_var, 'b*-')
plt.grid(True)
plt.xlabel('Number of dimensions')
plt.ylabel('PCA Explained variance')
plt.title('Elbow for PCA explained variance')
"""
Explanation: What we've done is we've taken those columns of 0/1 indicator variables, and we've transformed them into a 2-D dataset. We took one column and arbitrarily called it x and then called the other y. Now we can throw each point into a scatterplot. We color coded each point based on it's cluster so it's easier to see them.
<div class="span5 alert alert-info">
<h3>Exercise Set V</h3>
<p>As we saw earlier, PCA has a lot of other uses. Since we wanted to visualize our data in 2 dimensions, we restricted the number of dimensions to 2 in PCA. But what is the true optimal number of dimensions?</p>
<p><b>Exercise:</b> Using a new PCA object shown in the next cell, plot the `explained_variance_` field and look for the elbow point, the point where the curve's rate of descent seems to slow sharply. This value is one possible value for the optimal number of dimensions. What is it?</p>
</div>
End of explanation
"""
# your turn
# Affinity propagation
from sklearn.cluster import AffinityPropagation
from sklearn import metrics
af = AffinityPropagation().fit(X)
cluster_centers_indices = af.cluster_centers_indices_
labels = af.labels_
n_clusters_ = len(cluster_centers_indices)
print('Estimated number of clusters: %d' % n_clusters_)
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels, metric='sqeuclidean'))
import matplotlib.pyplot as plt
from itertools import cycle
plt.close('all')
plt.figure(1)
plt.clf()
colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')
for k, col in zip(range(n_clusters_), colors):
class_members = labels == k
cluster_center = X[cluster_centers_indices[k]]
plt.plot(X[class_members, 0], X[class_members, 1], col + '.')
plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=14)
for x in X[class_members]:
plt.plot([cluster_center[0], x[0]], [cluster_center[1], x[1]], col)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
# your turn
# Spectral Clustering
from sklearn import cluster
for n_clusters in range(2,3):
#n_clusters = 4
spectral = cluster.SpectralClustering(n_clusters=n_clusters,
eigen_solver='arpack',
affinity="nearest_neighbors")
spectral.fit(X)
labels = spectral.labels_
print('Assigned number of clusters: %d' % n_clusters)
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels))
plt.scatter(X[:, 0], X[:, 1], c=spectral.labels_, cmap=plt.cm.spectral)
plt.title('Assigned number of clusters: %d' % n_clusters)
# AgglomerativeClustering
from sklearn.cluster import AgglomerativeClustering
for n_clusters in range(2,3):
#n_clusters = 4
linkage= 'ward'
    model = AgglomerativeClustering(n_clusters=n_clusters, linkage=linkage)
model.fit(X)
labels = model.labels_
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels))
    plt.scatter(X[:, 0], X[:, 1], c=model.labels_, cmap=plt.cm.Spectral)
plt.title('linkage=%s' % (linkage), fontdict=dict(verticalalignment='top'))
plt.axis('equal')
plt.axis('off')
plt.subplots_adjust(bottom=0, top=.89, wspace=0, left=0, right=1)
plt.suptitle('n_cluster=%i' % (n_clusters), size=17)
plt.show()
# Your turn
# Using DBSCAN
from sklearn.cluster import DBSCAN
from sklearn import metrics
for eps in [.6]:
db = DBSCAN(eps=eps).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels))
import matplotlib.pyplot as plt
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, len(unique_labels))]
for k, col in list(zip(unique_labels, colors)):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
"""
Explanation: Other Clustering Algorithms
k-means is only one of a ton of clustering algorithms. Below is a brief description of several clustering algorithms, and the table provides references to the other clustering algorithms in scikit-learn.
Affinity Propagation does not require the number of clusters $K$ to be known in advance! AP uses a "message passing" paradigm to cluster points based on their similarity.
Spectral Clustering uses the eigenvalues of a similarity matrix to reduce the dimensionality of the data before clustering in a lower dimensional space. This is tangentially similar to what we did to visualize k-means clusters using PCA. The number of clusters must be known a priori.
Ward's Method applies to hierarchical clustering. Hierarchical clustering algorithms take a set of data and successively divide the observations into more and more clusters at each layer of the hierarchy. Ward's method is used to determine when two clusters in the hierarchy should be combined into one. It is basically an extension of hierarchical clustering. Hierarchical clustering is divisive, that is, all observations are part of the same cluster at first, and at each successive iteration, the clusters are made smaller and smaller. With hierarchical clustering, a hierarchy is constructed, and there is not really the concept of "number of clusters." The number of clusters simply determines how low or how high in the hierarchy we reference and can be determined empirically or by looking at the dendrogram.
Agglomerative Clustering is similar to hierarchical clustering but is not divisive, it is agglomerative. That is, every observation is placed into its own cluster and at each iteration or level of the hierarchy, observations are merged into fewer and fewer clusters until convergence. Similar to hierarchical clustering, the constructed hierarchy contains all possible numbers of clusters and it is up to the analyst to pick the number by reviewing statistics or the dendrogram (see the short dendrogram sketch at the end of this section).
DBSCAN is based on point density rather than distance. It groups together points with many nearby neighbors. DBSCAN is one of the most cited algorithms in the literature. It does not require knowing the number of clusters a priori, but does require specifying the neighborhood size.
Clustering Algorithms in Scikit-learn
<table border="1">
<colgroup>
<col width="15%" />
<col width="16%" />
<col width="20%" />
<col width="27%" />
<col width="22%" />
</colgroup>
<thead valign="bottom">
<tr><th>Method name</th>
<th>Parameters</th>
<th>Scalability</th>
<th>Use Case</th>
<th>Geometry (metric used)</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>K-Means</td>
<td>number of clusters</td>
<td>Very large <span class="pre">n_samples</span>, medium <span class="pre">n_clusters</span> with
MiniBatch code</td>
<td>General-purpose, even cluster size, flat geometry, not too many clusters</td>
<td>Distances between points</td>
</tr>
<tr><td>Affinity propagation</td>
<td>damping, sample preference</td>
<td>Not scalable with n_samples</td>
<td>Many clusters, uneven cluster size, non-flat geometry</td>
<td>Graph distance (e.g. nearest-neighbor graph)</td>
</tr>
<tr><td>Mean-shift</td>
<td>bandwidth</td>
<td>Not scalable with <span class="pre">n_samples</span></td>
<td>Many clusters, uneven cluster size, non-flat geometry</td>
<td>Distances between points</td>
</tr>
<tr><td>Spectral clustering</td>
<td>number of clusters</td>
<td>Medium <span class="pre">n_samples</span>, small <span class="pre">n_clusters</span></td>
<td>Few clusters, even cluster size, non-flat geometry</td>
<td>Graph distance (e.g. nearest-neighbor graph)</td>
</tr>
<tr><td>Ward hierarchical clustering</td>
<td>number of clusters</td>
<td>Large <span class="pre">n_samples</span> and <span class="pre">n_clusters</span></td>
<td>Many clusters, possibly connectivity constraints</td>
<td>Distances between points</td>
</tr>
<tr><td>Agglomerative clustering</td>
<td>number of clusters, linkage type, distance</td>
<td>Large <span class="pre">n_samples</span> and <span class="pre">n_clusters</span></td>
<td>Many clusters, possibly connectivity constraints, non Euclidean
distances</td>
<td>Any pairwise distance</td>
</tr>
<tr><td>DBSCAN</td>
<td>neighborhood size</td>
<td>Very large <span class="pre">n_samples</span>, medium <span class="pre">n_clusters</span></td>
<td>Non-flat geometry, uneven cluster sizes</td>
<td>Distances between nearest points</td>
</tr>
<tr><td>Gaussian mixtures</td>
<td>many</td>
<td>Not scalable</td>
<td>Flat geometry, good for density estimation</td>
<td>Mahalanobis distances to centers</td>
</tr>
<tr><td>Birch</td>
<td>branching factor, threshold, optional global clusterer.</td>
<td>Large <span class="pre">n_clusters</span> and <span class="pre">n_samples</span></td>
<td>Large dataset, outlier removal, data reduction.</td>
<td>Euclidean distance between points</td>
</tr>
</tbody>
</table>
Source: http://scikit-learn.org/stable/modules/clustering.html
<div class="span5 alert alert-info">
<h3>Exercise Set VI</h3>
<p><b>Exercise:</b> Try clustering using the following algorithms. </p>
<ol>
<li>Affinity propagation
<li>Spectral clustering
<li>Agglomerative clustering
<li>DBSCAN
</ol>
<p>How do their results compare? Which performs the best? Tell a story why you think it performs the best.</p>
<b> Affinity propagation and DBSCAN suggest a number of clusters on their own, while spectral and agglomerative clustering require a pre-assigned number of clusters. Based on the silhouette coefficient, the best algorithm for this dataset is spectral clustering, with a silhouette value of 0.71. I still think the most useful algorithm is DBSCAN, because it gives a better idea of how the data can be grouped based on the distances between neighboring points. Affinity propagation tends to produce a larger number of clusters than DBSCAN. </b>
</div>
End of explanation
"""
|
oemof/examples
|
oemof_examples/oemof.solph/v0.4.x/jupyter_tutorials/1_Simple_dispatch_store_results.ipynb
|
gpl-3.0
|
import os
import pandas as pd
from oemof.solph import (Sink, Source, Transformer, Bus, Flow, Model,
EnergySystem, processing, views)
import pickle
"""
Explanation: Energy system optimisation with oemof - how to collect and store results
Import necessary modules
End of explanation
"""
solver = 'cbc'
"""
Explanation: Specify solver
End of explanation
"""
# initialize and provide data
datetimeindex = pd.date_range('1/1/2016', periods=24*10, freq='H')
energysystem = EnergySystem(timeindex=datetimeindex)
filename = 'input_data.csv'
filename = os.path.join(os.getcwd(), filename)
data = pd.read_csv(filename, sep=",")
"""
Explanation: Create an energy system and optimize the dispatch at least costs.
End of explanation
"""
# resource buses
bcoal = Bus(label='coal', balanced=False)
bgas = Bus(label='gas', balanced=False)
boil = Bus(label='oil', balanced=False)
blig = Bus(label='lignite', balanced=False)
# electricity and heat
bel = Bus(label='bel')
bth = Bus(label='bth')
energysystem.add(bcoal, bgas, boil, blig, bel, bth)
# an excess and a shortage variable can help to avoid infeasible problems
energysystem.add(Sink(label='excess_el', inputs={bel: Flow()}))
# shortage_el = Source(label='shortage_el',
# outputs={bel: Flow(variable_costs=200)})
# sources
energysystem.add(Source(label='wind', outputs={bel: Flow(
fix=data['wind'], nominal_value=66.3)}))
energysystem.add(Source(label='pv', outputs={bel: Flow(
fix=data['pv'], nominal_value=65.3)}))
# demands (electricity/heat)
energysystem.add(Sink(label='demand_el', inputs={bel: Flow(
nominal_value=85, fix=data['demand_el'])}))
energysystem.add(Sink(label='demand_th',
inputs={bth: Flow(nominal_value=40,
fix=data['demand_th'],
fixed=True)}))
# power plants
energysystem.add(Transformer(
label='pp_coal',
inputs={bcoal: Flow()},
outputs={bel: Flow(nominal_value=20.2, variable_costs=25)},
conversion_factors={bel: 0.39}))
energysystem.add(Transformer(
label='pp_lig',
inputs={blig: Flow()},
outputs={bel: Flow(nominal_value=11.8, variable_costs=19)},
conversion_factors={bel: 0.41}))
energysystem.add(Transformer(
label='pp_gas',
inputs={bgas: Flow()},
outputs={bel: Flow(nominal_value=41, variable_costs=40)},
conversion_factors={bel: 0.50}))
energysystem.add(Transformer(
label='pp_oil',
inputs={boil: Flow()},
outputs={bel: Flow(nominal_value=5, variable_costs=50)},
conversion_factors={bel: 0.28}))
# combined heat and power plant (chp)
energysystem.add(Transformer(
label='pp_chp',
inputs={bgas: Flow()},
outputs={bel: Flow(nominal_value=30, variable_costs=42),
bth: Flow(nominal_value=40)},
conversion_factors={bel: 0.3, bth: 0.4}))
# heat pump with a coefficient of performance (COP) of 3
b_heat_source = Bus(label='b_heat_source')
energysystem.add(b_heat_source)
energysystem.add(Source(label='heat_source', outputs={b_heat_source: Flow()}))
cop = 3
energysystem.add(Transformer(
label='heat_pump',
inputs={bel: Flow(),
b_heat_source: Flow()},
outputs={bth: Flow(nominal_value=10)},
conversion_factors={bel: 1/3, b_heat_source: (cop-1)/cop}))
"""
Explanation: Create and add components to energysystem
End of explanation
"""
# create optimization model based on energy_system
optimization_model = Model(energysystem=energysystem)
# solve problem
optimization_model.solve(solver=solver,
solve_kwargs={'tee': True, 'keepfiles': False})
"""
Explanation: Optimization
End of explanation
"""
energysystem.results['main'] = processing.results(optimization_model)
energysystem.results['meta'] = processing.meta_results(optimization_model)
string_results = views.convert_keys_to_strings(energysystem.results['main'])
"""
Explanation: Write results into energysystem.results object for later
End of explanation
"""
energysystem.dump(dpath=None, filename=None)
"""
Explanation: Save results - Dump the energysystem (to ~/.oemof by default)
Specify path and filename if you do not want to overwrite
End of explanation
"""
|
gregnordin/ECEn360_Winter2016
|
transmission_lines/01c_standingwaveanimation.ipynb
|
mit
|
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
# Switch to a backend that supports FuncAnimation
plt.switch_backend('tkagg')
print('Matplotlib graphics backend in use: ' + plt.get_backend())
"""
Explanation: Sinusoidal Steady State Voltage on a Transmission Line
The voltage on a lossless transmission line is given by
\begin{aligned} v(z,t) & = v_0 \cos(\omega t - \beta z) + \left|{\Gamma_L}\right| v_0 \cos(\omega t + \beta z + \phi_L)\\
& = \Re(\tilde{V}(z) e^{j\omega t} )\end{aligned}
where $\Re()$ is an operator that takes the real part of the enclosed expression, $\omega = 2\pi f$ ($f$ is the frequency of the sinusoidal voltage), $\beta$ is the wavenumber (propagation constant), and $\Gamma_L$ is the load reflection coefficient, which in general is complex such that $\Gamma_L = \left|\Gamma_L\right|\exp(j \phi_L)$. The phase velocity, $u$, is $\omega / \beta$. Since $u = \lambda f$, $\beta = 2 \pi / \lambda$, where $\lambda$ is the wavelength of the sinusoidal voltage.
The voltage phasor is
$$ \tilde{V}(z) = V^+_0 e^{-j\beta z}[1 + \Gamma(z)]$$
where we have used the generalized reflection coefficient
$$ \Gamma(z) = \Gamma_L e^{j2\beta z}. $$
Note that $V^+_0$ can in general be complex such that $V^+_0 = \left|V^+_0\right|e^{j\theta_V}$. The magnitude of the voltage phasor, $\tilde{V}(z)$, is the envelope of the time-varying real voltage and is called the standing wave. It can be calculated as
$$ \left|\tilde{V}(z)\right| = \left|V^+_0\right|\sqrt{1 + 2\left|\Gamma_L\right|\cos(2\beta z + \phi_L) + \left|\Gamma_L\right|^2}.$$
The voltage standing wave ratio is given by
$$VSWR = \frac{\left|\tilde{V}(z)\right|_{max}}{\left|\tilde{V}(z)\right|_{min}} = \frac{1 + \left|\Gamma_L\right|}{1 - \left|\Gamma_L\right|}$$
Import packages and switch to correct matplotlib graphics backend for animations
End of explanation
"""
def vplus(v0,f,t,beta,z):
return v0*np.cos(2*np.pi*f*t - beta*z)
def vminus(v0,f,t,beta,z,gammaLmagnitude,gammaLphase_rad):
return gammaLmagnitude*v0*np.cos(2*np.pi*f*t + beta*z + gammaLphase_rad)
def vtotal(v0,f,t,beta,z,gammaLmagnitude,gammaLphase_rad):
return vplus(v0,f,t,beta,z) + vminus(v0,f,t,beta,z,gammaLmagnitude,gammaLphase_rad)
def phasormagnitude(v0,f,beta,z,gammaLmagnitude,gammaLphase_rad):
return v0*np.sqrt(1 + 2*gammaLmagnitude*np.cos(2*beta*z + gammaLphase_rad) + gammaLmagnitude**2)
# Return string containing text version of complex number
# Handle special cases: angle = 0, pi, -pi, pi/2, and -pi/2
def complextostring(complexnum):
tolerance = 1.0e-3
angle = np.angle(complexnum)
if angle < tolerance and angle > -tolerance: # angle is essentially 0.0
tempstr = "%.2f" % abs(complexnum)
elif angle > np.pi - tolerance or angle < -np.pi + tolerance: # angle close to +pi or -pi?
tempstr = "-%.2f" % abs(complexnum)
elif angle < np.pi/2 + tolerance and angle > np.pi/2 - tolerance: # angle close to np.pi/2?
tempstr = "j%.2f" % abs(complexnum)
elif angle < -np.pi/2 + tolerance and angle > -np.pi/2 - tolerance: # angle close to -np.pi/2?
tempstr = "-j%.2f" % abs(complexnum)
elif angle < 0.0: # put negative sign in front of j, otherwise it will be between j and the number
tempstr = "%.2f exp(-j%.2f)" % (abs(complexnum), -angle)
else:
tempstr = "%.2f exp(j%.2f)" % (abs(complexnum), angle)
return tempstr
"""
Explanation: Function definitions
End of explanation
"""
#-------------------------------------------------------------------------
#
# Set these parameters to model desired transmission line situation
#
#-------------------------------------------------------------------------
# Specify sinusoidal voltage parameters & reflection coefficient
wavelength_m = 2.0 # wavelength in meters
v0 = 1.0 # voltage amplitude in volts
reflcoeffmagn = 1.0 # magnitude of the reflection coefficient
reflcoeffphase_degrees = 0.0 # phase of the reflection coefficient IN DEGREES! (changed 1/21/15)
velocity_mps = 2.0e8 # voltage phase velocity along transmission line
#-------------------------------------------------------------------------
#
# Don't change anything below this point
#
#-------------------------------------------------------------------------
# Set up plot parameters for transmission line
zmin = -10
zmax = 0
numzpnts = 1000
# Set up animation parameters
numframes = 20
framespersec = 15
frameperiod_msec = int(1000.0*float(1)/framespersec)
#print 'Frame period = %d ms' % frameperiod_msec
# Calculate derived parameters
beta = 2*np.pi/wavelength_m
frequency_Hz = velocity_mps / wavelength_m
period_s = 1.0/frequency_Hz
reflcoeffphase_rad = np.radians(reflcoeffphase_degrees)
# Set up sampling grid along transmission line
z = np.linspace(zmin, zmax, numzpnts)
# Calculate standing wave
standingwave = phasormagnitude(v0,frequency_Hz,beta,z,reflcoeffmagn,reflcoeffphase_rad)
standingwavemax = max(standingwave)
standingwavemin = min(standingwave)
if standingwavemin > 1.0e-2:
vswr_text = standingwavemax/standingwavemin
vswr_text = '\nVSWR = %.2f' % vswr_text
else:
vswr_text = '\nVSWR = $\infty$'
# Set up text for plot label
reflcoeffcmplx = reflcoeffmagn * complex(np.cos(reflcoeffphase_rad),np.sin(reflcoeffphase_rad))
labeltext = '$\Gamma_L$ = ' + complextostring(reflcoeffcmplx)
labeltext += '\n$\lambda$ = %.2f m' % wavelength_m
labeltext += '\nf = %.2e Hz' % frequency_Hz
labeltext += '\nu = %.2e m/s' % velocity_mps
labeltext += '\n$V_0$ = %.2f V' % v0
labeltext += vswr_text
# Set up figure, axis, and plot elements, including those to animate (i.e., line1, line2, line3)
fig2 = plt.figure()
ax2 = plt.axes(xlim=(zmin, zmax), ylim=(-2, 4))
line1, = ax2.plot([], [], 'b--', label='$v^+$')
line2, = ax2.plot([], [], 'r--', label='$v^-$')
line3, = ax2.plot([], [], 'g', label='$v_{total} = v^+ + v^-$')
line4, = ax2.plot(z,standingwave, color='black', label='$\mathrm{Standing} \/ \mathrm{wave}$')
ax2.axhline(y=0.0,ls='dotted',color='k')
ax2.legend(loc='upper left')
ax2.set_xlabel('z (m)')
ax2.set_ylabel('Voltage (V)')
ax2.set_title('Transmission Line Voltage - Sinusoidal Steady State')
# initialization function (background of each frame)
def init():
ax2.text(0.55,0.75,labeltext, transform = ax2.transAxes)
line1.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
#ax2.legend.set_zorder(20)
return line1, line2, line3,
# animation function - called sequentially
def animate_vplusandminus(i):
t = period_s * float(i)/numframes
vp = vplus(v0,frequency_Hz,t,beta,z)
line1.set_data(z, vp)
vm = vminus(v0,frequency_Hz,t,beta,z,reflcoeffmagn,reflcoeffphase_rad)
line2.set_data(z, vm)
vtot = vp + vm
line3.set_data(z, vtot)
return line1, line2, line3,
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig2, animate_vplusandminus, init_func=init,
frames=numframes, interval=frameperiod_msec, blit=True)
plt.show()
"""
Explanation: Set transmission line parameters and plot voltages
End of explanation
"""
# Define function to calculate the magnitude of the current phasor
def currentphasormagnitude(v0,f,beta,z,gammaLmagnitude,gammaLphase_rad,z0):
return (v0/z0)*np.sqrt(1 - 2*gammaLmagnitude*np.cos(2*beta*z + gammaLphase_rad) + gammaLmagnitude**2)
"""
Explanation: Sinusoidal Steady State Current on a Transmission Line
The current phasor is
$$ \tilde{I}(z) = \frac{V^+_0}{Z_0} e^{-j\beta z}[1 - \Gamma(z)]$$
The current standing wave is the magnitude of the current phasor:
$$ \left|\tilde{I}(z)\right| = \frac{\left|V^+_0\right|}{Z_0}\sqrt{1 - 2\left|\Gamma_L\right|\cos(2\beta z + \phi_L) + \left|\Gamma_L\right|^2}.$$
End of explanation
"""
#-------------------------------------------------------------------------
#
# Set these parameters to model desired transmission line situation
#
#-------------------------------------------------------------------------
# Specify sinusoidal voltage parameters & reflection coefficient
wavelength_m = 2.0 # wavelength in meters
v0 = 1.0 # voltage amplitude in volts
reflcoeffmagn = 0.5 # magnitude of the reflection coefficient
reflcoeffphase_degrees = 0.0 # phase of the reflection coefficient IN DEGREES! (changed 1/21/15)
velocity_mps = 2.0e8 # voltage phase velocity along transmission line
z0 = 50.0 # t-line characteristic impedance
#-------------------------------------------------------------------------
#
# Don't change anything below this point
#
#-------------------------------------------------------------------------
# Set up plot parameters for transmission line
zmin = -10
zmax = 0
numzpnts = 1000
# Set up animation parameters
numframes = 20
framespersec = 15
frameperiod_msec = int(1000.0*float(1)/framespersec)
#print 'Frame period = %d ms' % frameperiod_msec
# Calculate derived parameters
beta = 2*np.pi/wavelength_m
frequency_Hz = velocity_mps / wavelength_m
period_s = 1.0/frequency_Hz
reflcoeffphase_rad = np.radians(reflcoeffphase_degrees)
# Set up sampling grid along transmission line
z = np.linspace(zmin, zmax, numzpnts)
# Calculate standing wave
standingwave = currentphasormagnitude(v0,frequency_Hz,beta,z,reflcoeffmagn,reflcoeffphase_rad,z0)
standingwavemax = max(standingwave)
standingwavemin = min(standingwave)
if standingwavemin > 1.0e-2:
vswr_text = standingwavemax/standingwavemin
vswr_text = '\nVSWR = %.2f' % vswr_text
else:
vswr_text = '\nVSWR = $\infty$'
# Set up text for plot label
reflcoeffcmplx = reflcoeffmagn * complex(np.cos(reflcoeffphase_rad),np.sin(reflcoeffphase_rad))
labeltext = '$\Gamma_L$ = ' + complextostring(reflcoeffcmplx)
labeltext += '\n$\lambda$ = %.2f m' % wavelength_m
labeltext += '\nf = %.2e Hz' % frequency_Hz
labeltext += '\nu = %.2e m/s' % velocity_mps
labeltext += '\n$V_0$ = %.2f V' % v0
labeltext += '\n$Z_0$ = %.2f $\Omega$' % z0
labeltext += vswr_text
# Set up figure, axis, and plot elements, including those to animate (i.e., line1, line2, line3)
fig2 = plt.figure()
ax2 = plt.axes(xlim=(zmin, zmax), ylim=(-2.0/z0, 4.0/z0))
line1, = ax2.plot([], [], 'b--', label='$i^+$')
line2, = ax2.plot([], [], 'r--', label='$i^-$')
line3, = ax2.plot([], [], 'g', label='$i_{total} = i^+ + i^-$')
line4, = ax2.plot(z,standingwave, color='black', label='$\mathrm{Current} \/ \mathrm{standing} \/ \mathrm{wave}$')
ax2.axhline(y=0.0,ls='dotted',color='k')
ax2.legend(loc='upper left')
ax2.set_xlabel('z (m)')
ax2.set_ylabel('Current (A)')
ax2.set_title('Transmission Line Current - Sinusoidal Steady State')
# initialization function (background of each frame)
def init():
ax2.text(0.55,0.7,labeltext, transform = ax2.transAxes)
line1.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
#ax2.legend.set_zorder(20)
return line1, line2, line3,
# animation function - called sequentially
def animate_vplusandminus(i):
t = period_s * float(i)/numframes
ip = vplus(v0,frequency_Hz,t,beta,z) / z0
line1.set_data(z, ip)
im = -vminus(v0,frequency_Hz,t,beta,z,reflcoeffmagn,reflcoeffphase_rad) / z0
line2.set_data(z, im)
itot = ip + im
line3.set_data(z, itot)
return line1, line2, line3,
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig2, animate_vplusandminus, init_func=init,
frames=numframes, interval=frameperiod_msec, blit=True)
plt.show()
"""
Explanation: Set transmission line parameters and plot CURRENTS
End of explanation
"""
|
atulsingh0/MachineLearning
|
ML_UoW/Course00_MLFoundation/03_Classification_Analyzing_Product_Sentiment-Quiz.ipynb
|
gpl-3.0
|
# ignoring the 3 star rating
data2 = data[data['rating'] != 3 ]
data2['sentiment'] = data2['rating'] > 3
data2.head(5)
# training the classifier model
# first, spliting the data into train and test datasets
train_data, test_data = data2.random_split(0.8, seed=0)
sentiment_model = gl.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
# Evaluate the sentiment model
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
def word_count(line):
wc={}
for word in line.split():
wc[word] = wc.get(word, 0) +1
return wc
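# NOTE: added quick check, not part of the original notebook, showing what the word_count
# helper defined above produces for a short example review.
print(word_count('great great product love it')) # {'great': 2, 'product': 1, 'love': 1, 'it': 1}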
#data3['word_count_dic'] = data2['review'].apply(word_count)
data2.head(5)
"""
Explanation: Defining Positive and Negative Sentences
Ignore 3 star ratings (treated as neutral)
1 and 2 star ratings are treated as Negative
4 and 5 star ratings are treated as Positive
End of explanation
"""
def selected_word_count(line, selected_words=['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']):
wc={}
for key in selected_words:
if key in line.keys():
wc[key] = line[key]
return wc
#data2['selected_word_count_dic'] = data2['word_count'].apply(selected_word_count)
data2['selected_word_count_dic'] = data2['word_count'].dict_trim_by_keys(selected_words, exclude=False)
def get_count(data, word):
return data.get(word,0)
for word in selected_words:
data2[word+'_count'] = data2['selected_word_count_dic'].apply(lambda line: get_count(line, word))
data2.head(4)
train_data,test_data = data2.random_split(.8, seed=0)
selected_words_model = gl.logistic_classifier.create(train_data,
target='sentiment',
features=['selected_word_count_dic'],
validation_set=test_data)
#gl.SFrame.print_rows(num_rows=12, num_columns=5)
coef=selected_words_model['coefficients'].sort('value')
coef.print_rows(num_rows=12, num_columns=5)
# accuracy
selected_words_model.evaluate(test_data)
sentiment_model.evaluate(test_data)
# Analysis why clf is works better than selected_Word_model
diaper_champ_reviews = data2[data2['name']=='Baby Trend Diaper Champ']
diaper_champ_reviews.head(2)
diaper_champ_reviews['pred_sentiment'] = sentiment_model.predict(diaper_champ_reviews, output_type='probability')
diaper_champ_reviews_sorted = diaper_champ_reviews.sort('pred_sentiment', ascending=False)
diaper_champ_reviews_sorted.head(2)
diaper_champ_reviews['sel_pred_sentiment'] = selected_words_model.predict(diaper_champ_reviews, output_type='probability')
diaper_champ_reviews_sel_sorted = diaper_champ_reviews.sort('pred_sentiment', ascending=False)
diaper_champ_reviews_sel_sorted.head(2)
diaper_champ_reviews_sel_sorted[0:1]['review']
# Out of the 11 words in selected_words, which one is most used in the reviews in the dataset?
print('hate_count',data2['hate_count'].sum())
print('wow_count',data2['wow_count'].sum())
print('awful_count',data2['awful_count'].sum())
print('terrible_count',data2['terrible_count'].sum())
print('bad_count',data2['bad_count'].sum())
print('horrible_count',data2['horrible_count'].sum())
print('love_count',data2['love_count'].sum())
print('amazing_count',data2['amazing_count'].sum())
print('fantastic_count',data2['fantastic_count'].sum())
print('great_count',data2['great_count'].sum())
print('awesome_count',data2['awesome_count'].sum())
# It is quite common to use the **majority class classifier** as the a baseline (or reference) model for
# comparison with your classifier model. The majority classifier model predicts the majority class for all data points.
# At the very least, you should healthily beat the majority class classifier, otherwise, the model is (usually) pointless.
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print (num_positive)
print (num_negative)
print (num_positive*1.0/len(train_data))
test_num_positive = (test_data['sentiment'] == +1).sum()
test_num_negative = (test_data['sentiment'] == -1).sum()
print (test_num_positive)
print (test_num_negative)
print(len(test_data))
print(test_num_positive*1.0/len(test_data))
"""
Explanation: def selected_word_count(line, selected_words=['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']):
wc={}
for word in line.split():
if word in selected_words:
wc[word] = wc.get(word, 0) +1
return wc
End of explanation
"""
|
iRipVanWinkle/ml
|
Data Science UA - September 2017/Lecture 04 - Overview of Linear Algebra and Matrix Computations/Finding a Root of a Function - Bisection and Newton Methods.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Finding the Root (Zero) of a Function
Finding the root, or zero, of a function is a very common task in exploratory computing. This Notebook presents the Bisection method and Newton's method for finding the root, or 0, of a function.
End of explanation
"""
def exponential_function(x):
return 0.5 - np.exp(-x)
x = np.linspace(0, 4, 100)
y = exponential_function(x)
plt.plot(x, y)
plt.axhline(0, color='r', ls='--')
"""
Explanation: Bisection method
Given a continuous function $f(x)$ and two values $x_1$ and $x_2$ such that $f(x_1)$ and $f(x_2)$ have opposite signs, the Bisection method is a root-finding method that repeatedly bisects the interval $[x_1, x_2]$ and then selects the subinterval in which a root must lie for further processing. (Since $f(x_1)$ and $f(x_2)$ have opposite signs, it follows that $f(x)$ is zero somewhere between $x_1$ and $x_2$.) The Bisection method iterates towards the zero of the function by cutting the root search interval in half at every iteration. The method calculates the middle point $x_m$ between $x_1$ and $x_2$, computes $f(x_m)$, and then replaces either $x_1$ or $x_2$ by $x_m$ such that the values of $f$ at the end points of the interval still have opposite signs. The process is repeated until the interval is small enough that its middle point can be considered a good approximation of the root of the function. In summary, the algorithm works as follows:
Compute $f(x_1)$ and $f(x_2)$
Compute $x_m = \frac{1}{2}(x_1 + x_2)$.
Compute $f(x_m)$.
If $f(x_m)f(x_2) < 0$, replace $x_1$ by $x_m$, otherwise, replace $x_2$ by $x_m$.
If $|x_1 - x_2|<\varepsilon$, where $\varepsilon$ is a user-specified tolerance, return $\frac{1}{2}(x_1 + x_2)$, otherwise return to step 2.
Example: let $f(x)$ be $\frac{1}{2}-\text{e}^{-x}$ and $x_1$ and $x_2$ be 0 and 4, respectively. Notice that $f(x)$ has a zero somewhere on the plotted interval.
End of explanation
"""
def bisection(func, x1, x2, tol=1e-3, nmax=10, silent=True):
f1 = func(x1)
f2 = func(x2)
    assert f1 * f2 < 0, 'Error: zero not in interval x1-x2'
for i in range(nmax):
xm = 0.5*(x1 + x2)
fm = func(xm)
if fm * f2 < 0:
x1 = xm
f1 = fm
else:
x2 = xm
f2 = fm
if silent is False: print(x1, x2, f1, f2)
if abs(x1 - x2) < tol:
break
if abs(func(x1)) > tol:
print('Maximum number of iterations reached')
return x1
"""
Explanation: Implementation of the Bisection method
We implement the bisection method as a function called bisection which takes as arguments:
The function for which we want to find the root.
$x_1$ and $x_2$
The tolerance tol to be used as a stopping criterion (by default 0.001).
The maximum number of iterations nmax. Make nmax a keyword argument with a default value of, for example, 10.
Our function returns the value of $x$ where $f(x)$ is (approximately) zero, or print a warning if the maximum number of iterations is reached before the tolerance is met.
Steps 2-5 of the algorithm explained above are implemented as a loop to be run until the tolerance level is met, at most nmax times.
End of explanation
"""
x1 = 0
x2 = 4
function = exponential_function
xzero = bisection(function, x1, x2, tol=1e-3, nmax=20, silent=True)
print ("The root of exponential_function between %.2f and %.2f is %f" % (x1, x2, xzero))
print ("The value of the function at the 'root' is %f" % exponential_function(xzero))
"""
Explanation: We use the bisection method to find the root of the $exponential_function$ defined above
End of explanation
"""
x1 = 0
x2 = 3
function = np.cos
root = bisection(function, 0, 3, tol=1e-6, nmax=30)
print ("The root of cos between %.2f and %.2f is %f" % (x1, x2, root))
"""
Explanation: and of $cos$ between 0 and 3.
End of explanation
"""
def newtonsmethod(func, funcp, xs, tol=1e-6, nmax=10, silent=True):
f = func(xs)
for i in range(nmax):
fp = funcp(xs)
xs = xs - f/fp
f = func(xs)
if silent is False: print(xs, func(xs))
        if abs(f) < tol:
            return (xs, i+1)
if abs(f) > tol:
#print('Max number of iterations reached before convergence')
return (None, -1)
"""
Explanation: Newton's method
The Bisection method is a brute-force method guaranteed to find a root of a continuous function $f$ on an interval $(x_1,x_2)$, if $(x_1,x_2)$ contains a root for $f$. The Bisection method is not very efficient and it requires a search interval that contains only one root.
An alternative is Newton's method (also called the Newton-Raphson method). Consider the graph below. To find the root of the function represented by the blue line, Newton's method starts at a user-defined starting location, $x_0$ (the blue dot) and fits a straight line through the point $(x,y)=(x_0,f(x_0))$ in such a way that the line is tangent to $f(x)$ at $x_0$ (the red line). The intersection of the red line with the horizontal axis is the next estimate $x_1$ of the root of the function (the red dot). This process is repeated until a value of $f(x)$ is found that is sufficiently close to zero (within a specified tolerance), i.e., a straight line is fitted through the point $(x,y)=(x_1,f(x_1))$, tangent to the function, and the the next estimate of the root of the function is taken as the intersection of this line with the horizontal axis, until the value of f at the root estimate is very close to 0.
Unfortunately, it is not guaranteed to always work, as explained below.
<img src="http://i.imgur.com/tK1EOtD.png" alt="Newton's method on wikipedia">
The equation for a straight line with slope $a$ through the point $x_n,f(x_n)$ is:
$$y = a(x-x_n) + f(x_n)$$
For the line to be tangent to the function $f(x)$ at the point $x=x_n$, the slope $a$ has to equal the derivative of $f(x)$ at $x_n$: $a=f'(x_n)$. The intersection of the line with the horizontal axis is the value of $x$ that results in $y=0$ and this is the next estimate $x_{n+1}$ of the root of the function. In order to find this estimate we need to solve:
$$0 = f'(x_n) (x_{n+1}-x_n) + f(x_n)$$
which gives
$$\boxed{x_{n+1} = x_n - f(x_n)/f'(x_n)}$$
The search for the root is completed when $|f(x)|$ is below a user-specified tolerance.
An animated illustration of Newton's method can be found on Wikipedia:
<img src="http://upload.wikimedia.org/wikipedia/commons/e/e0/NewtonIteration_Ani.gif" alt="Newton's method on wikipedia" width="400px">
Newton's method is guaranteed to find the root of a function if the function is well behaved and the search starts close enough to the root. If those two conditions are met, Newton's method is very fast, but if they are not met, the method is not guaranteed to converge to the root.
Another disadvantage of Newton's method is that we need to define the derivative of the function.
Note that the function value does not necessarily go down at every iteration (as illustrated in the animation above).
Newton's Method Implementation
We implement Newton's method as function newtonsmethod that takes in the following arguments:
The function for which to find the root.
The derivative of the function.
The starting point of the search $x_0$.
The tolerance tol used as a stopping criterion, by default $10^{-6}$.
The maximum number of iterations nmax, by default 10.
newtonsmethod returns the value of $x$ where $f(x)$ is (approximately) zero or prints a message if the maximum number of iterations is reached before the tolerance is met.
End of explanation
"""
def fp(x):
return np.exp(-x)
xs = 1
func = exponential_function
funcp = fp
tol = 1e-6
nmax = 10
xzero, iterations = newtonsmethod(func, funcp, xs, tol, nmax)
print("First Example")
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, exponential_function(root) = %f" % (xzero, exponential_function(xzero)))
print("tolerance reached in %d iterations" % iterations)
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
print("")
xs = 4
nmax = 40
xzero, iterations = newtonsmethod(func, funcp, xs, tol, nmax)
print("Second Example")
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, exponential_function(root) = %f" % (xzero, exponential_function(xzero)))
print("tolerance reached in %d iterations" % iterations)
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
"""
Explanation: We test newtonsmethod by finding the root of $f(x)=\frac{1}{2}-\text{e}^{-x}$ using $x_0=1$ as the starting point of the search. How many iterations do we need if we start at $x=4$?
End of explanation
"""
xs = 1
xzero, iterations = newtonsmethod(func=np.sin, funcp=np.cos, xs=1)
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, sin(root) = %e" % (xzero, np.sin(xzero)))
print("tolerance reached in %d iterations" % iterations)
print("root / pi = %f" % (xzero / np.pi))
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
print("")
xs = 1.5
xzero, iterations = newtonsmethod(func=np.sin, funcp=np.cos, xs=1.5)
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, sin(root) = %e" % (xzero, np.sin(xzero)))
print("tolerance reached in %d iterations" % iterations)
print("root / pi = %f" % (xzero / np.pi))
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
"""
Explanation: We also demonstrate how newtonsmethod works by finding the zero of $\sin(x)$, which has many roots: $-2\pi$, $-\pi$, $0$, $\pi$, $2\pi$, etc. Which root do we find when starting at $x=1$ and which root do we find when starting at $x=1.5$?
End of explanation
"""
from scipy.optimize import fsolve
def h(x):
return np.log(x ** 2) - 2
x0 = fsolve(h, 1)
print("x_root = %f, function value(root) = %e" % (x0, h(x0)))
"""
Explanation: Root finding methods in scipy
The package scipy.optimize includes a number of routines for the minimization of a function and for finding the zeros of a function. Among them, bisect, newton, and fsolve. fsolve has the additional advantage of also estimating the derivative of the function. fsolve can be used to find an (approximate) answer for a system of non-linear equations.
fsolve
We demonstrate how to use thefsolve method of the scipy.optimize package by finding the value for which $\ln(x^2)=2$
End of explanation
"""
from scipy.optimize import fsolve
def g(x):
return x + 2 * np.cos(x)
x = np.linspace(-2, 4, 100)
x0 = fsolve(g, 1)
plt.plot(x, g(x))
plt.plot(x0, g(x0), 'ro')
plt.axhline(y=0, color='r')
"""
Explanation: Plotting the root
We plot the function $f(x)=x+2\cos(x)$ for $x$ going from -2 to 4, and on the same graph, we also plot a red dot at the location where $f(x)=0$.
End of explanation
"""
|
fastai/course-v3
|
zh-nbs/Lesson5_sgd_mnist.ipynb
|
apache-2.0
|
%matplotlib inline
from fastai.basics import *
"""
Explanation: Practical Deep Learning for Coders, v3
Lesson5_sgd_mnist
End of explanation
"""
path = Config().data_path()/'mnist'
path.ls()
with gzip.open(path/'mnist.pkl.gz', 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
plt.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train.shape, y_train.min(), y_train.max()
"""
Explanation: MNIST SGD (stochastic gradient descent)
Get the 'pickled' MNIST dataset from http://deeplearning.net/data/mnist/mnist.pkl.gz. We're going to treat it as a standard flat dataset with fully connected layers, rather than using a CNN.
End of explanation
"""
bs=64
train_ds = TensorDataset(x_train, y_train)
valid_ds = TensorDataset(x_valid, y_valid)
data = DataBunch.create(train_ds, valid_ds, bs=bs)
x,y = next(iter(data.train_dl))
x.shape,y.shape
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10, bias=True)
def forward(self, xb): return self.lin(xb)
model = Mnist_Logistic().cuda()
model
model.lin
model(x).shape
[p.shape for p in model.parameters()]
lr=2e-2
loss_func = nn.CrossEntropyLoss()
def update(x,y,lr):
wd = 1e-5
y_hat = model(x)
# weight decay
w2 = 0.
for p in model.parameters(): w2 += (p**2).sum()
# add to regular loss
loss = loss_func(y_hat, y) + w2*wd
loss.backward()
with torch.no_grad():
for p in model.parameters():
p.sub_(lr * p.grad)
p.grad.zero_()
return loss.item()
losses = [update(x,y,lr) for x,y in data.train_dl]
plt.plot(losses);
class Mnist_NN(nn.Module):
def __init__(self):
super().__init__()
self.lin1 = nn.Linear(784, 50, bias=True)
self.lin2 = nn.Linear(50, 10, bias=True)
def forward(self, xb):
x = self.lin1(xb)
x = F.relu(x)
return self.lin2(x)
model = Mnist_NN().cuda()
losses = [update(x,y,lr) for x,y in data.train_dl]
plt.plot(losses);
model = Mnist_NN().cuda()
def update(x,y,lr):
opt = optim.Adam(model.parameters(), lr)
y_hat = model(x)
loss = loss_func(y_hat, y)
loss.backward()
opt.step()
opt.zero_grad()
return loss.item()
losses = [update(x,y,1e-3) for x,y in data.train_dl]
plt.plot(losses);
learn = Learner(data, Mnist_NN(), loss_func=loss_func, metrics=accuracy)
%debug
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 1e-2)
learn.recorder.plot_lr(show_moms=True)
learn.recorder.plot_losses()
"""
Explanation: In lesson2-sgd we did these things ourselves:
python
x = torch.ones(n,2)
def mse(y_hat, y): return ((y_hat-y)**2).mean()
y_hat = x@a
Now instead we'll use PyTorch's functions to do it for us, and also to handle mini-batches (which we didn't do last time, since our dataset was so small).
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.0/examples/single_spots.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.0,<2.1"
"""
Explanation: Single Star with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_star()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_spot(radius=30, colat=80, long=0, relteff=0.9)
"""
Explanation: Adding Spots
Let's add one spot to our star. Since there is only one star, the spot will automatically attach without needing to provide a component (as is needed in the binary with spots example).
End of explanation
"""
print(b['spot'])
"""
Explanation: Spot Parameters
A spot is defined by the colatitude and longitude of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
NOTE: the parameter name was changed from "colon" to "long" in 2.0.2. For all further releases in 2.0.X, the "colon" parameter still exists but is read-only. Starting with 2.1.0, the "colon" parameter will no longer exist.
End of explanation
"""
times = np.linspace(0, 10, 11)
b.set_value('period', 10)
b.add_dataset('mesh', times=times)
b.run_compute(distortion_method='rotstar', irrad_method='none')
b.animate(x='xs', y='ys', facecolor='teffs')
"""
Explanation: The 'colat' parameter defines the latitude on the star measured from its North Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the observer at t0.
NOTE: prior to version 2.0.2, the definition for the location of a spot on a single star was different and spots did not corotate correctly. If using spots, please make sure to be using 2.0.2 or later.
End of explanation
"""
b.set_value('t0', 5)
b.run_compute(distortion_method='rotstar', irrad_method='none')
b.animate(x='xs', y='ys', facecolor='teffs')
"""
Explanation: If we set t0 to 5 instead of zero, then the spot will cross the line-of-sight at t=5 (since the spot's longitude is 0).
End of explanation
"""
b.set_value('incl', 0)
b.run_compute(distortion_method='rotstar', irrad_method='none')
b.animate(x='xs', y='ys', facecolor='teffs')
"""
Explanation: And if we change the inclination to 0, we'll be looking at the north pole of the star. This clearly illustrates the right-handed rotation of the star. At time=t0=5 the spot will now be pointing in the negative y-direction.
End of explanation
"""
|
ML4DS/ML4all
|
R5.Bayesian_Regression/.ipynb_checkpoints/Bayesian_regression-checkpoint.ipynb
|
mit
|
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
import time
"""
Explanation: Bayesian Parametric Regression
Notebook version: 1.3 (Sep 26, 2016)
Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid-Sueiro (jesus.cid@uc3m.es)
Changes: v.1.0 - First version
v.1.1 - ML Model selection included
v.1.2 - Some typos corrected
v.1.3 - Rewriting text, reorganizing content, some exercises.
Pending changes: * Include regression on the stock data
End of explanation
"""
n_points = 20
n_grid = 200
frec = 3
std_n = 0.2
degree = 3
nplots = 20
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = 0.03 ### Try increasing this value
var_w = sigma_p**2 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
xmin = np.min(X_tr)
xmax = np.max(X_tr)
X_grid = np.linspace(xmin-0.2*(xmax-xmin), xmax+0.2*(xmax-xmin),n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
for k in range(nplots):
#Draw weigths fromt the prior distribution
w_iter = np.random.multivariate_normal(mean_w, var_w)
S_grid_iter = np.polyval(w_iter,X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
ax.set_xlim(xmin-0.2*(xmax-xmin), xmax+0.2*(xmax-xmin))
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.set_xlabel('$x$')
ax.set_ylabel('$s$')
plt.show()
"""
Explanation: 1. Model-based parametric regression
1.1. The regression problem.
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, $\{{\bf x}^{(k)}, s^{(k)}\}_{k=1}^K$, is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
NOTE: In the following, we will use capital letters, ${\bf X}$, $S$, ..., to denote random variables, and lower-case letters ${\bf x}$, s, ..., to denote the values they can take. When there is no ambiguity, we will remove subindices of the density functions, $p_{{\bf X}, S}({\bf x}, s)= p({\bf x}, s)$, to simplify the mathematical notation.
1.2. Model-based parametric regression
Model-based regression methods assume that all data in the training and test datasets have been generated by some stochastic process. In parametric regression, we assume that the probability distribution generating the data has a known parametric form, but the values of some parameters are unknown.
In particular, in this notebook we will assume the target variables in all pairs $({\bf x}^{(k)}, s^{(k)})$ from the training and test sets have been generated independently from some posterior distribution $p(s| {\bf x}, {\bf w})$, where ${\bf w}$ is some unknown parameter. The training dataset is used to estimate ${\bf w}$.
Once $p(s|{\bf x},{\bf w})$ is known or can be estimated, Estimation Theory can be applied to estimate $s$ for any input ${\bf x}$. For instance, any of these classical estimates can be used:
Maximum A Posteriori (MAP): $\qquad\hat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x}, {\bf w})$
Minimum Mean Square Error (MSE): $\qquad\hat{s}_{\text{MSE}} = \mathbb{E}\left\{S |{\bf x}, {\bf w}\right\}$
<img src="figs/ParametricReg.png" width=300>
1.3.1. Maximum Likelihood (ML) parameter estimation
One way to estimate ${\bf w}$ is to apply the maximum likelihood principle: take the value ${\bf w}_\text{ML}$ maximizing the joint distribution of the target variables given the inputs and given ${\bf w}$, i.e.
$$
{\bf w}_\text{ML} = \arg\max_{\bf w} p({\bf s}|{\bf X}, {\bf w})
$$
where ${\bf s} = \left(s^{(1)}, \dots, s^{(K)}\right)^\top$ is the vector of target variables and ${\bf X} = \left({\bf x}^{(1)}, \dots, {\bf x}^{(K)}\right)^\top$ is the input matrix.
NOTE: Since the training data inputs are known, all probability density functions and expectations in the remainder of this notebook will be conditioned on ${\bf X}$. To simplify the mathematical notation, from now on we will remove ${\bf X}$ from all conditions. Keep in mind that, in any case, all probabilities and expectations may depend on ${\bf X}$ implicitly.
1.3.2. The Gaussian case
A particularly interesting case arises when the data model is Gaussian:
$$p(s|{\bf x}, {\bf w}) =
\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)
$$
where ${\bf z}=T({\bf x})$ is a vector with components which can be computed directly from the observed variables. Such expression includes a linear regression model, where ${\bf z} = [1; {\bf x}]$, as well as any other non-linear model as long as it can be expressed as a <i>"linear in the parameters"</i> model.
In that case, it can be shown that the likelihood function $p({\bf s}| {\bf w})$ ($\equiv p({\bf s}| {\bf X}, {\bf w})$) is given by
$$
p({\bf s}| {\bf w})
= \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K
\exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right)
$$
which is maximum for the Least Squares solution
$$
{\bf w}_{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}
$$
1.4. Limitations of the ML estimators.
Since the ML estimation is equivalent to the LS solution under a Gaussian data model, it has the same drawbacks as LS regression. In particular, ML estimation is prone to overfitting. In general, if the number of parameters (i.e. the dimension of ${\bf w}$) is large in relation to the size of the training data, the predictor based on the ML estimate may have a small square error over the training set but a large error over the test set. Therefore, in practice, some cross-validation procedure is required to keep the complexity of the predictor function under control, depending on the size of the training set.
2. Bayesian Regression
One of the reasons why the ML estimate is prone to overfitting is that the prediction function uses ${\bf w}_\text{ML}$ without taking into account how uncertain the true value of ${\bf w}$ is.
Bayesian methods utilize such information by considering ${\bf w}$ as a random variable with some prior distribution $p({\bf w})$. The posterior distribution $p({\bf w}|{\bf s})$ will be our measure of the uncertainty about the true value of the model parameters.
In fact, this posterior distribution is a key component of the predictor function. Indeed, the minimum MSE estimate can be computed as
$$
\hat{s}_\text{MSE}
= \mathbb{E}\left\{s|{\bf s}, {\bf x}\right\}
= \int \mathbb{E}\left\{s|{\bf w}, {\bf s}, {\bf x}\right\} p({\bf w}|{\bf s}) d{\bf w}
$$
Since the samples are i.i.d., $\mathbb{E}\left\{s|{\bf w}, {\bf s}, {\bf x}\right\} = \mathbb{E}\left\{s|{\bf w}, {\bf x}\right\}$ and, thus,
$$
\hat{s}_\text{MSE}
= \int \mathbb{E}\left\{s|{\bf w}, {\bf x}\right\} p({\bf w}|{\bf s}) d{\bf w}
$$
Noting that $\mathbb{E}\left\{s|{\bf w}, {\bf x}\right\}$ is the minimum MSE prediction for a given value of ${\bf w}$, we observe that the Bayesian predictor is a weighted sum of these predictions, each weighted by the posterior probability (density) of the corresponding parameter vector being the correct one.
2.1. Posterior weight distribution
We will express our <i>a priori</i> belief of models using a prior distribution $p({\bf w})$. Then we can infer the <i>a posteriori</i> distribution using Bayes' rule:
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Where:
- $p({\bf s}|{\bf w})$: is the likelihood function
- $p({\bf w})$: is the <i>prior</i> distribution of the weights (assumptions are needed here)
- $p({\bf s})$: is the <i>marginal</i> distribution of the observed data, which could be obtained integrating the expression in the numerator
The previous expression can be interpreted in a rather intuitive way:
Since ${\bf w}$ are the parameters of the model, $p({\bf w})$ expresses our belief about which models should be preferred over others before we see any data. For instance, since parameter vectors with small norms produce smoother curves, we could assign (<i>a priori</i>) a larger pdf value to models with smaller norms
The likelihood function $p({\bf s}|{\bf w})$ tells us how well the observations can be explained by a particular model
Finally, the posterior distribution $p({\bf w}|{\bf s})$ expresses the estimated goodness of each model (i.e., each parameter vector ${\bf w}$) taking into consideration both the prior and the likelihood of $\bf w$. Thus, a model with large $p({\bf w})$ would have a low posterior value if it offers a poor explanation of the data (i.e., if $p({\bf s}|{\bf w})$ is small), whereas models that fit well with the observations would get emphasized
The posterior distribution of weights opens the door to working with several models at once. Rather than keeping the estimated best model according to a certain criterion, we can now use all models parameterized by ${\bf w}$, assigning them different degrees of confidence according to $p({\bf w}|{\bf s})$.
2.1.1. A Gaussian Prior
Since each value of ${\bf w}$ determines a regression function, by stating a prior distribution over the weights we also state a prior distribution over the space of regression functions.
For instance, we will consider a particular example in which we assume a Gaussian prior for the weights given by:
$${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$$
Example
Assume that the true target variable is related to the input observations through the equation
$$
s = {\bf w}^\top{\bf z} + \varepsilon
$$
where ${\bf z} = T({\bf x})$ is a polynomial transformation of the input, $\varepsilon$ is a Gaussian noise variable and ${\bf w}$ some unknown parameter vector.
Assume a Gaussian prior weight distribution, ${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$. For each parameter vector ${\bf w}$, there is a polynomial $f({\bf x}) = {\bf w}^\top {\bf z}$ associated with it. Thus, by drawing samples from $p({\bf w})$ we can generate and plot their associated polynomial functions. This is carried out in the following example.
You can check the effect of modifying the variance of the prior distribution.
End of explanation
"""
# True data parameters
w_true = 3
std_n = 0.4
# Generate the whole dataset
n_max = 64
X_tr = 3 * np.random.random((n_max,1)) - 0.5
S_tr = w_true * X_tr + std_n * np.random.randn(n_max,1)
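# NOTE: added sketch, not part of the original notebook. As a point of reference for the Bayesian
# posterior computed below, this is the ML (least squares) estimate of the weight for this dataset,
# assuming z = x as in Exercise 1: w_ML = (Z'Z)^{-1} Z's.
Z_all = X_tr # z = x, no intercept term
w_ML = np.linalg.inv(Z_all.T.dot(Z_all)).dot(Z_all.T).dot(S_tr).item()
print('ML estimate of w: %.3f (true value: %.1f)' % (w_ML, w_true))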
"""
Explanation: 2.2. Summary
Summarizing, the steps to design a Bayesian parametric regression algorithm are the following:
Assume a parametric data model $p(s| {\bf x},{\bf w})$ and a prior distribution $p({\bf w})$.
Using the data model and the i.i.d. assumption, compute $p({\bf s}|{\bf w})$.
Applying Bayes' rule, compute the posterior distribution $p({\bf w}|{\bf s})$.
Compute the MSE estimate of $s$ given ${\bf x}$.
3. Bayesian regression for a Gaussian model.
We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model.
3.1. Step 1: The Gaussian model.
Let us assume that the likelihood function is given by the Gaussian model described in Sec. 1.3.2.
$$
s~|~{\bf w} \sim {\cal N}\left({\bf z}^\top{\bf w}, \sigma_\varepsilon^2 \right)
$$
and that the prior is also Gaussian
$$
{\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)
$$
3.2. Step 2: Complete data likelihood
Using the i.i.d. assumption,
$$
{\bf s}~|~{\bf w} \sim {\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)
$$
3.3. Step 3: Posterior weight distribution
The posterior distribution of the weights can be computed using the Bayes rule
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Since both $p({\bf s}|{\bf w})$ and $p({\bf w})$ follow a Gaussian distribution, we know also that the joint distribution and the posterior distribution of ${\bf w}$ given ${\bf s}$ are also Gaussian. Therefore,
$${\bf w}~|~{\bf s} \sim {\cal N}\left({\bf w}_\text{MSE}, {\bf V}_{\bf w}\right)$$
After some algebra, it can be shown that the mean and the covariance matrix of the distribution are:
$${\bf V}_{\bf w} = \left[\frac{1}{\sigma_\varepsilon^2} {\bf Z}^{\top}{\bf Z}
+ {\bf V}_p^{-1}\right]^{-1}$$
$${\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$
Exercise 1:
Consider the dataset with one-dimensional inputs given by
End of explanation
"""
# Model parameters
sigma_eps = 0.4
mean_w = np.zeros((1,))
sigma_p = 1e6
Var_p = sigma_p**2* np.eye(1)
"""
Explanation: Fit a Bayesian linear regression model assuming ${\bf z}={\bf x}$ and
End of explanation
"""
# No. of points to analyze
n_points = [1, 2, 4, 8, 16, 32, 64]
# Prepare plots
w_grid = np.linspace(2.7, 3.4, 5000) # Sample the w axis
plt.figure()
# Compute the prior distribution over the grid points in w_grid
# p = <FILL IN>
p = 1.0/(sigma_p*np.sqrt(2*np.pi)) * np.exp(-(w_grid**2)/(2*sigma_p**2))
plt.plot(w_grid, p,'g-')
for k in n_points:
# Select the first k samples
Zk = X_tr[0:k, :]
Sk = S_tr[0:k]
# Parameters of the posterior distribution
# 1. Compute the posterior variance.
# (Make sure that the resulting variable, Var_w, is a 1x1 numpy array.)
# Var_w = <FILL IN>
Var_w = np.linalg.inv(np.dot(Zk.T, Zk)/(sigma_eps**2) + np.linalg.inv(Var_p))
# 2. Compute the posterior mean.
# (Make sure that the resulting variable, w_MSE, is a scalar)
# w_MSE = <FILL IN>
w_MSE = (Var_w.dot(Zk.T).dot(Sk)/(sigma_eps**2)).flatten()
# Compute the posterior distribution over the grid points in w_grid
sigma_w = np.sqrt(Var_w.flatten()) # First we take a scalar standard deviation
# p = <FILL IN>
p = 1.0/(sigma_w*np.sqrt(2*np.pi)) * np.exp(-((w_grid-w_MSE)**2)/(2*sigma_w**2))
plt.plot(w_grid, p,'g-')
plt.fill_between(w_grid, 0, p, alpha=0.8, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=1, antialiased=True)
plt.xlim(w_grid[0], w_grid[-1])
plt.ylim(0, np.max(p))
plt.xlabel('$w$')
plt.ylabel('$p(w|s)$')
display.clear_output(wait=True)
display.display(plt.gcf())
time.sleep(2.0)
# Remove the temporary plots and fix the last one
display.clear_output(wait=True)
plt.show()
"""
Explanation: To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1, 2, 4, 8, \ldots, 64$ (the values in n_points). Draw all these posteriors along with the prior distribution in the same plot.
End of explanation
"""
# <SOL>
x = np.array([-1.0, 3.0])
s_pred = w_MSE * x
plt.figure()
plt.plot(X_tr, S_tr,'b.')
plt.plot(x, s_pred)
plt.show()
# </SOL>
"""
Explanation: Exercise 2:
Note that, in the example above, the model assumptions are correct: the target variables have been generated by a linear model with noise standard deviation sigma_n which is exactly equal to the value assumed by the model, stored in variable sigma_eps. Check what happens if we take sigma_eps=4*sigma_n or sigma_eps=sigma_n/4.
Does the algorithm fail in those cases?
What differences can you observe with respect to the ideal case sigma_eps=sigma_n?
3.4. Step 4: MSE estimate
Noting that
$$
\mathbb{E}\{s~|~{\bf w}, {\bf x}\} = {\bf w}^\top {\bf z}
$$
we can write
$$
\hat{s}_\text{MSE}
= \int {\bf w}^\top {\bf z} \, p({\bf w}|{\bf s}) d{\bf w}
= \left(\int {\bf w} \, p({\bf w}|{\bf s}) d{\bf w}\right)^\top {\bf z}
= {\bf w}_\text{MSE}^\top {\bf z}
$$
where
$$
{\bf w}_\text{MSE}
= \int {\bf w} \, p({\bf w}|{\bf s}) d{\bf w}
= \sigma_\varepsilon^{-2} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}
$$
Therefore, in the Gaussian case, the weighted integration of the prediction functions is equivalent to applying a single model with weights ${\bf w}_\text{MSE}$.
Exercise 3:
Plot the minimum MSE predictions of $s$ for inputs $x$ in the interval [-1, 3].
End of explanation
"""
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6
# Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5
Var_p = sigma_p**2 * np.eye(degree+1)
# Data generation
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-.5,2.5,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
# Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z = np.asmatrix(Z)
#Compute posterior distribution parameters
Var_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(Var_p))
posterior_mean = Var_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
for k in range(nplots):
# Draw weights from the posterior distribution
w_iter = np.random.multivariate_normal(posterior_mean, Var_w)
# Note that polyval assumes the first element of weight vector is the coefficient of
# the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(w_iter[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
# We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid, S_grid_iter, 'm-', label='LS regression')
ax.set_xlim(-.5, 2.5)
ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2)
ax.legend(loc='best')
plt.show()
"""
Explanation: 3.5 Maximum likelihood vs Bayesian Inference. Making predictions
Following an <b>ML approach</b>, we retain a single model, ${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*)$$
For the generative model of Section 3.1.2 (additive i.i.d. Gaussian noise), this distribution is:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*) = \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}_{ML}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$$
* The mean of $s^*$ is just the same as the prediction of the LS model, and the same uncertainty is assumed independently of the observation vector (i.e., the variance of the noise of the model).
* If a single value is to be kept, we would probably keep the mean of the distribution, which is equivalent to the LS prediction.
Using <b>Bayesian inference</b>, we retain all models. Then, the inference of the value $s^* = s({\bf x}^*)$ is carried out by mixing all models, according to the weights given by the posterior distribution.
\begin{align}p({s^*}|{\bf x}^*,{\bf s})
& = \int p({s^*}~|~{\bf w},{\bf x}^*) p({\bf w}~|~{\bf s}) d{\bf w}\end{align}
where:
* $p({s^*}|{\bf w},{\bf x}^*) = \displaystyle\frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$
* $p({\bf w}~|~{\bf s})$ is the posterior distribution of the weights, which can be computed using Bayes' Theorem.
The following fragment of code draws random vectors from $p({\bf w}|{\bf s})$ and plots the corresponding regression curves along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution.
End of explanation
"""
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-1,3,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
#Plot the posterior mean
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=4, linestyle='dashdot', antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-1,3)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
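# Illustrative follow-up (a sketch that reuses the variables defined above):
# the posterior of the target s* adds the noise variance sigma_eps**2 on top
# of the posterior variance of f*, so the error bars for the target itself
# are slightly wider than the shaded region plotted above.
S_mean_BI = np.polyval(posterior_mean[::-1], X_grid)
std_s = np.sqrt(std_x**2 + sigma_eps**2)
plt.fill_between(X_grid, S_mean_BI - std_s, S_mean_BI + std_s,
                 alpha=0.1, facecolor='#089FFF')
plt.show()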
"""
Explanation: Posterior distribution of the target
Explanation: Posterior distribution of the target
Since $f^* = f({\bf x}^*) = {\bf w}^\top{\bf z}$, $f^*$ is also a Gaussian variable whose posterior mean and variance can be calculated as follows:
$$\mathbb{E}\left\{{{\bf z}^*}^\top {\bf w}~|~{\bf s}, {\bf z}^*\right\} =
{{\bf z}^*}^\top \mathbb{E}\left\{{\bf w}~|~{\bf s}\right\} =
{\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$
$$\text{Cov}\left[{{\bf z}^*}^\top {\bf w}~|~{\bf s}, {\bf z}^*\right] =
{{\bf z}^*}^\top \text{Cov}\left[{\bf w}~|~{\bf s}\right] {{\bf z}^*} =
{{\bf z}^*}^\top {\bf V}_{\bf w} {{\bf z}^*}$$
Therefore, $f^*~|~{\bf s}, {\bf x}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s},\; {{\bf z}^*}^\top {\bf V}_{\bf w} {{\bf z}^*} \right)$
Finally, for $s^* = f^* + \varepsilon^*$, the posterior distribution is $s^*~|~{\bf s}, {\bf z}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s},\; {{\bf z}^*}^\top {\bf V}_{\bf w} {{\bf z}^*} + \sigma_\varepsilon^2\right)$
End of explanation
"""
from math import pi
n_points = 15
frec = 3
std_n = 0.2
max_degree = 12
#Prior distribution parameters
sigma_eps = 0.2
mean_w = np.zeros((degree+1,))
sigma_p = 0.5
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Evaluate the posterior evidence
logE = []
for deg in range(max_degree):
Z_iter = Z[:,:deg+1]
logE_iter = -((deg+1)*np.log(2*pi)/2) \
-np.log(np.linalg.det((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points)))/2 \
-S_tr.T.dot(np.linalg.inv((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points))).dot(S_tr)/2
logE.append(logE_iter[0,0])
plt.plot(np.array(range(max_degree))+1,logE)
plt.xlabel('Polynomial degree')
plt.ylabel('log evidence')
plt.show()
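# Small illustrative addition: read off the value of M (number of polynomial
# coefficients) that maximizes the evidence computed above.
best_M = int(np.argmax(logE)) + 1
print('Number of coefficients maximizing the evidence: M = {0} '
      '(polynomial degree {1})'.format(best_M, best_M - 1))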
"""
Explanation: Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
4 Maximum evidence model selection
We have already addressed with Bayesian Inference the following two issues:
For a given degree, how do we choose the weights?
Should we focus on just one model, or can we use several models at once?
However, we still needed some assumptions: a parametric model (i.e., polynomial function and <i>a priori</i> degree selection) and several parameters needed to be adjusted.
Though we can recur to cross-validation, Bayesian inference opens the door to other strategies.
We could argue that, rather than keeping single selections of these parameters, we could simultaneously use several sets of parameters (and/or several parametric forms) and average them in a probabilistic way ... (like we did with the models)
We will follow a simpler strategy, selecting just the most likely set of parameters according to an ML criterion
4.1 Model evidence
The evidence of a model is defined as
$$L = p({\bf s}~|~{\cal M})$$
where ${\cal M}$ denotes the model itself and any free parameters it may have. For instance, for the polynomial model we have assumed so far, ${\cal M}$ would represent the degree of the polynomial, the variance of the additive noise, and the <i>a priori</i> covariance matrix of the weights.
Applying the Theorem of Total probability, we can compute the evidence of the model as
$$L = \int p({\bf s}~|~{\bf f},{\cal M}) p({\bf f}~|~{\cal M}) d{\bf f} $$
For the linear model $f({\bf x}) = {\bf w}^\top{\bf z}$, the evidence can be computed as
$$L = \int p({\bf s}~|~{\bf w},{\cal M}) p({\bf w}~|~{\cal M}) d{\bf w} $$
It is important to notice that these probability density functions are exactly the ones we computed on the previous section. We are just making explicit that they depend on a particular model and the selection of its parameters. Therefore:
$p({\bf s}~|~{\bf w},{\cal M})$ is the likelihood of ${\bf w}$
$p({\bf w}~|~{\cal M})$ is the <i>a priori</i> distribution of the weights
4.2 Model selection via evidence maximization
As we have already mentioned, we could propose a prior distribution for the model parameters, $p({\cal M})$, and use it to infer the posterior. However, this can be very involved (usually no closed-form expressions can be derived)
Alternatively, maximizing the evidence is normally good enough
$${\cal M}_{ML} = \arg\max_{\cal M} p({\bf s}~|~{\cal M})$$
Note that we are using the subscript 'ML' because the evidence can also be referred to as the likelihood of the model.
4.3 Example: Selection of the degree of the polynomial
For the previous example we had (we consider a spherical Gaussian for the weights):
${\bf s}~|~{\bf w},{\cal M}~\sim~{\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)$
${\bf w}~|~{\cal M}~\sim~{\cal N}\left({\bf 0},\sigma_p^2 {\bf I} \right)$
In this case, $p({\bf s}~|~{\cal M})$ follows also a Gaussian distribution, and it can be shown that
$L = p({\bf s}~|~{\cal M}) = {\cal N}\left({\bf 0},\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I} \right)$
If we just pursue the maximization of $L$, this is equivalent to maximizing the log of the evidence
$$\log(L) = -\frac{M}{2} \log(2\pi) -{\frac{1}{2}}\log\mid\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\mid - \frac{1}{2} {\bf s}^\top \left(\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\right)^{-1} {\bf s}$$
where $M$ denotes the length of vector ${\bf z}$ (i.e., the polynomial degree plus one).
The following fragment of code evaluates the evidence of the model as a function of the degree of the polynomial.
End of explanation
"""
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 5 #M-1
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-1,3,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
#Plot the posterior mean
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=4, linestyle='dashdot', antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-1,3)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
plt.show()
"""
Explanation: The above curve may change the position of its maximum from run to run.
We conclude the notebook by plotting the result of the Bayesian inference for M=6
End of explanation
"""
|
materialsproject/mapidoc
|
index.ipynb
|
bsd-3-clause
|
# We start by importing MPRester, which is available from the root import of pymatgen.
from pymatgen import MPRester
from pprint import pprint
# Initializing MPRester. Note that you can call MPRester() with no arguments; MPRester looks for the API key in two places:
# - Supplying it directly as an __init__ arg.
# - Setting the "MAPI_KEY" environment variable.
# Please obtain your API key at https://www.materialsproject.org/dashboard
m = MPRester()
"""
Explanation: Introduction
This notebook demonstrates the use of the Materials API using Python. We will do so with Python Materials Genomics (pymatgen)'s high level tools as well as using the requests package.
Using pymatgen's MPRester (Recommended)
End of explanation
"""
# The following query returns all structures in the Materials Project with formula "Li2O"
pprint(m.get_data("Li2O", prop="structure"))
# This query returns the chemical formula and material id of all materials with formulas of the form "*3O4".
# The material_id is always returned with any use of get_data.
pprint(m.get_data("*3O4", prop="pretty_formula"))
# Getting a DOS object and plotting it. Bandstructures are similar.
dos = m.get_dos_by_material_id("mp-19017")
bs = m.get_bandstructure_by_material_id("mp-19017")
from pymatgen.electronic_structure.plotter import DosPlotter, BSPlotter
%matplotlib inline
dos_plotter = DosPlotter()
dos_plotter.add_dos_dict(dos.get_spd_dos())
dos_plotter.show()
bs_plotter = BSPlotter(bs)
bs_plotter.show()
"""
Explanation: Doing simple queries using the high-level methods.
Many methods in MPRester supports the extremely simple yet powerful query syntax for materials. There are three kinds of queries:
Formulae, e.g., "Li2O", "Fe2O3", "*TiO3"
Chemical systems, e.g., "Li-Fe-O", "*-Fe-O"
Materials ids, e.g., "mp-1234"
The MPRester automatically detects what kind of query is being made. Also, for formulas and chemical systems, wildcards are supported with a *. That means *2O will yield a list of the following formula results:
B2O, Xe2O, Li2O ...
End of explanation
"""
# Get material ids for everything in the Materials Project database
data = m.query(criteria={}, properties=["task_id"])
# Get the energy for materials with material_ids "mp-1234" and "mp-2345".
data = m.query(criteria={"task_id": {"$in": ["mp-1234", "mp-1"]}}, properties=["final_energy"])
print data
# Get the spacegroup symbol for all materials with formula Li2O.
data = m.query(criteria={"pretty_formula": "Li2O"}, properties=["spacegroup.symbol"])
print data
# Get the ICSD of all compounds containing either K, Li or Na with O.
data = m.query(criteria={"elements": {"$in": ["K", "Li", "Na"], "$all": ["O"]}, "nelements": 2},
properties=["icsd_id", "pretty_formula", "spacegroup.symbol"])
pprint(data)
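# One more illustrative query (a sketch; the "band_gap" property name is an
# assumption based on the Materials Project document schema): band gaps of
# binary Ti-O compounds, using Mongo-style operators in the criteria.
data = m.query(criteria={"elements": {"$all": ["Ti", "O"]}, "nelements": 2},
               properties=["pretty_formula", "band_gap"])
pprint(data[:5])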
"""
Explanation: More sophisticated queries using MPRester's very powerful query method.
The query() method essentially works almost like a raw MongoDB query on the Materials Project database. With it, you can perform extremely sophisticated queries to obtain large and customized quantities of materials data easily. The way to use query is
python
query(criteria, properties)
The criteria argument can either be a simple string similar to the powerful wildcard based formula and chemical system search described above, or a full MongoDB query dict with all the features of the Mongo query syntax.
End of explanation
"""
import requests
import os
import json
r = requests.get("https://www.materialsproject.org/rest/v2/materials/Li2O/vasp/final_structure",
headers={"X-API-KEY": os.environ["MAPI_KEY"]})
content = r.json() # a dict
r = requests.get("https://www.materialsproject.org/rest/v2/materials/*3O4/vasp/pretty_formula",
headers={"X-API-KEY": os.environ["MAPI_KEY"]})
content = r.json() # a dict
pprint(content["response"])
"""
Explanation: Using requests (or urllib)
If you decide not to install pymatgen, you can still make use of the Materials API by calling the relevant URLs directly. Here, we will demonstrate how you can do so using the requests library, though any http library should work similarly. All the queries demonstrated here are similar to the above queries.
End of explanation
"""
data = {
"criteria": {
"elements": {"$in": ["Li", "Na", "K"], "$all": ["O"]},
"nelements": 2,
},
"properties": [
"icsd_id",
"pretty_formula",
"spacegroup.symbol"
]
}
r = requests.post('https://materialsproject.org/rest/v2/query',
headers={'X-API-KEY': os.environ["MAPI_KEY"]},
data={k: json.dumps(v) for k,v in data.iteritems()})
content = r.json() # a dict
pprint(content["response"])
"""
Explanation: Note that we cannot demonstrate DOS and Bandstructure plotting here, since those rely on pymatgen's high level plotting utilities for these objects. But you can of course query for the DOS and Bandstructure data and implement your own customized plotting in your favorite graphing utility.
End of explanation
"""
|
zomansud/coursera
|
ml-regression/week-2/week-2-multiple-regression-assignment-1-blank.ipynb
|
mit
|
import graphlab
"""
Explanation: Regression Week 2: Multiple Regression (Interpretation)
The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.
In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:
* Use SFrames to do some feature engineering
* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)
* Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares
* Look at coefficients and interpret their meanings
* Evaluate multiple models via RSS
Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/')
sales.head()
"""
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
train_data,test_data = sales.random_split(.8,seed=0)
print len(train_data)
print len(test_data)
"""
Explanation: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
"""
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
"""
Explanation: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features:
example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:
(Aside: We set validation_set = None to ensure that the results are always the same)
End of explanation
"""
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
"""
Explanation: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
End of explanation
"""
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
"""
Explanation: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
End of explanation
"""
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
predictions = model.predict(data)
# Then compute the residuals/errors
residual = outcome - predictions
# Then square and add them up
residual_squared = residual * residual
RSS = residual_squared.sum()
return(RSS)
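# Equivalent compact form (a sketch; it relies only on the SArray operations
# already used above): RSS as the sum of squared residuals in one expression.
def get_rss_compact(model, data, outcome):
    residuals = outcome - model.predict(data)
    return (residuals * residuals).sum()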
"""
Explanation: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
End of explanation
"""
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
"""
Explanation: Test your function by computing the RSS on TEST data for the example model:
End of explanation
"""
from math import log
"""
Explanation: Create some new features
Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
End of explanation
"""
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
train_data['bed_bath_rooms'] = train_data.apply(lambda x : x['bedrooms'] * x['bathrooms'])
test_data['bed_bath_rooms'] = test_data.apply(lambda x : x['bedrooms'] * x['bathrooms'])
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x : log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x : log(x))
train_data['lat_plus_long'] = train_data.apply(lambda x : x['lat'] + x['long'])
test_data['lat_plus_long'] = test_data.apply(lambda x : x['lat'] + x['long'])
"""
Explanation: Next create the following 4 new features as column in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long
As an example here's the first one:
End of explanation
"""
print 'Bedrooms Squared: ' + str(round(test_data['bedrooms_squared'].mean(), 2))
print 'Bed Bath Rooms: ' + str(round(test_data['bed_bath_rooms'].mean(), 2))
print 'Log Sqft Living: ' + str(round(test_data['log_sqft_living'].mean(), 2))
print 'Lat Plus Long: ' + str(round(test_data['lat_plus_long'].mean(), 2))
"""
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally nonsensical but we will do it anyway (you'll see why)
Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)
End of explanation
"""
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
"""
Explanation: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more:
* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
End of explanation
"""
# Learn the three models: (don't forget to set validation_set = None)
# model 1
model_1_features_model = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features,
validation_set = None)
# model 2
model_2_features_model = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features,
validation_set = None)
# model 3
model_3_features_model = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features,
validation_set = None)
# Examine/extract each model's coefficients:
model_1_features_weight_summary = model_1_features_model.get("coefficients")
print "Model #1"
print model_1_features_weight_summary
model_2_features_weight_summary = model_2_features_model.get("coefficients")
print "Model #2"
print model_2_features_weight_summary
model_3_features_weight_summary = model_3_features_model.get("coefficients")
print "Model #3"
print model_3_features_weight_summary
"""
Explanation: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
End of explanation
"""
# Compute the RSS on TRAINING data for each of the three models and record the values:
rss_model_1_train = get_residual_sum_of_squares(model_1_features_model, train_data, train_data['price'])
print "Model #1"
print rss_model_1_train
rss_model_2_train = get_residual_sum_of_squares(model_2_features_model, train_data, train_data['price'])
print "Model #2"
print rss_model_2_train
rss_model_3_train = get_residual_sum_of_squares(model_3_features_model, train_data, train_data['price'])
print "Model #3"
print rss_model_3_train
"""
Explanation: Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?
Think about what this means.
Comparing multiple models
Now that you've learned three models and extracted the model weights we want to evaluate which model is best.
First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.
End of explanation
"""
# Compute the RSS on TESTING data for each of the three models and record the values:
rss_model_1_test = get_residual_sum_of_squares(model_1_features_model, test_data, test_data['price'])
print "Model #1"
print rss_model_1_test
rss_model_2_test = get_residual_sum_of_squares(model_2_features_model, test_data, test_data['price'])
print "Model #2"
print rss_model_2_test
rss_model_3_test = get_residual_sum_of_squares(model_3_features_model, test_data, test_data['price'])
print "Model #3"
print rss_model_3_test
"""
Explanation: Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?
Now compute the RSS on on TEST data for each of the three models.
End of explanation
"""
|
ChadFulton/statsmodels
|
examples/notebooks/ols.ipynb
|
bsd-3-clause
|
%matplotlib inline
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
np.random.seed(9876789)
"""
Explanation: Ordinary Least Squares
End of explanation
"""
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
"""
Explanation: OLS estimation
Artificial data:
End of explanation
"""
X = sm.add_constant(X)
y = np.dot(X, beta) + e
"""
Explanation: Our model needs an intercept so we add a column of 1s:
End of explanation
"""
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
"""
Explanation: Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:
End of explanation
"""
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
"""
Explanation: OLS non-linear curve but linear in parameters
We simulate artificial data with a non-linear relationship between x and y:
End of explanation
"""
res = sm.OLS(y, X).fit()
print(res.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
"""
Explanation: Extract other quantities of interest:
End of explanation
"""
prstd, iv_l, iv_u = wls_prediction_std(res)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
"""
Explanation: Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.
End of explanation
"""
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
#dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = sm.categorical(groups, drop=True)
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:,1:]))
X = sm.add_constant(X, prepend=False)
beta = [1., 3, -3, 10]
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + e
"""
Explanation: OLS with dummy variables
We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
End of explanation
"""
print(X[:5,:])
print(y[:5])
print(groups)
print(dummy[:5,:])
"""
Explanation: Inspect the data:
End of explanation
"""
res2 = sm.OLS(y, X).fit()
print(res2.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
prstd, iv_l, iv_u = wls_prediction_std(res2)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res2.fittedvalues, 'r--.', label="Predicted")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
legend = ax.legend(loc="best")
"""
Explanation: Draw a plot to compare the true relationship to OLS predictions:
End of explanation
"""
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
"""
Explanation: Joint hypothesis test
F test
We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups:
End of explanation
"""
print(res2.f_test("x2 = x3 = 0"))
"""
Explanation: You can also use formula-like syntax to test hypotheses
End of explanation
"""
beta = [1., 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)
res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
"""
Explanation: Small group effects
If we generate artificial data with smaller group effects, the F test can no longer reject the null hypothesis:
End of explanation
"""
from statsmodels.datasets.longley import load_pandas
y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
"""
Explanation: Multicollinearity
The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
End of explanation
"""
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
"""
Explanation: Fit and summary:
End of explanation
"""
norm_x = X.values
for i, name in enumerate(X):
if name == "const":
continue
norm_x[:,i] = X[name]/np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T,norm_x)
"""
Explanation: Condition number
One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:
End of explanation
"""
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
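# For comparison (a sketch; the attribute is assumed to be available on the
# statsmodels results object): the fitted results also expose a condition
# number computed from the unnormalized design matrix, so its value differs
# from the normalized one computed above.
print(ols_results.condition_number)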
"""
Explanation: Then, we take the square root of the ratio of the biggest to the smallest eigenvalues.
End of explanation
"""
ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit()
print("Percentage change %4.2f%%\n"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))
"""
Explanation: Dropping an observation
Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:
End of explanation
"""
infl = ols_results.get_influence()
"""
Explanation: We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
End of explanation
"""
2./len(X)**.5
print(infl.summary_frame().filter(regex="dfb"))
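# A small follow-up sketch: flag the observations whose DFBETAS exceed the
# 2/sqrt(N) rule of thumb (the threshold computed just above is recomputed here).
dfbetas = infl.summary_frame().filter(regex="dfb")
threshold = 2. / len(X) ** .5
print(dfbetas[(dfbetas.abs() > threshold).any(axis=1)])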
"""
Explanation: In general we may consider DFBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations.
End of explanation
"""
|
gdementen/larray
|
doc/source/tutorial/tutorial_aggregations.ipynb
|
gpl-3.0
|
from larray import *
"""
Explanation: Aggregations
Import the LArray library:
End of explanation
"""
# load the 'demography_eurostat' dataset
demography_eurostat = load_example_data('demography_eurostat')
# extract the 'country', 'gender' and 'time' axes
country = demography_eurostat.country
gender = demography_eurostat.gender
time = demography_eurostat.time
# extract the 'population_5_countries' array as 'population'
population = demography_eurostat.population_5_countries
# show the 'population' array
population
"""
Explanation: Load the population array and related axes from the demography_eurostat dataset:
End of explanation
"""
population.sum(gender)
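# Other aggregation functions from the API reference work the same way; for
# instance (an illustrative sketch), the mean along the same axis:
population.mean(gender)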
"""
Explanation: The LArray library provides many aggregation functions. The list is given in the Aggregation Functions subsection of the API Reference page.
Aggregation operations can be performed on axes or groups. Axes and groups can be mixed.
The main rules are:
Axes are separated by commas ,
Groups belonging to the same axis are grouped inside parentheses ()
Calculate the sum along an axis:
End of explanation
"""
population.sum(country, gender)
"""
Explanation: or several axes (axes are separated by commas ,):
End of explanation
"""
population.sum_by(time)
"""
Explanation: Calculate the sum along all axes except one by appending _by to the aggregation function:
End of explanation
"""
benelux = population.country['Belgium', 'Netherlands', 'Luxembourg'] >> 'benelux'
fr_de = population.country['France', 'Germany'] >> 'FR+DE'
population.sum((benelux, fr_de))
"""
Explanation: Calculate the sum along groups (the groups belonging to the same axis must grouped inside parentheses ()):
End of explanation
"""
population.sum(gender, (benelux, fr_de))
"""
Explanation: Mixing axes and groups in aggregations:
End of explanation
"""
# mixing slices and individual labels leads to the creation of several groups (a tuple of groups)
except_2016 = time[:2015, 2017]
except_2016
# leading to potentially unexpected results
population.sum(except_2016)
# the union() method allows to mix slices and individual labels to create a single group
except_2016 = time[:2015].union(time[2017])
except_2016
population.sum(except_2016)
"""
Explanation: <div class="alert alert-warning">
**Warning:** Mixing slices and individual labels inside the `[ ]` will generate **several groups** (a tuple of groups) instead of a single group.<br>If you want to create a single group using both slices and individual labels, you need to use the `.union()` method (see below).
</div>
End of explanation
"""
|
darcamo/pyphysim
|
apps/comp_BD/Block Diagonalization.ipynb
|
gpl-2.0
|
%pylab inline
"""
Explanation: Simulation Results for the Enhanced Block Diagonalization algorithm
Initializations
Here we import some packages and do some initialization.
End of explanation
"""
import sys
sys.path.append("/home/darlan/cvs_files/pyphysim/")
# xxxxxxxxxx Import Statements xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
from pyphysim.simulations.runner import SimulationRunner
from pyphysim.simulations.parameters import SimulationParameters
from pyphysim.simulations.results import SimulationResults, Result
from pyphysim.comm import modulators, channels
from pyphysim.util.conversion import dB2Linear
from pyphysim.util import misc
import numpy as np
from pprint import pprint
from apps.simulate_comp import plot_spectral_efficience_all_metrics, plot_per_all_metrics
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
"""
Explanation: Now we import some modules we use and add PyPhysim to the Python path.
End of explanation
"""
results_filename_rank_1 = 'bd_results_2x2_ext_int_rank_1'
results_filename_rank_2 = 'bd_results_2x2_ext_int_rank_2'
results_rank_1 = SimulationResults.load_from_file('{0}{1}'.format(results_filename_rank_1, '.pickle'))
SNR_rank_1 = results_rank_1.params['SNR']
results_rank_2 = SimulationResults.load_from_file('{0}{1}'.format(results_filename_rank_2, '.pickle'))
SNR_rank_2 = results_rank_2.params['SNR']
"""
Explanation: Load the results from disk
Now we set the transmit parameters and load the simulation results from the file corresponding to those transmit parameters.
End of explanation
"""
Pe_dBm = 10
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20,7))
fig = plot_spectral_efficience_all_metrics(results_rank_1, Pe_dBm, ax[0])
fig = plot_per_all_metrics(results_rank_1, Pe_dBm, ax[1])
"""
Explanation: Results for external interference of 10dBm (rank 1)
End of explanation
"""
Pe_dBm = 0
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20,7))
fig = plot_spectral_efficience_all_metrics(results_rank_1, Pe_dBm, ax[0])
fig = plot_per_all_metrics(results_rank_1, Pe_dBm, ax[1])
"""
Explanation: Results for external interference of 0dBm (rank 1)
End of explanation
"""
Pe_dBm = -10
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20,7))
fig = plot_spectral_efficience_all_metrics(results_rank_1, Pe_dBm, ax[0])
fig = plot_per_all_metrics(results_rank_1, Pe_dBm, ax[1])
"""
Explanation: Results for external interference of -10dBm (rank 1)
End of explanation
"""
Pe_dBm = 10
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20,7))
fig = plot_spectral_efficience_all_metrics(results_rank_2, Pe_dBm, ax[0])
fig = plot_per_all_metrics(results_rank_2, Pe_dBm, ax[1])
"""
Explanation: Results for external interference of 10dBm (rank 2)
End of explanation
"""
Pe_dBm = 0
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20,7))
fig = plot_spectral_efficience_all_metrics(results_rank_2, Pe_dBm, ax[0])
fig = plot_per_all_metrics(results_rank_2, Pe_dBm, ax[1])
"""
Explanation: Results for external interference of 0dBm (rank 2)
End of explanation
"""
|
akshaybabloo/Car-ND
|
Term_1/CNN_5/LeNet_8/LeNet_8_2.ipynb
|
mit
|
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
"""
Explanation: LeNet Lab Solution
Source: Yann LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
"""
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
"""
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
"""
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
"""
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
"""
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
"""
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
"""
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
"""
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
"""
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
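# Quick shape sanity check (illustrative; builds a throwaway graph so the
# temporary variables do not pollute the default graph used later): feeding a
# 32x32x1 placeholder through LeNet should yield logits of shape (?, 10).
with tf.Graph().as_default():
    x_check = tf.placeholder(tf.float32, (None, 32, 32, 1))
    print(LeNet(x_check).get_shape())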
"""
Explanation: SOLUTION: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the logits produced by the final fully connected layer (Layer 5).
End of explanation
"""
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
"""
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
"""
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
"""
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
"""
Explanation: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
"""
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation
"""
|
bearing/dosenet-analysis
|
Programming Lesson Modules/Module 3- Simple Plots and Histograms.ipynb
|
mit
|
%matplotlib inline
# Enables IPython matplotlib mode which allows plots to be shown in
# markdown sections. Not necessary in functionality of code.
import csv
import io
import urllib.request
import matplotlib.pyplot as plt
# matplotlib is one of the most frequently used Python extensions for plotting.
# The plt identifier for matplotlib.pyplot is a convention used in many codes.
"""
Explanation: Module 3- Simple Plots and Histograms
author: Radley Rigonan
This module demonstrates basic usage of matplotlib to create scatter plots, line graphs, and histograms. These are fundamental formats that can be used to display information in a clear and organized manner.
In this module, I will be using etch.csv AND lbl.csv which can be accessed from the following links.
https://radwatch.berkeley.edu/sites/default/files/dosenet/etch.csv
http://radwatch.berkeley.edu/sites/default/files/dosenet/lbl.csv
End of explanation
"""
def importwebCSV(url):
response = urllib.request.urlopen(url)
reader = csv.reader(io.TextIOWrapper(response))
datetime = []
cpm = []
line = 0
for row in reader:
if line != 0:
datetime.append(row[0])
cpm.append(float(row[6]))
line += 1
# Python syntax for line = line + 1 (+1 to current value for line)
return (datetime,cpm)
url_etch = 'https://radwatch.berkeley.edu/sites/default/files/dosenet/etch.csv'
datetime_etch, cpm_etch = importwebCSV(url_etch)
# run function and store return values as datetime_etch and cpm_etch
url_lbl = 'http://radwatch.berkeley.edu/sites/default/files/dosenet/lbl.csv'
datetime_lbl, cpm_lbl = importwebCSV(url_lbl)
# run function and store return values as datetime_lbl and cpm_lbl
"""
Explanation: First we want to import two sets of data from DoseNet:
You should recognize the following steps! If you are not famiar with importing data from a DDL, then check previous modules on retrieving and importing CSVs.
End of explanation
"""
def line(cpm, plot_title):
# This function takes two arguments:
# CPM data in a list and the plot title as a string
plt.plot(cpm)
plt.ylabel('Counts Per Minute') # label the y-axis
plt.title(plot_title) # put a title!
line(cpm_etch,'Etcheverry DoseNet Measurements: Line Graph')
# run function to show the plot!
"""
Explanation: By default, the matplotlib.pyplot.plot command plots data points in a line graph. If you input a single list of floating point values into plt.plot, it will plot the list on the y-axis. The following commands create a simple line graph with our CPM data.
End of explanation
"""
def scatter(cpm, plot_title):
plt.plot(cpm,'ro')
# The 'ro' modifier after cpm_lbl does two things:
# 'r' changes the color to red and 'o' creates points instead of lines
plt.ylabel('Counts Per Minute')
plt.title(plot_title)
scatter(cpm_lbl,'LBL DoseNet Measurements: Scatter Plot')
"""
Explanation: Alternatively, you can create a scatter plot with a modifier in plt.plot's inputs. The following commands create a simple scatter plot with CPM data from our device in Lawrence Berkeley.
For more insight into line, axes, and other plot modifiers, matplotlib's official website contains detailed documentation on plotting and graphic design.
End of explanation
"""
def histogram(cpm, plot_title):
plt.hist(cpm,bins=31)
plt.ylabel('Frequency')
plt.xlabel('Counts Per Minute')
plt.title(plot_title)
histogram(cpm_etch,'Etcheverry DoseNet Measurements: Histogram')
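# Complementary summary statistics (illustrative): the mean and standard
# deviation that the histogram visualizes can be computed directly from the
# CPM list with numpy.
import numpy as np
print('Mean CPM: {:.2f}'.format(np.mean(cpm_etch)))
print('Std CPM: {:.2f}'.format(np.std(cpm_etch)))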
"""
Explanation: Histograms are graphics that depict distributions of data. Histograms are incredibly useful in statistical analysis of raw data. Key pieces of information conveyed by a histogram include: mean, standard deviation, and data stability.
The following commands demonstrate the plt.hist command. The plt.hist command organizes a list of data into a histogram, enabling us to streamline a process that can be extremely tedious by hand.
End of explanation
"""
def subplot_overlay(cpm1, plot_title1, cpm2, plot_title2, cpm3, cpm4, plot_title3):
plt.subplot(2,2,1)
# This means you are making a subplot with 2 rows, 2 columns, and are
# currently working on the 1st one (plot in top left)
scatter(cpm1,plot_title1)
plt.subplot(2,2,2)
# subplot with 2 rows, 2 columns, 2nd plot (top right)
histogram(cpm2,plot_title2)
plt.subplot(2,1,2)
# 2 rows, 1 column, 2nd plot. This plot will be placed on the 2nd plot
# (bottom-most plot) as if it were a 2-by-1 grid
    line(cpm3,'')
    line(cpm4,'')
    # line() draws each series on the current axes; the legend entries are
    # supplied separately below via the legend_labels list passed to plt.legend()
plt.title(plot_title3)
legend_labels = ['cpm3', 'cpm4']
plt.legend(legend_labels, loc='best')
# Giving the plots labels enables us to create a legend
plt.tight_layout()
# Tightens title and axes labels and prevents overlapping
plt.show()
subplot_overlay(cpm_etch, 'Etcheverry Scatter Plot',
cpm_lbl, 'LBL Histogram',
cpm_lbl, cpm_etch, 'LBL (cpm3) and Etcheverry (cpm4) Line Plot')
"""
Explanation: The plot command can be used in conjunction with a variety of other commands to create complex graphics. Two straightforward examples of this are subplots and overlaid plots. Subplots allow multiple plots to be placed onto single canvas. Similarly, there are also methods to place multiple sets of data on a single axis to create overlapping plots.
The following example combines everything that had been introduced in this module and incorporates subplots and overlapping plots.
End of explanation
"""
|