repo_name | path | license | content
---|---|---|---|
jbchouinard/dand_project1
|
Data Analyst Nanodegree Project 1.ipynb
|
mit
|
# Imports
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
from scipy.stats import ttest_rel, norm
# Read in data
df = pd.read_csv('stroopdata.csv')
"""
Explanation: Test a Perceptual Phenomenon
End of explanation
"""
IQR_congruent = df['Congruent'].quantile(0.75) - df['Congruent'].quantile(0.25)
IQR_incongruent = df['Incongruent'].quantile(0.75) - df['Incongruent'].quantile(0.25)
print(df['Congruent'].median(), df['Incongruent'].median(), IQR_congruent, IQR_incongruent)
"""
Explanation: What is our independent variable? What is our dependent variable?
Independent variable: the text color condition (congruent or incongruent).
Dependent variable: the reaction time (the time it takes to name the ink color).
What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices.
Null hypothesis: the reaction time for the congruent and incongruent conditions is the same.
$$\mu(\text{congruent}) = \mu(\text{incongruent})$$
Alternative hypothesis: the reaction times for the two conditions are different.
$$\mu(\text{congruent}) \neq \mu(\text{incongruent})$$
It's hard to imagine the reverse being true, i.e., that reaction times would be faster in the incongruent condition. On the other hand, it is not a priori implausible that the reaction times would be the same.
We will use a paired t-test to determine whether the mean reaction times under the two conditions are equal. The paired t-test assesses whether the mean of the per-participant differences is significantly different from zero.
We believe the use of a paired t-test is appropriate since the following conditions are met (see [1], Analysis checklist: Paired t test):
1. Are the differences distributed according to a Gaussian distribution?
As will be shown graphically in a later section, the differences are distributed close to normally.
2. Was the pairing effective?
The pairing was part of the design of the experiment; each pair represents trials from a single participant. This is an effective pairing method.
3. Are the pairs independent?
Each pair corresponds to trials by a different participant; they are independent.
4. Are you comparing exactly two groups?
Yes.
5. If you chose a one-tail P value, did you predict correctly?
We did not choose a one-tail P value (although we could have).
6. Do you care about differences or ratios?
We expect the control values and differences to be of the same order of magnitude, so differences should be fine.
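For reference, the paired t-test statistic is computed from the per-participant differences $d_i = x_{\text{incongruent},i} - x_{\text{congruent},i}$:
$$t = \frac{\bar{d}}{s_d / \sqrt{n}}$$
where $\bar{d}$ is the mean difference, $s_d$ the sample standard deviation of the differences, and $n$ the number of pairs.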
Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability.
End of explanation
"""
%matplotlib inline
plt.plot(df['Incongruent']-df['Congruent'], 'o')
plt.title('Difference Between Congruent and \n Incongruent Conditions on Stroop Task')
plt.xlabel('Participant')
plt.ylabel('Reaction time');
"""
Explanation: Median reaction time, congruent condition: 14.357
Median reaction time, incongruent condition: 21.017
Interquartile range, congruent condition: 4.305
Interquartile range, incongruent condition: 5.335.
Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots
End of explanation
"""
x = np.linspace(-10, 30, 1000)
plt.plot(x, norm.pdf(x, 7.3, 5), 'g-', label="N(7.3, 5)")
sns.kdeplot(df['Incongruent']-df['Congruent'], label="KDE")
plt.title('Kernel Density Estimate of the Distribution\nof Differences Between the Two Conditions');
plt.legend();
"""
Explanation: Notice that not a single participant performed better on the incongruent task than on the congruent task (all differences are above zero).
End of explanation
"""
ttest_rel(df['Congruent'], df['Incongruent'])
"""
Explanation: At a glance, the difference distribution looks roughly normally-distributed (a manually fitted normal distribution is plotted for comparison), justifying the use of a paired t-test.
Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?
End of explanation
"""
|
trangel/Data-Science
|
reinforcement_learning/crossentropy_method.ipynb
|
gpl-3.0
|
# In Google Colab, uncomment this:
# !wget https://bit.ly/2FMJP5K -O setup.py && bash setup.py
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import pandas as pd
env = gym.make("Taxi-v2")
env.reset()
env.render()
n_states = env.observation_space.n
n_actions = env.action_space.n
print("n_states=%i, n_actions=%i" % (n_states, n_actions))
"""
Explanation: Crossentropy method
This notebook will teach you to solve reinforcement learning problems with the crossentropy method. We'll follow up by scaling everything up and using a neural network policy.
End of explanation
"""
policy = np.ones(shape=(n_states, n_actions)) * 1 / n_actions
assert type(policy) in (np.ndarray, np.matrix)
assert np.allclose(policy, 1./n_actions)
assert np.allclose(np.sum(policy, axis=1), 1)
"""
Explanation: Create stochastic policy
This time our policy should be a probability distribution.
policy[s,a] = P(take action a | in state s)
Since we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.
Please initialize the policy uniformly, that is, the probabilities of all actions should be equal.
End of explanation
"""
def generate_session(policy, t_max=10**4):
"""
Play game until end or for t_max ticks.
:param policy: an array of shape [n_states,n_actions] with action probabilities
:returns: list of states, list of actions and sum of rewards
"""
states, actions = [], []
total_reward = 0.
s = env.reset()
def sample_action(policy, s):
action_p = policy[s, :].reshape(-1,)
#highest_p_actions = np.argwhere(action_p == np.amax(action_p)).reshape(-1,)
#non_zero_p_actions = np.argwhere(action_p > 0).reshape(-1,)
#random_choice = np.random.choice(highest_p_actions)
#random_choice = np.random.choice(non_zero_p_actions)
random_choice = np.random.choice(np.arange(len(action_p)), p=action_p)
return random_choice
for t in range(t_max):
a = sample_action(policy, s) #<sample action from policy(hint: use np.random.choice) >
new_s, r, done, info = env.step(a)
# Record state, action and add up reward to states,actions and total_reward accordingly.
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
s, a, r = generate_session(policy)
assert type(s) == type(a) == list
assert len(s) == len(a)
assert type(r) in [float, np.float]
# let's see the initial reward distribution
import matplotlib.pyplot as plt
%matplotlib inline
sample_rewards = [generate_session(policy, t_max=1000)[-1] for _ in range(200)]
plt.hist(sample_rewards, bins=20)
plt.vlines([np.percentile(sample_rewards, 50)], [0], [100], label="50'th percentile", color='green')
plt.vlines([np.percentile(sample_rewards, 90)], [0], [100], label="90'th percentile", color='red')
plt.legend()
"""
Explanation: Play the game
Just like before, but we also record all states and actions we took.
End of explanation
"""
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
"""
#<Compute minimum reward for elite sessions. Hint: use np.percentile >
reward_threshold = np.percentile(rewards_batch, percentile)
#elite_states = <your code here >
#elite_actions = <your code here >
elite_states = []
elite_actions = []
for i, reward in enumerate(rewards_batch):
if reward >= reward_threshold:
elite_states = elite_states + states_batch[i]
elite_actions = elite_actions + actions_batch[i]
return elite_states, elite_actions
states_batch = [
[1, 2, 3], # game1
[4, 2, 0, 2], # game2
[3, 1], # game3
]
actions_batch = [
[0, 2, 4], # game1
[3, 2, 0, 1], # game2
[3, 3], # game3
]
rewards_batch = [
3, # game1
4, # game2
5, # game3
]
test_result_0 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=0)
test_result_40 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=30)
test_result_90 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=90)
test_result_100 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=100)
assert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \
and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]),\
"For percentile 0 you should return all states and actions in chronological order"
assert np.all(test_result_40[0] == [4, 2, 0, 2, 3, 1]) and \
np.all(test_result_40[1] == [3, 2, 0, 1, 3, 3]),\
"For percentile 30 you should only select states/actions from two first"
assert np.all(test_result_90[0] == [3, 1]) and \
np.all(test_result_90[1] == [3, 3]),\
"For percentile 90 you should only select states/actions from one game"
assert np.all(test_result_100[0] == [3, 1]) and\
np.all(test_result_100[1] == [3, 3]),\
"Please make sure you use >=, not >. Also double-check how you compute percentile."
print("Ok!")
def update_policy(elite_states, elite_actions):
"""
Given old policy and a list of elite states/actions from select_elites,
return new updated policy where each action probability is proportional to
policy[s_i,a_i] ~ #[occurrences of s_i and a_i in elite states/actions]
Don't forget to normalize policy to get valid probabilities and handle 0/0 case.
In case you never visited a state, set probabilities for all actions to 1./n_actions
:param elite_states: 1D list of states from elite sessions
:param elite_actions: 1D list of actions from elite sessions
"""
new_policy = np.zeros([n_states, n_actions])
#<Your code here: update probabilities for actions given elite states & actions >
# Don't forget to set 1/n_actions for all actions in unvisited states.
for state, action in zip(elite_states, elite_actions):
new_policy[state, action] = new_policy[state, action] + 1
for state in range(n_states):
s = np.sum(new_policy[state, :])
if s == 0:
new_policy[state, :] = 1. / n_actions
else:
new_policy[state, :] = new_policy[state, :] / s
return new_policy
elite_states = [1, 2, 3, 4, 2, 0, 2, 3, 1]
elite_actions = [0, 2, 4, 3, 2, 0, 1, 3, 3]
new_policy = update_policy(elite_states, elite_actions)
assert np.isfinite(new_policy).all(
), "Your new policy contains NaNs or +-inf. Make sure you don't divide by zero."
assert np.all(
new_policy >= 0), "Your new policy can't have negative action probabilities"
assert np.allclose(new_policy.sum(
axis=-1), 1), "Your new policy should be a valid probability distribution over actions"
reference_answer = np.array([
[1., 0., 0., 0., 0.],
[0.5, 0., 0., 0.5, 0.],
[0., 0.33333333, 0.66666667, 0., 0.],
[0., 0., 0., 0.5, 0.5]])
assert np.allclose(new_policy[:4, :5], reference_answer)
print("Ok!")
"""
Explanation: Crossentropy method steps
End of explanation
"""
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
# reset policy just in case
policy = np.ones([n_states, n_actions]) / n_actions
n_sessions = 250 # sample this many sessions
percentile = 30  # reward percentile used as the elite threshold: sessions at or above it are kept
learning_rate = 0.5  # weight of the new policy when blending it with the old one, for stability
log = []
for i in range(100):
%time sessions = [generate_session(policy) for x in range(n_sessions)] #[ < generate a list of n_sessions new sessions > ]
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile=percentile) #<select elite states/actions >
new_policy = update_policy(elite_states, elite_actions) #<compute new policy >
policy = learning_rate * new_policy + (1 - learning_rate) * policy
# display results on chart
show_progress(rewards_batch, log, percentile)
"""
Explanation: Training loop
Generate sessions, select N best and fit to those.
End of explanation
"""
from submit import submit_taxi
submit_taxi(generate_session, policy, 'tonatiuh_rangel@hotmail.com', '7uvgN7bBzpJzVw9f')
"""
Explanation: Reflecting on results
You may have noticed that the taxi problem quickly converges from <-1000 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
In case CEM failed to learn how to win from one distinct starting point, it will simply discard it because no sessions from that starting point will make it into the "elites".
To mitigate that problem, you can either reduce the threshold for elite sessions (duct tape way) or change the way you evaluate strategy (theoretically correct way). You can first sample an action for every possible state and then evaluate this choice of actions by running several games and averaging rewards.
Submit to coursera
End of explanation
"""
|
statsmaths/stat665
|
lectures/lec22/notebook22.ipynb
|
gpl-2.0
|
%pylab inline
import copy
import numpy as np
import pandas as pd
import sys
import os
import re
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop
from keras.layers.normalization import BatchNormalization
from keras.layers.wrappers import TimeDistributed
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import SimpleRNN, LSTM, GRU
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from gensim.models import word2vec
"""
Explanation: Problem Set 8 Review & Transfer Learning with word2vec
Import various modules that we need for this notebook (now using Keras 1.0.0)
End of explanation
"""
dir_in = "../../../class_data/stl10/"
X_train = np.genfromtxt(dir_in + 'X_train_new.csv', delimiter=',')
Y_train = np.genfromtxt(dir_in + 'Y_train.csv', delimiter=',')
X_test = np.genfromtxt(dir_in + 'X_test_new.csv', delimiter=',')
Y_test = np.genfromtxt(dir_in + 'Y_test.csv', delimiter=',')
"""
Explanation: I. Problem Set 8, Part 1
Let's work through a solution to the first part of problem set 8, where you applied various techniques to the STL-10 dataset.
End of explanation
"""
Y_train_flat = np.zeros(Y_train.shape[0])
Y_test_flat = np.zeros(Y_test.shape[0])
for i in range(10):
Y_train_flat[Y_train[:,i] == 1] = i
Y_test_flat[Y_test[:,i] == 1] = i
"""
Explanation: And construct a flattened version of it, for the linear model case:
End of explanation
"""
model = Sequential()
model.add(Dense(1024, input_shape = (X_train.shape[1],)))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms,
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=32, nb_epoch=5, verbose=1)
test_rate = model.evaluate(X_test, Y_test)[1]
print("Test classification rate %0.05f" % test_rate)
"""
Explanation: (1) neural network
We now build and evaluate a neural network.
End of explanation
"""
svc_obj = SVC(kernel='linear', C=1)
svc_obj.fit(X_train, Y_train_flat)
pred = svc_obj.predict(X_test)
pd.crosstab(pred, Y_test_flat)
c_rate = sum(pred == Y_test_flat) / len(pred)
print("Test classification rate %0.05f" % c_rate)
"""
Explanation: (2) support vector machine
And now, a basic linear support vector machine.
End of explanation
"""
lr = LogisticRegression(penalty = 'l1')
lr.fit(X_train, Y_train_flat)
pred = lr.predict(X_test)
pd.crosstab(pred, Y_test_flat)
c_rate = sum(pred == Y_test_flat) / len(pred)
print("Test classification rate %0.05f" % c_rate)
"""
Explanation: (3) penalized logistic model
And finally, an L1 penalized model:
End of explanation
"""
dir_in = "../../../class_data/chi_python/"
X_train = np.genfromtxt(dir_in + 'chiCrimeMat_X_train.csv', delimiter=',')
Y_train = np.genfromtxt(dir_in + 'chiCrimeMat_Y_train.csv', delimiter=',')
X_test = np.genfromtxt(dir_in + 'chiCrimeMat_X_test.csv', delimiter=',')
Y_test = np.genfromtxt(dir_in + 'chiCrimeMat_Y_test.csv', delimiter=',')
"""
Explanation: II. Problem Set 8, Part 2
Now, let's read in the Chicago crime dataset and see how well we can get a neural network to perform on it.
End of explanation
"""
model = Sequential()
model.add(Dense(1024, input_shape = (434,)))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(5))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms,
metrics=['accuracy'])
# downsample, if need be:
num_sample = X_train.shape[0]
model.fit(X_train[:num_sample], Y_train[:num_sample], batch_size=32,
nb_epoch=10, verbose=1)
test_rate = model.evaluate(X_test, Y_test)[1]
print("Test classification rate %0.05f" % test_rate)
"""
Explanation: Now, build a neural network for the model
End of explanation
"""
path = "../../../class_data/aclImdb/"
ff = [path + "train/pos/" + x for x in os.listdir(path + "train/pos")] + \
[path + "train/neg/" + x for x in os.listdir(path + "train/neg")] + \
[path + "test/pos/" + x for x in os.listdir(path + "test/pos")] + \
[path + "test/neg/" + x for x in os.listdir(path + "test/neg")]
TAG_RE = re.compile(r'<[^>]+>')
def remove_tags(text):
return TAG_RE.sub('', text)
input_label = ([1] * 12500 + [0] * 12500) * 2
input_text = []
for f in ff:
with open(f) as fin:
    # read and clean the review text while the file is still open
    input_text += [remove_tags(" ".join(fin.readlines()))]
"""
Explanation: III. Transfer Learning IMDB Sentiment analysis
Now, let's use the word2vec embeddings on the IMDB sentiment analysis corpus. This will allow us to use a significantly larger vocabulary of words. I'll start by reading in the IMDB corpus again from the raw text.
End of explanation
"""
num_words = 5000
max_len = 400
tok = Tokenizer(num_words)
tok.fit_on_texts(input_text[:25000])
X_train = tok.texts_to_sequences(input_text[:25000])
X_test = tok.texts_to_sequences(input_text[25000:])
y_train = input_label[:25000]
y_test = input_label[25000:]
X_train = sequence.pad_sequences(X_train, maxlen=max_len)
X_test = sequence.pad_sequences(X_test, maxlen=max_len)
words = []
for iter in range(num_words):
words += [key for key,value in tok.word_index.items() if value==iter+1]
loc = "/Users/taylor/files/word2vec_python/GoogleNews-vectors-negative300.bin"
w2v = word2vec.Word2Vec.load_word2vec_format(loc, binary=True)
weights = np.zeros((num_words,300))
for idx, w in enumerate(words):
try:
weights[idx,:] = w2v[w]
except KeyError as e:
pass
model = Sequential()
model.add(Embedding(num_words, 300, input_length=max_len))
model.add(Dropout(0.5))
model.add(GRU(16,activation='relu'))
model.add(Dense(128))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.layers[0].set_weights([weights])
model.layers[0].trainable = False
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=5, verbose=1,
validation_data=(X_test, y_test))
"""
Explanation: I'll fit a significantly larger vocabulary this time, as the embeddings are basically given for us.
End of explanation
"""
|
sonium0/pymatgen
|
examples/Ordering Disordered Structures.ipynb
|
mit
|
# Let us start by creating a disordered CuAu fcc structure.
from pymatgen import Structure, Lattice
specie = {"Cu0+": 0.5, "Au0+": 0.5}
cuau = Structure.from_spacegroup("Fm-3m", Lattice.cubic(3.677), [specie], [[0, 0, 0]])
print(cuau)
"""
Explanation: Introduction
This notebook demonstrates how to carry out an ordering of a disordered structure using pymatgen.
End of explanation
"""
from pymatgen.transformations.standard_transformations import OrderDisorderedStructureTransformation
trans = OrderDisorderedStructureTransformation()
ss = trans.apply_transformation(cuau, return_ranked_list=100)
print(len(ss))
print(ss[0])
"""
Explanation: Note that each site is now 50% occupied by Cu and Au. Because the ordering algorithms uses an Ewald summation to rank the structures, you need to explicitly specify the oxidation state for each species, even if it is 0. Let us now perform ordering of these sites using two methods.
Method 1 - Using the OrderDisorderedStructureTransformation
The first method is to use the OrderDisorderedStructureTransformation.
End of explanation
"""
from pymatgen.analysis.structure_matcher import StructureMatcher
matcher = StructureMatcher()
groups = matcher.group_structures([d["structure"] for d in ss])
print(len(groups))
print(groups[0][0])
"""
Explanation: Note that the OrderDisorderedStructureTransformation (with a sufficiently large return_ranked_list parameter) returns all orderings, including duplicates, without accounting for symmetry. A computed Ewald energy is returned together with each structure. To eliminate duplicates, the best way is to use StructureMatcher's group_structures method, as demonstrated below.
End of explanation
"""
from pymatgen.transformations.advanced_transformations import EnumerateStructureTransformation
specie = {"Cu": 0.5, "Au": 0.5}
cuau = Structure.from_spacegroup("Fm-3m", Lattice.cubic(3.677), [specie], [[0, 0, 0]])
trans = EnumerateStructureTransformation(max_cell_size=3)
ss = trans.apply_transformation(cuau, return_ranked_list=1000)
print(len(ss))
print("cell sizes are %s" % ([len(d["structure"]) for d in ss]))
"""
Explanation: Method 2 - Using the EnumerateStructureTransformation
If you have enumlib installed, you can use the EnumerateStructureTransformation. This automatically takes care of symmetrically equivalent orderings and can enumerate supercells, but it is much more prone to parameter sensitivity and cannot handle very large structures. The example below shows an enumeration of CuAu for cell sizes of up to 3 (max_cell_size=3).
End of explanation
"""
|
AllenDowney/ModSimPy
|
soln/chap02soln.ipynb
|
mit
|
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
# set the random number generator
np.random.seed(7)
# If this cell runs successfully, it produces no output.
"""
Explanation: Modeling and Simulation in Python
Chapter 2
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
bikeshare = State(olin=10, wellesley=2)
"""
Explanation: Modeling a bikeshare system
We'll start with a State object that represents the number of bikes at each station.
When you display a State object, it lists the state variables and their values:
End of explanation
"""
bikeshare.olin
bikeshare.wellesley
"""
Explanation: We can access the state variables using dot notation.
End of explanation
"""
bikeshare.olin -= 1
"""
Explanation: Exercise: What happens if you spell the name of a state variable wrong? Edit the previous cell, change the spelling of wellesley, and run the cell again.
The error message uses the word "attribute", which is another name for what we are calling a state variable.
Exercise: Add a third attribute called babson with initial value 0, and display the state of bikeshare again.
Updating
We can use the update operators += and -= to change state variables.
End of explanation
"""
bikeshare
"""
Explanation: If we display bikeshare, we should see the change.
End of explanation
"""
bikeshare.wellesley += 1
bikeshare
"""
Explanation: Of course, if we subtract a bike from olin, we should add it to wellesley.
End of explanation
"""
def bike_to_wellesley():
bikeshare.olin -= 1
bikeshare.wellesley += 1
"""
Explanation: Functions
We can take the code we've written so far and encapsulate it in a function.
End of explanation
"""
bike_to_wellesley()
bikeshare
"""
Explanation: When you define a function, it doesn't run the statements inside the function, yet. When you call the function, it runs the statements inside.
End of explanation
"""
bike_to_wellesley
"""
Explanation: One common error is to omit the parentheses, which has the effect of looking up the function, but not calling it.
End of explanation
"""
# Solution
def bike_to_olin():
bikeshare.wellesley -= 1
bikeshare.olin += 1
# Solution
bike_to_olin()
bikeshare
"""
Explanation: The output indicates that bike_to_wellesley is a function defined in a "namespace" called __main__, but you don't have to understand what that means.
Exercise: Define a function called bike_to_olin that moves a bike from Wellesley to Olin. Call the new function and display bikeshare to confirm that it works.
End of explanation
"""
help(flip)
"""
Explanation: Conditionals
modsim.py provides flip, which takes a probability and returns either True or False, which are special values defined by Python.
The Python function help looks up a function and displays its documentation.
End of explanation
"""
flip(0.7)
"""
Explanation: In the following example, the probability is 0.7 or 70%. If you run this cell several times, you should get True about 70% of the time and False about 30%.
End of explanation
"""
if flip(0.7):
print('heads')
"""
Explanation: In the following example, we use flip as part of an if statement. If the result from flip is True, we print heads; otherwise we do nothing.
End of explanation
"""
if flip(0.7):
print('heads')
else:
print('tails')
"""
Explanation: With an else clause, we can print heads or tails depending on whether flip returns True or False.
End of explanation
"""
bikeshare = State(olin=10, wellesley=2)
"""
Explanation: Step
Now let's get back to the bikeshare state. Again let's start with a new State object.
End of explanation
"""
if flip(0.5):
bike_to_wellesley()
print('Moving a bike to Wellesley')
bikeshare
"""
Explanation: Suppose that in any given minute, there is a 50% chance that a student picks up a bike at Olin and rides to Wellesley. We can simulate that like this.
End of explanation
"""
if flip(0.4):
bike_to_olin()
print('Moving a bike to Olin')
bikeshare
"""
Explanation: And maybe at the same time, there is also a 40% chance that a student at Wellesley rides to Olin.
End of explanation
"""
def step():
if flip(0.5):
bike_to_wellesley()
print('Moving a bike to Wellesley')
if flip(0.4):
bike_to_olin()
print('Moving a bike to Olin')
"""
Explanation: We can wrap that code in a function called step that simulates one time step. In any given minute, a student might ride from Olin to Wellesley, from Wellesley to Olin, or both, or neither, depending on the results of flip.
End of explanation
"""
step()
bikeshare
"""
Explanation: Since this function takes no parameters, we call it like this:
End of explanation
"""
def step(p1, p2):
if flip(p1):
bike_to_wellesley()
print('Moving a bike to Wellesley')
if flip(p2):
bike_to_olin()
print('Moving a bike to Olin')
"""
Explanation: Parameters
As defined in the previous section, step is not as useful as it could be, because the probabilities 0.5 and 0.4 are "hard coded".
It would be better to generalize this function so it takes the probabilities p1 and p2 as parameters:
End of explanation
"""
step(0.5, 0.4)
bikeshare
"""
Explanation: Now we can call it like this:
End of explanation
"""
# Solution
def step(p1, p2):
print(p1, p2)
if flip(p1):
bike_to_wellesley()
print('Moving a bike to Wellesley')
if flip(p2):
bike_to_olin()
print('Moving a bike to Olin')
step(0.3, 0.2)
"""
Explanation: Exercise: At the beginning of step, add a print statement that displays the values of p1 and p2. Call it again with values 0.3, and 0.2, and confirm that the values of the parameters are what you expect.
End of explanation
"""
def step(p1, p2):
if flip(p1):
bike_to_wellesley()
if flip(p2):
bike_to_olin()
"""
Explanation: For loop
Before we go on, I'll redefine step without the print statements.
End of explanation
"""
bikeshare = State(olin=10, wellesley=2)
"""
Explanation: And let's start again with a new State object:
End of explanation
"""
for i in range(4):
bike_to_wellesley()
bikeshare
"""
Explanation: We can use a for loop to move 4 bikes from Olin to Wellesley.
End of explanation
"""
for i in range(4):
step(0.3, 0.2)
bikeshare
"""
Explanation: Or we can simulate 4 random time steps.
End of explanation
"""
for i in range(60):
step(0.3, 0.2)
bikeshare
"""
Explanation: If each step corresponds to a minute, we can simulate an entire hour like this.
End of explanation
"""
results = TimeSeries()
"""
Explanation: After 60 minutes, you might see that the number of bikes at Olin is negative. We'll fix that problem in the next notebook.
But first, we want to plot the results.
TimeSeries
modsim.py provides an object called a TimeSeries that can contain a sequence of values changing over time.
We can create a new, empty TimeSeries like this:
End of explanation
"""
results[0] = bikeshare.olin
results
"""
Explanation: And we can add a value to the TimeSeries like this:
End of explanation
"""
bikeshare = State(olin=10, wellesley=2)
"""
Explanation: The 0 in brackets is an index that indicates that this value is associated with time step 0.
Now we'll use a for loop to save the results of the simulation. I'll start one more time with a new State object.
End of explanation
"""
for i in range(10):
step(0.3, 0.2)
results[i] = bikeshare.olin
"""
Explanation: Here's a for loop that runs 10 steps and stores the results.
End of explanation
"""
results
"""
Explanation: Now we can display the results.
End of explanation
"""
results.mean()
results.describe()
"""
Explanation: A TimeSeries is a specialized version of a Pandas Series, so we can use any of the functions provided by Series, including several that compute summary statistics:
End of explanation
"""
plot(results, label='Olin')
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
savefig('figs/chap02-fig01.pdf')
"""
Explanation: You can read the documentation of Series here.
Plotting
We can also plot the results like this.
End of explanation
"""
help(decorate)
"""
Explanation: decorate, which is defined in the modsim library, adds a title and labels the axes.
End of explanation
"""
help(savefig)
"""
Explanation: savefig() saves a figure in a file.
End of explanation
"""
# Solution
def run_simulation(p1, p2, num_steps):
olin = TimeSeries()
wellesley = TimeSeries()
for i in range(num_steps):
step(p1, p2)
olin[i] = bikeshare.olin
wellesley[i] = bikeshare.wellesley
plot(olin, label='Olin')
plot(wellesley, label='Wellesley')
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
# Solution
bikeshare = State(olin=10, wellesley=2)
run_simulation(0.3, 0.2, 60)
"""
Explanation: The suffix of the filename indicates the format you want. This example saves the current figure in a PDF file named figs/chap02-fig01.pdf.
Exercise: Wrap the code from this section in a function named run_simulation that takes three parameters, named p1, p2, and num_steps.
It should:
Create a TimeSeries object to hold the results.
Use a for loop to run step the number of times specified by num_steps, passing along the specified values of p1 and p2.
After each step, it should save the number of bikes at Olin in the TimeSeries.
After the for loop, it should plot the results and
Decorate the axes.
To test your function:
Create a State object with the initial state of the system.
Call run_simulation with appropriate parameters.
Save the resulting figure.
Optional:
Extend your solution so it creates two TimeSeries objects, keeps track of the number of bikes at Olin and at Wellesley, and plots both series at the end.
End of explanation
"""
help(decorate)
"""
Explanation: Opening the hood
The functions in modsim.py are built on top of several widely-used Python libraries, especially NumPy, SciPy, and Pandas. These libraries are powerful but can be hard to use. The intent of modsim.py is to give you the power of these libraries while making it easy to get started.
In the future, you might want to use these libraries directly, rather than using modsim.py. So we will pause occasionally to open the hood and let you see how modsim.py works.
You don't need to know anything in these sections, so if you are already feeling overwhelmed, you might want to skip them. But if you are curious, read on.
Pandas
This chapter introduces two objects, State and TimeSeries. Both are based on the Series object defined by Pandas, which is a library primarily used for data science.
You can read the documentation of the Series object here
The primary differences between TimeSeries and Series are:
I made it easier to create a new, empty Series while avoiding a confusing inconsistency.
I provide a function so the Series looks good when displayed in Jupyter.
I provide a function called set that we'll use later.
State has all of those capabilities; in addition, it provides an easier way to initialize state variables, and it provides functions called T and dt, which will help us avoid a confusing error later.
Pyplot
The plot function in modsim.py is based on the plot function in Pyplot, which is part of Matplotlib. You can read the documentation of plot here.
decorate provides a convenient way to call the pyplot functions title, xlabel, ylabel, and legend. It also avoids an annoying warning message if you try to make a legend when you don't have any labelled lines.
End of explanation
"""
source_code(flip)
"""
Explanation: NumPy
The flip function in modsim.py uses NumPy's random function to generate a random number between 0 and 1.
You can get the source code for flip by running the following cell.
End of explanation
"""
|
statsmodels/statsmodels
|
examples/notebooks/quasibinomial.ipynb
|
bsd-3-clause
|
import statsmodels.api as sm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from io import StringIO
"""
Explanation: Quasi-binomial regression
This notebook demonstrates using custom variance functions and non-binary data
with the quasi-binomial GLM family to perform a regression analysis using
a dependent variable that is a proportion.
The notebook uses the barley leaf blotch data that has been discussed in
several textbooks. See below for one reference:
https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_glimmix_sect016.htm
End of explanation
"""
raw = StringIO(
"""0.05,0.00,1.25,2.50,5.50,1.00,5.00,5.00,17.50
0.00,0.05,1.25,0.50,1.00,5.00,0.10,10.00,25.00
0.00,0.05,2.50,0.01,6.00,5.00,5.00,5.00,42.50
0.10,0.30,16.60,3.00,1.10,5.00,5.00,5.00,50.00
0.25,0.75,2.50,2.50,2.50,5.00,50.00,25.00,37.50
0.05,0.30,2.50,0.01,8.00,5.00,10.00,75.00,95.00
0.50,3.00,0.00,25.00,16.50,10.00,50.00,50.00,62.50
1.30,7.50,20.00,55.00,29.50,5.00,25.00,75.00,95.00
1.50,1.00,37.50,5.00,20.00,50.00,50.00,75.00,95.00
1.50,12.70,26.25,40.00,43.50,75.00,75.00,75.00,95.00"""
)
"""
Explanation: The raw data, expressed as percentages. We will divide by 100
to obtain proportions.
End of explanation
"""
df = pd.read_csv(raw, header=None)
df = df.melt()
df["site"] = 1 + np.floor(df.index / 10).astype(int)
df["variety"] = 1 + (df.index % 10)
df = df.rename(columns={"value": "blotch"})
df = df.drop("variable", axis=1)
df["blotch"] /= 100
"""
Explanation: The regression model is a two-way additive model with
site and variety effects. The data are a full unreplicated
design with 10 rows (sites) and 9 columns (varieties).
End of explanation
"""
model1 = sm.GLM.from_formula(
"blotch ~ 0 + C(variety) + C(site)", family=sm.families.Binomial(), data=df
)
result1 = model1.fit(scale="X2")
print(result1.summary())
"""
Explanation: Fit the quasi-binomial regression with the standard variance
function.
End of explanation
"""
plt.clf()
plt.grid(True)
plt.plot(result1.predict(linear=True), result1.resid_pearson, "o")
plt.xlabel("Linear predictor")
plt.ylabel("Residual")
"""
Explanation: The plot below shows that the default variance function is
not capturing the variance structure very well. Also note
that the scale parameter estimate is quite small.
End of explanation
"""
class vf(sm.families.varfuncs.VarianceFunction):
def __call__(self, mu):
return mu ** 2 * (1 - mu) ** 2
def deriv(self, mu):
return 2 * mu - 6 * mu ** 2 + 4 * mu ** 3
"""
Explanation: An alternative variance function is mu^2 * (1 - mu)^2.
End of explanation
"""
bin = sm.families.Binomial()
bin.variance = vf()
model2 = sm.GLM.from_formula("blotch ~ 0 + C(variety) + C(site)", family=bin, data=df)
result2 = model2.fit(scale="X2")
print(result2.summary())
"""
Explanation: Fit the quasi-binomial regression with the alternative variance
function.
End of explanation
"""
plt.clf()
plt.grid(True)
plt.plot(result2.predict(linear=True), result2.resid_pearson, "o")
plt.xlabel("Linear predictor")
plt.ylabel("Residual")
"""
Explanation: With the alternative variance function, the mean/variance relationship
seems to capture the data well, and the estimated scale parameter is
close to 1.
End of explanation
"""
|
kinshuk4/MoocX
|
k2e/dev/languages/python/python_classes.ipynb
|
mit
|
# Import display
from IPython.display import display
# Example of instantiating a class
# Create a class
class Add:
def __init__(self, num_1, num_2):
self.num_1 = num_1
self.num_2 = num_2
def sum_all(self):
print("Method sum_all in class Add")
return self.num_1 + self.num_2
# Create an instance of Add
a = Add(10,10)
# Now we can access the function within the class
# using the dot notation
display(a.sum_all())
"""
Explanation: Python Classes
A class has many functions.
Creating a class: init
- The init method is also called a constructor.
- It takes in parameters and assigns fields to the new instance
End of explanation
"""
# Example of inheritance
class Add:
def __init__(self, num_1, num_2):
self.num_1 = num_1
self.num_2 = num_2
def sum_all(self):
print("Method sum_all() in class A:")
return self.num_1 + self.num_2
class Multiply(Add):
def mult_all(self):
print("Method mult_all() in class B:")
return self.num_1 * self.num_2
# Instantiate Multiply class
m = Multiply(10, 10)
# Call method sum_all
# This is inherited from class Add
display(m.sum_all())
# Call method mult_all
display(m.mult_all())
"""
Explanation: Inheritance
- We can have a class inheriting from other classes.
- We can specify the inheritance in parentheses.
End of explanation
"""
# Example of Private Members within a Class
class A:
def __init__(self, num_1, num_2):
self.__num_1 = num_1
self.__num_2 = num_2
__num_1 = 10
__num_2 = 10
# Instantiate class
a = A(5, 5)
# Call private member
display(a._A__num_1)
display(a._A__num_2)
"""
Explanation: Private Members
- These are created with 2 underscores within a class
- They can only be accessed outside the class if we add _ClassName
End of explanation
"""
# Example of sub-class
class Add:
def __init__(self, num_1, num_2):
self.num_1 = num_1
self.num_2 = num_2
def sum_all(self):
print("Method sum_all() in class A:")
return self.num_1 + self.num_2
class Multiply(Add):
def mult_all(self):
print("Method mult_all() in class B:")
return self.num_1 * self.num_2
# Instantiate derived class Multiply(Add)
m = Multiply(10, 10)
# Check if Multiply inherits Add
display(issubclass(Multiply, Add))
# Check if Add inherits Multiply
display(issubclass(Add, Multiply))
# Check if Add inherits itself
display(issubclass(Add, Add))
"""
Explanation: Checking for subclasses
- issubclass(class_A, class_B)
- You can use this to determine if one class inherits another class.
End of explanation
"""
class Add:
def __init__(self, num_1, num_2):
self.num_1 = num_1
self.num_2 = num_2
def sum_all(self):
print("Method sum_all() in class A:")
return self.num_1 + self.num_2
# Instantiate: create object
a = Add(10, 10)
# Check if an object is an instance of a class
display(isinstance(a, Add))
"""
Explanation: Checking if an object is an instance of a class
- isinstance(object, class)
End of explanation
"""
class Add:
def __init__(self, word_1, word_2, word_3):
self.word_1 = word_1
self.word_2 = word_2
self.word_3 = word_3
def __repr__(self):
print("Method sum_all() in class A:")
return self.word_1 + self.word_2 + self.word_3
# Create Add instance
a = Add("I", " Love", " you")
print(a)
# Access __repr__ from the Add class
print(repr(a))
"""
Explanation: Representation
- repr(object) calls the __repr__ method of the class that the object is an instance of.
End of explanation
"""
class Add:
@classmethod
def __init__(self, num_1, num_2):
self.num_1 = num_1
self.num_2 = num_2
def sum_all(self):
print("Method sum_all() in class A:")
return self.num_1 + self.num_2
# Call sum_all on a freshly created instance (no reference kept)
display(Add(10,10).sum_all())
# Instance method
a = Add(10, 10)
display(a.sum_all())
"""
Explanation: Call class directly with Classmethod
- @classmethod
- We can call with a static or instance method
End of explanation
"""
class Car:
def get_func(self):
return self._word
def set_func(self, value):
self._word = value.upper()
word = property(get_func, set_func)
# Create instance
c = Car()
# Set word property
c.word = "BMW"
# Get name property
display(c.word)
"""
Explanation: Property
- property(get_func, set_func)
- This allows you to get and set a value.
End of explanation
"""
class A:
def letter(self):
print("A")
class B(A):
def letter(self):
print("B")
# Call the letter method from the parent class.
super().letter()
# Create a B instance and call letter.
b = B()
b.letter()
"""
Explanation: Super
- super().parent_method()
- This allows us to call a method from the parent class with the same method name
End of explanation
"""
class Add:
def __init__(self, num_1, num_2):
self.num_1 = num_1
self.num_2 = num_2
def sum_all(self):
print("Method sum_all() in class A:")
return self.num_1 + self.num_2
def __hash__(self):
return int(self.num_1)
a = Add(10 , 10)
display(hash(a))
"""
Explanation: Hash
- Comparing objects using hash is fast.
End of explanation
"""
|
nagordon/mechpy
|
tutorials/testing.ipynb
|
mit
|
# setup
import numpy as np
import sympy as sp
import pandas as pd
import scipy
from pprint import pprint
sp.init_printing(use_latex='mathjax')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12, 8) # (width, height)
plt.rcParams['font.size'] = 14
plt.rcParams['legend.fontsize'] = 16
from matplotlib import patches
get_ipython().magic('matplotlib') # separate window
get_ipython().magic('matplotlib inline') # inline plotting
"""
Explanation: Mechpy Tutorials
a mechanical engineering toolbox
source code - https://github.com/nagordon/mechpy
documentation - https://nagordon.github.io/mechpy/web/
Neal Gordon
2017-02-20
material testing analysis
This quick tutorial shows some simple scripts for analyzing material test data
Python Initialization with module imports
End of explanation
"""
import glob as gb
from matplotlib.pyplot import *
%matplotlib inline
csvdir='./examples/'
e=[]
y=[]
# NOTE: 'specimen' (a list of specimen group names) is assumed to be defined earlier in the notebook
for s in specimen:
files = gb.glob(csvdir + '*.csv') # select all csv files
fig, ax = subplots()
title(s)
Pult = []
for f in files:
d1 = pd.read_csv(f, skiprows=1)
d1 = d1[1:] # remove first row of string
d1.columns = ['t', 'load', 'ext'] # rename columns
d1.head()
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
Pult.append(np.max(d1.load))
plot(d1.ext, d1.load)
ylabel('Pult, lbs')
xlabel('extension, in')
e.append(np.std(Pult))
y.append(np.average(Pult) )
show()
# bar chart
barwidth = 0.35 # the width of the bars
fig, ax = subplots()
x = np.arange(len(specimen))
ax.bar(x, y, width=barwidth, yerr=e)
#ax.set_xticks(x)
xticks(x+barwidth/2, specimen, rotation='vertical')
title('Pult with sample average and stdev of n=3')
ylabel('Pult, lbs')
margins(0.05)
show()
"""
Explanation: Reading raw test data example 1
This example shows how to read multiple csv files and plot them together
End of explanation
"""
f = 'Aluminum_loops.txt'
d1 = pd.read_csv(f, skiprows=4,delimiter='\t')
d1 = d1[1:] # remove first row of string
d1.columns = ['time', 'load', 'cross','ext','strain','stress'] # rename columns
d1.head()
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
plot(d1.ext, d1.load)
ylabel('load')
xlabel('extension')
d1.head()
"""
Explanation: Reading test data - example 2
This example shows how to read a different format of data and plot
End of explanation
"""
f = 'al_MTS_test.csv'
d1 = pd.read_csv(f, skiprows=3,delimiter=',')
d1 = d1[1:] # remove first row of string
d1 = d1[['Time','Axial Force', 'Axial Fine Displacement', 'Axial Length']]
d1.columns = ['time', 'load', 'strain','cross'] # rename columns
# remove commas in data
for d in d1.columns:
#d1.dtypes
d1[d] = d1[d].map(lambda x: float(str(x).replace(',','')))
plot(d1.strain, d1.load)
ylabel('load')
xlabel('strain')
"""
Explanation: another example of plotting data
End of explanation
"""
%matplotlib inline
from scipy import signal
from pylab import plot, xlabel, ylabel, title, rcParams, figure
import numpy as np
pltwidth = 16
pltheight = 8
rcParams['figure.figsize'] = (pltwidth, pltheight)
csv = np.genfromtxt('./stress_strain1.csv', delimiter=",")
disp = csv[:,0]
force = csv[:,1]
print('number of data points = %i' % len(disp))
def moving_average(x, window):
"""Moving average of 'x' with window size 'window'."""
y = np.empty(len(x)-window+1)
for i in range(len(y)):
y[i] = np.sum(x[i:i+window])/window
return y
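# For reference, an equivalent vectorized moving average (a sketch) could use NumPy's convolution:
#   smoothed = np.convolve(x, np.ones(window) / window, mode='valid')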
plt1 = plot(disp, force);
xlabel('displacement');
ylabel('force');
figure()
mywindow = 1000 # the larger the filter window, the more agressive the filtering
force2 = moving_average(force, mywindow)
x2 = range(len(force2))
plot(x2, force2);
title('Force smoothed with moving average filter');
# Find f' using diff to find the first intersection of the 0
# mvavgforce = mvavgforce[:len(mvavgforce)/2]
force2p = np.diff(force2)
x2p = range(len(force2p))
plot(x2p, force2p);
title('Slope of the smoothed curve')
i = np.argmax(force2p<0)
### or
# i = where(force2p<0)[0][0]
#### or
# for i, f in enumerate(force2p):
# if f < 0:
# break
plot(x2p, force2p, i,force2p[i],'o', markersize=15);
title('find the point at which the slope goes negative, indicating a switch in the slope direction');
plot(x2, force2, i,force2[i],'o',markersize=15);
title('using that index, plot on the force-displacement curve');
#Now, we need to find the next point from here that is 10 less.
delta = 1
i2 = np.argmax(force2[i]-delta > force2[i:])
# If that point does not exist on the immediate downward sloping path,
#then just choose the max point. In this case, 10 would exist very
#far away from the point and not be desireable
if i2 > i:
i2=0
plot(x2, force2, i,force2[i],'o', i2+i, force2[i2+i] ,'*', markersize=15);
disp
"""
Explanation: Finding the "first" peak and delta-10 threshhold limit on force-displacement data of an aluminum coupon
http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/DataFiltering.ipynb
End of explanation
"""
# remove nan
disp = disp[~np.isnan(force)]
force = force[~np.isnan(force)]
A = 0.1 # area
stress = force/A / 1e3
strain = disp/25.4 * 1e-3
plt.plot(strain, stress)
stress_range = np.array([5, 15])
PL = 0.0005
E_tan = stress/strain
assert(len(stress)==len(strain))
i = (stress > stress_range[0]) & (stress < stress_range[1])
stress_mod = stress[i]
strain_mod = strain[i]
fit = np.polyfit(strain_mod,stress_mod,1)
fit_fn = np.poly1d(fit)
fit_fn
PLi = np.argmax( (stress - (fit_fn(strain-PL)) < 0) )
PLi
# fit_fn is now a function which takes in x and returns an estimate for y
#plt.text(4,4,fit_fn)
plt.plot(strain ,stress, 'y')
plot(strain, fit_fn(strain-PL) , '--k', strain[PLi], stress[PLi],'o')
plt.xlim(0, np.max(strain))
plt.ylim(0, np.max(stress))
print('ultimate stress %f' % np.max(stress))
print('ultimate strain %f' % np.max(strain))
print('strain at proportional limit %f' % strain[PLi])
print('stress at proportional limit %f' % stress[PLi])
E_tan = E_tan[~np.isinf(E_tan)]
strainE = strain[1:]
plot(strainE, E_tan,'b', strainE[PLi], E_tan[PLi],'o')
plt.ylim([0,25000])
plt.title('Tangent Modulus')
"""
Explanation: Modulus
End of explanation
"""
|
sjschmidt44/bike_share
|
bike_share_data_2.ipynb
|
mit
|
from pandas import DataFrame, Series
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
weather = pd.read_table('daily_weather.tsv')
usage = pd.read_table('usage_2012.tsv')
stations = pd.read_table('stations.tsv')
newseasons = {'Summer': 'Spring', 'Spring': 'Winter', 'Fall': 'Summer', 'Winter': 'Fall'}
weather['season_desc'] = weather['season_desc'].map(newseasons)
weather['Day'] = pd.DatetimeIndex(weather.date).date
weather['Month'] = pd.DatetimeIndex(weather.date).month
"""
Explanation: Plot bike-share data with Matplotlib
End of explanation
"""
weather['temp'].plot()
# weather.plot(kind='line', y='temp', x='Day')
plt.show()
weather[['Month', 'humidity', 'temp']].groupby('Month').aggregate(np.mean).plot(kind='bar')
plt.show()
"""
Explanation: Question 1: Plot Daily Temp of 2012
Plot the daily temperature over the course of the year. (This should probably be a line chart.) Create a bar chart that shows the average temperature and humidity by month.
End of explanation
"""
w = weather[['season_desc', 'temp', 'total_riders']]
w_fal = w.loc[w['season_desc'] == 'Fall']
w_win = w.loc[w['season_desc'] == 'Winter']
w_spr = w.loc[w['season_desc'] == 'Spring']
w_sum = w.loc[w['season_desc'] == 'Summer']
plt.scatter(w_fal['temp'], w_fal['total_riders'], c='y', label='Fall', s=100, alpha=.5)
plt.scatter(w_win['temp'], w_win['total_riders'], c='r', label='Winter', s=100, alpha=.5)
plt.scatter(w_spr['temp'], w_spr['total_riders'], c='b', label='Spring', s=100, alpha=.5)
plt.scatter(w_sum['temp'], w_sum['total_riders'], c='g', label='Summer', s=100, alpha=.5)
plt.legend(loc='lower right')
plt.xlabel('Temperature')
plt.ylabel('Total Riders')
plt.show()
"""
Explanation: Question 2: Rental Volumes compared to Temp
Use a scatterplot to show how the daily rental volume varies with temperature. Use a different series (with different colors) for each season.
End of explanation
"""
w = weather[['season_desc', 'windspeed', 'total_riders']]
w_fal = w.loc[w['season_desc'] == 'Fall']
w_win = w.loc[w['season_desc'] == 'Winter']
w_spr = w.loc[w['season_desc'] == 'Spring']
w_sum = w.loc[w['season_desc'] == 'Summer']
plt.scatter(w_fal['windspeed'], w_fal['total_riders'], c='y', label='Fall', s=100, alpha=.5)
plt.scatter(w_win['windspeed'], w_win['total_riders'], c='r', label='Winter', s=100, alpha=.5)
plt.scatter(w_spr['windspeed'], w_spr['total_riders'], c='b', label='Spring', s=100, alpha=.5)
plt.scatter(w_sum['windspeed'], w_sum['total_riders'], c='g', label='Summer', s=100, alpha=.5)
plt.legend(loc='lower right')
plt.xlabel('Wind Speed')
plt.ylabel('Total Riders')
plt.show()
"""
Explanation: Question 3: Daily Rentals compared to Windspeed
Create another scatterplot to show how daily rental volume varies with windspeed. As above, use a different series for each season.
End of explanation
"""
s = stations[['station','lat','long']]
u = pd.concat([usage['station_start']], axis=1, keys=['station'])
counts = u['station'].value_counts()
c = DataFrame(counts.index, columns=['station'])
c['counts'] = counts.values
m = pd.merge(s, c, on='station')
plt.scatter(m['long'], m['lat'], c='b', label='Location', s=(m['counts'] * .05), alpha=.1)
plt.legend(loc='lower right')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.show()
"""
Explanation: Question 4: Rental Volumes by Geographical Location
How do the rental volumes vary with geography? Compute the average daily rentals for each station and use this as the radius for a scatterplot of each station's latitude and longitude.
End of explanation
"""
|
rvernagus/data-science-notebooks
|
Data Science From Scratch/6 - Probability.ipynb
|
mit
|
def uniform_pdf(x):
return 1 if x >= 0 and x < 1 else 0
xs = np.arange(-1, 2, .001)
ys = [uniform_pdf(x) for x in xs]
plt.plot(xs, ys);
uniform_pdf(-0.01)
"""
Explanation: Probabilities are a way of quantifying the possibility of the occurrence of a specific event or events given the set of all possible events.
Notationally, $P(E)$ means "the probability of event $E$."
Dependence and Independence
Events $E$ and $F$ are dependent if information about $E$ gives us information about the probability of $F$ occurring (or vice versa). If this is not the case, the variables are independent of each other.
For independent events, the probability of both occurring is the product of the probabilities of each occurring:
$$P(E, F) = P(E)P(F)$$
Conditional Probability
If events are not independent, we can express conditional probability ($E$ is conditional on $F$ or what is the probability that $E$ happens given that $F$ happens):
$$P(E\ |\ F) = P(E, F)\ /\ P(F)$$
which (if $E$ and $F$ are dependent) can be written as
$$P(E, F) = P(E\ |\ F)P(F)$$
When $E$ and $F$ are independent:
$$P(E\ |\ F) = P(E)$$
Bayes's Theorem
Conditional probabilities can be "reversed":
$$P(E\ |\ F) = P(E, F)\ /\ P(F) = P(F\ |\ E)P(E)\ /\ P(F)$$
Writing $\neg E$ for "$E$ doesn't happen", the law of total probability gives:
$$P(F) = P(F, E) + P(F, \neg E)$$
which leads to Bayes's Theorem:
$$P(E\ |\ F) = P(F\ |\ E)P(E)\ /\ [P(F\ |\ E)P(E) + P(F\ |\ \neg E)P(\neg E)]$$
Random Variables
A random variable is one whose possible values have an associated probability distribution. The distribution gives the probability that the variable will take on each of its possible values.
Continuous Distributions
Coin flips represent a discrete distribution, i.e., one that takes on mutually exclusive values with no "in-betweens." A continuous distribution is one that allows for a full range of values along a continuum such as height or weight.
Continuous distributions use a probability density function (pdf) to define probability of a value within a given range.
The pdf for the uniform distribution is:
End of explanation
"""
def uniform_cdf(x):
"""Returns probability that a value is <= x"""
if x < 0: return 0
elif x < 1: return x
else: return 1
xs = np.arange(-1, 2, .001)
ys = [uniform_cdf(x) for x in xs]
plt.step(xs, ys);
"""
Explanation: The cumulative distribution function (cdf) gives the probability that a random variable is less than or equal to a certain value.
End of explanation
"""
def normal_pdf(x, mu=0, sigma=1):
sqrt_two_pi = math.sqrt(2 * math.pi)
return (math.exp(-(x - mu)**2 / 2 / sigma**2) / (sqrt_two_pi * sigma))
xs = [x / 10.0 for x in range(-50, 50)]
plt.plot(xs, [normal_pdf(x, sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs, [normal_pdf(x, sigma=2) for x in xs],'--',label='mu=0,sigma=2')
plt.plot(xs, [normal_pdf(x, sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs, [normal_pdf(x, mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend()
plt.title("Various Normal pdfs")
plt.show()
"""
Explanation: The Normal Distribution
The normal distribution is the definitive example of a random distribution (the classic bell curve shape). It is defined by two parameters: the mean $\mu$ and the standard deviation $\sigma$.
The function for the distribution is:
$$f(x\ |\ \mu, \sigma) = \frac{1}{\sqrt{2\pi}\sigma}\ exp\bigg(-\frac{(x - \mu)^2}{2\sigma^2}\bigg)$$
End of explanation
"""
def normal_cdf(x, mu=0, sigma=1):
return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2
plt.plot(xs, [normal_cdf(x, sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs, [normal_cdf(x, sigma=2) for x in xs],'--',label='mu=0,sigma=2')
plt.plot(xs, [normal_cdf(x, sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs, [normal_cdf(x, mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend(loc=4) # bottom right
plt.title("Various Normal cdfs")
plt.show()
def inverse_normal_cdf(p, mu=0, sigma=1, tolerance=0.00001):
"""find approimate inverse using binary search"""
# if not standard, compute standard and rescale
if mu!= 0 or sigma != 1:
return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
low_z, low_p = -10.0, 0
hi_z, hi_p = 10.0, 1
while hi_z - low_z > tolerance:
mid_z = (low_z + hi_z) / 2
mid_p = normal_cdf(mid_z)
if mid_p < p:
low_z, low_p = mid_z, mid_p
elif mid_p > p:
hi_z, hi_p = mid_z, mid_p
else:
break
return mid_z
"""
Explanation: When $\mu = 0$ and $\sigma = 1$ we call a distribution the standard normal distribution.
End of explanation
"""
import random

def bernoulli_trial(p):
    return 1 if random.random() < p else 0
def binomial(n, p):
return sum(bernoulli_trial(p) for _ in range(n))
"""
Explanation: The Central Limit Theorem
The central limit theorem states that the average of a large number of independent and identically distributed random variables is itself approximately normally distributed.
So, if $x_1, ..., x_n$ are random variables with mean $\mu$ and standard deviation $\sigma$, then $\frac{1}{n}(x_1 +\ ...\ + x_n)$ will be approximately normally distributed. An equivalent standardized expression is $\frac{(x_1 +\ ...\ + x_n)\ -\ \mu n}{\sigma \sqrt{n}}$
A binomial random variable (Binomial(n, p)) is the sum of $n$ independent Bernoulli (Bernoulli(p)) random variables. Each of the variables equals 1 with a probability of $p$ and equals 0 with a probability of $1 - p$.
End of explanation
"""
from collections import Counter

def plot_binomial(p, n, num_points):
    data = [binomial(n, p) for _ in range(num_points)]
    histogram = Counter(data)
    plt.bar([x - 0.4 for x in histogram.keys()],
            [v / num_points for v in histogram.values()],
            0.8,
            color='0.75')
mu = p * n
sigma = math.sqrt(n * p * (1 - p))
xs = range(min(data), max(data) + 1)
ys = [normal_cdf(i + 0.5, mu, sigma) - normal_cdf(i - 0.5, mu, sigma) for i in xs]
plt.plot(xs, ys)
plt.title('Binomial Distribution vs. Normal Approximation')
plot_binomial(0.75, 100, 10000)
"""
Explanation: The mean of a Bernoulli(p) variable is $p$ and its standard deviation is $\sqrt{p(1 - p)}$.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
stable/_downloads/1242d47b65d952f9f80cf19fb9e5d76e/35_eeg_no_mri.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Joan Massich <mailsik@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage
# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)
# The files live in:
subject = 'fsaverage'
trans = 'fsaverage' # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')
"""
Explanation: EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. caution:: Source reconstruction without an individual T1 MRI from the
subject will be less accurate. Do not over-interpret
activity locations, which can be off by multiple centimeters.
Adult template MRI (fsaverage)
First we show how fsaverage can be used as a surrogate subject.
End of explanation
"""
raw_fname, = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)
# Clean channel names to be able to use a standard 1005 montage
new_names = dict(
(ch_name,
ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp'))
for ch_name in raw.ch_names)
raw.rename_channels(new_names)
# Read and set the EEG electrode locations, which are already in fsaverage's
# space (MNI space) for standard_1020:
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)
raw.set_eeg_reference(projection=True) # needed for inverse modeling
# Check that the locations of EEG electrodes are correct with respect to MRI
mne.viz.plot_alignment(
raw.info, src=src, eeg=['original', 'projected'], trans=trans,
show_axes=True, mri_fiducials=True, dig='fiducials')
"""
Explanation: Load the data
Here we use EEG data from the EEGBCI motor imagery dataset.
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages
available in MNE-Python.</p></div>
End of explanation
"""
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,
bem=bem, eeg=True, mindist=5.0, n_jobs=1)
print(fwd)
"""
Explanation: Setup source space and compute forward
End of explanation
"""
ch_names = \
'Fz Cz Pz Oz Fp1 Fp2 F3 F4 F7 F8 C3 C4 T7 T8 P3 P4 P7 P8 O1 O2'.split()
data = np.random.RandomState(0).randn(len(ch_names), 1000)
info = mne.create_info(ch_names, 1000., 'eeg')
raw = mne.io.RawArray(data, info)
"""
Explanation: From here on, standard inverse imaging methods can be used!
Infant MRI surrogates
We don't have a sample infant dataset for MNE, so let's fake a 10-20 one:
End of explanation
"""
subject = mne.datasets.fetch_infant_template('6mo', subjects_dir, verbose=True)
"""
Explanation: Get an infant MRI template
To use an infant head model for M/EEG data, you can use
:func:mne.datasets.fetch_infant_template to download an infant template:
End of explanation
"""
fname_1020 = op.join(subjects_dir, subject, 'montages', '10-20-montage.fif')
mon = mne.channels.read_dig_fif(fname_1020)
mon.rename_channels(
{f'EEG{ii:03d}': ch_name for ii, ch_name in enumerate(ch_names, 1)})
trans = mne.channels.compute_native_head_t(mon)
raw.set_montage(mon)
print(trans)
"""
Explanation: It comes with several helpful built-in files, including a 10-20 montage
in the MRI coordinate frame, which can be used to compute the
MRI<->head transform trans:
End of explanation
"""
bem_dir = op.join(subjects_dir, subject, 'bem')
fname_src = op.join(bem_dir, f'{subject}-oct-6-src.fif')
src = mne.read_source_spaces(fname_src)
print(src)
fname_bem = op.join(bem_dir, f'{subject}-5120-5120-5120-bem-sol.fif')
bem = mne.read_bem_solution(fname_bem)
"""
Explanation: There are also BEM and source spaces:
End of explanation
"""
fig = mne.viz.plot_alignment(
raw.info, subject=subject, subjects_dir=subjects_dir, trans=trans,
src=src, bem=bem, coord_frame='mri', mri_fiducials=True, show_axes=True,
surfaces=('white', 'outer_skin', 'inner_skull', 'outer_skull'))
mne.viz.set_3d_view(fig, 25, 70, focalpoint=[0, -0.005, 0.01])
"""
Explanation: You can ensure everything is as expected by plotting the result:
End of explanation
"""
|
dpshelio/2015-EuroScipy-pandas-tutorial
|
solved - 03 - Indexing and selecting data.ipynb
|
bsd-2-clause
|
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
countries = countries.set_index('country')
countries
"""
Explanation: Indexing and selecting data
End of explanation
"""
countries['area']
"""
Explanation: Some notes on selecting data
One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. We now have to distinguish between:
selection by label
selection by position.
data[] provides some convenience shortcuts
For a DataFrame, basic indexing selects the columns.
Selecting a single column:
End of explanation
"""
countries[['area', 'population']]
"""
Explanation: or multiple columns:
End of explanation
"""
countries['France':'Netherlands']
"""
Explanation: But, slicing accesses the rows:
End of explanation
"""
countries.loc['Germany', 'area']
"""
Explanation: So as a summary, [] provides the following convenience shortcuts:
Series: selecting a label: s[label]
DataFrame: selecting a single or multiple columns: df['col'] or df[['col1', 'col2']]
DataFrame: slicing the rows: df['row_label1':'row_label2'] or df[mask]
Systematic indexing with loc and iloc
When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
loc: selection by label
iloc: selection by position
These methods index the different dimensions of the frame:
df.loc[row_indexer, column_indexer]
df.iloc[row_indexer, column_indexer]
Selecting a single element:
End of explanation
"""
countries.loc['France':'Germany', ['area', 'population']]
"""
Explanation: But the row or column indexer can also be a list, a slice, a boolean array, etc.
End of explanation
"""
countries.iloc[0:2,1:3]
"""
Explanation: Selecting by position with iloc works similar as indexing numpy arrays:
End of explanation
"""
countries2 = countries.copy()
countries2.loc['Belgium':'Germany', 'population'] = 10
countries2
"""
Explanation: The different indexing methods can also be used to assign data:
End of explanation
"""
countries['area'] > 100000
"""
Explanation: Boolean indexing (filtering)
Like a where clause in SQL. The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
End of explanation
"""
countries['density'] = countries['population']*1000000 / countries['area']
countries
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Add a column `density` with the population density (note: population column is expressed in millions)
</div>
End of explanation
"""
countries.loc[countries['density'] > 300, ['capital', 'population']]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select the capital and the population column of those countries where the density is larger than 300
</div>
End of explanation
"""
countries['density_ratio'] = countries['density'] / countries['density'].mean()
countries
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Add a column 'density_ratio' with the ratio of the density to the mean density
</div>
End of explanation
"""
countries.loc['United Kingdom', 'capital'] = 'Cambridge'
countries
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Change the capital of the UK to Cambridge
</div>
End of explanation
"""
countries[(countries['density'] > 100) & (countries['density'] < 300)]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select all countries whose population density is between 100 and 300 people/km²
</div>
End of explanation
"""
s = countries['capital']
s.isin?
s.isin(['Berlin', 'London'])
"""
Explanation: Some other useful methods: isin and string methods
The isin method of Series is very useful to select rows that may contain certain values:
End of explanation
"""
countries[countries['capital'].isin(['Berlin', 'London'])]
"""
Explanation: This can then be used to filter the dataframe with boolean indexing:
End of explanation
"""
'Berlin'.startswith('B')
"""
Explanation: Let's say we want to select all data for which the capital starts with a 'B'. In Python, when having a string, we could use the startswith method:
End of explanation
"""
countries['capital'].str.startswith('B')
"""
Explanation: In pandas, these are available on a Series through the str namespace:
End of explanation
"""
countries[countries['capital'].str.len() > 7]
"""
Explanation: For an overview of all string methods, see: http://pandas.pydata.org/pandas-docs/stable/api.html#string-handling
<div class="alert alert-success">
<b>EXERCISE</b>: Select all countries that have capital names with more than 7 characters
</div>
End of explanation
"""
countries[countries['capital'].str.contains('am')]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select all countries that have capital names that contain the character sequence 'am'
</div>
End of explanation
"""
countries.loc['Belgium', 'capital'] = 'Ghent'
countries
countries['capital']['Belgium'] = 'Antwerp'
countries
countries[countries['capital'] == 'Antwerp']['capital'] = 'Brussels'
countries
"""
Explanation: Pitfall: chained indexing (and the 'SettingWithCopyWarning')
End of explanation
"""
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
"""
Explanation: How to avoid this?
Use loc instead of chained indexing if possible!
Or copy explicitly if you don't want to change the original data.
More exercises!
For the quick ones among you, here are some more exercises with some larger dataframe with film data. These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
End of explanation
"""
len(titles)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many movies are listed in the titles dataframe?
</div>
End of explanation
"""
titles.sort_values('year').head(2)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What are the earliest two films listed in the titles dataframe?
</div>
End of explanation
"""
len(titles[titles.title == 'Hamlet'])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many movies have the title "Hamlet"?
</div>
End of explanation
"""
titles[titles.title == 'Treasure Island'].sort_values('year')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List all of the "Treasure Island" movies from earliest to most recent.
</div>
End of explanation
"""
t = titles
len(t[(t.year >= 1950) & (t.year <= 1959)])
len(t[t.year // 10 == 195])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many movies were made from 1950 through 1959?
</div>
End of explanation
"""
c = cast
c = c[c.title == 'Inception']
c = c[c.n.isnull()]
len(c)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many roles in the movie "Inception" are NOT ranked by an "n" value?
</div>
End of explanation
"""
c = cast
c = c[c.title == 'Inception']
c = c[c.n.notnull()]
len(c)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: But how many roles in the movie "Inception" did receive an "n" value?
</div>
End of explanation
"""
c = cast
c = c[c.title == 'North by Northwest']
c = c[c.n.notnull()]
c = c.sort_values('n')
c
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
</div>
End of explanation
"""
c = cast
c = c[(c.title == 'Hamlet') & (c.year == 1921)]
len(c)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many roles were credited in the silent 1921 version of Hamlet?
</div>
End of explanation
"""
c = cast
c = c[c.name == 'Cary Grant']
c = c[c.year // 10 == 194]
c = c[c.n == 2]
c = c.sort_values('year')
c
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year.
</div>
End of explanation
"""
|
mcs07/PubChemPy
|
examples/Chemical fingerprints and similarity.ipynb
|
mit
|
import pubchempy as pcp
from IPython.display import Image
"""
Explanation: Chemical similarity using PubChem fingerprints
End of explanation
"""
coumarin = pcp.Compound.from_cid(323)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=323&t=l')
coumarin_314 = pcp.Compound.from_cid(72653)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=72653&t=l')
coumarin_343 = pcp.Compound.from_cid(108770)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=108770&t=l')
aspirin = pcp.Compound.from_cid(2244)
Image(url='https://pubchem.ncbi.nlm.nih.gov/image/imgsrv.fcgi?cid=2244&t=l')
"""
Explanation: First we'll get some compounds. Here we just use PubChem CIDs to retrieve, but you could search (e.g. using name, SMILES, SDF, etc.).
End of explanation
"""
coumarin.fingerprint
"""
Explanation: The similarity between two molecules is typically calculated using molecular fingerprints that encode structural information about the molecule as a series of bits (0 or 1). These bits represent the presence or absence of particular patterns or substructures — two molecules that contain more of the same patterns will have more bits in common, indicating that they are more similar.
The PubChem CACTVS fingerprint is available on each compound using the fingerprint method. This is returned as a hex-encoded string:
End of explanation
"""
bin(int(coumarin.fingerprint, 16))
"""
Explanation: We can decode this from hexadecimal and then display as a binary string as follows:
End of explanation
"""
def tanimoto(compound1, compound2):
fp1 = int(compound1.fingerprint, 16)
fp2 = int(compound2.fingerprint, 16)
fp1_count = bin(fp1).count('1')
fp2_count = bin(fp2).count('1')
both_count = bin(fp1 & fp2).count('1')
return float(both_count) / (fp1_count + fp2_count - both_count)
"""
Explanation: There is more information about the PubChem fingerprints at ftp://ftp.ncbi.nlm.nih.gov/pubchem/specifications/pubchem_fingerprints.txt
The most commonly used measure for quantifying the similarity of two fingerprints is the Tanimoto Coefficient, given by:
$$ T = \frac{N_{ab}}{N_{a} + N_{b} - N_{ab}} $$
where $N_{a}$ and $N_{b}$ are the number of 1-bits (i.e corresponding to the presence of a pattern) in the fingerprints of molecule $a$ and molecule $b$ respectively. $N_{ab}$ is the number of 1-bits common to the fingerprints of both molecule $a$ and $b$. The Tanimoto coefficient ranges from 0 when the fingerprints have no bits in common, to 1 when the fingerprints are identical.
Here's a simple way to calculate the Tanimoto coefficient between two compounds in python:
End of explanation
"""
tanimoto(coumarin, coumarin)
tanimoto(coumarin, coumarin_314)
tanimoto(coumarin, coumarin_343)
tanimoto(coumarin_314, coumarin_343)
tanimoto(coumarin, aspirin)
tanimoto(coumarin_343, aspirin)
"""
Explanation: Let's try it out:
End of explanation
"""
|
menpo/menpo3d-notebooks
|
notebooks/Rasterization Basics.ipynb
|
bsd-3-clause
|
import numpy as np
import menpo3d.io as mio
mesh = mio.import_builtin_asset('james.obj')
"""
Explanation: Offscreen Rasterization Basics
Menpo3D wraps a subproject called cyrasterize which allows for simple rasterization of 3D meshes. At the moment, only basic rendering is supported, with no lighting. However, many more features will be added in the near future.
To begin, we need to import a mesh.
End of explanation
"""
%matplotlib qt
viewer = mesh.view()
"""
Explanation: As with all core Menpo objects, it is very simple to visualize what the textured mesh looks like. An external window will be created which shows the mesh that we just loaded (the lovely James Booth). This window is fully interactive and contains a number of features provided by the underlying window manager, Mayavi.
Leave this window open so that we can try and replicate it using the rasterizer!
Note: You must call %matplotlib qt before rendering any 3D meshes to prevent the notebook from crashing
End of explanation
"""
viewer_settings = viewer.renderer_settings
"""
Explanation: Fetching the viewer state
Once you've moved James around into an interesting pose, you might want to take a snapshot of it using the rasterizer! We allow you to easily access this state via a property on the viewer.
NOTE: You must leave the visualisation window open in order to be able to access these settings
End of explanation
"""
# Let's print the current state so that we can see it!
np.set_printoptions(linewidth=500, precision=1, suppress=True)
for k, v in viewer_settings.items():
print("{}: ".format(k))
print(v)
"""
Explanation: As you can see from the output below, the renderer_settings property provides all the necessary state to control the camera for rasterization.
End of explanation
"""
from menpo3d.rasterize import GLRasterizer
# Build a rasterizer configured from the current view
r = GLRasterizer(**viewer_settings)
"""
Explanation: Building a GLRasterizer
Now that we have all the necessary state, we are able to initialize our rasterizer and produce output images. We begin by initialising a GLRasterizer with the necessary camera/rendering canvas state.
End of explanation
"""
# Rasterize to produce an RGB image
rgb_img = r.rasterize_mesh(mesh)
%matplotlib inline
rgb_img.view()
"""
Explanation: We can then rasterize our mesh of James, given the camera parameters that we just initialised our rasterizer with. This will produce a single output image that should be identical (bar the background colour or any lighting settings) to the view shown in the visualisation window.
End of explanation
"""
rgb_img.mask.view()
"""
Explanation: All rasterized images have their mask set to show what the rasterizer actually processed. Any black pixels were not processed by the shader.
End of explanation
"""
# The first output is the RGB image as before, the second is the XYZ information
rgb_img, shape_img = r.rasterize_mesh_with_shape_image(mesh)
# The last channel is the z information in model space coordinates
# Note that this is NOT camera depth
shape_img.view(channels=2)
"""
Explanation: Rasterisation of arbitrary floating point information
GLRasterizer gives us the ability to rasterize arbitrary floating point information. For instance, we can render out a XYZ floating point shape image. This is particularly useful for simulating depth cameras such as the Microsoft Kinect. Note, however, that the depth (z) values returned are in world coordinates, and do not represent true distances from the 'camera'.
End of explanation
"""
|
balarsen/pymc_learning
|
Distributions/fatiguelife.ipynb
|
bsd-3-clause
|
import itertools
import matplotlib.pyplot as plt
import matplotlib as mpl
from pymc3 import Model, Normal, Slice
from pymc3 import sample
from pymc3 import traceplot
from pymc3.distributions import Interpolated
import pymc3 as mc
from theano import as_op
import theano.tensor as tt
import numpy as np
from scipy import stats
import tqdm
import pandas as pd
import spacepy.toolbox as tb
%matplotlib inline
%load_ext version_information
%version_information pymc3, scipy
"""
Explanation: A fatigue-life (Birnbaum-Saunders) continuous random variable.
End of explanation
"""
def fatiguelife_pdf(x, c):
return (x+1) / (2*c*np.sqrt(2*np.pi*x**3)) * np.exp(-(x-1)**2/(2*x*c**2))
x = tb.logspace(1e-2, 1e3, 1000)
for c in [5,10,15,20,30,40,50]:
plt.loglog(x, fatiguelife_pdf(x, c), label='{}'.format(c))
plt.legend()
plt.ylim((1e-5, 10))
"""
Explanation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fatiguelife.html#scipy.stats.fatiguelife
End of explanation
"""
c=5
def fatiguelife_pdf_dist(x, c=5):
ans = tt.log((tt.abs_(x)+1) / (2*c*tt.sqrt(2*np.pi*tt.abs_(x)**3)) *
tt.exp(-(tt.abs_(x)-1)**2/(2*tt.abs_(x)*c**2)))
return ans
with mc.Model() as model:
fatiguelife = mc.DensityDist('fatiguelife', logp=fatiguelife_pdf_dist, testval=2)
trace = mc.sample(2000, njobs=2)
trace['fatiguelife'][trace['fatiguelife']>0]
plt.hist(trace['fatiguelife'][trace['fatiguelife']>0], 100, normed=True, histtype='step');
plt.yscale('log')
x = tb.linspace(0, 400, 1000)
plt.loglog(x, fatiguelife_pdf(x, 5), c='r')
traceplot(trace, combined=True)
tt.abs_
mc.DensityDist?
plt.hist?
"""
Explanation: Do this in pymc3
End of explanation
"""
|
HrantDavtyan/Data_Scraping
|
Week 4/JSON.ipynb
|
apache-2.0
|
import json
"""
Explanation: Working with JSON documents
JSON documents are very popular, especially when it comes to API responces and/or financial data. They provide nice, dictionary-like interface to data with the opportunity of working with keys rather than indecies only. Thus, Python has a built-in support for JSON documents with necessary ready-made functions. To access those functions, one needs to import the JSON library, which comes directly installed with Python.
End of explanation
"""
input = '''[
{ "id" : "01",
"status" : "Instructor",
"name" : "Hrant"
} ,
{ "id" : "02",
"status" : "Student",
"name" : "Jimmy"
}
]'''
"""
Explanation: Let's create a sample JSON file and save it to a variable called input.
End of explanation
"""
# parse/load string
data = json.loads(input)
# data is a usual list
type(data)
print(data)
from pprint import pprint
pprint(data)
print('User count:', len(data), "\n")
data[0]['name']
for element in data:
    print('Name: ', element['name'])
    print('Id: ', element['id'])
    print('Status: ', element['status'], "\n")
"""
Explanation: As you can see here, our JSON document is nothing more than a list of two dictionaries with 3 keys each (and a value for each key). To parse it as a usual Python object (a list in this case), the loads() function from the json package is used.
End of explanation
"""
import pandas as pd
address = "C:\Data_scraping\JSON\sample_data.json"
my_json_data = pd.read_json(address)
my_json_data.head()
"""
Explanation: Reading JSON from a file
using Pandas
End of explanation
"""
import json
with open(address,"r") as file:
local_json = json.load(file)
print(local_json)
type(local_json)
pprint(local_json)
"""
Explanation: using with open()
End of explanation
"""
with open('our_json_w.json', 'w') as output:
json.dump(local_json, output)
"""
Explanation: Writing JSON files
End of explanation
"""
with open('our_json_w.json', 'w') as output:
json.dump(local_json, output, sort_keys = True, indent = 4)
"""
Explanation: Yet, as you may have already noticed, the saved JSON file is not human-readable. To make it more user-friendly, we can sort the keys and use 4-space indentation.
End of explanation
"""
import csv, json
address = "C:\Data_scraping\JSON\sample_data.json"
with open(address,"r") as file:
local_json = json.load(file)
with open("from_json.csv", "w") as f:
writer = csv.writer(f)
writer.writerow(["ID","Name","Status"])
for item in local_json:
writer.writerow([item['id'],item['name'],item['status']])
"""
Explanation: Converting JSON to CSV
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/launching_into_ml/solutions/supplemental/decision_trees_and_random_Forests_in_Python.ipynb
|
apache-2.0
|
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
"""
Explanation: Decision Trees and Random Forests in Python
Learning Objectives
Explore and analyze data using a Pairplot
Train a single Decision Tree
Predict and evaluate the Decision Tree
Compare the Decision Tree model to a Random Forest
Introduction
In this lab, you explore and analyze data using a Pairplot, train a single Decision Tree, predict and evaluate the Decision Tree, and compare the Decision Tree model to a Random Forest. Recall that the Decision Tree algorithm belongs to the family of supervised learning algorithms. Unlike other supervised learning algorithms, the decision tree algorithm can be used for solving both regression and classification problems too. Simply, the goal of using a Decision Tree is to create a training model that can use to predict the class or value of the target variable by learning simple decision rules inferred from prior data(training data).
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Load necessary libraries
We will start by importing the necessary libraries for this lab.
End of explanation
"""
df = pd.read_csv("../kyphosis.csv")
df.head()
"""
Explanation: Get the Data
End of explanation
"""
# TODO 1
sns.pairplot(df, hue="Kyphosis", palette="Set1")
"""
Explanation: Exploratory Data Analysis
We'll just check out a simple pairplot for this small dataset.
End of explanation
"""
from sklearn.model_selection import train_test_split
X = df.drop("Kyphosis", axis=1)
y = df["Kyphosis"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
"""
Explanation: Train Test Split
Let's split up the data into a training set and a test set!
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
# TODO 2
dtree.fit(X_train, y_train)
"""
Explanation: Decision Trees
We'll start just by training a single decision tree.
End of explanation
"""
predictions = dtree.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
# TODO 3a
print(classification_report(y_test, predictions))
# TODO 3b
print(confusion_matrix(y_test, predictions))
"""
Explanation: Prediction and Evaluation
Let's evaluate our decision tree.
End of explanation
"""
import pydot
from IPython.display import Image
from six import StringIO
from sklearn.tree import export_graphviz
features = list(df.columns[1:])
features
dot_data = StringIO()
export_graphviz(
dtree, out_file=dot_data, feature_names=features, filled=True, rounded=True
)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
"""
Explanation: Tree Visualization
Scikit-learn actually has some built-in visualization capabilities for decision trees. You won't use this often, and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute it:
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
# TODO 4a
print(confusion_matrix(y_test, rfc_pred))
# TODO 4b
print(classification_report(y_test, rfc_pred))
"""
Explanation: Random Forests
Now let's compare the decision tree model to a random forest.
End of explanation
"""
|
Naereen/notebooks
|
euler/Project Euler (Python 3) - to problem 100.ipynb
|
mit
|
%load_ext Cython
%%cython
import math
def erathostene_sieve(int n):
cdef list primes = [False, False] + [True] * (n - 1) # from 0 to n included
cdef int max_divisor = math.floor(math.sqrt(n))
cdef int i = 2
for divisor in range(2, max_divisor + 1):
if primes[divisor]:
number = 2*divisor
while number <= n:
primes[number] = False
number += divisor
return primes
sieve10million = erathostene_sieve(int(1e7))
primes_upto_10million = [p for p,b in enumerate(sieve10million) if b]
print(f"There are {len(primes_upto_10million)} prime numbers smaller than 10 million")
"""
Explanation: Project Euler
This Python 3 notebook contains some solutions for the Project Euler challenge.
/!\ Warning: do not spoil yourself the pleasure of solving these problems by yourself!
I (Lilian Besson) started to work again on Project Euler in October 2020.
I should try to work on it again, hence this notebook...
Common tool
Let's write here a few efficient functions that are used in lots of problems.
End of explanation
"""
import itertools
prime = 56003
nb_digit_prime = len(str(prime))
nb_replacements = 2
for c in itertools.combinations(range(nb_digit_prime), nb_replacements):
print(c)
from typing import List
def find_prime_digit_replacements(max_size_family: int=6, primes: List[int]=primes_upto_10million) -> int:
set_primes = set(primes)
# we explore this list of primes in ascending order,
# so we'll find the smallest that satisfy the property
# for prime in primes:
for prime in range(10, max(primes) + 1):
str_prime = str(prime)
# for this prime, try all the possibilities
nb_digit_prime = len(str_prime)
for nb_replacements in range(1, nb_digit_prime + 1): # cannot replace all the digits
# now try to replace nb_replacements digits (not necessarily adjacent)
for positions in itertools.combinations(range(nb_digit_prime), nb_replacements):
size_family = 0
good_digits = []
good_primes = []
for new_digit in range(0, 9 + 1):
if positions[0] == 0 and new_digit == 0:
continue
new_prime = int(''.join(
(c if i not in positions else str(new_digit))
for i,c in enumerate(str_prime)
))
if new_prime in set_primes:
size_family += 1
good_digits.append(new_digit)
good_primes.append(new_prime)
if size_family >= max_size_family:
print(f"For p = {prime} with {nb_digit_prime} digits, and {nb_replacements} replacement(s), we found")
print(f"a family of {size_family} prime(s) when replacing digit(s) at position(s) {positions}")
for new_digit, new_prime in zip(good_digits, good_primes):
print(f" {new_prime} obtained by replacing with digit {new_digit}")
return prime
"""
Explanation: Problem 51: prime digit replacements (pastis ! 51 je t'aime)
By replacing the 1st digit of the 2-digit number x3, it turns out that six of the nine possible values: 13, 23, 43, 53, 73, and 83, are all prime.
By replacing the 3rd and 4th digits of 56xx3 with the same digit, this 5-digit number is the first example having seven primes among the ten generated numbers, yielding the family: 56003, 56113, 56333, 56443, 56663, 56773, and 56993. Consequently 56003, being the first member of this family, is the smallest prime with this property.
Find the smallest prime which, by replacing part of the number (not necessarily adjacent digits) with the same digit, is part of an eight prime value family.
Wow, it doesn't seem easy; I can't (yet) think of an efficient solution.
End of explanation
"""
%%time
find_prime_digit_replacements(max_size_family=6)
%%time
find_prime_digit_replacements(max_size_family=7)
"""
Explanation: Let's try to obtain the examples given in the problem statement, with the smallest prime giving a 6-sized family being 13 and the smallest prime giving a 7-sized family being 56003.
End of explanation
"""
%%time
find_prime_digit_replacements(max_size_family=8)
"""
Explanation: The code seems to work pretty well. It's not that fast... but let's try to obtain the smallest prime giving a 8-sized family.
End of explanation
"""
def x_to_kx_contain_same_digits(x: int, kmax: int) -> bool:
digits_x = sorted(list(str(x)))
for k in range(2, kmax+1):
digits_kx = sorted(list(str(k*x)))
if digits_x != digits_kx:
return False
return True
assert not x_to_kx_contain_same_digits(125873, 2)
assert x_to_kx_contain_same_digits(125874, 2)
assert not x_to_kx_contain_same_digits(125875, 2)
assert not x_to_kx_contain_same_digits(125874, 3)
def find_smallest_x_such_that_x_to_6x_contain_same_digits(kmax: int=6) -> int:
x = 1
while True:
if x_to_kx_contain_same_digits(x, kmax):
print(f"Found a solution x = {x}, proof:")
for k in range(1, kmax + 1):
print(f" k x = {k}*{x}={k*x}")
return x
x += 1
%%time
find_smallest_x_such_that_x_to_6x_contain_same_digits()
"""
Explanation: Done!
Problem 52: Permuted multiples
It can be seen that the number, 125874, and its double, 251748, contain exactly the same digits, but in a different order.
Find the smallest positive integer, x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits.
End of explanation
"""
%load_ext Cython
%%cython
def choose_kn(int k, int n):
# {k choose n} = {n-k choose n} so first let's keep the minimum
if k < 0 or k > n:
return 0
elif k > n-k:
k = n-k
# instead of computing with factorials (that blow up VERY fast),
# we can compute with product
product = 1
for p in range(k+1, n+1):
product *= p
for p in range(2, n-k+1):
product //= p
return product
choose_kn(10, 23)
def how_many_choose_kn_are_greater_than_x(max_n: int, x: int) -> int:
count = 0
for n in range(1, max_n + 1):
for k in range(1, n//2 + 1):
c_kn = choose_kn(k, n)
if c_kn > x:
count += 1
if n-k != k:
# we count twice for (n choose k) and (n choose n-k)
# only if n-k != k
count += 1
return count
how_many_choose_kn_are_greater_than_x(100, 1e6)
"""
Explanation: Done, it was quick.
Problem 53: Combinatoric selections
There are exactly ten ways of selecting three from five, 12345: 123, 124, 125, 134, 135, 145, 234, 235, 245, and 345.
In combinatorics, we use the notation, ${5 \choose 3} = 10$.
In general, $${n \choose r} = \frac{n!}{r! (n-r)!}$$
It is not until $n=23$, that a value exceeds one-million: ${23 \choose 10} = 1144066$.
How many, not necessarily distinct, values of ${n \choose r}$ for $1 \leq n \leq 100$, are greater than one-million?
End of explanation
"""
|
bataeves/kaggle
|
sber/Model-Copy-0.31592.ipynb
|
unlicense
|
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
def preprocess_anomaly(df):
df["full_sq"] = map(lambda x: x if x > 10 else float("NaN"), df["full_sq"])
df["life_sq"] = map(lambda x: x if x > 5 else float("NaN"), df["life_sq"])
df["kitch_sq"] = map(lambda x: x if x > 2 else float("NaN"), df["kitch_sq"])
# full_sq-life_sq<0 full_sq-kitch_sq<0 life_sq-kitch_sq<0 floor-max_floor<0
return df
def preprocess_categorial(df):
df = mess_y_categorial(df, 5)
df = df.select_dtypes(exclude=['object'])
return df
def apply_categorial(test, train):
test = mess_y_categorial_fold(test, train)
test = test.select_dtypes(exclude=['object'])
return test
def smoothed_likelihood(targ_mean, nrows, globalmean, alpha=10):
try:
return (targ_mean * nrows + globalmean * alpha) / (nrows + alpha)
except Exception:
return float("NaN")
def mess_y_categorial(df, nfolds=3, alpha=10):
from sklearn.utils import shuffle
from copy import copy
folds = np.array_split(shuffle(df), nfolds)
newfolds = []
for i in range(nfolds):
fold = folds[i]
other_folds = copy(folds)
other_folds.pop(i)
other_fold = pd.concat(other_folds)
newfolds.append(mess_y_categorial_fold(fold, other_fold, alpha=10))
return pd.concat(newfolds)
def mess_y_categorial_fold(fold_raw, other_fold, cols=None, y_col="price_doc", alpha=10):
fold = fold_raw.copy()
if not cols:
cols = list(fold.select_dtypes(include=["object"]).columns)
globalmean = other_fold[y_col].mean()
for c in cols:
target_mean = other_fold[[c, y_col]].groupby(c).mean().to_dict()[y_col]
nrows = other_fold[c].value_counts().to_dict()
fold[c + "_sll"] = fold[c].apply(
lambda x: smoothed_likelihood(target_mean.get(x), nrows.get(x), globalmean, alpha) if x else float("NaN")
)
return fold
def apply_macro(df):
macro_cols = [
'timestamp', "balance_trade", "balance_trade_growth", "eurrub", "average_provision_of_build_contract",
"micex_rgbi_tr", "micex_cbi_tr", "deposits_rate", "mortgage_value", "mortgage_rate",
"income_per_cap", "rent_price_4+room_bus", "museum_visitis_per_100_cap", "apartment_build"
]
return pd.merge(df, macro, on='timestamp', how='left')
def preprocess(df):
from sklearn.preprocessing import OneHotEncoder, FunctionTransformer
# df = apply_macro(df)
# df["timestamp_year"] = df["timestamp"].apply(lambda x: x.split("-")[0])
# df["timestamp_month"] = df["timestamp"].apply(lambda x: x.split("-")[1])
# df["timestamp_year_month"] = df["timestamp"].apply(lambda x: x.split("-")[0] + "-" + x.split("-")[1])
df = df.drop(["id", "timestamp"], axis=1)
ecology = ["no data", "poor", "satisfactory", "good", "excellent"]
df["ecology_index"] = map(ecology.index, df["ecology"].values)
bool_feats = [
"thermal_power_plant_raion",
"incineration_raion",
"oil_chemistry_raion",
"radiation_raion",
"railroad_terminal_raion",
"big_market_raion",
"nuclear_reactor_raion",
"detention_facility_raion",
"water_1line",
"big_road1_1line",
"railroad_1line",
"culture_objects_top_25"
]
for bf in bool_feats:
df[bf + "_bool"] = map(lambda x: x == "yes", df[bf].values)
df = preprocess_anomaly(df)
df['rel_floor'] = df['floor'] / df['max_floor'].astype(float)
df['rel_kitch_sq'] = df['kitch_sq'] / df['full_sq'].astype(float)
df['rel_life_sq'] = df['life_sq'] / df['full_sq'].astype(float)
df["material_cat"] = df.material.fillna(0).astype(int).astype(str).replace("0", "")
df["state_cat"] = df.state.fillna(0).astype(int).astype(str).replace("0", "")
# df["age_of_building"] = df["timestamp_year"].astype(float) - df["build_year"].astype(float)
df["num_room_cat"] = df.num_room.fillna(0).astype(int).astype(str).replace("0", "")
return df
# train_raw["price_doc"] = np.log1p(train_raw["price_doc"].values)
train_pr = preprocess(train_raw)
train = preprocess_categorial(train_pr)
train = train.fillna(-1)
X = train.drop(["price_doc"], axis=1)
y = train["price_doc"].values
"""
Explanation: Feature preprocessing
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X.values, y, test_size=0.20, random_state=43)
dtrain_all = xgb.DMatrix(X.values, y, feature_names=X.columns)
dtrain = xgb.DMatrix(X_train, y_train, feature_names=X.columns)
dval = xgb.DMatrix(X_val, y_val, feature_names=X.columns)
xgb_params = {
'max_depth': 5,
'n_estimators': 200,
'learning_rate': 0.01,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
# Uncomment to tune XGB `num_boost_rounds`
model = xgb.train(xgb_params, dtrain, num_boost_round=2000, evals=[(dval, 'val')],
early_stopping_rounds=40, verbose_eval=40)
num_boost_round = model.best_iteration
cv_output = xgb.cv(dict(xgb_params, silent=0), dtrain_all, num_boost_round=num_boost_round, verbose_eval=40)
cv_output[['train-rmse-mean', 'test-rmse-mean']].plot()
model = xgb.train(dict(xgb_params, silent=0), dtrain_all, num_boost_round=num_boost_round, verbose_eval=40)
print "predict-train:", rmse(model.predict(dtrain_all), y)
model = xgb.XGBRegressor(max_depth=5, n_estimators=100, learning_rate=0.01, nthread=-1, silent=False)
model.fit(X.values, y, verbose=20)
with open("scores.tsv", "a") as sf:
sf.write("%s\n" % rmsle(model.predict(X.values), y))
!tail scores.tsv
show_weights(model, feature_names=list(X.columns), importance_type="weight")
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
def validate(clf):
cval = np.abs(cross_val_score(clf, X.values, y, cv=3,
scoring=make_scorer(rmsle, False), verbose=2))
return np.mean(cval), cval
print validate(model)
"""
Explanation: Model training
End of explanation
"""
test = pd.read_csv("data/test.csv")
test_pr = preprocess(test)
test_pr = apply_categorial(test_pr, train_pr)
test_pr = test_pr.fillna(-1)
dtest = xgb.DMatrix(test_pr.values, feature_names=test_pr.columns)
y_pred = model.predict(dtest)
# y_pred = model.predict(test_pr.values)
# y_pred = np.exp(y_pred) - 1
submdf = pd.DataFrame({"id": test["id"], "price_doc": y_pred})
submdf.to_csv("data/submission.csv", header=True, index=False)
!head data/submission.csv
"""
Explanation: Submission
End of explanation
"""
|
GoogleCloudPlatform/ai-platform-samples
|
ai-platform/tutorials/unofficial/pytorch-on-google-cloud/sentiment_classification/pytorch-text-classification-caip-training.ipynb
|
apache-2.0
|
!pip -q install torch==1.7
!pip -q install transformers
!pip -q install datasets
!pip -q install tqdm
"""
Explanation: Training PyTorch Model on Google Cloud AI Platform Training
Fine Tuning Pretrained BERT Model for Sentiment Classification Task
Overview
This example is inspired by the Token-Classification notebook and run_glue.py from HuggingFace 🤗.
We will be fine-tuning bert-base-cased (pre-trained) model.
You can find the details about this model at 🤗 Hub.
For more notebooks of the state of the art PyTorch/Tensorflow/JAX you can explore 🤗 Notebooks.
Dataset
We will be using the IMDB movie review dataset from Hugging Face Datasets.
Objective
Get familiar with PyTorch on Cloud AI Platform notebooks instances.
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
Cloud AI Platform Notebook
Cloud AI Platform Training
Learn about Cloud AI Platform
pricing and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Setting up Notebook Environment
This notebook assumes PyTorch 1.7 DLVM development environment. You can create a Notebook instance using Google Cloud Console or gcloud command.
gcloud notebooks instances create example-instance \
--vm-image-project=deeplearning-platform-release \
--vm-image-family=pytorch-1-7-cu110-notebooks \
--machine-type=n1-standard-4 \
--location=us-central1-a \
--boot-disk-size=100 \
--accelerator-core-count=1 \
--accelerator-type=NVIDIA_TESLA_T4 \
--install-gpu-driver \
--network=default
NOTE: You must have GPU quota before you can create instances with GPUs. Check the quotas page to ensure that you have enough GPUs available in your project. If GPUs are not listed on the quotas page or you require additional GPU quota, request a quota increase. Free Trial accounts do not receive GPU quota by default.
Python Dependencies
Python dependencies required for this notebook are Transformers and Datasets and will be installed in the notebook itself.
End of explanation
"""
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the Kernel
Once you've installed the required packages, you need to restart the notebook kernel so it can find them.
End of explanation
"""
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
EvalPrediction, Trainer, TrainingArguments,
default_data_collator)
"""
Explanation: Python imports
End of explanation
"""
datasets = load_dataset("imdb")
batch_size = 16
max_seq_length = 128
model_name_or_path = "bert-base-cased"
datasets
"""
Explanation: Loading the dataset
We use the 🤗 Datasets library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions load_dataset and load_metric.
For this example we will use IMDB movie review dataset for sentiment classification task.
End of explanation
"""
print(
"Total # of rows in training dataset {} and size {:5.2f} MB".format(
datasets["train"].shape[0], datasets["train"].size_in_bytes / (1024 * 1024)
)
)
print(
"Total # of rows in test dataset {} and size {:5.2f} MB".format(
datasets["test"].shape[0], datasets["test"].size_in_bytes / (1024 * 1024)
)
)
"""
Explanation: The datasets object itself is DatasetDict, which contains one key for the training, validation and test set.
End of explanation
"""
datasets["train"][0]
"""
Explanation: To access an actual element, you need to select a split first, then give an index:
End of explanation
"""
label_list = datasets["train"].unique("label")
"""
Explanation: We use the unique method to extract the label list. This will allow us to experiment with other datasets without hard-coding labels.
End of explanation
"""
import random
import pandas as pd
from datasets import ClassLabel, Sequence
from IPython.display import HTML, display
def show_random_elements(dataset, num_examples=2):
assert num_examples <= len(
dataset
), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset) - 1)
while pick in picks:
pick = random.randint(0, len(dataset) - 1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):
df[column] = df[column].transform(
lambda x: [typ.feature.names[i] for i in x]
)
display(HTML(df.to_html()))
show_random_elements(datasets["train"])
"""
Explanation: To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset (automatically decoding the labels in passing).
End of explanation
"""
tokenizer = AutoTokenizer.from_pretrained(
model_name_or_path,
use_fast=True,
)
# 'use_fast' ensures that we use fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.
"""
Explanation: Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers Tokenizer which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that model requires.
To do all of this, we instantiate our tokenizer with the AutoTokenizer.from_pretrained method, which will ensure:
we get a tokenizer that corresponds to the model architecture we want to use,
we download the vocabulary used when pretraining this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
End of explanation
"""
tokenizer("Hello, this is one sentence!")
"""
Explanation: You can check which type of models have a fast tokenizer available and which don't on the big table of models.
You can directly call this tokenizer on one sentence:
End of explanation
"""
example = datasets["train"][4]
print(example)
tokenizer(
["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."],
is_split_into_words=True,
)
"""
Explanation: Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in this tutorial if you're interested.
Note: If, as is the case here, your inputs have already been split into words, you should pass the list of words to your tokenizer with the argument is_split_into_words=True:
End of explanation
"""
# Dataset loading repeated here to make this cell idempotent
# Since we are over-writing datasets variable
datasets = load_dataset("imdb")
# TEMP: We can extract this automatically but Unique method of the dataset
# is not reporting the label -1 which shows up in the pre-processing
# Hence the additional -1 term in the dictionary
label_to_id = {1: 1, 0: 0, -1: 0}
def preprocess_function(examples):
# Tokenize the texts
args = (examples["text"],)
result = tokenizer(
*args, padding="max_length", max_length=max_seq_length, truncation=True
)
# Map labels to IDs (not necessary for GLUE tasks)
if label_to_id is not None and "label" in examples:
result["label"] = [label_to_id[example] for example in examples["label"]]
return result
datasets = datasets.map(preprocess_function, batched=True, load_from_cache_file=True)
"""
Explanation: Note that transformers are often pretrained with subword tokenizers, meaning that even if your inputs have been split into words already, each of those words could be split again by the tokenizer. Let's look at an example of that:
End of explanation
"""
model = AutoModelForSequenceClassification.from_pretrained(
model_name_or_path, num_labels=len(label_list)
)
"""
Explanation: Fine Tuning the Model
Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is sequence (sentiment) classification, we use the AutoModelForSequenceClassification class. Like with the tokenizer, the from_pretrained method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which we can get from the features, as seen before):
End of explanation
"""
args = TrainingArguments(
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=1,
weight_decay=0.01,
output_dir="/tmp/cls",
)
"""
Explanation: The warning is telling us we are throwing away some weights (the layers used for the pretraining head) and randomly initializing some others (the new classification head). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.
To instantiate a Trainer, we will need to define three more things. The most important is the TrainingArguments, which is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:
End of explanation
"""
def compute_metrics(p: EvalPrediction):
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.argmax(preds, axis=1)
return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
"""
Explanation: Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the batch_size defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay.
The last thing to define for our Trainer is how to compute the metrics from the predictions. You can define your custom compute_metrics function. It takes an EvalPrediction object (a namedtuple with a predictions and label_ids field) and has to return a dictionary string to float.
End of explanation
"""
trainer = Trainer(
model,
args,
train_dataset=datasets["train"],
eval_dataset=datasets["test"],
data_collator=default_data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
"""
Explanation: Now we Create the Trainer object and we are almost ready to train.
End of explanation
"""
trainer.train()
trainer.save_model("./finetuned-bert-classifier")
"""
Explanation: We can now finetune our model by just calling the train method:
End of explanation
"""
trainer.evaluate()
"""
Explanation: The evaluate method allows you to evaluate again on the evaluation dataset or on another dataset:
End of explanation
"""
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name_or_path = "bert-base-cased"
label_text = {0: "Negative", 1: "Positive"}
saved_model_path = "./finetuned-bert-classifier"
def predict(input_text, saved_model_path):
    # initialize tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
    # preprocess and encode input text (the function's own argument, not a global)
    predict_input = tokenizer.encode(
        input_text, truncation=True, max_length=128, return_tensors="pt"
    )
    # load trained model
    loaded_model = AutoModelForSequenceClassification.from_pretrained(saved_model_path)
    # get predictions
    output = loaded_model(predict_input)
    # return labels
    label_id = torch.argmax(*output.to_tuple(), dim=1)
    print(f"Review text: {input_text}")
    print(f"Sentiment : {label_text[label_id.item()]}\n")
# example #1
review_text = (
"""Jaw dropping visual affects and action! One of the best I have seen to date."""
)
predict_input = predict(review_text, saved_model_path)
# example #2
review_text = """Take away the CGI and the A-list cast and you end up with film with less punch."""
predict_input = predict(review_text, saved_model_path)
"""
Explanation: Now that training and evaluation are finished, let's run the fine-tuned model on a few sample movie reviews to sanity-check its predictions.
Running Predictions with Sample Examples
End of explanation
"""
!cd python_package && ./scripts/train-local.sh
"""
Explanation: Run Training Job on Cloud AI Platform (CAIP)
You can do local experimentation on your AI Platform Notebooks instance. However, for larger datasets or models often a vertically scaled compute or horizontally distributed training is required. The most cost effective way to perform this task is Cloud AI Platform Training Service. AI Platform Training takes care of creating designated compute resources, performs the training task and ensures deletion of compute resources once the training job is finished.
In this part of the notebook, we will show you scaling your training job by packaging the code and submitting the training job to AI Platform Training.
Packaging the Training Application
Before runnning the training application with AI Platform Training, training application code and any dependencies must be uploaded into a Cloud Storage bucket that your Google Cloud project can access. This sections shows how to package and stage your application in the cloud.
There are two ways to package your application and dependencies and run on AI Platform Training:
Package application and Python dependencies manually using setup tools
Use custom containers to package dependencies using Docker containers
Recommended Training Application Structure
You can structure your training application in any way you like. However, the following structure is commonly used in AI Platform Training samples, and having your project's organization be similar to the samples can make it easier for you to follow the samples.
We have two directories python_package and custom_container showing both the packaging approaches. README.md files inside each directory has details on the directory structure and instructions on howw to run application locally and on the cloud.
.
├── custom_container
│ ├── Dockerfile
│ ├── README.md
│ ├── scripts
│ │ ├── train-cloud.sh
│ │ └── train-local.sh
│ └── trainer -> ../python_package/trainer/
├── python_package
│ ├── README.md
│ ├── scripts
│ │ ├── train-cloud.sh
│ │ └── train-local.sh
│ ├── setup.py
│ └── trainer
│ ├── __init__.py
│ ├── experiment.py
│ ├── metadata.py
│ ├── model.py
│ ├── task.py
│ └── utils.py
└── pytorch-text-classification-caip-training.ipynb --> This notebook
The main project directory contains your setup.py file or Dockerfile with the dependencies.
Use a subdirectory named trainer to store your main application module, along with scripts to submit training jobs locally or on the cloud
Inside trainer directory:
task.py - Main application module that 1) initializes and parses task arguments (hyperparameters), and 2) serves as the entry point to the trainer (a minimal, hypothetical sketch follows this list)
model.py - Includes a function to create a model with a sequence classification head from a pretrained model.
experiment.py - Runs the model training and evaluation experiment, and exports the final model.
metadata.py - Defines metadata for the classification task, such as the predefined model, dataset name and target labels
utils.py - Includes utility functions, such as data input functions to read data and a helper to save the model to a GCS bucket
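As a rough, hypothetical sketch of the task.py role described above (not the actual repository code; the experiment.run call and the extra hyperparameter arguments are assumptions), the entry point could look like this:
```python
# Hypothetical minimal task.py sketch (argument names beyond --job-dir and
# --model-name, and the experiment.run(args) helper, are assumptions).
import argparse

from trainer import experiment  # assumed to expose a run(args) function


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--job-dir", required=True,
                        help="GCS path or local directory for checkpoints and the exported model")
    parser.add_argument("--model-name", default="finetuned-bert-classifier")
    parser.add_argument("--num-epochs", type=int, default=2)          # assumed hyperparameter
    parser.add_argument("--learning-rate", type=float, default=2e-5)  # assumed hyperparameter
    return parser.parse_args()


if __name__ == "__main__":
    args = get_args()
    experiment.run(args)  # run training/evaluation and export the final model
```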
Using Python Packaging to Build Manually
In this notebook, we are using Huggingface datasets and fine-tuning a transformer model from the Huggingface Transformers library for a sentiment analysis task. We add the standard Python dependencies - transformers, datasets and tqdm - to the setup.py file. The find_packages() function inside setup.py picks up the trainer directory because it contains an __init__.py file, which marks it as a Python package to be included in the distribution.
```
==========================================
contents of setup.py file
==========================================
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = [
'torch==1.7',
'transformers',
'datasets',
'tqdm'
]
setup(
name='trainer',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='AI Platform | Training | PyTorch | Text Classification | Python Package'
)
```
Running Training Job Locally
Before submitting the job to the cloud, ensure the script runs locally. The script ./python_package/scripts/train-local.sh runs training locally using python -m trainer.task
python -m trainer.task \
--job-dir ${JOB_DIR} \
--model-name="finetuned-bert-classifier"
End of explanation
"""
!cd python_package && ./scripts/train-cloud.sh
"""
Explanation: Running Training Job on Cloud AI Platform
You submit the training job to Cloud AI Platform Training using gcloud ai-platform jobs submit training. The gcloud command stages your training application in the GCS bucket and submits the training job.
gcloud ai-platform jobs submit training ${JOB_NAME} \
--region ${REGION} \
--master-image-uri ${IMAGE_URI} \
--scale-tier=CUSTOM \
--master-machine-type=n1-standard-8 \
--master-accelerator=type=nvidia-tesla-t4,count=2 \
--job-dir ${JOB_DIR} \
--module-name trainer.task \
--package-path ${PACKAGE_PATH} \
-- \
--model-name="finetuned-bert-classifier"
Set the --master-image-uri flag to gcr.io/cloud-aiplatform/training/pytorch-gpu.1-7 to train on the pre-built PyTorch v1.7 GPU image
Set the --package-path flag to the path of your packaged application
Set the --module-name flag to trainer.task, which is the main module that starts your application
Set the --master-accelerator and --master-machine-type flags to configure the infrastructure that runs the application. Refer to the documentation to set machine types and scaling tiers
End of explanation
"""
!cd custom_container && ./scripts/train-local.sh
"""
Explanation: Using Custom Containers
To create a training job with a custom container, you have to define a Dockerfile that installs the dependencies required for the training job. Then, you build and test your Docker image locally to verify it before using it with AI Platform Training.
```
==========================================
contents of Dockerfile
==========================================
Install pytorch
FROM gcr.io/cloud-aiplatform/training/pytorch-gpu.1-7
WORKDIR /root
Installs google-cloud-storage, transformers, datasets and tqdm.
RUN pip install google-cloud-storage transformers datasets tqdm
Copies the trainer code to the docker image.
COPY ./trainer/__init__.py ./trainer/__init__.py
COPY ./trainer/experiment.py ./trainer/experiment.py
COPY ./trainer/utils.py ./trainer/utils.py
COPY ./trainer/metadata.py ./trainer/metadata.py
COPY ./trainer/model.py ./trainer/model.py
COPY ./trainer/task.py ./trainer/task.py
Set up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
```
Running Training Job Locally with Custom Container
Before submitting the job to the cloud, ensure the script runs locally. The script ./custom_container/scripts/train-local.sh builds the Docker image and runs training locally inside a container:
```
Build the docker image
docker build -f Dockerfile -t ${IMAGE_URI} ./
Test your docker image locally
echo "Running the Docker Image"
docker run ${IMAGE_URI} \
--job-dir ${JOB_DIR} \
--model-name="finetuned-bert-classifier"
```
End of explanation
"""
!cd custom_container && ./scripts/train-cloud.sh
"""
Explanation: Running Training Job on Cloud AI Platform with Custom Container
Before submitting the training job, you need to push the image to Google Cloud Container Registry and then submit the training job to Cloud AI Platform Training using gcloud ai-platform jobs submit training.
```
Deploy the docker image to Cloud Container Registry
docker push ${IMAGE_URI}
Submit the training job
gcloud ai-platform jobs submit training ${JOB_NAME} \
--region ${REGION} \
--master-image-uri ${IMAGE_URI} \
--scale-tier=CUSTOM \
--master-machine-type=n1-standard-8 \
--master-accelerator=type=nvidia-tesla-t4,count=2 \
--job-dir ${JOB_DIR} \
-- \
--model-name="finetuned-bert-classifier"
```
Set the --master-image-uri flag to the custom container image pushed to Google Cloud Container Registry
Set the --master-accelerator and --master-machine-type flags to configure the infrastructure that runs the application. Refer to the documentation to set machine types and scaling tiers
End of explanation
"""
!gcloud ai-platform jobs describe $JOB_NAME
"""
Explanation: Monitoring Training Job on Cloud AI Platform (CAIP)
After you submit your job, you can monitor the job status using the gcloud ai-platform jobs describe $JOB_NAME command
End of explanation
"""
!gcloud ai-platform jobs stream-logs $JOB_NAME
"""
Explanation: You can stream logs using gcloud ai-platform jobs stream-logs $JOB_NAME
End of explanation
"""
|
NervanaSystems/neon_course
|
07 visualization callback.ipynb
|
apache-2.0
|
from neon.backends import gen_backend
from neon.initializers import Gaussian
from neon.layers import Affine
from neon.data import MNIST
from neon.transforms import Rectlin, Softmax
from neon.models import Model
from neon.layers import GeneralizedCost
from neon.transforms import CrossEntropyMulti
from neon.optimizers import GradientDescentMomentum
be = gen_backend(batch_size=128)
mnist = MNIST(path='data/')
train_set = mnist.train_iter
test_set = mnist.valid_iter
init_norm = Gaussian(loc=0.0, scale=0.01)
layers = []
layers.append(Affine(nout=100, init=init_norm, activation=Rectlin()))
layers.append(Affine(nout=10, init=init_norm,
activation=Softmax()))
mlp = Model(layers=layers)
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9)
"""
Explanation: Visualization Callback Example
Preamble
Before we dive into creating a callback, we'll need a simple model to work with. This tutorial uses a model similar to the one in neon's examples/mnist_mlp.py, but the same callback should apply to any model.
End of explanation
"""
import subprocess
subprocess.check_output(['pip', 'install', 'bokeh==0.11'])
"""
Explanation: Dependencies
This callback makes use of new features in bokeh 0.11, which needs to be installed before running the callback.
We can install the pip package using the notebook terminal or from inside the notebook itself.
After installation, execute 'Kernel-> restart and run all' to reload the kernel with the newly installed package version.
End of explanation
"""
from neon.callbacks.callbacks import Callbacks, Callback
from bokeh.plotting import output_notebook, figure, ColumnDataSource, show
from bokeh.io import push_notebook
from timeit import default_timer
class CostVisCallback(Callback):
"""
    Callback providing a live-updating plot of training (and validation) cost in the notebook.
"""
def __init__(self, epoch_freq=1,
minibatch_freq=1, update_thresh_s=0.65):
super(CostVisCallback, self).__init__(epoch_freq=epoch_freq,
minibatch_freq=minibatch_freq)
self.update_thresh_s = update_thresh_s
output_notebook()
self.fig = figure(name="cost", title="Cost", x_axis_label="Epoch", plot_width=900)
        self.train_source = ColumnDataSource(data=dict(x=[], y=[]))
self.train_cost = self.fig.line(x=[], y=[], source=self.train_source)
        self.val_source = ColumnDataSource(data=dict(x=[], y=[]))
self.val_cost = self.fig.line(x=[], y=[], source=self.val_source, color='red')
def on_train_begin(self, callback_data, model, epochs):
"""
A good place for one-time startup operations, such as displaying the figure.
"""
show(self.fig)
def on_epoch_begin(self, callback_data, model, epoch):
"""
Since the number of minibatches per epoch is not constant, calculate it here.
"""
self.start_epoch = self.last_update = default_timer()
self.nbatches = model.nbatches
def on_minibatch_end(self, callback_data, model, epoch, minibatch):
"""
Read the training cost already computed by the TrainCostCallback out of 'callback_data', and display it.
"""
now = default_timer()
mb_complete = minibatch + 1
mbstart = callback_data['time_markers/minibatch'][epoch-1] if epoch > 0 else 0
train_cost = callback_data['cost/train'][mbstart + minibatch]
mb_epoch_scale = epoch + minibatch / float(self.nbatches)
self.train_source.data['x'].append(mb_epoch_scale)
self.train_source.data['y'].append(train_cost)
if (now - self.last_update > self.update_thresh_s or mb_complete == self.nbatches):
self.last_update = now
push_notebook()
def on_epoch_end(self, callback_data, model, epoch):
"""
If per-epoch validation cost is being computed by the LossCallback, plot that too.
"""
_eil = self._get_cached_epoch_loss(callback_data, model, epoch, 'loss')
if _eil:
self.val_source.data['x'].append(1 + epoch)
self.val_source.data['y'].append(_eil['cost'])
push_notebook()
"""
Explanation: Callbacks
Neon provides an API for calling operations during the model fit. The progress bars displayed during training are an example of a callback, and we'll go through the process of adding a new callback that visualizes cost graphically instead of printing to screen.
To make a new callback, subclass from Callback, and implement the desired callback methods.
Each of the callback functions has access to the callback_data and model objects. callback_data is an H5 file that is saved when supplying the -o flag to neon, and callbacks should store any computed data into callback_data. Visualization callbacks can read already computed data, such as training or validation cost, from callback_data.
This callback implements the subset of the available callback functions that it needs:
http://neon.nervanasys.com/docs/latest/callbacks.html#creating-callbacks
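As a minimal sketch of this subclassing pattern (separate from CostVisCallback above, and assuming the standard LossCallback is also registered so the per-epoch loss is cached), a bare-bones callback could look like this:
```python
# Minimal example callback: print the cached validation cost at the end of each epoch.
from neon.callbacks.callbacks import Callback

class PrintLossCallback(Callback):

    def __init__(self, epoch_freq=1):
        super(PrintLossCallback, self).__init__(epoch_freq=epoch_freq)

    def on_epoch_end(self, callback_data, model, epoch):
        # reuse the same cached-loss helper used by CostVisCallback above
        loss = self._get_cached_epoch_loss(callback_data, model, epoch, 'loss')
        if loss:
            print("epoch %d: validation cost %.4f" % (epoch, loss['cost']))
```
It would be registered with callbacks.add_callback(PrintLossCallback()), just like the visualization callback.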
End of explanation
"""
callbacks = Callbacks(mlp, eval_set=test_set)
cv = CostVisCallback()
callbacks.add_callback(cv)
mlp.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost, callbacks=callbacks)
"""
Explanation: Running our callback
We'll create all of the standard neon callbacks, and then add ours at the end.
End of explanation
"""
|
WNoxchi/Kaukasos
|
FADL1/dogbreeds.ipynb
|
mit
|
# data = get_data(sz, bs)
# labels_df = pd.read_csv(labels_csv)
# labels_df.pivot_table(index='breed', aggfunc=len).sort_values('id', ascending=False)
# fn = PATH + data.trn_ds.fnames[0]
# PIL.Image.open(fn)
# size_d = {k: PIL.Image.open(PATH+k).size for k in data.trn_ds.fnames}
# row_sz, col_sz = list(zip(*size_d.values()))
# row_sz, col_sz = np.array(row_sz), np.array(col_sz)
# plt.hist(row_sz[row_sz < 1000]); plt.hist(col_sz[col_sz < 1000]);
from sklearn import metrics
data = get_data(sz, bs)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.lr_find()
learn.sched.plot()
learn.fit(2e-2, 2)
learn.fit(2e-2, 2)
learn.precompute=False
learn.fit(2e-2, 2)
learn.save('RNx50_224_pre')
# increasing size - taking advantage of the fully-convolutional architecture
learn.set_data(get_data(299, 48))
learn.fit(1e-2, 3, cycle_len=1)
learn.save('RNx50_224_pre')
data = get_data(299, 48)
learn = ConvLearner.pretrained(arch, data)
learn.load('RNx50_224_pre')
learn.freeze()
log_preds, y = learn.TTA()
probs = np.exp(log_preds)
accuracy(log_preds, y), metrics.log_loss(y, probs)
test_preds = np.exp(learn.TTA(is_test=True)[0])
"""
Explanation: In the code below, why are we choosing 300 as the size threshold in the if condition?
<><><><>
Great question. Since we have max_zoom=1.1, I figured we should ensure our images are at least sz*1.1. And I figured resizing them to 340x340 would save plenty of time, and leave plenty of room to experiment.
http://forums.fast.ai/t/dog-breed-identification-challenge/7464/51
Note this notebook was run with ... if sz < 300 ... since I didn't understand what was going on.
End of explanation
"""
from sklearn import metrics
PATH = "data/dogbreeds/"
arch = resnext50
sz = 224
bs = 64
labels_csv = f'{PATH}labels.csv'
# n = len(list(open(labels_csv)))-1
val_idxs = get_cv_idxs(0)
def get_data(sz, bs):
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_csv(PATH, 'train', labels_csv, bs=bs, tfms=tfms,
val_idxs=val_idxs, suffix='.jpg', test_name='test')
return data if sz < 300 else data.resize(340, 'tmp')
data = get_data(sz, bs)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 2)
learn.precompute=False
learn.fit(1e-2, 5, cycle_len=1)
learn.set_data(get_data(299, bs=32))
learn.fit(1e-2, 3, cycle_len=1)
learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2)
"""
Explanation: Rerunning without validation for predictions:
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
development/tutorials/building_a_system.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.Bundle()
"""
Explanation: Advanced: Building a System
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
b = phoebe.Bundle.default_binary()
"""
Explanation: Default Systems
Although the default empty Bundle doesn't include a system, there are available
constructors that create default systems. To create a simple binary with component tags
'binary', 'primary', and 'secondary' (as above), you could call default_binary:
End of explanation
"""
b = phoebe.default_binary()
print(b.hierarchy)
"""
Explanation: or for short:
End of explanation
"""
b = phoebe.default_binary(contact_binary=True)
print(b.hierarchy)
"""
Explanation: To build the same binary but as a contact system, you would call:
End of explanation
"""
b = phoebe.Bundle()
b.add_component(phoebe.component.star, component='primary')
b.add_component('star', component='secondary')
"""
Explanation: For more details on dealing with contact binary systems, see the Contact Binary Hierarchy Tutorial and the Contact Binary Example Script.
Adding Components Manually
IMPORTANT: in the vast majority of cases, starting with one of the default systems is sufficient. Below we will discuss the alternative method of building a system from scratch.
By default, an empty Bundle does not contain any information about our system.
So, let's first start by adding a few stars. Here we'll call the generic add_component method. This method works for any type of component in the system - stars, orbits, planets, disks, rings, spots, etc. The first argument needs to be a callable or the name of a callable in phoebe.parameters.component which include the following options:
orbit
star
envelope
add_component also takes a keyword argument for the 'component' tag. Here we'll give them component tags 'primary' and 'secondary' - but note that these are merely convenience labels and do not hold any special roles. Some tags, however, are forbidden if they clash with other tags or reserved values - so if you get error stating the component tag is forbidden, try using a different string.
End of explanation
"""
b.add_star('extrastarforfun', teff=6000)
"""
Explanation: But there are also shortcut methods for add_star and add_orbit. In these cases you don't need to provide the function, but only the component tag of your star/orbit.
Any of these functions also accept values for any of the qualifiers of the created parameters.
End of explanation
"""
b.add_orbit('binary')
"""
Explanation: Here we call the add_component method of the bundle with several arguments:
a function (or the name of a function) in phoebe.parameters.component. This
function tells the bundle what parameters need to be added.
component: the tag that we want to give this component for future reference.
any additional keyword arguments: you can also provide initial values for Parameters
that you know will be created. In the last example you can see that the
effective temperature will already be set to 6000 (in default units which is K).
and then we'll do the same to add an orbit:
End of explanation
"""
b.set_hierarchy(phoebe.hierarchy.binaryorbit, b['binary'], b['primary'], b['secondary'])
"""
Explanation: Defining the Hierarchy
At this point all we've done is add a bunch of Parameters to our Bundle, but
we still need to specify the hierarchical setup of our system.
Here we want to place our two stars (with component tags 'primary' and 'secondary') in our
orbit (with component tag 'binary'). This can be done with several different syntaxes sent to b.set_hierarchy:
End of explanation
"""
b.set_hierarchy(phoebe.hierarchy.binaryorbit(b['binary'], b['primary'], b['secondary']))
"""
Explanation: or
End of explanation
"""
b.get_hierarchy()
"""
Explanation: If you access the value that this sets via get_hierarchy, you'll see that it really just results
in a simple string representation:
End of explanation
"""
b.set_hierarchy('orbit:binary(star:primary, star:secondary)')
"""
Explanation: We could just as easily have used this string to set the hierarchy:
End of explanation
"""
b['hierarchy@system']
"""
Explanation: If at any point we want to flip the primary and secondary components or make
this binary a triple, its seriously as easy as changing this hierarchy and
everything else will adjust as needed (including cross-ParameterSet constraints,
and datasets)
The Hierarchy Parameter
Setting the hierarchy just sets the value of a single parameter (although it may take some time because it also does a lot of paperwork and manages constraints between components in the system). You can access that parameter as usual:
End of explanation
"""
b.get_hierarchy()
b.hierarchy
"""
Explanation: or through any of these shortcuts:
End of explanation
"""
print(b.hierarchy.get_stars())
print(b.hierarchy.get_orbits())
"""
Explanation: This HierarchyParameter then has several methods unique to itself. You can, for instance, list the component tags of all the stars or orbits in the hierarchy via get_stars or get_orbits, respectively:
End of explanation
"""
print(b.hierarchy.get_top())
"""
Explanation: Or you can ask for the component tag of the top-level item in the hierarchy via get_top.
End of explanation
"""
print(b.hierarchy.get_parent_of('primary'))
print(b.hierarchy.get_children_of('binary'))
print(b.hierarchy.get_child_of('binary', 0)) # here 0 means primary component, 1 means secondary
print(b.hierarchy.get_sibling_of('primary'))
"""
Explanation: And request the parent, children, child, or sibling of any item in the hierarchy via get_parent_of, get_children_of, get_child_of, or get_sibling_of.
End of explanation
"""
print(b.hierarchy.get_primary_or_secondary('secondary'))
"""
Explanation: We can also check whether a given component (by component tag) is the primary or secondary component in its parent orbit via get_primary_or_secondary. Note that here it's just a coincidence (although on purpose) that the component tag is also 'secondary'.
End of explanation
"""
|
scoaste/showcase
|
machine-learning/regression/week-6-local-regression-assignment-completed.ipynb
|
mit
|
import graphlab
"""
Explanation: Predicting house prices using k-nearest neighbors regression
In this notebook, you will implement k-nearest neighbors regression. You will:
* Find the k-nearest neighbors of a given query input
* Predict the output for the query input using the k-nearest neighbors
* Choose the best value of k using a validation set
Fire up GraphLab Create
End of explanation
"""
sales = graphlab.SFrame('../Data/kc_house_data_small.gl/')
"""
Explanation: Load in house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
End of explanation
"""
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
"""
Explanation: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
End of explanation
"""
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
normalized_features = feature_matrix / norms
return (normalized_features, norms)
"""
Explanation: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
End of explanation
"""
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
"""
Explanation: Split data into training, test, and validation sets
End of explanation
"""
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
"""
Explanation: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays:
End of explanation
"""
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
"""
Explanation: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.
End of explanation
"""
features_test[0]
"""
Explanation: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
End of explanation
"""
features_train[9]
"""
Explanation: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
End of explanation
"""
dist = lambda x, y : np.sqrt(np.sum((x-y)**2))
dist(features_test[0],features_train[9])
"""
Explanation: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
0.059723593716661257
Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once.
End of explanation
"""
dist_dict = {}
for i in range(10):
d = dist(features_test[0],features_train[i])
dist_dict[i] = d
print "distance between test[0] and train[" + str(dist_dict.keys()[i]) + "] is " + str(dist_dict.values()[i])
"""
Explanation: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. Through restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.
Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.
End of explanation
"""
from collections import OrderedDict
sorted_dist_dict = OrderedDict(sorted(dist_dict.items(), key=lambda t: t[1]))
print "min distance " + str(sorted_dist_dict.values()[0]) + " is to house index " + str(sorted_dist_dict.keys()[0])
"""
Explanation: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
min distance 0.052383627841 is to house 8 (index + 1 = 9)
End of explanation
"""
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
"""
Explanation: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):
End of explanation
"""
print features_train[0:3] - features_test[0]
"""
Explanation: The subtraction operator (-) in Numpy is vectorized as follows:
End of explanation
"""
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
"""
Explanation: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
End of explanation
"""
diff = features_train[::] - features_test[0]
"""
Explanation: Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation.
Perform 1-nearest neighbor regression
Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house.
End of explanation
"""
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
"""
Explanation: To test the code above, run the following cell, which should output a value -0.0934339605842:
End of explanation
"""
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
"""
Explanation: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
End of explanation
"""
dist_vect = lambda x, y : np.sqrt(np.sum((x-y)**2, axis=1))
distances = dist_vect(features_test[0], features_train[::])
"""
Explanation: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint: Do not forget to take the square root of the sum of squares.
End of explanation
"""
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
"""
Explanation: To test the code above, run the following cell, which should output a value 0.0237082324496:
End of explanation
"""
distances_vect = dist_vect(features_test[2], features_train[::])
print min(distances_vect)
train[np.where(distances_vect == min(distances_vect))[0][0]]['price']
"""
Explanation: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
QUIZ QUESTIONS
Take the query house to be third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
0.00286049526751
What is the predicted value of the query house based on 1-nearest neighbor regression?
249000
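For reference, a named version of that two-parameter function is just a thin wrapper around the vectorized computation above (a sketch equivalent to the dist_vect lambda; the name compute_distances is ours):
```python
# Sketch: distances from every training house to a single query house.
def compute_distances(features_instances, features_query):
    return np.sqrt(np.sum((features_instances - features_query)**2, axis=1))

# compute_distances(features_train, features_test[2]) gives the same values as
# dist_vect(features_test[2], features_train)
```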
End of explanation
"""
def knn(k, features_matrix, query_features_vector):
    knn_dist_vect = dist_vect(query_features_vector, features_matrix[::])
return np.argsort(knn_dist_vect)[:k].tolist()
"""
Explanation: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint: Look at the documentation for np.argsort.
End of explanation
"""
print knn(4,features_test[2],features_train)
"""
Explanation: QUIZ QUESTION
Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
[382, 1149, 4087, 3142]
End of explanation
"""
def predict_knn(k,features_matrix,output,query_features_vector):
knn_indexes = knn(k,query_features_vector,features_matrix[::])
print knn_indexes
print np.take(output,knn_indexes)
return np.average(np.take(output,knn_indexes))
"""
Explanation: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature vector of the query house, whose price we are predicting.
The function should return a predicted value of the query house.
Hint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses.
End of explanation
"""
print predict_knn(4,features_test[2],train['price'],features_train)
"""
Explanation: QUIZ QUESTION
Again taking the query house to be third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
413987.5
End of explanation
"""
features_test[:10].shape[0]
def predict_knn_set(k, features_matrix_data, output, features_matrix_query):
prediction_set = np.empty(features_matrix_query.shape[0])
for i in range(features_matrix_query.shape[0]):
prediction_set[i] = predict_knn(k, features_matrix_query[i], output, features_matrix_data)
return prediction_set
"""
Explanation: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature matrix for the query set.
The function should return a set of predicted values, one for each house in the query set.
Hint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation.
End of explanation
"""
predictions_knn = predict_knn_set(10, features_train, train['price'], features_test[:10])
print predictions_knn
np.where(predictions_knn == min(predictions_knn))[0][0]
print predict_knn(10,features_test[6],train['price'],features_train)
"""
Explanation: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
What is the index of the house in this query set that has the lowest predicted value?
6
What is the predicted value of this house?
350032.0
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
# Compute the RSS on the VALIDATION set for each k in [1, 15] and store it in rss_all
rss_all = []
for k in range(1, 16):
    preds = predict_knn_set(k, features_train, train['price'], features_valid)
    rss_all.append(np.sum((output_valid - preds)**2))
plt.plot(range(1, 16), rss_all, 'bo-')
"""
Explanation: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:
For k in [1, 2, ..., 15]:
Makes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set.
Computes the RSS for these predictions on the VALIDATION set
Stores the RSS computed above in rss_all
Report which k produced the lowest RSS on VALIDATION set.
(Depending on your computing environment, this computation may take 10-15 minutes.)
To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:
End of explanation
"""
|
STREAM3/pyisc
|
docs/pyISC_classification_example.ipynb
|
lgpl-3.0
|
import pyisc;
import numpy as np
from scipy.stats import poisson, norm, multivariate_normal
%matplotlib inline
from pylab import plot, figure
"""
Explanation: pyISC Example: Anomaly Detection with Classes
In this example, we extend the multivariate example to the use of classes. ISC also makes it possible to compute the anomaly score for different classes, so that apples are compared to apples and not to oranges. In addition, it is also possible to use the anomaly detector to classify unknown examples.
End of explanation
"""
n_classes = 3
normal_len = 10000
anomaly_len = 15
data = None
for i in range(n_classes):
po_normal = poisson(10+i)
po_normal2 = poisson(2+i)
gs_normal = norm(1+i, 12)
tmp = np.column_stack(
[
[1] * (normal_len),
list(po_normal.rvs(normal_len)),
list(po_normal2.rvs(normal_len)),
list(gs_normal.rvs(normal_len)),
[i] * (normal_len),
]
)
if data is None:
data = tmp
else:
data = np.r_[data,tmp]
# Add anomalies
for i in range(n_classes):
po_anomaly = poisson(25+i)
po_anomaly2 = poisson(3+i)
gs_anomaly = norm(2+i,30)
tmp = np.column_stack(
[
[1] * (anomaly_len),
list(po_anomaly.rvs(anomaly_len)),
list(po_anomaly2.rvs(anomaly_len)),
list(gs_anomaly.rvs(anomaly_len)),
[i] * (anomaly_len),
]
)
if data is None:
data = tmp
else:
data = np.r_[data,tmp]
"""
Explanation: Data with Classification
Create a data set with 3 feature columns drawn from different probability distributions.
End of explanation
"""
anomaly_detector = pyisc.AnomalyDetector(
component_models=[
pyisc.P_PoissonOnesided(1,0), # columns 1 and 0
pyisc.P_Poisson(2,0), # columns 2 and 0
pyisc.P_Gaussian(3) # column 3
],
output_combination_rule=pyisc.cr_max
)
"""
Explanation: Anomaly Detector
Create an anomaly detector using as first argument the used statistical models. The we use
- a one-sided Poisson distribution for modelling the first frequency column (column 1) (as in the first example),
- a two-sided Poisson distribution for the second frequency column (column 2),
- and a Gaussian (Normal) distribution for the last column (column 3).
Given that we now have more than one variable, it is necessary to also add a method to combine the output from the statistical models, which in this case is the maximum anomaly score of each component model:
End of explanation
"""
anomaly_detector.fit(data, y=4); # y is the class column or an array with classes
"""
Explanation: Train the anomaly detector
End of explanation
"""
scores = anomaly_detector.anomaly_score(data, y=4)
"""
Explanation: Compute the anomaly scores for each data point
End of explanation
"""
from pandas import DataFrame
df= DataFrame(data[:15], columns=['#Days', 'Freq1','Freq2','Measure','Class'])
df['Anomaly Score'] = scores[:15]
print df.to_string()
"""
Explanation: Anomaly Scores with Classes
Now we can print some examples of normal frequencies vs. anomaly scores for the first 15 normal data points:
End of explanation
"""
df= DataFrame(data[-45:], columns=['#Days', 'Freq1','Freq2','Measure','Class'])
df['Anomaly Score'] = scores[-45:]
print df.to_string()
"""
Explanation: The anomalous frequencies vs. anomaly scores for the 45 anomalous data points (15 per class):
End of explanation
"""
plot(scores, '.');
"""
Explanation: As can be seen above, the anomalous data also have higher anomaly scores than the normal frequencies, as they should.
This becomes even more visible if we plot the anomaly scores (y-axis) against each data point (x-axis):
End of explanation
"""
score_details = anomaly_detector.anomaly_score_details(data,y=4)
df= DataFrame(data[-45:], columns=['#Days', 'Freq1','Freq2','Measure','Class'])
df['Anomaly:Freq1'] = [detail[2][0] for detail in score_details[-45:]] # Anomaly Score of Freq1
df['Anomaly:Freq2'] = [detail[2][1] for detail in score_details[-45:]] # Anomaly Score of Freq2
df['Anomaly:Measure'] = [detail[2][2] for detail in score_details[-45:]] # Anomaly Score of Measure
df['Anomaly Score'] = [detail[0] for detail in score_details[-45:]] # Combined Anomaly Score
df
"""
Explanation: We can also look at the details of each column in terms of their individual anomaly scores:
End of explanation
"""
data2 = None
true_classes = []
length = 1000
for i in range(n_classes):
po_normal = poisson(10+i)
po_normal2 = poisson(2+i)
gs_normal = norm(1+i, 12)
tmp = np.column_stack(
[
[1] * (length),
list(po_normal.rvs(length)),
list(po_normal2.rvs(length)),
list(gs_normal.rvs(length)),
[None] * (length),
]
)
true_classes += [i] * length
if data2 is None:
data2 = tmp
else:
data2 = np.r_[data2,tmp]
"""
Explanation: Above, the last column corresponds to the same anomaly score as before; we can see that it equals the maximum of the individual anomaly scores to the left, which is the result of the combination rule specified for the anomaly detector.
Anomaly Detector as Classifier
Let us create a data set with unknown classes from the same distributions as above:
End of explanation
"""
from pandas import DataFrame
from sklearn.metrics import accuracy_score
result = DataFrame(columns=['Algorithm','Accuracy'])
clf = pyisc.SklearnClassifier.clf(anomaly_detector)
predicted_classes = clf.predict(data2)
acc = accuracy_score(true_classes, predicted_classes)
result.loc[0] = ['pyISC classifier', acc]
"""
Explanation: Then, we can also use the anomaly detector as a classifier to predict the class for each instance as below:
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
X = data.T[:-1].T
y = data.T[-1]
count = 1
for name, clf in zip(['GaussianNB',
'KNeighborsClassifier',
'RandomForestClassifier'],
[GaussianNB(),
KNeighborsClassifier(n_neighbors=1000,weights='distance'),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1)]):
clf.fit(X,y);
predicted_classes_SK= clf.predict(data2.T[:-1].T)
acc = accuracy_score(true_classes,predicted_classes_SK)
result.loc[count] = [name, acc]
count += 1
result
"""
Explanation: We can also compare it to some available classifiers in Scikit-learn (http://scikit-learn.org/):
End of explanation
"""
|
keras-team/keras-io
|
examples/keras_recipes/ipynb/bayesian_neural_networks.ipynb
|
apache-2.0
|
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
"""
Explanation: Probabilistic Bayesian Neural Networks
Author: Khalid Salama<br>
Date created: 2021/01/15<br>
Last modified: 2021/01/15<br>
Description: Building probabilistic Bayesian neural network models with TensorFlow Probability.
Introduction
Taking a probabilistic approach to deep learning allows us to account for uncertainty,
so that models can assign lower levels of confidence to incorrect predictions.
Sources of uncertainty can be found in the data, due to measurement error or
noise in the labels, or the model, due to insufficient data availability for
the model to learn effectively.
This example demonstrates how to build basic probabilistic Bayesian neural networks
to account for these two types of uncertainty.
We use TensorFlow Probability library,
which is compatible with Keras API.
This example requires TensorFlow 2.3 or higher.
You can install Tensorflow Probability using the following command:
python
pip install tensorflow-probability
The dataset
We use the Wine Quality
dataset, which is available in the TensorFlow Datasets.
We use the red wine subset, which contains 4,898 examples.
The dataset has 11 numerical physicochemical features of the wine, and the task
is to predict the wine quality, which is a score between 0 and 10.
In this example, we treat this as a regression task.
You can install TensorFlow Datasets using the following command:
python
pip install tensorflow-datasets
Setup
End of explanation
"""
def get_train_and_test_splits(train_size, batch_size=1):
    # We prefetch with a buffer the same size as the dataset because the dataset
# is very small and fits into memory.
dataset = (
tfds.load(name="wine_quality", as_supervised=True, split="train")
.map(lambda x, y: (x, tf.cast(y, tf.float32)))
.prefetch(buffer_size=dataset_size)
.cache()
)
# We shuffle with a buffer the same size as the dataset.
train_dataset = (
dataset.take(train_size).shuffle(buffer_size=train_size).batch(batch_size)
)
test_dataset = dataset.skip(train_size).batch(batch_size)
return train_dataset, test_dataset
"""
Explanation: Create training and evaluation datasets
Here, we load the wine_quality dataset using tfds.load(), and we convert
the target feature to float. Then, we shuffle the dataset and split it into
training and test sets. We take the first train_size examples as the train
split, and the rest as the test split.
End of explanation
"""
hidden_units = [8, 8]
learning_rate = 0.001
def run_experiment(model, loss, train_dataset, test_dataset):
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=learning_rate),
loss=loss,
metrics=[keras.metrics.RootMeanSquaredError()],
)
print("Start training the model...")
model.fit(train_dataset, epochs=num_epochs, validation_data=test_dataset)
print("Model training finished.")
_, rmse = model.evaluate(train_dataset, verbose=0)
print(f"Train RMSE: {round(rmse, 3)}")
print("Evaluating model performance...")
_, rmse = model.evaluate(test_dataset, verbose=0)
print(f"Test RMSE: {round(rmse, 3)}")
"""
Explanation: Compile, train, and evaluate the model
End of explanation
"""
FEATURE_NAMES = [
"fixed acidity",
"volatile acidity",
"citric acid",
"residual sugar",
"chlorides",
"free sulfur dioxide",
"total sulfur dioxide",
"density",
"pH",
"sulphates",
"alcohol",
]
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(1,), dtype=tf.float32
)
return inputs
"""
Explanation: Create model inputs
End of explanation
"""
def create_baseline_model():
inputs = create_model_inputs()
input_values = [value for _, value in sorted(inputs.items())]
features = keras.layers.concatenate(input_values)
features = layers.BatchNormalization()(features)
# Create hidden layers with deterministic weights using the Dense layer.
for units in hidden_units:
features = layers.Dense(units, activation="sigmoid")(features)
# The output is deterministic: a single point estimate.
outputs = layers.Dense(units=1)(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
"""
Explanation: Experiment 1: standard neural network
We create a standard deterministic neural network model as a baseline.
End of explanation
"""
dataset_size = 4898
batch_size = 256
train_size = int(dataset_size * 0.85)
train_dataset, test_dataset = get_train_and_test_splits(train_size, batch_size)
"""
Explanation: Let's split the wine dataset into training and test sets, with 85% and 15% of
the examples, respectively.
End of explanation
"""
num_epochs = 100
mse_loss = keras.losses.MeanSquaredError()
baseline_model = create_baseline_model()
run_experiment(baseline_model, mse_loss, train_dataset, test_dataset)
"""
Explanation: Now let's train the baseline model. We use the MeanSquaredError
as the loss function.
End of explanation
"""
sample = 10
examples, targets = list(test_dataset.unbatch().shuffle(batch_size * 10).batch(sample))[
0
]
predicted = baseline_model(examples).numpy()
for idx in range(sample):
print(f"Predicted: {round(float(predicted[idx][0]), 1)} - Actual: {targets[idx]}")
"""
Explanation: We take a sample from the test set and use the model to obtain predictions for them.
Note that since the baseline model is deterministic, we get a single
point estimate prediction for each test example, with no information about the
uncertainty of either the model or the prediction.
End of explanation
"""
# Define the prior weight distribution as Normal of mean=0 and stddev=1.
# Note that, in this example, the prior distribution is not trainable,
# as we fix its parameters.
def prior(kernel_size, bias_size, dtype=None):
n = kernel_size + bias_size
prior_model = keras.Sequential(
[
tfp.layers.DistributionLambda(
lambda t: tfp.distributions.MultivariateNormalDiag(
loc=tf.zeros(n), scale_diag=tf.ones(n)
)
)
]
)
return prior_model
# Define variational posterior weight distribution as multivariate Gaussian.
# Note that the learnable parameters for this distribution are the means,
# variances, and covariances.
def posterior(kernel_size, bias_size, dtype=None):
n = kernel_size + bias_size
posterior_model = keras.Sequential(
[
tfp.layers.VariableLayer(
tfp.layers.MultivariateNormalTriL.params_size(n), dtype=dtype
),
tfp.layers.MultivariateNormalTriL(n),
]
)
return posterior_model
"""
Explanation: Experiment 2: Bayesian neural network (BNN)
The object of the Bayesian approach for modeling neural networks is to capture
the epistemic uncertainty, which is uncertainty about the model fitness,
due to limited training data.
The idea is that, instead of learning specific weight (and bias) values in the
neural network, the Bayesian approach learns weight distributions
- from which we can sample to produce an output for a given input -
to encode weight uncertainty.
Thus, we need to define prior and the posterior distributions of these weights,
and the training process is to learn the parameters of these distributions.
End of explanation
"""
def create_bnn_model(train_size):
inputs = create_model_inputs()
features = keras.layers.concatenate(list(inputs.values()))
features = layers.BatchNormalization()(features)
# Create hidden layers with weight uncertainty using the DenseVariational layer.
for units in hidden_units:
features = tfp.layers.DenseVariational(
units=units,
make_prior_fn=prior,
make_posterior_fn=posterior,
kl_weight=1 / train_size,
activation="sigmoid",
)(features)
# The output is deterministic: a single point estimate.
outputs = layers.Dense(units=1)(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
"""
Explanation: We use the tfp.layers.DenseVariational layer instead of the standard
keras.layers.Dense layer in the neural network model.
End of explanation
"""
num_epochs = 500
train_sample_size = int(train_size * 0.3)
small_train_dataset = train_dataset.unbatch().take(train_sample_size).batch(batch_size)
bnn_model_small = create_bnn_model(train_sample_size)
run_experiment(bnn_model_small, mse_loss, small_train_dataset, test_dataset)
"""
Explanation: The epistemic uncertainty can be reduced as we increase the size of the
training data. That is, the more data the BNN model sees, the more it is certain
about its estimates for the weights (distribution parameters).
Let's test this behaviour by training the BNN model on a small subset of
the training set, and then on the full training set, to compare the output variances.
Train BNN with a small training subset.
End of explanation
"""
def compute_predictions(model, iterations=100):
predicted = []
for _ in range(iterations):
predicted.append(model(examples).numpy())
predicted = np.concatenate(predicted, axis=1)
prediction_mean = np.mean(predicted, axis=1).tolist()
prediction_min = np.min(predicted, axis=1).tolist()
prediction_max = np.max(predicted, axis=1).tolist()
prediction_range = (np.max(predicted, axis=1) - np.min(predicted, axis=1)).tolist()
for idx in range(sample):
print(
f"Predictions mean: {round(prediction_mean[idx], 2)}, "
f"min: {round(prediction_min[idx], 2)}, "
f"max: {round(prediction_max[idx], 2)}, "
f"range: {round(prediction_range[idx], 2)} - "
f"Actual: {targets[idx]}"
)
compute_predictions(bnn_model_small)
"""
Explanation: Since we have trained a BNN model, the model produces a different output each time
we call it with the same input, since each time a new set of weights is sampled
from the distributions to construct the network and produce an output.
The less certain the model weights are, the more variability (a wider range) we will
see in the outputs for the same inputs.
End of explanation
"""
num_epochs = 500
bnn_model_full = create_bnn_model(train_size)
run_experiment(bnn_model_full, mse_loss, train_dataset, test_dataset)
compute_predictions(bnn_model_full)
"""
Explanation: Train BNN with the whole training set.
End of explanation
"""
def create_probablistic_bnn_model(train_size):
inputs = create_model_inputs()
features = keras.layers.concatenate(list(inputs.values()))
features = layers.BatchNormalization()(features)
# Create hidden layers with weight uncertainty using the DenseVariational layer.
for units in hidden_units:
features = tfp.layers.DenseVariational(
units=units,
make_prior_fn=prior,
make_posterior_fn=posterior,
kl_weight=1 / train_size,
activation="sigmoid",
)(features)
    # Create a probabilistic output (Normal distribution), and use the `Dense` layer
# to produce the parameters of the distribution.
# We set units=2 to learn both the mean and the variance of the Normal distribution.
distribution_params = layers.Dense(units=2)(features)
outputs = tfp.layers.IndependentNormal(1)(distribution_params)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
"""
Explanation: Notice that the model trained with the full training dataset shows smaller range
(uncertainty) in the prediction values for the same inputs, compared to the model
trained with a subset of the training dataset.
Experiment 3: probabilistic Bayesian neural network
So far, the output of the standard and the Bayesian NN models that we built is
deterministic, that is, produces a point estimate as a prediction for a given example.
We can create a probabilistic NN by letting the model output a distribution.
In this case, the model captures the aleatoric uncertainty as well,
which is due to irreducible noise in the data, or to the stochastic nature of the
process generating the data.
In this example, we model the output as a IndependentNormal distribution,
with learnable mean and variance parameters. If the task was classification,
we would have used IndependentBernoulli with binary classes, and OneHotCategorical
with multiple classes, to model the distribution of the model output.
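As a hypothetical sketch (not part of this example), only the output head of create_probablistic_bnn_model would change for a binary classification task, roughly as follows:
```python
# Hypothetical binary-classification output head (sketch, not run in this example).
distribution_params = layers.Dense(
    units=tfp.layers.IndependentBernoulli.params_size(1)
)(features)
outputs = tfp.layers.IndependentBernoulli(1)(distribution_params)
# Training would again minimize the negative log-likelihood of the output distribution,
# e.g. loss = lambda targets, dist: -dist.log_prob(targets)
```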
End of explanation
"""
def negative_loglikelihood(targets, estimated_distribution):
return -estimated_distribution.log_prob(targets)
num_epochs = 1000
prob_bnn_model = create_probablistic_bnn_model(train_size)
run_experiment(prob_bnn_model, negative_loglikelihood, train_dataset, test_dataset)
"""
Explanation: Since the output of the model is a distribution, rather than a point estimate,
we use the negative loglikelihood
as our loss function to compute how likely to see the true data (targets) from the
estimated distribution produced by the model.
End of explanation
"""
prediction_distribution = prob_bnn_model(examples)
prediction_mean = prediction_distribution.mean().numpy().tolist()
prediction_stdv = prediction_distribution.stddev().numpy()
# The 95% CI is computed as mean ± (1.96 * stdv)
upper = (prediction_mean + (1.96 * prediction_stdv)).tolist()
lower = (prediction_mean - (1.96 * prediction_stdv)).tolist()
prediction_stdv = prediction_stdv.tolist()
for idx in range(sample):
print(
f"Prediction mean: {round(prediction_mean[idx][0], 2)}, "
f"stddev: {round(prediction_stdv[idx][0], 2)}, "
f"95% CI: [{round(upper[idx][0], 2)} - {round(lower[idx][0], 2)}]"
f" - Actual: {targets[idx]}"
)
"""
Explanation: Now let's produce an output from the model given the test examples.
The output is now a distribution, and we can use its mean and variance
to compute the confidence intervals (CI) of the prediction.
End of explanation
"""
|
jaety/little-pieces
|
py/Rock Paper Scissors.ipynb
|
bsd-3-clause
|
import numpy as np
from numpy.linalg import matrix_power
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Rock, Paper, Scissors or... People are Predictable
The NY Times created a Rock, Paper, Scissors bot. If you try it, chances are it'll win handily. No matter how hard you try, you're going to fall into patterns that the computer is going to be able to identify and then exploit. As the article notes, if you were capable of producing truly random throws, then on average you'd win about as much as you lose, but humans are really bad at acting truly randomly.
For example, people seriously underestimate the probability of streaks. Let's say I throw Rock/Paper/Scissors 100 times, trying to be random. What do you think is the likelihood that I (a person trying to be random) would throw 4 in a row at some point? How about 5 in a row?
I haven't conducted that study, but my guess is that in both cases, it would be very uncommon: maybe 10-25% of the people.
But how likely is it that a truly random computer throws a streak of 4? Well that's something we can calculate. And it turns out the odds are 92%. Even 5 in a row is likely to happen 56% of the time.
End of explanation
"""
def transition_matrix(streak_length):
""" TM[i,j] = Prob[transitioning to streak length i from streak length j]"""
tm = np.zeros((streak_length, streak_length))
tm[0,0:streak_length-1] = 2/3.0
tm[1:streak_length, 0:streak_length-1] = np.eye(streak_length-1) * 1/3.0
tm[streak_length-1, streak_length-1] = 1.0
return np.matrix(tm)
tm = transition_matrix(4); tm
"""
Explanation: How to compute the probabilities
How do we compute the probability of 4 in a row in a stream of 100 throws? We'll model it as a random walk around 4 possible states. After each throw, the possibilities will be that...
No streak. The last throw is different from the one before
2 element streak. The last two throws (but not the third) are the same
3 element streak. The last three throws (but not the fourth) are the same.
4 element streak somewhere. At some point we've seen 4 in a row.
Some things to note
* After 1 throw, we obviously start in State 1.
* If we ever reach state 4, we stay there forever.
* The probability of moving from State 1 to State 2, or State 2 to State 3, or State 3 to State 4 is 1/3
* The probability of moving from State 1,2,3 back to State 1 is 2/3.
Put that all together into a matrix of transition probabilities, where M[i,j] is the probability of going to state i given state j, and you get this...
End of explanation
"""
starting_vec = np.matrix([1,0,0,0]).transpose()
tm * starting_vec
"""
Explanation: Given the first bullet point, the vector of state probabilities after the first throw is simply [1,0,0,0]: a 100% probability of being in state 1.
If we want the state probabilities after 2 throws, we multiply this vector by the transition matrix tm, like so...
End of explanation
"""
def prob_of_run(streak_length, num_throws):
starting_vec = np.zeros((streak_length,1))
starting_vec[0] = 1.0
tm_n = matrix_power(transition_matrix(streak_length), num_throws - 1)
return (tm_n * starting_vec)[streak_length-1, 0]
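# Sanity check against the figures quoted above: a streak of 4 somewhere in 100 random throws
# should come out around 0.92, and a streak of 5 around 0.56.
print("P(streak of 4 in 100 throws): %.3f" % prob_of_run(4, 100))  # ~0.92
print("P(streak of 5 in 100 throws): %.3f" % prob_of_run(5, 100))  # ~0.56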
"""
Explanation: Then the state probabilities after N throws are tm^(N-1) * [1,0,0,0]' (the first throw just sets the starting vector)
End of explanation
"""
streak_lengths = range(2,10)
num_throws = [10, 25, 100, 500]
probs = [[prob_of_run(i, curr_throws) for i in streak_lengths] for curr_throws in num_throws]
line_fmts = ["go-","bo-","ro-", 'mo-']
for throws, prob, fmt in zip(num_throws, probs, line_fmts):
plt.plot(streak_lengths, prob, fmt)
plt.ylabel("probability")
plt.xlabel("streak length")
plt.grid()
plt.legend(["%d throws" % i for i in num_throws])
"""
Explanation: The probability of a streak of length 4 is just the last element of that vector.
Below you have the results for streaks of various lengths, and different throw counts.
End of explanation
"""
|
scoaste/showcase
|
machine-learning/regression/week-5-lasso-assignment-1-completed.ipynb
|
mit
|
import graphlab
"""
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
"""
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to float, before creating a new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
"""
Explanation: Create new features
As in Week 2, we consider features that are some transformations of inputs.
End of explanation
"""
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
"""
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
"""
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
"""
Explanation: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
End of explanation
"""
model_all_mask = model_all["coefficients"]["value"] > 0.0
model_all["coefficients"][model_all_mask].print_rows(num_rows=20)
"""
Explanation: Find what features had non-zero weight.
End of explanation
"""
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
"""
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
According to this list of weights, which of the features have been chosen?
* (intercept)
* bathrooms
* sqft_living
* sqft_living_sqrt
* grade
* sqft_above
Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do a three-way split into train, validation, and test sets:
* Split our sales data into 2 sets: training and test
* Further split our training data into two sets: train, validation
Be very careful that you use seed = 1 to ensure you get the same answer!
End of explanation
"""
import numpy as np
def get_rss(model, data, outcome):
predictions = model.predict(data)
residuals = predictions - outcome
rss = sum(pow(residuals,2))
return(rss)
l1_rss = {}
for l1 in np.logspace(1, 7, num=13):
l1_rss[l1] = get_rss(graphlab.linear_regression.create(training, target='price', features=all_features, validation_set=None,
l2_penalty=0., l1_penalty=l1, verbose=False),
validation,
validation["price"])
min_value = min(l1_rss.values())
min_key = [key for key, value in l1_rss.iteritems() if value == min_value]
print "l1 value " + str(min_key) + " yielded rss of " + str(min_value)
model_best_l1 = graphlab.linear_regression.create(training, target='price', features=all_features, validation_set=None,
l2_penalty=0., l1_penalty=10, verbose=False)
rss_best_l1 = get_rss(model_best_l1,testing,testing["price"])
print rss_best_l1
"""
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
When you call linear_regression.create() make sure you set validation_set = None.
Note: you can turn off the print out of linear_regression.create() with verbose = False
End of explanation
"""
model_best_l1_mask = model_best_l1["coefficients"]["value"] > 0.0
model_best_l1["coefficients"][model_best_l1_mask].print_rows(num_rows=20)
model_best_l1["coefficients"]["value"].nnz()
"""
Explanation: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
10
What is the RSS on TEST data of the model with the best l1_penalty?
1.56983602382e+14
End of explanation
"""
max_nonzeros = 7
"""
Explanation: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
18
Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two-phase procedure to achieve this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
"""
l1_penalty_values = np.logspace(8, 10, num=20)
l1_penalty_values
"""
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
"""
l1_penalty_nnz = {}
for l1 in l1_penalty_values:
model_l1_penalty = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, l2_penalty=0., l1_penalty=l1, verbose=False)
l1_penalty_nnz[l1] = model_l1_penalty["coefficients"]["value"].nnz()
print l1_penalty_nnz
from collections import OrderedDict
sorted_l1_penalty_nnz = OrderedDict(sorted(l1_penalty_nnz.items(), key=lambda t: t[0]))
print sorted_l1_penalty_nnz
l1_penalty_min = float('NaN')
l1_penalty_max = float('NaN')
for i in xrange(1,len(sorted_l1_penalty_nnz)):
if sorted_l1_penalty_nnz.values()[i-1] >= max_nonzeros and sorted_l1_penalty_nnz.values()[i] <= max_nonzeros:
l1_penalty_min = sorted_l1_penalty_nnz.keys()[i-1]
l1_penalty_max = sorted_l1_penalty_nnz.keys()[i]
break
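# A more direct way to pick the same boundaries, following the definition of l1_penalty_min /
# l1_penalty_max used in this assignment (strictly more / strictly fewer non-zeros than
# max_nonzeros). The *_alt names are illustrative only.
l1_penalty_min_alt = max(l1 for l1, nnz in l1_penalty_nnz.items() if nnz > max_nonzeros)
l1_penalty_max_alt = min(l1 for l1, nnz in l1_penalty_nnz.items() if nnz < max_nonzeros)
print l1_penalty_min_alt, l1_penalty_max_alt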
"""
Explanation: Now, implement a loop that searches through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
End of explanation
"""
print l1_penalty_min
print l1_penalty_max
"""
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
Hint: there are many ways to do this, e.g.:
* Programmatically within the loop above
* Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
End of explanation
"""
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
print l1_penalty_values
"""
Explanation: QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
2976351441.63
3792690190.73
Exploring the narrow range of values to find the solution with the right number of non-zeros that has the lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
"""
l1_penalty_rss = {}
for l1 in l1_penalty_values:
l1_penalty_model = graphlab.linear_regression.create(training, target='price', features=all_features, validation_set=None, l2_penalty=0., l1_penalty=l1, verbose=False)
l1_penalty_rss[l1] = (get_rss(l1_penalty_model,validation,validation["price"]), l1_penalty_model["coefficients"])
sorted_l1_penalty_rss = OrderedDict(sorted(l1_penalty_rss.items(), key=lambda t: t[1][0]))
for item in sorted_l1_penalty_rss.items():
if( item[1][1]["value"].nnz() == max_nonzeros):
print ("l1", item[0])
print ("rss", item[1][0])
l1_penalty_model_mask = item[1][1]["value"] > 0.0
item[1][1][l1_penalty_model_mask].print_rows(num_rows=20)
#print ("coefficients", item[1][1])
break
"""
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Measure the RSS of the learned model on the VALIDATION set
Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.
End of explanation
"""
|
tensorflow/docs
|
site/en/tutorials/images/transfer_learning_with_hub.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import time
import PIL.Image as Image
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import datetime
%load_ext tensorboard
"""
Explanation: Transfer learning with TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
TensorFlow Hub is a repository of pre-trained TensorFlow models.
This tutorial demonstrates how to:
Use models from TensorFlow Hub with tf.keras.
Use an image classification model from TensorFlow Hub.
Do simple transfer learning to fine-tune a model for your own image classes.
Setup
End of explanation
"""
mobilenet_v2 ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
inception_v3 = "https://tfhub.dev/google/imagenet/inception_v3/classification/5"
classifier_model = mobilenet_v2 #@param ["mobilenet_v2", "inception_v3"] {type:"raw"}
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE+(3,))
])
"""
Explanation: An ImageNet classifier
You'll start by using a classifier model pre-trained on the ImageNet benchmark dataset—no initial training required!
Download the classifier
Select a <a href="https://arxiv.org/abs/1801.04381" class="external">MobileNetV2</a> pre-trained model from TensorFlow Hub and wrap it as a Keras layer with hub.KerasLayer. Any <a href="https://tfhub.dev/s?q=tf2&module-type=image-classification/" class="external">compatible image classifier model</a> from TensorFlow Hub will work here, including the examples provided in the drop-down below.
End of explanation
"""
grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper
grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
"""
Explanation: Run it on a single image
Download a single image to try the model on:
End of explanation
"""
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
"""
Explanation: Add a batch dimension (with np.newaxis) and pass the image to the model:
End of explanation
"""
predicted_class = tf.math.argmax(result[0], axis=-1)
predicted_class
"""
Explanation: The result is a 1001-element vector of logits, scoring how likely the image is to belong to each ImageNet class.
The top class ID can be found with tf.math.argmax:
End of explanation
"""
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
plt.imshow(grace_hopper)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
"""
Explanation: Decode the predictions
Take the predicted_class ID (such as 653) and fetch the ImageNet dataset labels to decode the predictions:
End of explanation
"""
data_root = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
"""
Explanation: Simple transfer learning
But what if you want to create a custom classifier using your own dataset that has classes that aren't included in the original ImageNet dataset (that the pre-trained model was trained on)?
To do that, you can:
Select a pre-trained model from TensorFlow Hub; and
Retrain the top (last) layer to recognize the classes from your custom dataset.
Dataset
In this example, you will use the TensorFlow flowers dataset:
End of explanation
"""
batch_size = 32
img_height = 224
img_width = 224
train_ds = tf.keras.utils.image_dataset_from_directory(
str(data_root),
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size
)
val_ds = tf.keras.utils.image_dataset_from_directory(
str(data_root),
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size
)
"""
Explanation: First, load the image data off disk with tf.keras.utils.image_dataset_from_directory, which will generate a tf.data.Dataset:
End of explanation
"""
class_names = np.array(train_ds.class_names)
print(class_names)
"""
Explanation: The flowers dataset has five classes:
End of explanation
"""
normalization_layer = tf.keras.layers.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.
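# Quick sanity check (not part of the original tutorial): after rescaling, pixel values should
# fall in the [0, 1] range expected by TF Hub image models. The variable name is arbitrary.
image_batch_check, _ = next(iter(train_ds))
print(image_batch_check.numpy().min(), image_batch_check.numpy().max())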
"""
Explanation: Second, because TensorFlow Hub's convention for image models is to expect float inputs in the [0, 1] range, use the tf.keras.layers.Rescaling preprocessing layer to achieve this.
Note: You could also include the tf.keras.layers.Rescaling layer inside the model. Refer to the Working with preprocessing layers guide for a discussion of the tradeoffs.
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
"""
Explanation: Third, finish the input pipeline by using buffered prefetching with Dataset.prefetch, so you can yield the data from disk without I/O blocking issues.
These are some of the most important tf.data methods you should use when loading data. Interested readers can learn more about them, as well as how to cache data to disk and other techniques, in the Better performance with the tf.data API guide.
End of explanation
"""
result_batch = classifier.predict(train_ds)
predicted_class_names = imagenet_labels[tf.math.argmax(result_batch, axis=-1)]
predicted_class_names
"""
Explanation: Run the classifier on a batch of images
Now, run the classifier on an image batch:
End of explanation
"""
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_class_names[n])
plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
"""
Explanation: Check how these predictions line up with the images:
End of explanation
"""
mobilenet_v2 = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
inception_v3 = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor_model = mobilenet_v2 #@param ["mobilenet_v2", "inception_v3"] {type:"raw"}
"""
Explanation: Note: all images are licensed CC-BY, creators are listed in the LICENSE.txt file.
The results are far from perfect, but reasonable considering that these are not the classes the model was trained for (except for "daisy").
Download the headless model
TensorFlow Hub also distributes models without the top classification layer. These can be used to easily perform transfer learning.
Select a <a href="https://arxiv.org/abs/1801.04381" class="external">MobileNetV2</a> pre-trained model <a href="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" class="external">from TensorFlow Hub</a>. Any <a href="https://tfhub.dev/s?module-type=image-feature-vector&q=tf2" class="external">compatible image feature vector model</a> from TensorFlow Hub will work here, including the examples from the drop-down menu.
End of explanation
"""
feature_extractor_layer = hub.KerasLayer(
feature_extractor_model,
input_shape=(224, 224, 3),
trainable=False)
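# Optional check (not in the original tutorial): with trainable=False the layer should expose
# no trainable variables, only non-trainable ones.
print(len(feature_extractor_layer.trainable_variables), len(feature_extractor_layer.variables))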
"""
Explanation: Create the feature extractor by wrapping the pre-trained model as a Keras layer with hub.KerasLayer. Use the trainable=False argument to freeze the variables, so that the training only modifies the new classifier layer:
End of explanation
"""
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
"""
Explanation: The feature extractor returns a 1280-long vector for each image (the image batch size remains at 32 in this example):
End of explanation
"""
num_classes = len(class_names)
model = tf.keras.Sequential([
feature_extractor_layer,
tf.keras.layers.Dense(num_classes)
])
model.summary()
predictions = model(image_batch)
predictions.shape
"""
Explanation: Attach a classification head
To complete the model, wrap the feature extractor layer in a tf.keras.Sequential model and add a fully-connected layer for classification:
End of explanation
"""
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir,
histogram_freq=1) # Enable histogram computation for every epoch.
"""
Explanation: Train the model
Use Model.compile to configure the training process and add a tf.keras.callbacks.TensorBoard callback to create and store logs:
End of explanation
"""
NUM_EPOCHS = 10
history = model.fit(train_ds,
validation_data=val_ds,
epochs=NUM_EPOCHS,
callbacks=tensorboard_callback)
"""
Explanation: Now use the Model.fit method to train the model.
To keep this example short, you'll be training for just 10 epochs. To visualize the training progress in TensorBoard later, create and store logs with a TensorBoard callback.
End of explanation
"""
%tensorboard --logdir logs/fit
"""
Explanation: Start the TensorBoard to view how the metrics change with each epoch and to track other scalar values:
End of explanation
"""
predicted_batch = model.predict(image_batch)
predicted_id = tf.math.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
print(predicted_label_batch)
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/images/tensorboard_transfer_learning_with_hub.png?raw=1"/> -->
Check the predictions
Obtain the ordered list of class names from the model predictions:
End of explanation
"""
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_label_batch[n].title())
plt.axis('off')
_ = plt.suptitle("Model predictions")
"""
Explanation: Plot the model predictions:
End of explanation
"""
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
model.save(export_path)
export_path
"""
Explanation: Export and reload your model
Now that you've trained the model, export it as a SavedModel so you can reuse it later.
End of explanation
"""
reloaded = tf.keras.models.load_model(export_path)
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
abs(reloaded_result_batch - result_batch).max()
reloaded_predicted_id = tf.math.argmax(reloaded_result_batch, axis=-1)
reloaded_predicted_label_batch = class_names[reloaded_predicted_id]
print(reloaded_predicted_label_batch)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(reloaded_predicted_label_batch[n].title())
plt.axis('off')
_ = plt.suptitle("Model predictions")
"""
Explanation: Confirm that you can reload the SavedModel and that the model is able to output the same results:
End of explanation
"""
|
taliamo/Final_Project
|
organ_pitch/Scripts/.ipynb_checkpoints/upload_env_data-checkpoint.ipynb
|
mit
|
# I import useful libraries (with functions) so I can visualize my data
# I use Pandas because this dataset has word/string column titles and I like the readability features of commands and finish visual products that Pandas offers
import pandas as pd
import matplotlib.pyplot as plt
import re
import numpy as np
%matplotlib inline
#I want to be able to easily scroll through this notebook so I limit the length of the appearance of my dataframes
from pandas import set_option
set_option('display.max_rows', 10)
"""
Explanation: T. Martz-Oberlander, 2015-11-12, CO2 and Speed of Sound
Formatting ENVIRONMENTAL CONDITIONS pipe organ data for Python operations
NOTE: Here, pitch and frequency are used interchangeably to signify the speed of sound from organ pipes.
The entire script looks for mathematical relationships between CO2 concentration changes and pitch changes from a pipe organ. This script uploads, cleans data and organizes new dataframes, creates figures, and performs statistical tests on the relationships between variable CO2 and frequency of sound from a note played on a pipe organ.
This uploader script:
1) Uploads CO2, temp, and RH data files;
2) Munges it (creates a Date Time column for the time stamps), establishes column contents as floats;
3) Calculates expected frequency, as per Cramer's equation;
4) Imports output from pitch_data.py script, the dataframe with measured frequency;
5) Plots expected frequency curve, CO2 (ppm) curve, and measured pitch points in a figure.
[ Here I pursue data analysis route 1 (as mentionted in my organ_pitch/notebook.md file), which involves comparing one pitch dataframe with one dataframe of environmental characteristics taken at one sensor location. Both dataframes are compared by the time of data recorded. ]
End of explanation
"""
#I import a temp and RH data file
env=pd.read_table('../Data/CO2May.csv', sep=',')
#assigning columns names
env.columns=[['test', 'time','temp C', 'RH %', 'CO2_1', 'CO2_2']]
#I display my dataframe
env
#change data time variable to actual values of time.
env['time']= pd.to_datetime(env['time'])
#print the new table and the type of data.
print(env)
env.dtypes
"""
Explanation: Uploaded RH and temp data into Python
First I upload my data set(s). I am working with environmental data from different locations in the church at different dates. Files include: environmental characteristics (CO2, temperature (deg C), and relative humidity (RH) (%) measurements).
I can discard the CO2_2 column values since they are false measurements logged from an empty input jack in the CO2 HOBOWare ^(r) device.
End of explanation
"""
#Here I am trying to create a function for the above equation.
#I want to plug in each CO2_ave value for a time stamp (row) from the "env" data frame above.
#define coefficients (Cramer, 1992)
a0 = 331.5024
#a1 = 0.603055
#a2 = -0.000528
a9 = -(-85.20931) #need to account for negative values
#a10 = -0.228525
a14 = 29.179762
#xc = CO2 values from dataframe
#test function
def test_cramer():
    # compare against the expected value with a tolerance instead of exact float equality
    assert abs((a0 + ((a9)*400)/100 + a14*((400/1000000)**2)) - 672.3396446688) < 1e-6, 'Equation failure'
    return()
test_cramer()
#This function also converts ppm to mole fraction (just quantity as a proportion of total)
def cramer(data):
'''Calculate pitch from CO2_1 concentration'''
calc_freq = a0 + ((a9)*data)/100 + a14*((data/1000000)**2)
return(calc_freq)
#run the cramer values for the calculated frequency
#calc_freq = cramer(env['calc_freq'])
#define the new column as the output of the cramer function
#env['calc_freq'] = calc_freq
#Run the function for the input column (CO2 values)
env['calc_freq'] = cramer(env['CO2_1'])
cramer(env['CO2_1'])
#check the dataframe
#calculated frequency values seem reasonable based on changes in CO2
env
#Now I call in my measured pitch data,
#to be able to visually compare calculated and measured
#Import the measured pitch values--the output of pitch_data.py script
measured_freq = pd.read_table('../Data/pitches.csv', sep=',')
#change data time variable to actual values of time.
env['time']= pd.to_datetime(env['time'])
#I test to make sure I'm importing the correct data
measured_freq
"""
Explanation: Next
1. Create a function for expected pitch (frequency of sound waves) from CO2 data
2. Add expected_frequency to dataframe
Calculated pitch from CO2 levels
Here I use Cramer's equation for frequency of sound from CO2 concentration (1992).
freq = a0 + a1(T) + ... + (a9 +...) +... + a14(xc^2)
where xc is the mole fraction of CO2 and T is temperature. Full derivation of these equations can be found in the "Doc" directory.
I will later plot measured pitch (frequency) data points from my "pitch" data frame on top of these calculated frequency values for comparison.
End of explanation
"""
print(env['calc_freq'])
#define variables from dataframe columns
CO2_1 = env[['CO2_1']]
calc_freq=env[['calc_freq']]
#measured_pitch = output_from_'pitch_data.py'
#want to set x-axis as date_time
#how do I format the ax2 y axis scale
def make_plot(variable_1, variable_2):
'''Make a three variable plot with two axes'''
#plot title
plt.title('CO2 and Calculated Pitch', fontsize='14')
#twinx layering
ax1=plt.subplot()
ax2=ax1.twinx()
#ax3=ax1.twinx()
#call data for the plot
ax1.plot(CO2_1, color='g', linewidth=1)
ax2.plot(calc_freq, color= 'm', linewidth=1)
#ax3.plot(measured_freq, color = 'b', marker= 'x')
#axis labeling
ax1.yaxis.set_tick_params(labelcolor='grey')
ax1.set_xlabel('Sample Number')
ax1.set_ylabel('CO2 (ppm)', fontsize=12, color = 'g')
ax2.set_ylabel('Calculated Pitch (Hz)', fontsize=12, color='m')
#ax3.set_ylabel('Measured Pitch')
#axis limits
ax1.set_ylim([400,1300])
ax2.set_ylim([600, 1500])
#plt.savefig('../Figures/fig1.pdf')
#Close function
return()#'../Figures/fig1.pdf')
#Call my function to test it
make_plot(CO2_1, calc_freq)
measured_freq.head()
env.head()
# Freq vs. CO2 (draft sketch, kept as a comment since the intended x/y pairing is unclear)
#plt.plot(env.CO2_1, measured_freq.time, color='g', linewidth=1)
def make_fig(dataset, variable_1, variable_2, savename):
    '''Draft: plot two variables against the dataframe index on twin y-axes and save the figure'''
    #twinx layering
    ax1=plt.subplot()
    ax2=ax1.twinx()
    #plot 2 variables in the predetermined plot above
    ax1.plot(dataset.index, variable_1, 'k-', linewidth=2)
    ax2.plot(dataset.index, variable_2)
    #moving plot lines
    variable_2_spine=ax2.spines['right']
    variable_2_spine.set_position(('axes', 1.2))
    ax1.yaxis.set_tick_params(labelcolor='k')
    ax1.set_ylabel(variable_1.name, fontsize=13, color = 'k')
    ax2.set_ylabel(variable_2.name + '($^o$C)', fontsize=13, color='grey')
    #plt.savefig(savename)
    return(savename)
fig = plt.figure(figsize=(11,14))
plt.suptitle('')
# Unfinished layout sketch: colum1/colum2 and the ax1/ax2 axes are not defined at this point,
# so the lines below are kept as comments only.
#ax1.plot(colum1, colum2, 'k-', linewidth=2)
#ax1.set_ylim([0,1])
#ax2.set_ylim([0,1])
#ax1.set_xlabel('name', fontsize=14, y=0)
#ax1.set_ylabel
#ax2.set_ylabel
#convert 'object' (CO2_1) to float
new = pd.Series([env.CO2_1], name = 'CO2_1')
CO2_1 = new.tolist()
CO2_array = np.array(CO2_1)
#Test type of data in "CO2_1" column
env.CO2_1.dtypes
#How can I format it so it's not an object?
cramer(CO2_array)
#'float' object not callable--the data in "CO2_1" are objects and cannot be called into the equation
#cramer(env.CO2_ave)
env.dtypes
env.CO2_1.dtypes
new = pd.Series([env.CO2_1], name = 'CO2_1')
CO2_1 = new.tolist()
CO2_array = np.array(CO2_1)
#Test type of data in "CO2_1" column
env.CO2_1.dtypes
cramer(CO2_array)
type(CO2_array)
# To choose which CO2 value to use, I first visualize which seems normal
#Create CO2-only dataframs
CO2 = env[['CO2_1', 'CO2_2']]
#Make a plot
CO2_fig = plt.plot(CO2)
plt.ylabel('CO2 (ppm)')
plt.xlabel('Sample number')
plt.title('Two CO2 sensors, same time and place')
#plt.savefig('CO2_fig.pdf')
input_file = env
#Upload environmental data file
env = pd.read_table('', sep=',')
#assigning columns names
env.columns=[['test', 'date_time','temp C', 'RH %', 'CO2_1', 'CO2_2']]
#change data time variable to actual values of time.
env['date_time']= pd.to_datetime(env['date_time'])
#test function
#def test_cramer():
#assert a0 + ((a9)*400)/100 + a14*((400/1000000)**2) == 672.339644669, 'Equation failure, math-mess-up'
#return()
#Call the test function
#test_cramer()
#pitch calculator function from Cramer equation
def cramer(data):
'''Calculate pitch from CO2_1 concentration'''
    calc_freq = a0 + ((a9*data)/100) + a14*((data/1000000)**2)
return(calc_freq)
#Run the function for the input column (CO2 values) to get a new column of calculated_frequency
env['calc_freq'] = cramer(env['CO2_1'])
#Import the measured pitch values--the output of pitch_data.py script
measured_freq = pd.read_table('../organ_pitch/Data/munged_pitch.csv', sep=',')
#change data time variable to actual values of time.
env['time']= pd.to_datetime(env['time'])
#Function to make and save a plot
"""
Explanation: Visualizing the expected pitch values by time
1. Plot calculated frequency, CO2 (ppm), and measured frequency values
End of explanation
"""
|
AllenDowney/ThinkBayes2
|
examples/reddit_exam.ipynb
|
mit
|
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf
"""
Explanation: Think Bayes
Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
"""
from scipy.stats import norm
norm.ppf(0.19)
"""
Explanation: What's the standard deviation?
Here's a question that appeared on Reddit recently:
I am trying to approximate a distribution based on the 68-95-99.7 rule--but in reverse. I would like to find an approximation for standard deviation, given:
Assumed normal dist.
Mean = 22.6
n = 100
19 of n scored less than 10.0
Min = 0, Max = 37
My intuition tells me that it is possible to solve this problem but I don't think I learned how to do it in school and don't know the right words to use to look it up. Thanks for any assistance!!!
A user named efrique responded:
You have 19% less than 10
If the sample size were large enough (100 isn't very large, so this will have a fair bit of noise in it), you could just look at the 19th percentile of a normal. That's -0.8779 standard deviations below the mean, implying 22.6-10 = 12.6 is 0.8779 of a standard deviation.
First, let's check his math. I'll compute the 19th percentile of the standard normal distribution:
End of explanation
"""
sigma = 12.6 / 0.8779
sigma
"""
Explanation: So we expect the 19th percentile to be 0.8779 standard deviations below the mean. In the data, the 19th percentile is 12.6 points below the mean, which suggests that the standard deviation is
End of explanation
"""
sigma = 14
dist = norm(22.6, sigma)
ple10 = dist.cdf(10)
ple10
"""
Explanation: Let's see what Bayes has to say about it.
If we knew that the standard deviation was 14, for example, we could compute the probability of a score less than or equal to 10:
End of explanation
"""
from scipy.stats import binom
binom(100, ple10).pmf(19)
"""
Explanation: Then we could use the binomial distribution to compute the probability that 19 out of 100 are less than or equal to 10.
End of explanation
"""
hypos = np.linspace(1, 41, 101)
"""
Explanation: But we don't know the standard deviation. So I'll make up a range of possible values.
End of explanation
"""
ple10s = norm(22.6, hypos).cdf(10)
ple10s.shape
"""
Explanation: Now we can compute the probability of a score less than or equal to 10 under each hypothesis.
End of explanation
"""
likelihood1 = binom(100, ple10s).pmf(19)
likelihood1.shape
"""
Explanation: And the probability that 19 out of 100 would be less than or equal to 10.
End of explanation
"""
plt.plot(hypos, likelihood1)
plt.xlabel('Standard deviation')
plt.ylabel('Likelihood');
"""
Explanation: Here's what it looks like.
End of explanation
"""
prior = Pmf(1, hypos)
posterior = prior * likelihood1
posterior.normalize()
"""
Explanation: If we have no other information about sigma, we could use a uniform prior.
End of explanation
"""
posterior.plot()
plt.xlabel('Standard deviation')
plt.ylabel('PMF');
"""
Explanation: In that case the posterior looks just like the likelihood, except that the probabilities are normalized.
End of explanation
"""
posterior.max_prob()
"""
Explanation: The most likely value in the posterior distribution is 14.2, which is consistent with the estimate we computed above.
End of explanation
"""
posterior.mean()
"""
Explanation: The posterior mean is a little higher.
End of explanation
"""
posterior.credible_interval(0.9)
"""
Explanation: And the credible interval is pretty wide.
End of explanation
"""
sigma = 14
dist = norm(22.6, sigma)
plt0 = dist.cdf(0)
plt0
"""
Explanation: Using the minimum
However, we have left some information on the table. We also know that the low score was 0, which is the minimum score possible.
If we knew sigma, we could compute the probability of a score less than or equal to 0.
End of explanation
"""
binom(100, plt0).sf(0)
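# equivalently: 1 - binom(100, plt0).cdf(0)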
"""
Explanation: And the probability that at least one person gets a score less than or equal to 0.
I'm using sf, which computes the survival function, also known as the complementary CDF, or 1 - cdf(x).
End of explanation
"""
plt0s = norm(22.6, hypos).cdf(0)
"""
Explanation: With mean 22.6 and standard deviation 14, it is likely that someone would get a 0.
If the standard deviation were lower, it would be less likely, so this data provides some evidence, but not much.
Nevertheless, we can do the same computation for the range of possible sigmas.
End of explanation
"""
likelihood2 = binom(100, plt0s).sf(0)
"""
Explanation: And compute the likelihood that at least one person gets a 0.
End of explanation
"""
plt.plot(hypos, likelihood2)
plt.xlabel('Standard deviation')
plt.ylabel('Likelihood');
"""
Explanation: Here's what it looks like.
End of explanation
"""
prior = Pmf(1, hypos)
prior.normalize()
posterior2 = prior * likelihood1 * likelihood2
posterior2.normalize()
"""
Explanation: The fact that someone got a 0 rules out some low standard deviations, but they were already nearly ruled out, so this information doesn't have much effect on the posterior.
End of explanation
"""
posterior.plot()
posterior2.plot()
plt.xlabel('Standard deviation')
plt.ylabel('PMF');
"""
Explanation: Here's what the posteriors look like:
End of explanation
"""
posterior.mean(), posterior2.mean()
"""
Explanation: They are pretty much identical. However, by eliminating some lower standard deviations, the information we have about the minimum does increase the posterior mean, just slightly.
End of explanation
"""
# Solution
pgt37s = norm(22.6, hypos).sf(37)
# Solution
likelihood3 = binom(100, pgt37s).sf(0)
# Solution
plt.plot(hypos, likelihood2)
plt.xlabel('Standard deviation')
plt.ylabel('Likelihood');
# Solution
prior = Pmf(1, hypos)
prior.normalize()
posterior3 = prior * likelihood1 * likelihood2 * likelihood3
posterior3.normalize()
"""
Explanation: This section might be useful because it shows how to incorporate the information we have about the minimum score. But in this case it turns out to provide very little evidence about the standard deviation.
Using the maximum
We have one more piece of information to work with. We also know that the high score was 37, which is the maximum score possible.
As an exercise, compute the likelihood of this data under each of the hypothetical standard deviations in hypos and use the result to update the posterior.
You should find that it contains almost no additional evidence.
End of explanation
"""
# Solution
posterior.plot()
posterior2.plot()
posterior3.plot()
plt.xlabel('Standard deviation')
plt.ylabel('PMF');
# Solution
posterior.mean(), posterior2.mean(), posterior3.mean()
"""
Explanation: Here's what the posteriors look like:
End of explanation
"""
|
SKA-ScienceDataProcessor/crocodile
|
examples/notebooks/aaf.ipynb
|
apache-2.0
|
%matplotlib inline
import sys
sys.path.append('../..')
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = 12, 10
import numpy
import scipy
import scipy.special
from crocodile.clean import *
from crocodile.synthesis import *
from crocodile.simulate import *
from crocodile.antialias import *
from util.visualize import *
"""
Explanation: Anti-Aliasing Functions in Interferometry
End of explanation
"""
vlas = numpy.genfromtxt("../../data/configurations/VLA_A_hor_xyz.csv", delimiter=",")
uvw = xyz_to_baselines(vlas, numpy.arange(0,numpy.pi,0.04), numpy.pi/4) / 5
yyone = simulate_point(uvw, 0.001, 0.001)
yytwo = yyone + 5*simulate_point(uvw, 0.0025, 0.0025)
"""
Explanation: Test setup
We will use a field of view of 0.004 radian. We place one
source within the field of view ($l=m=0.001$) and another 5 times stronger source just outside ($l=m=0.0025$).
End of explanation
"""
theta = 0.004
lam = 30000
d,_,_=do_imaging(theta, lam, uvw, None, yyone, simple_imaging)
show_image(d, "simple[yyone]", theta)
print(d[40:60,40:60].std())
"""
Explanation: Simple Imaging
Imaging without convolution with just the first source within field of view:
End of explanation
"""
d,_,_=do_imaging(theta, lam, uvw, None, yytwo, simple_imaging)
show_image(d, "simple[yytwo]", theta)
print(d[40:60,40:60].std())
"""
Explanation: If we now again do simple imaging with both sources, we see that the strong
source at (0.0025, 0.0025) is getting "aliased" back into the field of view at (-0.0015, -0.0015):
End of explanation
"""
support = 6
aa = anti_aliasing_function(int(theta*lam), 0, support)
aa2 = numpy.outer(aa, aa)
pylab.rcParams['figure.figsize'] = 7, 5
pylab.plot(theta*coordinates(int(theta*lam)), aa); pylab.show()
show_image(aa2, "aa2", theta)
"""
Explanation: Anti-aliasing function
This is an example anti-aliasing function to use. It is separable, so we can work equivalently with one- or two-dimensional representations:
End of explanation
"""
oversample = 128
r = numpy.arange(-oversample*(support//2), oversample*((support+1)//2)) / oversample
kv=kernel_oversample(aa, oversample, support)
pylab.plot(r, numpy.transpose(kv).flatten().real);
"""
Explanation: After FFT-ing and extracting the middle this is what the oversampled anti-aliasing
kernel looks like in grid space:
End of explanation
"""
pylab.plot(r, numpy.transpose(kv)[::-1].flatten().imag);
"""
Explanation: Imaginary part is close to nil:
End of explanation
"""
d,_,_=do_imaging(theta, lam, uvw, None, yyone, conv_imaging, kv=kv)
pylab.rcParams['figure.figsize'] = 12, 10
show_image(d, "aa_{one}", theta)
print(d[40:60,40:60].std())
"""
Explanation: Gridding with anti-aliasing function
This is the image of a single source within the field of view without correcting the taper. Note that brightness falls off
towards the edges of the picture. This is because applying the anti-aliasing convolution kernel is equivalent to multiplying the picture with the anti-aliasing function shown above.
End of explanation
"""
show_image(d/numpy.outer(aa, aa), "aa'_{one}", theta)
print((d/aa2)[40:60,40:60].std())
"""
Explanation: However, as the anti-aliasing function never goes to zero, we can easily revert this effect by dividing out the anti-aliasing function:
End of explanation
"""
d,_,_=do_imaging(theta, lam, uvw, None, yytwo, conv_imaging, kv=kv)
show_image(d/numpy.outer(aa, aa), "aa'_{two}", theta)
print((d/aa2)[40:60,40:60].std())
"""
Explanation: Now we have restored image performance with just a single source in the field of view. In fact,
imaging is a good deal cleaner than before (and the source slightly stronger), as with
oversampling we are now taking fractional coordinates of visibilities into account.
But most critically, if we now add back the source outside of the field of view, it gets
suppressed strongly. Because of its strength we still see noise centered around its off-screen
position at (0.0025, 0.0025), but the source itself is gone:
End of explanation
"""
|
AtmaMani/pyChakras
|
udemy_ml_bootcamp/Python-for-Data-Visualization/Geographical Plotting/Choropleth Maps.ipynb
|
mit
|
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Choropleth Maps
Offline Plotly Usage
Get imports and set everything up to be working offline.
End of explanation
"""
init_notebook_mode(connected=True)
"""
Explanation: Now set up everything so that the figures show up in the notebook:
End of explanation
"""
import pandas as pd
"""
Explanation: More info on other options for Offline Plotly usage can be found here.
Choropleth US Maps
Plotly's mapping can be a bit hard to get used to at first, remember to reference the cheat sheet in the data visualization folder, or find it online here.
End of explanation
"""
data = dict(type = 'choropleth',
locations = ['AZ','CA','NY'],
locationmode = 'USA-states',
colorscale= 'Portland',
text= ['text1','text2','text3'],
z=[1.0,2.0,3.0],
colorbar = {'title':'Colorbar Title'})
"""
Explanation: Now we need to begin to build our data dictionary. Easiest way to do this is to use the dict() function of the general form:
type = 'choropleth',
locations = list of states
locationmode = 'USA-states'
colorscale=
Either a predefined string:
'pairs' | 'Greys' | 'Greens' | 'Bluered' | 'Hot' | 'Picnic' | 'Portland' | 'Jet' | 'RdBu' | 'Blackbody' | 'Earth' | 'Electric' | 'YlOrRd' | 'YlGnBu'
or create a custom colorscale
text= list or array of text to display per point
z= array of values on z axis (color of state)
colorbar = {'title':'Colorbar Title'})
Here is a simple example:
End of explanation
"""
layout = dict(geo = {'scope':'usa'})
"""
Explanation: Then we create the layout nested dictionary:
End of explanation
"""
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
"""
Explanation: Then we use:
go.Figure(data = [data],layout = layout)
to set up the object that finally gets passed into iplot()
End of explanation
"""
df = pd.read_csv('2011_US_AGRI_Exports')
df.head()
"""
Explanation: Real Data US Map Choropleth
Now let's show an example with some real data as well as some other options we can add to the dictionaries in data and layout.
End of explanation
"""
data = dict(type='choropleth',
            colorscale = 'YlOrRd',
locations = df['code'],
z = df['total exports'],
locationmode = 'USA-states',
text = df['text'],
marker = dict(line = dict(color = 'rgb(255,255,255)',width = 2)),
colorbar = {'title':"Millions USD"}
)
"""
Explanation: Now our data dictionary with some extra marker and colorbar arguments:
End of explanation
"""
layout = dict(title = '2011 US Agriculture Exports by State',
geo = dict(scope='usa',
showlakes = True,
lakecolor = 'rgb(85,173,240)')
)
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
"""
Explanation: And our layout dictionary with some more arguments:
End of explanation
"""
df = pd.read_csv('2014_World_GDP')
df.head()
data = dict(
type = 'choropleth',
locations = df['CODE'],
z = df['GDP (BILLIONS)'],
text = df['COUNTRY'],
colorbar = {'title' : 'GDP Billions US'},
)
layout = dict(
title = '2014 Global GDP',
geo = dict(
showframe = False,
projection = {'type':'Mercator'}
)
)
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
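# As an aside (not part of the original lesson), the offline plot() imported above can also
# write the same figure to a standalone HTML file; the filename here is just an example.
plot(choromap, filename='2014_global_gdp.html', auto_open=False)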
"""
Explanation: World Choropleth Map
Now let's see an example with a World Map:
End of explanation
"""
|
chili-epfl/paper-JLA-deep-teaching-analytics
|
notebooks/evaluationFramework.ipynb
|
mit
|
import numpy
import pandas
from sklearn.cross_validation import cross_val_score
from sklearn.preprocessing import LabelEncoder, label_binarize
from sklearn.cross_validation import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
from sklearn import cross_validation
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score, make_scorer, f1_score
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from math import ceil, sqrt
from sklearn import decomposition
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2, f_classif
"""
Explanation: 1. Preparation
Load libraries
End of explanation
"""
data = pandas.read_csv("../data/processed/completeDataset.csv.gz", compression='gzip', header=0, sep=',', quotechar='"')
type(data)
"""
Explanation: Load dataset
End of explanation
"""
# We only look for predicting 4 states of activity and 3 of social, the rest (incl.NA) we bunch in 'Other'
#fulldata$Activity.clean <- ifelse(is.na(as.character(fulldata$Activity.win)) |
# as.character(fulldata$Activity.win)=='OFF' |
# as.character(fulldata$Activity.win)=='TDT' |
# as.character(fulldata$Activity.win)=='TEC',
# 'Other',as.character(fulldata$Activity.win))
#fulldata$Social.clean <- ifelse(is.na(as.character(fulldata$Social.win)),
# 'Other',as.character(fulldata$Social.win))
#names(fulldata)[7562:7563] <- c('Activity','Social')
#fulldata <- fulldata[,-c(1,4,5,6)]
#fulldata$Activity <- factor(fulldata$Activity)
#fulldata$Social <- factor(fulldata$Social)
#test <- fulldata[fulldata$session=='case2-day3-session1-teacher2' | fulldata$session=='case1-day1-session1-teacher1',]
#train <- fulldata[fulldata$session!='case2-day3-session1-teacher2' & fulldata$session!='case1-day1-session1-teacher1',]
notnull_data = data[data.notnull().all(axis=1)]
train = notnull_data.values
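# Note: `data2` below is assumed to hold the held-out test sessions, loaded in the same way
# as `data` above from a separate CSV (the loading cell is not included in this checkpoint).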
notnull_data2 = data2[data2.notnull().all(axis=1)]
test = notnull_data2.values
"""
Explanation: Basic split
... we leave out, as test set, one session per teacher, with variety of states of Activity and Social. This will give us a (quite optimistic) estimate of how good a "general model" (that works across subjects) can be on data from a teacher it has seen, in a classroom situation that it has seen (as there are multiple sessions for each kind of classroom situation), but with different students
End of explanation
"""
# Separate the target values (Activity and Social) from features, etc.
X_train = train[:,3:7558].astype(float)
Y_trainA = train[:,7558] #Activity
Y_trainS = train[:,7559] #Social
X_test = test[:,3:7558].astype(float)
Y_testA = test[:,7558]
Y_testS = test[:,7559]
# feature_names of X
feature_names = names[3:7558]
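# `names` is assumed to be the full list of column names of the original dataframe
# (e.g. something like data.columns); it is not defined in this checkpoint.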
idx_eyetracking = range(0,10)
idx_acc = range(10,150)
idx_audio = range(150,6555)
idx_video = range(6555,7555)
#print feature_names[idx_video]
"""
Explanation: Other splits are possible! (TODO: create a test harness that tries all of these on our best models)
General model -- Leave one teacher out: train on data for one teacher, test on data for another teacher (we only have two teachers!)
General model -- Leave one situation out: train on data for two teachers, but leave all the sessions for one kind of situation out
Personalized model -- Leave one session out: train on data for one teacher, but leave one session out
Personalized model -- Leave one situation out: train on data for one teacher, but leave one kind of situation out (can only be done with teacher 2)
Dataset overview
Both the training and testing datasets have the following general structure:
''Rows'' represent the features of each 10s window (overlapping/sliding 5s), ordered by session ID and its timestamp (in ms)
''Columns'' are the features themselves (they have more-or-less-cryptic column names), up to 7559 of them!
[,2]: ''timestamp'' within the session (in ms)
[,3:12]: ''eyetracking'' features (mean/sd pupil diameter, nr. of long fixations, avg. saccade speed, fixation duration, fixation dispersion, saccade duration, saccade amplitude, saccade length, saccade velocity)
[,13:152]: ''accelerometer'' features, including X, Y, Z (mean, sd, max, min, median, and 30 FFT coefficients of each of them) and jerk (mean, sd, max, min, median, and 30 FFT coefficients of each of it)
[,153:6557]: ''audio'' features extracted from an audio snippet of the 10s window, using openSMILE. Includes features about whether there is someone speaking (153:163), emotion recognition models (164:184), and brute-force audio spectrum features and characteristics used in various audio recognition challenges/tasks (185:6557)
[,6558:7559]: ''video'' features extracted from an image taken in the middle of the window (the 1000 values of the last layer when passing the immage through a VGG pre-trained model)
A basic benchmark: Random Forest
Since RF performed quite well in most cases for our LAK paper dataset, let's try it on the whole dataset and see what comes out, as a baseline for modelling accuracy. In principle, we are using AUC (area under the ROC curve) as the main metric for model comparison
Teacher activity
End of explanation
"""
|
mari-linhares/tensorflow-workshop
|
code_samples/RNN/colorbot/colorbot_including_solutions.ipynb
|
apache-2.0
|
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Tensorflow
import tensorflow as tf
print('Tested with TensorFlow 1.2.0')
print('Your TensorFlow version:', tf.__version__)
# Feeding function for enqueue data
from tensorflow.python.estimator.inputs.queues import feeding_functions as ff
# Rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Model builder
from tensorflow.python.estimator import model_fn as model_fn_lib
# Plot images with pyplot
%matplotlib inline
from matplotlib import pyplot as plt
# Helpers for data processing
import pandas as pd
import numpy as np
import argparse
"""
Explanation: Colorbot
Special thanks to @MarkDaoust that helped us with this material
In order to have a better experience follow these steps:
Just read all the notebook, try to understand what each part of the code is doing and get familiar with the implementation.
For each exercise in this notebook make a copy of this notebook and try to implement what is expected. We suggest the following order for the exercises: HYPERPARAMETERS, EXPERIMENT, DATASET
Troubles or doubts about the code/exercises? Ask the instructor about it, or check colorbot_solutions.ipynb for a possible implementation/instructions if available
Content of this notebook
In this notebook you'll find a full implementation of an RNN model using TensorFlow Estimators, including comments and details about how to do it.
Once you finish this notebook, you'll have a better understanding of:
* TensorFlow Estimators
* TensorFlow DataSets
* RNNs
What is colorbot?
Colorbot is an RNN model that receives a word (a sequence of characters) as input and learns to predict an RGB value that best represents this word. As a result we have a color generator!
Dependencies
End of explanation
"""
# Data files
TRAIN_INPUT = 'data/train.csv'
TEST_INPUT = 'data/test.csv'
MY_TEST_INPUT = 'data/mytest.csv'
# Parameters for training
BATCH_SIZE = 64
# Parameters for data processing
VOCAB_SIZE = 256
CHARACTERS = [chr(i) for i in range(VOCAB_SIZE)]
SEQUENCE_LENGTH_KEY = 'sequence_length'
COLOR_NAME_KEY = 'color_name'
"""
Explanation: Parameters
End of explanation
"""
# Returns the column values from a CSV file as a list
def _get_csv_column(csv_file, column_name):
with open(csv_file, 'r') as f:
df = pd.read_csv(f)
return df[column_name].tolist()
# Plots a color image
def _plot_rgb(rgb):
data = [[rgb]]
plt.figure(figsize=(2,2))
plt.imshow(data, interpolation='nearest')
plt.show()
"""
Explanation: Helper functions
End of explanation
"""
def get_input_fn(csv_file, batch_size, num_epochs=1, shuffle=True):
def _parse(line):
# each line: name, red, green, blue
# split line
items = tf.string_split([line],',').values
# get color (r, g, b)
color = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
# split color_name into a sequence of characters
color_name = tf.string_split([items[0]], '')
length = color_name.indices[-1, 1] + 1 # length = index of last char + 1
color_name = color_name.values
return color, color_name, length
def _length_bin(length, cast_value=5, max_bin_id=10):
'''
        Chooses a bin for a word given its length.
The goal is to use group_by_window to group words
with the ~ same ~ length in the same bin.
Each bin will have the size of a batch, so it can train faster.
'''
bin_id = tf.cast(length / cast_value, dtype=tf.int64)
return tf.minimum(bin_id, max_bin_id)
def _pad_batch(ds, batch_size):
return ds.padded_batch(batch_size,
padded_shapes=([None], [None], []),
padding_values=(0.0, chr(0), tf.cast(0, tf.int64)))
def input_fn():
# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data
dataset = (
tf.contrib.data.TextLineDataset(csv_file) # reading from the HD
.skip(1) # skip header
.repeat(num_epochs) # repeat dataset the number of epochs
.map(_parse) # parse text to variables
.group_by_window(key_func=lambda color, color_name, length: _length_bin(length), # choose a bin
reduce_func=lambda key, ds: _pad_batch(ds, batch_size), # apply reduce funtion
window_size=batch_size)
)
# for our "manual" test we don't want to shuffle the data
if shuffle:
dataset = dataset.shuffle(buffer_size=100000)
# create iterator
color, color_name, length = dataset.make_one_shot_iterator().get_next()
features = {
COLOR_NAME_KEY: color_name,
SEQUENCE_LENGTH_KEY: length,
}
return features, color
return input_fn
train_input_fn = get_input_fn(TRAIN_INPUT, BATCH_SIZE)
test_input_fn = get_input_fn(TEST_INPUT, BATCH_SIZE)
"""
Explanation: Input function
Here we are defining the input pipeline using the Dataset API.
One special operation that we're using is called group_by_window: it maps each consecutive element in this dataset to a key using key_func and then groups the elements by key. It then applies reduce_func to at most window_size elements matching the same key. All except the final window for each key will contain window_size elements; the final window may be smaller.
In the code below we use group_by_window to batch color names of similar length together. This makes training more efficient, since the RNN will be unrolled (approximately) the same number of steps in each batch.
Image from Sequence Models and the RNN API (TensorFlow Dev Summit 2017)
EXERCISE DATASET (first complete the EXERCISE EXPERIMENT): change the input function below so it just uses a normal padded_batch instead of sorting the batches. Then run each model using experiments and compare the efficiency (time, global_step/sec) using TensorBoard.
hint: to compare the implementations using TensorBoard, just copy the model_dir folder of both executions to the same directory (the model dir should be different each time you run the model) and point TensorBoard to it with: tensorboard --logdir=path_to_model_dirs_par
End of explanation
"""
x, y = get_input_fn(TRAIN_INPUT, 1)()
with tf.Session() as s:
print(s.run(x))
print(s.run(y))
"""
Explanation: Testing the input function
End of explanation
"""
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01):
def model_fn(features, labels, mode):
color_name = features[COLOR_NAME_KEY]
sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], dtype=tf.int32) # int64 -> int32
# ----------- Preparing input --------------------
# Creating a tf constant to hold the map char -> index
mapping = tf.constant(CHARACTERS, name="mapping")
table = tf.contrib.lookup.index_table_from_tensor(mapping, dtype=tf.string)
int_color_name = table.lookup(color_name)
# converting color names to one hot representation
color_name_onehot = tf.one_hot(int_color_name, depth=len(CHARACTERS) + 1)
# ---------- RNN -------------------
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# Runs the RNN model dynamically
# more about it at:
# https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=color_name_onehot,
sequence_length=sequence_length,
dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs,
sequence_length)
# ------------ Dense layers -------------------
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(
last_activations, units, activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
# ----------- Loss and Optimizer ----------------
loss = None
train_op = None
if mode != tf.estimator.ModeKeys.PREDICT:
loss = tf.losses.mean_squared_error(labels, predictions)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer=optimizer,
learning_rate=learning_rate)
return model_fn_lib.EstimatorSpec(mode,
predictions=predictions,
loss=loss,
train_op=train_op)
return model_fn
"""
Explanation: Creating the Estimator model
End of explanation
"""
model_fn = get_model_fn(rnn_cell_sizes=[256, 128], # size of the hidden layers
label_dimension=3, # since is RGB
dnn_layer_sizes=[128], # size of units in the dense layers on top of the RNN
optimizer='Adam', # changing optimizer to Adam
learning_rate=0.01)
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='colorbot')
"""
Explanation: EXERCISE HYPERPARAMETERS: try making changes to the model and see if you can improve the results.
Run the original model, run yours and compare them using Tensorboard. What improvements do you see?
hint 0: change the type of RNNCell, maybe a GRUCell? Change the number of hidden layers, or add dnn layers.
hint 1: to compare the implementations using TensorBoard, just copy the model_dir folder of both executions to the same directory (the model dir should be different each time you run the model) and point TensorBoard to it with: tensorboard --logdir=path_to_model_dirs_par
End of explanation
"""
NUM_EPOCHS = 40
for i in range(NUM_EPOCHS):
print('Training epoch %d' % i)
print('-' * 20)
estimator.train(input_fn=train_input_fn)
print('Evaluating epoch %d' % i)
print('-' * 20)
estimator.evaluate(input_fn = test_input_fn)
"""
Explanation: Training and Evaluating
EXERCISE EXPERIMENT: The code below works, but we can use an experiment instead. Add a cell that runs an experiment instead of interacting directly with the estimator.
hint 0: you'll need to change the train_input_fn definition, think about it...
hint 1: the change is related to the for loop
End of explanation
"""
def predict(estimator, input_file):
preds = estimator.predict(input_fn=get_input_fn(input_file, 1, shuffle=False))
color_names = _get_csv_column(input_file, 'name')
print()
for p, name in zip(preds, color_names):
color = tuple(map(int, p * 255))
print(name + ',', 'rgb:', color)
_plot_rgb(p)
predict(estimator, MY_TEST_INPUT)
"""
Explanation: Making Predictions
End of explanation
"""
pre_estimator = tf.estimator.Estimator(model_dir='pretrained', model_fn=model_fn)
predict(pre_estimator, MY_TEST_INPUT)
"""
Explanation: Pre-trained model predictions
In order to load the pre-trained model we can just create an estimator using the model_fn and point model_dir at the directory that contains the pre-trained model files; in this case it's 'pretrained'
End of explanation
"""
# small important detail, to train properly with the experiment you need to
# repeat the dataset the number of epochs desired
train_input_fn = get_input_fn(TRAIN_INPUT, BATCH_SIZE, num_epochs=40)
# create experiment
def generate_experiment_fn(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=test_input_fn
)
learn_runner.run(generate_experiment_fn, run_config=tf.contrib.learn.RunConfig(model_dir='model_dir'))
"""
Explanation: Colorbot Solutions
Here are the solutions to the exercises available at the colorbot notebook.
In order to compare the models we encourage you to use Tensorboard and also use play_colorbot.py --model_dir=path_to_your_model to play with the models and check how it does with general words other than color words.
EXERCISE EXPERIMENT
When using experiments you should make sure you repeat the dataset the number of epochs desired, since the experiment will "run the for loop for you". Also, you can add a parameter to run a fixed number of steps instead; training will run until the dataset ends or that number of steps is reached.
You can add this cell to your colorbot notebook and run it.
End of explanation
"""
def get_input_fn(csv_file, batch_size, num_epochs=1, shuffle=True):
def _parse(line):
# each line: name, red, green, blue
# split line
items = tf.string_split([line],',').values
# get color (r, g, b)
color = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
# split color_name into a sequence of characters
color_name = tf.string_split([items[0]], '')
length = color_name.indices[-1, 1] + 1 # length = index of last char + 1
color_name = color_name.values
return color, color_name, length
def input_fn():
# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data
dataset = (
tf.contrib.data.TextLineDataset(csv_file) # reading from the HD
.skip(1) # skip header
.map(_parse) # parse text to variables
.padded_batch(batch_size, padded_shapes=([None], [None], []),
padding_values=(0.0, chr(0), tf.cast(0, tf.int64)))
.repeat(num_epochs) # repeat dataset the number of epochs
)
# for our "manual" test we don't want to shuffle the data
if shuffle:
dataset = dataset.shuffle(buffer_size=100000)
# create iterator
color, color_name, length = dataset.make_one_shot_iterator().get_next()
features = {
COLOR_NAME_KEY: color_name,
SEQUENCE_LENGTH_KEY: length,
}
return features, color
return input_fn
"""
Explanation: EXERCISE DATASET
Run the colorbot experiment and notice the chosen model_dir
Below is the input function definition; we don't need some of the auxiliary functions anymore
Add this cell and then add the solution to the EXERCISE EXPERIMENT
choose a different model_dir and run the cells
Copy the model_dir of the two models to the same path
tensorboard --logdir=path
End of explanation
"""
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01):
def model_fn(features, labels, mode):
color_name = features[COLOR_NAME_KEY]
sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], dtype=tf.int32) # int64 -> int32
# ----------- Preparing input --------------------
# Creating a tf constant to hold the map char -> index
# this is need to create the sparse tensor and after the one hot encode
mapping = tf.constant(CHARACTERS, name="mapping")
table = tf.contrib.lookup.index_table_from_tensor(mapping, dtype=tf.string)
int_color_name = table.lookup(color_name)
# representing colornames with one hot representation
color_name_onehot = tf.one_hot(int_color_name, depth=len(CHARACTERS) + 1)
# ---------- RNN -------------------
# Each RNN layer will consist of a GRU cell
rnn_layers = [tf.nn.rnn_cell.GRUCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# Runs the RNN model dynamically
# more about it at:
# https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=color_name_onehot,
sequence_length=sequence_length,
dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs,
sequence_length)
# ------------ Dense layers -------------------
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(
last_activations, units, activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
# ----------- Loss and Optimizer ----------------
loss = None
train_op = None
if mode != tf.estimator.ModeKeys.PREDICT:
loss = tf.losses.mean_squared_error(labels, predictions)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer=optimizer,
learning_rate=learning_rate)
return model_fn_lib.EstimatorSpec(mode,
predictions=predictions,
loss=loss,
train_op=train_op)
return model_fn
"""
Explanation: As a result you will see something like:
We called the original model "sorted_batch" and the model using the simplified input function "simple_batch".
Notice that both models have basically the same loss at the last step, but the "sorted_batch" model runs much faster. Look at the global_step/sec metric, which measures how many steps the model executes per second: since "sorted_batch" has a larger global_step/sec, it trains faster.
If you don't believe me, you can switch TensorBoard to compare the models in a "relative" way, which compares the models over time. See the result below.
EXERCISE HYPERPARAMETERS
This one is more personal; what you see will depend on what you change in the model.
Below is a very simple example: we just changed the model to use a GRUCell, just in case...
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.2/examples/rossiter_mclaughlin.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
"""
Explanation: Rossiter-McLaughlin Effect
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
import numpy as np
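# The explanation below mentions initializing a logger; assuming the standard
# PHOEBE API, something like this would do it (the level is just an example):
logger = phoebe.logger(clevel='WARNING')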
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.set_value('q', value=0.7)
b.set_value('incl', component='binary', value=87)
b.set_value('requiv', component='primary', value=0.8)
b.set_value('teff', component='secondary', value=6500)
b.set_value('syncpar', component='secondary', value=1.5)
"""
Explanation: Now we'll try to exaggerate the effect by spinning up the secondary component.
End of explanation
"""
anim_times = phoebe.arange(0.44, 0.56, 0.002)
"""
Explanation: Adding Datasets
We'll add radial velocity, line profile, and mesh datasets. We'll compute the rvs through the whole orbit, but the mesh and line profiles right around the eclipse - just at the times that we want to plot for an animation.
End of explanation
"""
b.add_dataset('rv',
times=phoebe.linspace(0,1,201),
dataset='dynamicalrvs')
b.set_value_all('rv_method', dataset='dynamicalrvs', value='dynamical')
b.add_dataset('rv',
times=phoebe.linspace(0,1,201),
dataset='numericalrvs')
b.set_value_all('rv_method', dataset='numericalrvs', value='flux-weighted')
"""
Explanation: We'll add two identical datasets, one where we compute only dynamical RVs (won't include Rossiter-McLaughlin) and another where we compute flux-weighted RVs (will include Rossiter-McLaughlin).
End of explanation
"""
b.add_dataset('mesh',
compute_times=anim_times,
coordinates='uvw',
columns=['rvs@numericalrvs'],
dataset='mesh01')
"""
Explanation: For the mesh, we'll save some time by only exposing plane-of-sky coordinates and the 'rvs' column.
End of explanation
"""
b.add_dataset('lp',
compute_times=anim_times,
component=['primary', 'secondary'],
wavelengths=phoebe.linspace(549.5,550.5,101),
profile_rest=550)
"""
Explanation: And for the line-profile, we'll expose the line-profile for both of our stars separately, instead of for the entire system.
End of explanation
"""
b.run_compute(irrad_method='none')
"""
Explanation: Running Compute
End of explanation
"""
colors = {'primary': 'green', 'secondary': 'magenta'}
"""
Explanation: Plotting
Throughout all of these plots, we'll color the components green and magenta (to differentiate them from the red and blue of the RV mapping).
End of explanation
"""
afig, mplfig = b.plot(kind='rv',
c=colors,
ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},
show=True)
"""
Explanation: First let's compare between the dynamical and numerical RVs.
The dynamical RVs show the velocity of the center of each star along the line of sight. But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter-McLaughlin effect. You'll also notice that RVs are not available for the secondary star when it's completely occulted (they're nans in the array).
End of explanation
"""
afig, mplfig= b.plot(time=0.46,
fc='rvs@numericalrvs', ec='face',
c=colors,
ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},
highlight={'numericalrvs': True, 'dynamicalrvs': False},
axpos={'mesh': 211, 'rv': 223, 'lp': 224},
xlim={'rv': (0.4, 0.6)}, ylim={'rv': (-80, 80)},
tight_layout=True,
show=True)
"""
Explanation: Now let's make a plot of the line profiles and mesh during ingress to visualize what's happening.
Let's go through these options (see the plot API docs for more details):
* time: make the plot at this single time
* fc: (will be ignored by everything but the mesh): set the facecolor to the rvs column. This will automatically apply a red-blue color mapping.
* ec: disable drawing the edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to "see-through" the triangle edges.
* c: set the colors as defined in our dictionary above. This will apply to the rv, lp, and horizon datasets, but will be ignored by the mesh.
* ls: set the linestyle to differentiate between numerical and dynamical rvs.
* highlight: highlight the current time on the numerical rvs only.
* axpos: define the layout of the axes so the mesh plot takes up the horizontal space it needs.
* xlim: "zoom-in" on the RM effect in the RVs, allow the others to fallback on automatic limits.
* tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.
End of explanation
"""
afig, mplanim = b.plot(times=anim_times,
fc='rvs@numericalrvs', ec='face',
c=colors,
ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},
highlight={'numericalrvs': True, 'dynamicalrvs': False},
tight_layout=True, pad_aspect=False,
axpos={'mesh': 211, 'rv': 223, 'lp': 224},
xlim={'rv': (0.4, 0.6)}, ylim={'rv': (-80, 80)},
animate=True,
save='rossiter_mclaughlin.gif',
save_kwargs={'writer': 'imagemagick'})
"""
Explanation: Here we can see that the star in front (green) is eclipsing more of the blue-shifted part of the back star (magenta), distorting the line profile. This shifts the apparent center of the line profile to the right/red, and therefore artificially increases the radial velocities compared to the dynamical RVs.
Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:
times: pass our array of times that we want the animation to loop over.
pad_aspect: pad_aspect doesn't work with animations, so we'll disable to avoid the warning messages.
animate: self-explanatory.
save: we could use show=True, but that doesn't always play nice with jupyter notebooks
save_kwargs: you may need to change these for your setup; to create a gif, passing {'writer': 'imagemagick'} is often useful.
End of explanation
"""
|
physion/ovation-python
|
examples/requisitions-and-documents.ipynb
|
gpl-3.0
|
import uuid
from pprint import pprint
from datetime import date
from ovation.session import connect
"""
Explanation: Requisitions and Documents
This example shows the Ovation Service Lab (OSL) APIs for sample accessioning and report download. We'll create a simple Requisition with one sample. Next, we'll upload supplemental documents for the requisition (e.g. a face sheet, medication list, etc.). Finally, we'll download the complete report(s) for the requisition.
Setup
End of explanation
"""
s = connect(input("Email: "), api='https://services-staging.ovation.io')
"""
Explanation: Connection
s is a Session object representing a connection to the Ovation API
End of explanation
"""
organization_id = input('Organization id: ')
"""
Explanation: Many OSL APIs require the Organization id.
End of explanation
"""
tube = s.post(s.path('container'),
data={'container': {'type': 'Tube'}},
params={'organization_id': organization_id})
"""
Explanation: Creating a Requisition
Create a container for the sample (in this case, a Tube). The tube identifier and barcode will be generated by Ovation. You can supply them using the identifier and barcode attributes of the container if needed:
End of explanation
"""
project_name = input("Project name: ")
project = s.get(s.path('project'),
params={'q': project_name, # Find project by name
'organization_id': organization_id}).projects[0]
pprint(project)
"""
Explanation: We need to know which project this Requisition belongs to. If you already know the Project Id, you can skip this step. If you need to look up the project by name, use a query:
End of explanation
"""
# See http://lab-services.ovation.io/api/docs#!/requisitions/createRequisition for additional information
# that can be transmitted with the Requisition including patient demographics, diagnosis, medications,
# requested test(s)/panel(s) and billing information
requisition_data = {"identifier": str(uuid.uuid4()), # Any unique (within organization) identifier
"template": "RNA Requisition", # The requisition template, for the selected project
"custom_attributes": {'my-attribute': 1.0}, # Optional; Requisition custom attributes
"samples": [
{"identifier": str(uuid.uuid4()), # Any unique (within organization) identifier
"date_received": date.today().isoformat(),
"custom_attributes": {'my-sample-attribute': 1.0}, # Optional; Sample custom attributes
"sample_states": [
{"container_id": tube.id,
"position": "A01"}
]
}
]
}
req = s.post(s.path('requisition'),
data={'requisition': requisition_data},
params={'organization_id': organization_id,
"project_id": project.id})
pprint(req)
"""
Explanation: Create a Requisition and the Sample:
End of explanation
"""
local_file_path = "example.pdf"
import base64
with open(local_file_path, "rb") as document_file:
document_data = base64.b64encode(document_file.read())
doc_body = {
"document": {
"name": "file1.txt", # Document name
"tags": [
{
"name": "Supplemental Documents" # Special tag for supporting materials
}
],
"file_data": document_data
}
}
doc = s.post(s.path('documents'),
data=doc_body,
params={"requisition_id": req.requisition.id} # Supply the Id of the Requisition that will receive the document
)
"""
Explanation: Uploading documents to the Requisition
Once a Requisition is created, you can upload Documents to be stored securely with the Requisition. You may use any document tag(s) to which you have write permission. The "Supplemental Documents" label is used specifically for documents associated with the requisition form and supporting materials.
The simplest way to send the document data is as Base64-encoded data within the POST:
End of explanation
"""
report_documents = s.get(s.path('document'),
params={"requisition_id": req.requisition.id,
"label": "Complete Reports"})
"""
Explanation: Downloading complete report(s)
Once a Requisition has been processed, you can retrieve the completed clinical report(s) from the Requisition's "Complete Reports" label.
End of explanation
"""
|
4dsolutions/Python5
|
Public Key Cryptography.ipynb
|
mit
|
import math
def totatives(n : int) -> list:
"""get co-primes to n between 0 and n"""
return [totative for totative in range(n) if math.gcd(totative, n) == 1]
def totient(n):
"""how many totatives have we?"""
return len(totatives(n))
print("Totient of 12:", totient(12))
print("Totient of 100:", totient(100))
"""
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
Chapter 5: PUBLIC KEY CRYPTOGRAPHY
What we discuss below is a recent breakthrough in cryptography. When this cryptosystem first appeared in the UK at GCHQ, it was kept secret, because the implications had not been worked out.
However with the emergence of the web and the need for businesses to securely transact as strangers, a public key cryptosystem became very appealing. Exchanging secret keys like we used to would not be nearly as convenient. We needed public key crypto to transact business over the web.
The MIT team with initials R, S, A (Rivest, Shamir, and Adleman) managed to keep the USG (the NSA in particular) from clamping down on their patent, since expired. The history is pretty fascinating. Don't forget to read up on PGP (Pretty Good Privacy).
Another PK algorithm is Diffie-Hellman, which has its own way of letting Bob and Alice exchange information in the clear that will convert to a symmetric secret by some process opaque to Eve. A modern browser will often offer a contract starting with DH as a public exchange, followed by AES as the symmetric key source of payload.
The thing to remember is how a public process becomes private through the TLS phase of the relationship, one of handshaking. The client and server agree on some shared secrets, for the purpose of keeping the transaction private from 3rd parties. Of course what happens to the encrypted information on either end, i.e. what Alice and/or Bob does with it, is not included in these models.
The explanation of RSA below is yet another illustration of how "clock arithmetic" lets us work with the infinite permutation of symbols we call plaintext, and yet do invertible things with it, in conjunction with indecipherable keys.
Bob's RSA machine knows that anything incoming encrypted with Bob's public key is going to need that key's secret twin, not the public one, to unlock the payload. Alice never needed Bob's secret, only his public number. The way that public and private key pair were produced in the first place is what makes RSA still secure. Throwing computer power at a public key, in an attempt to reverse engineer the corresponding private one, is so far proving fruitless, or at least such is the educated opinion of cryptography experts.
Euler's Theorem
We're now in a position to test Euler's Theorem, which is going to help us understand an important topic: public key cryptography. We want to know what it means and how it's used in RSA, without offering a proof at this juncture.
We saw with our permutation class (which still needs a __pow__ method by the way) how Bob and Alice might use what's called symmetric key cryptography. This means Alice needs the same secret key Bob used, to encrypt a message, to decrypt it. They might use the Advanced Encryption Standard at this point, with any of three lengths of key.
AES won a global contest held by NIST in the late 1990s, with the Rijndael cipher selected as the winner and later renamed AES.
In public key cryptography, Bob and Alice both publish a public number, which in the RSA system is a really large number with only two prime factors, call it Bob_N and Alice_N.
When Bob sends a message to Alice, he uses her public key, Alice_N, to encrypt it and only Alice has a corresponding secret key to get the message back. Not even Bob can decrypt his own message.
Let's begin by understanding Euler's Theorem, which says:
$ a^{\varphi(n)} \equiv 1 \mod n$ where $\varphi$ is the totient function, and where a and n are relatively prime (co-prime, strangers).
That $\varphi(n)$ refers to the totient function first introduced in the previous chapter. Euler used the Greek letter phi for it. It tells us the number of totatives (coprimes) for a given number.
Lets get those two functions side by side, the one for finding totatives, and the other for telling us how many.
End of explanation
"""
pow(93, totient(100), 100) # pow takes a modulus as an optional 3rd argument
"""
Explanation: Below is a single test of Euler's Theorem. $\varphi(100)$ == 40, as we've learned. So any number coprime to 100, raised to the 40th power modulo 100, should be 1.
End of explanation
"""
from random import choice
coprime = choice(totatives(100)) # choose any one of 100's totatives (always coprime)
pow(coprime, 40, 100) # raise that totative to the totient power, modulo 100
"""
Explanation: Likewise, the code cell below should always return 1, no matter how many times we run it, because we start with a co-prime of 100, satisfying the condition of Euler's Theorem. The totient of 100 is fixed at 40.
The choice function (imported from the random module) is choosing a coprime at random.
End of explanation
"""
# convert to str for horizontal display
str([pow(coprime, 40, 100) for coprime in totatives(100)])
"""
Explanation: If you run the above code cell over and over, you'll always get the same answer: 1.
That's what Euler's Theorem asserts.
Let's test it for every totative of 100, why not?
End of explanation
"""
from random import choice
coprime = choice(totatives(100)) # choose any one of 100's totatives (always coprime)
result = pow(coprime, 40 + 1, 100) # one power higher than before
print(coprime, result) # should always be the same number
"""
Explanation: Yep, we get 1 every time! Thank you Euler.
Fermat's Little Theorem is in the same ballpark. Check it out. If N is a prime number, then totient(N) = N - 1.
$a^{p-1} \equiv 1 \mod{p}$ where gcd(a, p) == 1 (Fermat's Little Theorem)
Euler's Theorem will prove useful to us, as if we go one power higher, we effectively raise our message to the first power, which is to effectively leave m unchanged.
End of explanation
"""
from groups import M
N = 3 * 47 # generate a new N from scratch, per web browser session
m = M(42, N) # 42 is original message, N a product of 2 primes
e = 7 # raise to power, which others take as a signal how far to go
c = m ** e # encrypted
d = ~M(e, totient(N)) # inverse of e mod totient(3 * 47), kept private
pow(c, d.val) # getting our message, using secret d
"""
Explanation: If our totative was some secret, raising it to a power modulo some public key N would encrypt it. To get our totative back, we'll need to raise it to an even higher power, effectively one higher than the totient power.
$(a^{\varphi(n)})^k \cdot a \equiv 1^ka \equiv a \mod{n}$
RSA in a Nutshell
The basic idea of RSA is that raising a message m to some power e, modulo N, turns it into an encrypted message c.
You might think we would have a way of going backward. If e is public, say 3 or 7, why not just take the 3rd or 7th root of c, modulo whatever N? So far, no one has published a way to do that once N is large enough. Raising m to the eth power is what we call a trap door function meaning an inverse function has yet to be discovered, despite best efforts.
Some special preparation is needed first: randomly padding message m, a kind of seeding. Otherwise some clever attacks may crack into the Bob-Alice transmission, allowing Eve to listen in.
Think of the message going part way around a circle when we raise it to the eth power. We want it to go the rest of the way around the circle so that it pops out again, intact. Given Euler's Theorem, one more power beyond the totient of N power will do the trick.
We need the inverse of c modulo that totient, in other words. We've looked at code for that already, but will soon need a different method, given the large size of N.
In other words, raising anything to the 1st power is equivalent to the original. A zeroth power gives the multiplicative identity, typically. A**0 == One, whatever One means in that namespace. Raising to the totient power, modulo N, is like raising to the 0th power.
A**1 == A, in contrast.
$A^0 A = A^1 = A$.
Any power equivalent to 1, modulo totient(N), is our goal (thanks to Euler's Theorem), meaning that to decrypt c, we need some d such that (e * d) mod totient(N) == 1. That will be like raising our message to the first power, not changing it at all.
$(m^e)^d = m^{ed} = m^{k \varphi(n) + 1}\equiv m \mod{n}$
One notch beyond the totient power, or any multiple thereof, going around in our circle, is the original m, where we started the whole process by raising m by e.
So we need e's inverse, modulo totient(N). e will need to be coprime to totient(N) to ensure d exists. What makes RSA hard to crack is precisely the difficulty in determining d, even where (N, e) are public. As we'll see below, obtaining d requires knowing N's two prime factors.
The message m will pop right back out at exponent position (e * d), having gone all the way around the circle.
Alice_d is what only Alice has, remember, when Bob uses Alice_N and Alice_e to make c, and send it, fairly secure in knowing that even if Eve gets the bits, she won't have a way to unscramble them (unless she's secretly the same person as Alice, but then mathematics is no protection against some forms of deception).
Here's the complete round-trip process, from Alice to Bob or Bob to Alice:
End of explanation
"""
def xgcd(a,b):
"""
Extended Euclidean Algorithm (EEA)
Returns (m,n,gcd) such that: m*a + n*b = gcd(a,b)
"""
g,u,v = [b,a],[1,0],[0,1]
while g[1] != 0:
y = g[0] // g[1]
g[0],g[1] = g[1],g[0] % g[1]
u[0],u[1] = u[1],u[0] - y*u[1]
v[0],v[1] = v[1],v[0] - y*v[1]
m = v[0]%b
gcd = (m*a)%b
    n = (gcd - m*a)//b  # integer division keeps this exact for big ints
return (m,n,gcd)
"""
Explanation: 42 is not a very exciting message to be sending to Alice, however, and our N is ridiculously small, very easy to factor. What makes RSA hard to crack is that we need those two prime factors of N to compute its totient (counting totatives won't do) and thereby compute e's inverse, i.e. d, the power we need to raise c to (mod N) to get our m back (the original message).
If you recall Chapter 4, our method for finding the inverse of an M-number involved using brute force. We simply go through all the totatives until we find the right one.
Where huge Ns are involved, this technique is impractical. Is our whole cryptosystem just a pipe dream then? Assembling all the puzzle pieces to make RSA a reality was not easy. Much wine was consumed in the process.
Extended Euclidean Algorithm
What comes to the rescue is called the Extended Euclidean Algorithm (EEA). Let's look at that below. Its purpose is to find two numbers, m and n, such that ma + nb gives the gcd of a and b. This is always possible, even when the gcd is 1.
End of explanation
"""
m, n, gcd = xgcd(97, 100)
print("m: ", m)
print("n: ", n)
print(m*97 + n*100)
"""
Explanation: For example, we know 97 and 100 are coprime (strangers), so what m and n will work in the above equation?
End of explanation
"""
from groups import M
def inverse(n : M, elems : set) -> M:
for m in elems:
if (n * m).val == 1:
return m
a = M(97, 100)
elems = {M(x, 100) for x in totatives(100)} # set comprehension
m = inverse(a, elems)
m
"""
Explanation: That's another way of saying M(97, 100) has an inverse of M(33, 100), since once we subtract away the 100s (however many), we're left with 1.
Let's check that with our "brute force" method from the last chapter:
End of explanation
"""
def inverse(a, N):
"""
If gcd(a,b)=1, then (inverse(a, b) * a) mod b = 1,
otherwise, if gcd(a,b)!=1, return 0
Useful in RSA encryption, for finding d such that
e*d mod totient(n) == 1
"""
m, n, gcd = xgcd(a, N) # is inverse of a mod N
return (gcd==1) * m
print(inverse(7, 141))
"""
Explanation: To make these computations a little more clear, lets define inverse to take our coprime and modulus, and return the multiplicative inverse of our coprime. Remember we were using brute force to find the inverse previously, knowing that every coprime has its inverse. Now we have a way of finding this inverse without trying billions of candidates.
End of explanation
"""
M(7, 141) * M(121, 141)
"""
Explanation: Spending some quality time with both Euclid's Method (EA) for finding the gcd, and the above more elaborate method (EEA) is always good practice.
Their coding has gotten very clever over the years, as coding languages have evolved and become more expressive.
However these methods have their historical roots in the more distant past, before people programmed in any computer language.
End of explanation
"""
N = 47 * 19
(47-1)*(19 - 1) == totient(N) # defined above
"""
Explanation: Another important fact from Number Theory also comes to our rescue. What is the totient of a number? Counting totatives will take forever, practically, once N is large enough. However, if N is the product of two distinct primes, p and q, then it will have (p-1)(q-1) totatives. We can test this:
End of explanation
"""
p = 435958568325940791799951965387214406385470910265220196318705482144\
524085345275999740244625255428455944579
q = 5625457617268841037562770073044474\
81743876944007510545104946851094548396577479473472146228550799322939273
N = RSA210 = p * q
t = (p-1)*(q-1)
d = ~M(3, t) # need a d, such that 3*d mod t == 1
(3 * d.val) % t # confirm our ability to find inverses with EEA
from binascii import hexlify, unhexlify
phrase = hexlify(b"Able was i ere i saw Elba")
phrase
m = int(phrase, 16)
m
m = M(m, RSA210)
c = m ** 3 # encrypted
c
result = pow(c.val, d.val, RSA210)
result
unhexlify(hex(result)[2:]) # drop the leading 0x from output of hex()
"""
Explanation: Why is this true? 47 has 46 totatives, 1 through 46, and 19 has 18 totatives, because they're prime numbers: nothing smaller divides into them evenly (with no remainder), otherwise we would say they have factors, i.e. are composite.
Now imagine a multiplication-table-like grid of these two sets, with 1-46 across the top and 1-18 down the side, like a pandas DataFrame (see Chapter 1). By the Chinese Remainder Theorem, each totative of the product corresponds to exactly one cell in that grid (a residue coprime to 47 paired with a residue coprime to 19), so the product, having only these two prime factors, has 46 * 18 totatives.
Below is a published RSA-number from Wikipedia, already factored for us. If N were that easy to crack open, with a quick lookup in Wikipedia, RSA would not still serve as a first line of defense against unauthorized Eves.
The RSA company held contests to encourage people to try cracking these numbers, and some of them did get cracked, leading to the practice of using yet longer numbers, which take exponentially longer to crack according to current theory. The discovery of an algorithm that factors giant numbers quickly would revolutionize our vista, as far as cryptosystems go.
An Example
Here's that RSA number, named RSA-210:
End of explanation
"""
from primes import bigppr # big probable prime algorithm
bigppr()
"""
Explanation: We've nearly reached the end of our explication of RSA.
However there's a last puzzle piece to consider, once our goal is to create a composite from large primes: where do those large primes come from?
It's easy enough to string digits together, randomly, but how do we know if it's prime or not? We enter the realm of primality testing.
The Miller-Rabin test checks a candidate large number against a bunch of bases, running special tests that composite numbers rarely pass.
Primes are pretty common, and don't run out. The distance between consecutive primes is another subject of study.
The idea behind Miller-Rabin is that of a filter. If, after enough "bullets" have been fired, it doesn't yield, we call it "bullet proof" and good enough for our purposes.
More theoretical work goes here, to show that the chance of a composite slipping through can be made so vanishingly small that such "very probable primes" will do the job for p and q.
End of explanation
"""
from Crypto.PublicKey import RSA
key = RSA.generate( 2048 ) # that's 2048 bits
key
"""
Explanation: So much for bare-bones RSA. If you want to develop a truly robust cryptosystem from scratch, then do more reading outside the confines of this particular Notebook. For example, 3 is probably not the best exponent to use at the outset.
You might be thinking all this coding from scratch is too much work, and surely there are 3rd party packages that make RSA a lot easier to work with. You'd be right. We did all this coding from scratch to gain deeper insight into the mathematics; however, pyCrypto by Dwayne Litzenberger will do the job.
Let's check it out:
End of explanation
"""
key.p * key.q == key.n
"""
Explanation: The key object has everything you need to run a session. We expect p * q to equal n. Let's try:
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.19/_downloads/638c39682b0791ce4e430e4d2fcc4c45/plot_tf_dics.ipynb
|
bsd-3-clause
|
# Author: Roman Goj <roman.goj@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.event import make_fixed_length_events
from mne.datasets import sample
from mne.time_frequency import csd_fourier
from mne.beamformer import tf_dics
from mne.viz import plot_source_spectrogram
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
noise_fname = data_path + '/MEG/sample/ernoise_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
"""
Explanation: Time-frequency beamforming using DICS
Compute DICS source power [1]_ in a grid of time-frequency windows.
References
.. [1] Dalal et al. Five-dimensional neuroimaging: Localization of the
time-frequency dynamics of cortical activity.
NeuroImage (2008) vol. 40 (4) pp. 1686-1700
End of explanation
"""
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Pick a selection of magnetometer channels. A subset of all channels was used
# to speed up the example. For a solution based on all MEG channels use
# meg=True, selection=None and add mag=4e-12 to the reject dictionary.
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads',
selection=left_temporal_channels)
raw.pick_channels([raw.ch_names[pick] for pick in picks])
reject = dict(mag=4e-12)
# Re-normalize our empty-room projectors, which should be fine after
# subselection
raw.info.normalize_proj()
# Setting time windows. Note that tmin and tmax are set so that time-frequency
# beamforming will be performed for a wider range of time points than will
# later be displayed on the final spectrogram. This ensures that all time bins
# displayed represent an average of an equal number of time windows.
tmin, tmax, tstep = -0.5, 0.75, 0.05 # s
tmin_plot, tmax_plot = -0.3, 0.5 # s
# Read epochs
event_id = 1
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=None, preload=True, proj=True, reject=reject)
# Read empty room noise raw data
raw_noise = mne.io.read_raw_fif(noise_fname, preload=True)
raw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
raw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])
raw_noise.info.normalize_proj()
# Create noise epochs and make sure the number of noise epochs corresponds to
# the number of data epochs
events_noise = make_fixed_length_events(raw_noise, event_id)
epochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,
tmax_plot, baseline=None, preload=True, proj=True,
reject=reject)
epochs_noise.info.normalize_proj()
epochs_noise.apply_proj()
# then make sure the number of epochs is the same
epochs_noise = epochs_noise[:len(epochs.events)]
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
# Read label
label = mne.read_label(fname_label)
"""
Explanation: Read raw data
End of explanation
"""
# Setting frequency bins as in Dalal et al. 2008
freq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz
win_lengths = [0.3, 0.2, 0.15, 0.1] # s
# Then set FFTs length for each frequency range.
# Should be a power of 2 to be faster.
n_ffts = [256, 128, 128, 128]
# Subtract evoked response prior to computation?
subtract_evoked = False
# Calculating noise cross-spectral density from empty room noise for each
# frequency bin and the corresponding time window length. To calculate noise
# from the baseline period in the data, change epochs_noise to epochs
noise_csds = []
for freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):
noise_csd = csd_fourier(epochs_noise, fmin=freq_bin[0], fmax=freq_bin[1],
tmin=-win_length, tmax=0, n_fft=n_fft)
noise_csds.append(noise_csd.sum())
# Computing DICS solutions for time-frequency windows in a label in source
# space for faster computation, use label=None for full solution
stcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,
freq_bins=freq_bins, subtract_evoked=subtract_evoked,
n_ffts=n_ffts, reg=0.05, label=label, inversion='matrix')
# Plotting source spectrogram for source with maximum activity
# Note that tmin and tmax are set to display a time range that is smaller than
# the one for which beamforming estimates were calculated. This ensures that
# all time bins shown are a result of smoothing across an identical number of
# time windows.
plot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,
source_index=None, colorbar=True)
"""
Explanation: Time-frequency beamforming based on DICS
End of explanation
"""
|
agile-geoscience/striplog
|
docs/tutorial/11_Parse_a_description_into_components.ipynb
|
apache-2.0
|
import striplog
striplog.__version__
"""
Explanation: Parse a description into components
This notebook requires at least version 0.8.8.
End of explanation
"""
text = "wet silty fine sand with tr clay"
"""
Explanation: We have some text:
End of explanation
"""
from striplog import Lexicon
lex_dict = {
'lithology': ['sand', 'clay'],
'grainsize': ['fine'],
'modifier': ['silty'],
'amount': ['trace'],
'moisture': ['wet', 'dry'],
'abbreviations': {'tr': 'trace'},
'splitters': ['with'],
'parts_of_speech': {'noun': ['lithology'],
'adjective': ['grainsize', 'modifier', 'moisture'],
'subordinate': ['amount'],
}
}
lexicon = Lexicon(lex_dict)
"""
Explanation: To read this with striplog, we need to define a Lexicon. This is a dictionary-like object full of regular expressions, which acts as a bridge between this unstructured description and a dictionary-like Component object which striplog wants. The Lexicon also contains abbreviations for converting abbreviated text like cuttings descriptions into expanded words.
A Lexicon to read only this text might look like:
End of explanation
"""
from striplog import Interval
Interval._parse_description(text, lexicon=lexicon, max_component=3, abbreviations=True)
"""
Explanation: Now we can parse the text with it:
End of explanation
"""
# Make and expand the lexicon.
lexicon = Lexicon.default()
# Add moisture words (or could add as other 'modifiers').
lexicon.moisture = ['wet(?:tish)?', 'dry(?:ish)?']
lexicon.parts_of_speech['adjective'] += ['moisture']
# Add the comma as component splitter.
lexicon.splitters += [', ']
"""
Explanation: But this is obviously a bit of a pain to make and maintain. So instead of defining a Lexicon from scratch, we'll modify the default one:
End of explanation
"""
Interval._parse_description(text, lexicon=lexicon, max_component=3)
"""
Explanation: Parsing with this yields the same results as before...
End of explanation
"""
Interval._parse_description("Coarse sandstone with minor limestone", lexicon=lexicon, max_component=3)
"""
Explanation: ...but we can parse more things now:
End of explanation
"""
|
staeiou/reddit_downvote
|
swingers/swingers-analysis.ipynb
|
mit
|
!pip install bokeh
import pandas as pd
import seaborn as sns
from bokeh.charts import TimeSeries, output_file, show
%matplotlib inline
posts_df = pd.DataFrame.from_csv("reddit_posts_swingers_201503.csv")
posts_df[0:5]
posts_df['created'] = pd.to_datetime(posts_df.created_utc, unit='s')
posts_df['created_date'] = posts_df.created.dt.date
posts_df['downs'] = posts_df.ups - posts_df.score  # score = ups - downs, so downs = ups - score
posts_time_ups = posts_df.set_index('created_date').ups.sort_index()
posts_time_ups[0:5]
posts_date_df = posts_df.set_index('created').sort_index()
posts_date_df[0:5]
posts_groupby = posts_date_df.groupby([pd.TimeGrouper('1D', closed='left')])
"""
Explanation: When /r/swingers disabled the downvoting button via CSS
Subreddit disabled button on 2015-03-19, thread here
Data processing
Google BigQuery
SELECT author, num_comments, score, ups, downs, gilded, created_utc FROM [fh-bigquery:reddit_posts.full_corpus_201509]
WHERE created BETWEEN 1425168000 AND 1427846400
AND subreddit = 'Swingers'
End of explanation
"""
posts_groupby.mean().num_comments.plot(kind='barh', figsize=[8,8])
"""
Explanation: Visualizations
Daily average of number of comments per post
End of explanation
"""
posts_groupby.mean().ups.plot(kind='barh', figsize=[8,8])
"""
Explanation: Daily average of number of upvotes per post
End of explanation
"""
|
vravishankar/Jupyter-Books
|
Conditional+Statements.ipynb
|
mit
|
x = 1
if x > 0:
print(x,"is positive number")
"""
Explanation: Python Statements
if..elif..else
The "if..elif..else" statement is used for decision making based on some conditions.
if statement
The syntax of "if" statement is
python
if test expression:
statement(2)
End of explanation
"""
x = 1
if x > 0:
print(x,"is positive number")
print('Outside of if condition')
"""
Explanation: Pls note the indentation is important in Python. All the statements that is part of if statement must be indented.
End of explanation
"""
x = 1
if x >= 0:
print(x,"is positive number")
else:
print(x,"is negative number")
print('End of if..else..')
"""
Explanation: if..else.. statement
python
if test expression:
# body of if
else:
# body of else
End of explanation
"""
x = -5
if x == 0:
print(x,"is zero")
elif x > 0:
print(x,"is positive")
else:
print(x,"is negative")
print('End of if statement')
"""
Explanation: if..elif..else statement
python
if test expression:
# body of if
elif test expression:
# body of elif
else
# body of else
End of explanation
"""
x = 8
if x == 0:
print(x,"is zero")
elif x > 0:
if x == 10:
print(x,"is equal to 10")
elif x > 10:
print(x,"is greater than 10")
else:
print(x,"is lesser than 10")
else:
print(x,"is negative")
print('End of if statement')
"""
Explanation: Nested if
End of explanation
"""
sum = 0 # variable to hold the sum
nums = [1,2,3,6,8] # list of numbers
for num in nums:
sum += num
print(sum)
"""
Explanation: for loop
The for loop statement is used to iterate over a sequence data type or other iterable objects. Iterating over a sequence is called traversal.
Loop continues until the condition is met or the last item in the sequence is processed.
python
for val in seq:
# body of for
End of explanation
"""
digits = [0,1,5]
for i in digits:
print(i)
else:
print('No more digits to print')
"""
Explanation: for loop with else
End of explanation
"""
# range function with required sequence of numbers
print(range(10)) # this will not print sequence of numbers but generated on the fly.
# range function to start with 1 to 10 (note the stop argument is always treated as stop -1)
print(list(range(1,11)))
# range function different start and stop
print(list(range(2,8)))
# range function with start,stop and increment parameters
print(list(range(0,10,2)))
colors = ['Blue','White','Green']
for i in range(len(colors)):
print('I like',colors[i],'color')
"""
Explanation: Range Function
Range function is used to generate sequence of numbers. For example range(5) will generate 5 numbers starting from 0 to 4.
The syntax for range function is: range(start,stop,increment)
End of explanation
"""
sum = 0
i = 1
while i <= 10:
sum += c
i += 1
print(sum)
n = 5
sum = 0
i = 1
while i <= n:
sum += i
i += 1
print(sum)
"""
Explanation: While
While loop is used to iterate over a block of code as long as the test expression is true. Please note in python any non-zero value is True and zero is interpreted as False.
Mainly used when we do not know how many times to iterate
python
while test_expression:
# body of while
End of explanation
"""
counter = 0
while counter < 3:
counter += 1
print('Inside Loop')
else:
print('Inside Else')
"""
Explanation: While...else
The "else" part is executed when the while condition evaluates to false.
The "else" part occurs if no break occurs and the condition is false.
End of explanation
"""
for val in "string":
if val == "i":
break
print(val)
print("The End")
"""
Explanation: Break Statement
The break statement terminates the loop containing it. The control of the program flows to the statement immediately below the loop.
End of explanation
"""
for val in "string":
if val == "i":
continue
print(val)
print("End Loop")
"""
Explanation: Continue Statement
The continue statement is used to skip the rest of the code inside a loop for the current iteration only. Loop does not terminate but continues on with next iteration.
End of explanation
"""
def dummy():
pass
dummy()
for i in range(3):
pass
"""
Explanation: Pass Statement
Pass statement is generally used as a placeholder for loop or function or classes as they cannot have empty body. So "Pass" is used to construct the body that does nothing.
The difference between comment (#) and "pass" is that python interpreter ignores the comments while it treat "pass" as no operation.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/mri/cmip6/models/mri-agcm3-2/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'mri-agcm3-2', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MRI
Source ID: MRI-AGCM3-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:18
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
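For example (the name and email below are purely hypothetical placeholders):
python
DOC.set_author("Jane Doe", "jane.doe@example.org")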
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
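As a sketch only (assuming, as the cell comment and the 0.N cardinality suggest, that one set_value call is made per selected choice), documenting water and energy exchanges would look like:
python
DOC.set_value("water")
DOC.set_value("energy")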
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the functions that the snow albedo calculations depend on*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and the rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean (endorheic basins) included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
rishuatgithub/MLPy
|
PyTorchStuff.ipynb
|
apache-2.0
|
import torch
"""
Explanation: <a href="https://colab.research.google.com/github/rishuatgithub/MLPy/blob/master/PyTorchStuff.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
All about Pytorch
End of explanation
"""
x = torch.empty(5,3) ## empty
x
x = torch.randn(5,3) ## random initialized
x
x = torch.zeros(5,3, dtype=torch.long)
x
type(x)
x = torch.ones(5,2)
x
myarr = [[10,20.2],[30,40]] ## sample data
x = torch.tensor(myarr)
x
## create a tensor from an existing tensor
x = torch.tensor([[1,2],[3,4]], dtype=torch.int16)
print(f"X tensor: {x}")
y = torch.tensor(x, dtype=torch.float16)
print(f"Y tensor: {y}")
## size of tensor
x.size()
"""
Explanation: Tensors
Tensors are similar to NumPy’s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
End of explanation
"""
x = torch.randn(5,2)
x
y = torch.ones(5,2)
y
x + y ## sum of two tensors
torch.add(x,y) ## alternative: sum of two tensors
## In Place addition : Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y), x.t_(), will change x.
y.add_(x)
### Standard numpy operations on tensors
x[:1]
type(x[:1])
### Resize the tensor
x = torch.randn(4,4)
x
y = x.view(16)
y
z = x.view([-1,8])
z
## transpose an array
torch.transpose(x, 0,1)
## to get the value of a tensor
x = torch.randn(1)
print(x)
print(x.item())
"""
Explanation: Operation on Tensors
End of explanation
"""
a = torch.ones(5)
a
type(a)
### convert to numpy
a.numpy()
## converting numpy array to torch tensors
import numpy as np
a = np.ones(5)
t = torch.tensor(a, dtype=torch.int)
print(a, type(a))
print(t, type(t))
"""
Explanation: Numpy Bridge with Tensors
End of explanation
"""
### if CUDA is available or not
torch.cuda.is_available()
if torch.cuda.is_available():
device = torch.device("cuda") ## define device
x = torch.ones(5) ## normal stuff
print(x)
y = torch.ones_like(x, device=device) ### running it on gpu
print(y)
x = x.to(device) ## change the execution to device
z = x + y
print(z)
print(z.to("cpu", dtype=torch.int32)) ## change the data type of z using .to and run it on cpu
"""
Explanation: Running on the Device
End of explanation
"""
x = torch.ones(2,2, requires_grad=True)
x
y = x + 2
y
y.grad_fn ### y was created as a result of an operation, hence it has a grad_fn
## more operation on y
z = y*y*3
out = z.mean()
print(z, out)
### .requires_grad_( ... ) changes an existing Tensor’s requires_grad flag in-place. The input flag defaults to False if not given.
a = torch.randn(2,2)
a = ((a*2)/(a-1))
print(a.requires_grad)
a.requires_grad_(True) ## changing the grad inplace
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
"""
Explanation: AUTOGRAD
End of explanation
"""
out ## out contains a single scalar, out.backward() is equivalent to out.backward(torch.tensor(1.)).
out.backward()
print(x.grad) ### Print gradients d(out)/dx
## another example
t1 = torch.ones(1, requires_grad= True)
t2 = torch.ones(1, requires_grad=True)
print(t1, t2)
s = t1+t2
print(s)
s.grad_fn
s.backward()
t1.grad
"""
Explanation: Gradient
End of explanation
"""
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
print(x.requires_grad)
y = x.detach()
print(y.requires_grad)
print(x.eq(y).all())
"""
Explanation: Vector Jacobian Product
End of explanation
"""
import torch.nn as nn
import torch.nn.functional as F
X = torch.tensor(([2, 9], [5, 1], [3, 6]), dtype=torch.float) ## 3x2 tensor
y = torch.tensor(([92], [100], [89]), dtype=torch.float) ## 3x1 tensor
xPredicted = torch.tensor(([4, 8]), dtype=torch.float) # 1 X 2 tensor
print(X)
print(y)
X_max, X_max_ind = torch.max(X, 0) ## return max including indices and max values per col. 0 for col, 1 for row
print(X_max, X_max_ind)
xPredicted_max, _ = torch.max(xPredicted, 0)
print(xPredicted_max)
y_max = torch.max(y)
print(y_max)
## scaling
X = torch.div(X, X_max)
xPredicted = torch.div(xPredicted, xPredicted_max)
y = y/y_max
print(f"X is : {X}")
print(f"xPredicted is : {xPredicted}")
print(f"y is : {y}")
class SimpleNN(nn.Module):
def __init__(self):
super(SimpleNN, self).__init__()
## parameters
self.input_size = 2
self.hidden_layer = 3
self.output_layer = 1
## initializing the weights
self.W1 = torch.randn(self.input_size, self.hidden_layer) # 2x3 tensor
self.W2 = torch.randn(self.hidden_layer, self.output_layer) # 3x1 tensor
def forward(self, X):
'''
Forward propagation
'''
self.z = torch.matmul(X, self.W1)
self.z2 = torch.sigmoid(self.z)
self.z3 = torch.matmul(self.z2, self.W2)
o = torch.sigmoid(self.z3) ## final activation function
return o
def sigmoid(self, s):
return 1 / (1 + torch.exp(-s))
def sigmoidPrime(self, s):
# derivative of sigmoid
return s * (1 - s)
def backward(self, X, y, o):
'''
Backward propagation
'''
self.o_error = y - o ## calculate the difference b/w predicted and actual
self.o_delta = self.o_error * self.sigmoidPrime(o) ## derivative of sig to error
self.z2_error = torch.matmul(self.o_delta, torch.t(self.W2))
self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)
self.W1 += torch.matmul(torch.t(X), self.z2_delta)
self.W2 += torch.matmul(torch.t(self.z2), self.o_delta)
def train(self, X, y):
# forward + backward pass for training
o = self.forward(X)
self.backward(X, y, o)
def saveWeights(self, model):
# we will use the PyTorch internal storage functions
torch.save(model, "NN")
def predict(self):
print ("Predicted data based on trained weights: ")
print ("Input (scaled):" + str(xPredicted))
print ("Output:" + str(self.forward(xPredicted)))
NN = SimpleNN()
print(NN)
for i in range(10): # trains the NN 10 times
print ("#" + str(i) + " Loss: " + str(torch.mean((y - NN(X))**2).detach().item())) # mean sum squared loss
NN.train(X, y)
NN.saveWeights(NN)
NN.predict()
"""
Explanation: Neural Network
A typical training procedure for a neural network is as follows:
Define the neural network that has some learnable parameters (or weights)
Iterate over a dataset of inputs
Process input through the network
Compute the loss (how far is the output from being correct)
Propagate gradients back into the network’s parameters
Update the weights of the network, typically using a simple update rule:
weight = weight - learning_rate * gradient
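As a rough sketch of these steps with autograd (illustrative only; the SimpleNN class below instead updates its weights manually in its own backward method):
```python
import torch
import torch.nn as nn

# Toy data and a tiny linear model, just to make the update rule above concrete.
inputs = torch.randn(8, 2)
targets = torch.randn(8, 1)
model = nn.Linear(2, 1)
criterion = nn.MSELoss()
learning_rate = 0.01

for step in range(100):
    output = model(inputs)               # process the input through the network
    loss = criterion(output, targets)    # compute the loss
    model.zero_grad()                    # clear old gradients
    loss.backward()                      # propagate gradients back into the parameters
    with torch.no_grad():                # weight = weight - learning_rate * gradient
        for param in model.parameters():
            param -= learning_rate * param.grad
```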
Simple NN
End of explanation
"""
|
YAtOff/python0-reloaded
|
week3/Print.ipynb
|
mit
|
print(2)
print("is even.")
print(2, "is even.")
"""
Explanation: print
The print procedure takes one or more arguments and prints them to the screen, separated by spaces.
After it executes, the cursor moves to the next line.
The arguments can be of different types.
End of explanation
"""
print(1, 2, 3)
print(1, 2, 3, sep='|')
"""
Explanation: We can change the separator by adding the argument sep='<<separator>>'.
End of explanation
"""
print(1)
print(2)
print(3)
print(1, end=' ')
print(2, end=' ')
print(3, end=' ')
"""
Explanation: By default, print places an end-of-line character after printing the given data.
This can be changed by adding the argument end='<<end-of-line string>>'.
End of explanation
"""
print('%d is odd, %d is even' % (3, 4))
print('Hello %s!' % 'Pesho')
"""
Explanation: String formatting
We can build strings from templates.
In the template, the %d placeholders are replaced by the supplied integers.
The %s placeholders are replaced by the supplied strings.
End of explanation
"""
|
jameslao/Algorithmic-Pearls
|
Normal.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sigma = 1
mu = 0
sns.set(style="dark", palette="muted", color_codes=True, font_scale=1.5)
x = [np.arange(i - 4, i - 3, 0.01) for i in range(8)]
f = [1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (x[i] - mu)**2 / (2 * sigma**2) ) for i in range(8)]
alpha = [0.3,0.5,0.7,0.9,0.9,0.7,0.5,0.3]
plt.figure(figsize=(10,5))
for i in range(8):
plt.fill_between(x[i], 0, f[i], alpha= alpha[i])
plt.axis((-4, 4, 0, 0.5))
"""
Explanation: Generating Normally Distributed Random Numbers
@jameslao / www.jlao.net
The normal distribution sounds very familiar and is very convenient to use, since many languages already ship with all kinds of built-in tools for it. But behind this most common and most widely used of distributions, there is actually quite a lot to learn about how to generate it.
$$f(x \; | \; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi} } \; e^{ -\frac{(x-\mu)^2}{2\sigma^2} }$$
End of explanation
"""
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
N = 10 ** 7
%matplotlib inline
%time x = stats.norm.ppf(np.random.rand(N, 1))
plt.figure(figsize=(10,5))
plt.hist(x,50)
plt.show()
"""
Explanation: Doesn't the textbook cover this? Let's see what the probability textbook says... For instance, page 378 of my copy of the Zhejiang University Probability Theory and Mathematical Statistics (4th edition) states: "the inverse of the distribution function $\Phi(x)$ of the standard normal variable has no explicit form, so the inverse transform method cannot be used to generate standard normal variables..."
The Inverse Transform Method
Hold on! No explicit inverse... In this day and age, even without an analytical solution we can certainly use a numerical one! Computing a percentile is such a common operation, how could it be impossible? Excel provides the NORMINV function, R has qnorm, and in Python we can use norm.ppf from SciPy.stats:
End of explanation
"""
x = stats.norm.cdf(x)
plt.hist(x, 50)
"""
Explanation: Granted, it is not fast, but it is quite usable. The implementation of this inverse of the Gaussian integral can be found in SciPy's ndtri() function. The code comes from the Cephes math library and uses piecewise approximations, yet its precision is quite good, and it already existed in the late 1980s!
The transform is very intuitive: if you want to turn the samples back into a uniform distribution, just apply the distribution function once more:
End of explanation
"""
%time g = np.sum(np.random.rand(N, 12), 1) - 6
plt.figure(figsize=(10,5))
plt.hist(g,50)
plt.show()
"""
Explanation: The Central Limit Theorem... better left unused
So what method does the textbook actually teach? It brings out the central limit theorem: take $n$ independent uniform variables $X_i = U(0,1)$ with $E(X_i)=\frac{1}{2}$ and $\mathrm{Var}(X_i)=\frac{1}{12}$. By the central limit theorem, for sufficiently large $n$ we approximately have
$$Z = \frac{\displaystyle\sum_{i=1}^n X_i - E\left(\displaystyle\sum_{i=1}^n X_i\right)}{\sqrt{\mathrm{Var}\left(\displaystyle\sum_{i=1}^n X_i\right)}}= \frac{\displaystyle\sum_{i=1}^n X_i - \frac{n}{2}}{\sqrt{n} \sqrt{\frac{1}{12}}} \sim N(0,1).$$
Taking $n=12$, we approximately have
$$Z = \sum_{i=1}^{12} U_i - 6 \sim N(0,1).$$
Well then... let's give it a try:
End of explanation
"""
import scipy.stats as stats
stats.normaltest(g)
"""
Explanation: Even slower. The shape does look roughly right, though. Let's check the quality of the samples it generates:
End of explanation
"""
stats.normaltest(np.sum(np.random.rand(1000, 12), 1) - 6)
"""
Explanation: It fails, shamefully and beyond any dispute... (╯‵□′)╯︵┻━┻ Our sample size is rather large ($10^7$), and averaging only 12 uniform variables can hardly produce acceptable "normal" samples. But with a much larger $n$, generating that many uniform random numbers becomes far too slow. If you only need a small number of samples (say 1000), it can just about pass:
End of explanation
"""
import random
#%time x = [random.gauss(0, 1) for _ in range(N)]
%time x = np.sqrt(-2 * np.log(np.random.rand(N, 1))) * np.cos(2 * np.pi * np.random.rand(N, 1))
plt.figure(figsize=(8, 4))
plt.hist(x, 50)
plt.show()
"""
Explanation: Alright, that method is probably of theoretical interest only. Let's look at how a commonly used method works.
The Box-Muller Transform
Let's revisit the inverse transform issue. Essentially, our problem is to compute
$$I = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}} \mathrm{d} x$$
As everyone knows, this integral has no representation in elementary functions. However,
$$I^2 = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}} \mathrm{d} x \int_{-\infty}^{\infty} e^{-\frac{y^2}{2}} \mathrm{d} y = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-\frac{x^2+y^2}{2}} \mathrm{d} x \, \mathrm{d} y$$
Look at the right-hand side: this form brings to mind... polar coordinates! Let $x = r\cos\theta$ and $y = r\sin\theta$; when $\mathrm{d}x\,\mathrm{d}y$ becomes $\mathrm{d}r\,\mathrm{d}\theta$, remember to multiply by the Jacobian:
$$\mathrm{d}x\,\mathrm{d}y = \begin{vmatrix}\frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \end{vmatrix} \mathrm{d}r\,\mathrm{d}\theta= r\, \mathrm{d}r\,\mathrm{d}\theta$$
Therefore
$$I ^2 = \int_{r=0}^{\infty}\int_{\theta=0}^{2\pi}e^{-\frac{r^2}{2}} r\,\mathrm{d}r\,\mathrm{d}\theta = 2\pi\int_{r=0}^{\infty}e^{-\frac{r^2}{2}} r\,\mathrm{d}r = 2\pi\int_{r=0}^{\infty}e^{-\frac{r^2}{2}} \mathrm{d}\left(\frac{r^2}{2}\right) =2\pi$$
This trick gives us the integral. If we now apply the inverse transform method here, $\Theta$ can simply be drawn uniformly from $[0,2\pi]$, i.e.
$$\Theta = 2\pi U_1$$
and in the same way we can compute
$$\mathbb{P}(R\leq r) = \int_{r'=0}^r e^{-\frac{r'^2}{2}}\,r'\,\mathrm{d}r' = 1- e^{-r^2/2}$$
Setting this equal to the uniform variable $1-U_2$ yields
$$R = \sqrt{-2\ln(U_2)}$$
Therefore, we only need to generate two uniform variables $U_1$ and $U_2$ to compute $R$ and $\Theta$, and from those the two independent normal variables $X$ and $Y$.
Python's random.gauss() function uses exactly this implementation, but it is really too slow to use, so let's rely on NumPy instead:
End of explanation
"""
%time x = np.random.randn(N)
plt.figure(figsize=(8,4))
plt.hist(x,50)
"""
Explanation: Of course... it is not exactly fast. Not only because of Python itself, but also because of all the trigonometric function evaluations. NumPy's numpy.random.randn() takes the optimization one step further; the code is in the rk_gauss() function linked here. It works as follows: the distribution we want is
$$
\begin{aligned}
X &= R \cos(\Theta) =\sqrt{-2 \ln U_1} \cos(2 \pi U_2)\\
Y &= R \sin(\Theta) =\sqrt{-2 \ln U_1} \sin(2 \pi U_2)
\end{aligned}$$
If we generate two independent uniform variables $U_1$ and $U_2$ and discard any points outside the unit circle, then $s = U_1^2 + U_2^2$ is also uniformly distributed. Why? Because
$$f_{U_1,U_2}(u,v) = \frac{1}{\pi}$$
Substituting the coordinates $r$ and $\theta$ and multiplying by the Jacobian determinant, which we computed above to equal $r$, gives:
$$f_{R,\Theta}(r, \theta) = \frac{r}{\pi}$$
$\Theta$ is uniformly distributed on $[0, 2\pi)$, so
$$f_R(r) = \int_0^{2\pi} f_{R,\Theta}(r, \theta)\,\mathrm{d}\theta = 2r$$
Another change of variables gives
$$f_{R^2}(s) = f_R(r) \frac{\mathrm{d}r}{\mathrm{d}(r^2)} = 2r \cdot \frac{1}{2r} = 1$$
Good. Since $s$ is also uniformly distributed, $\sqrt{-2 \ln U_1}$ and $\sqrt{-2 \ln s}$ have the same distribution. And because
$$\cos \Theta, \sin\Theta = \frac{U_1}{R}, \frac{U_2}{R} = \frac{U_1}{\sqrt{s}}, \frac{U_2}{\sqrt{s}}$$
it follows that
$$u\sqrt{\frac{-2\ln s}{s}}, v\sqrt{\frac{-2\ln s}{s}}$$
are the two independent normal variables we are looking for.
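As a rough illustration, here is a plain NumPy sketch of this polar variant, written for clarity rather than speed (the bookkeeping in NumPy's C implementation is more careful than this):
```python
import numpy as np
import scipy.stats as stats

def polar_gauss(n):
    out = np.empty(0)
    while out.size < n:
        u = np.random.rand(n) * 2 - 1          # uniform on (-1, 1)
        v = np.random.rand(n) * 2 - 1
        s = u * u + v * v
        keep = (s > 0) & (s < 1)               # discard points outside the unit circle
        factor = np.sqrt(-2 * np.log(s[keep]) / s[keep])
        out = np.concatenate([out, u[keep] * factor, v[keep] * factor])
    return out[:n]

print(stats.normaltest(polar_gauss(10 ** 6)))
```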
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sigma = 1
mu = 0
sns.set(style="dark", palette="muted", color_codes=True, font_scale=1.5)
x = np.arange(-4,4,0.01)
f = 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (x - mu)**2 / (2 * sigma**2) )
Np = 1500
px = np.random.rand(Np) * 8 - 4
py = np.random.rand(Np) * 0.5
pd = py < 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (px - mu)**2 / (2 * sigma**2) )
pu = np.logical_not(pd)
plt.figure(figsize=(10,5))
plt.plot(x, f)
plt.plot(px[pu], py[pu], 'ro', px[pd], py[pd], 'go')
plt.axis((-4, 4, 0, 0.5))
"""
Explanation: That speed is quite decent (one big reason, of course, is that NumPy is implemented in C). Box-Muller used to be the standard normal generator in the major numerical packages, Matlab included, until the faster Ziggurat method came along. NumPy postponed the change for compatibility reasons, so its random number generation has fallen behind Matlab and Julia. So what is this magical ziggurat all about? Let's take a closer look.
The Acceptance-Rejection Method
The inverse transform is still workable, but when a function cannot be inverted analytically, numerical methods are inevitably slow. So let's discuss another method that works for any probability density: the acceptance-rejection method (sometimes also called rejection sampling, or 舍选法 in Chinese textbooks). The idea is really simple. Say you want a normal distribution: draw a box around the density curve and throw darts into the box uniformly at random. Keep the darts that land below the curve and discard those above it. Once this is done, the points below the curve are uniformly distributed (in two dimensions), so their x coordinates follow exactly the distribution we want: many points where the curve is high, few where it is low.
End of explanation
"""
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
n = 7
f = lambda x : np.exp(- x * x / 2)
fi = lambda f : np.sqrt(-np.log(f) * 2)
def z(r):
v = r * f(r) + (1 - norm.cdf(r))* np.sqrt(2 * np.pi)
x = [r]
for i in range(1, n):
x.append(fi(v / x[i - 1] + f(x[i - 1])))
return x[n - 1] *(f(0) - f(x[n - 1])) - v
r = 2.33837169825 # 7
print (z(r))
plt.figure(figsize = (10, 10))
plt.xlim(0,4)
plt.ylim(0,1)
xp = np.arange(0,4, 0.01)
fp = 1 * np.exp( -xp ** 2 / 2 )
v = r * f(r) + (1 - norm.cdf(r)) * np.sqrt(2 * np.pi)
x = [r]
for i in range(1, n):
x += [fi(v / x[i - 1] + f(x[i - 1]))]
x += [0, r]
y = [f(_) for _ in x]
plt.plot(xp, fp)
plt.plot(x, y, 'ro')
for i in range(n + 1):
plt.axhline(y=y[i], xmin=0, xmax=x[i - 1]/4, linewidth=1, color='r')
plt.axvline(x=x[i], ymin=y[i], ymax=y[i + 1], linewidth=1, color='r')
plt.axvline(x=x[i], ymin=y[i - 1], ymax=y[i], linewidth=1, color='r', ls = '--')
plt.annotate('x', xy = (x[i], y[i]), xytext=(x[i]+0.03, y[i] + 0.01), style='italic', size = 14)
plt.annotate(str(i), xy = (x[i], y[i]), xytext=(x[i]+0.09, y[i] + 0.005), style='normal', size = 10)
plt.axvline(x=x[0], ymin=0, ymax=y[0], linewidth=1, color='r', ls = '--')
"""
Explanation: Quite intuitive, right? More generally, to generate samples from a distribution with density $f(x)$, we can:
Find an auxiliary distribution $g(x)$ that is easy to sample from (the "box", which does not have to be uniform), such that there exists a constant $M>1$ with $f(x)\leq Mg(x)$ over the whole domain of $x$.
Generate a random number $x$ following the distribution $g(x)$.
Generate a random number $u$ uniformly distributed on $(0,1)$.
Check whether $u < f(x)/Mg(x)$. If it holds, keep $x$; otherwise discard it. The $x$ values obtained this way follow the distribution $f(x)$. (A small code sketch of this recipe is given below.)
In effect, this generates a cloud of points that is uniform along the $x$ axis with $y$ values bounded by $Mg(x)$, and then keeps only the part under the curve $f(x)$, which is exactly what the figure above shows.
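A minimal sketch of this recipe for the standard normal restricted to $[-4, 4]$, with a uniform box as the proposal $g(x)$ just like in the figure above (illustrative only, and deliberately wasteful):
```python
import numpy as np
import scipy.stats as stats

def rejection_normal(n):
    f = lambda x: np.exp(-x * x / 2) / np.sqrt(2 * np.pi)
    box_height = f(0)                            # M * g(x) drawn as a flat box of this height
    samples = np.empty(0)
    while samples.size < n:
        x = np.random.rand(n) * 8 - 4            # proposal g(x): uniform on [-4, 4]
        u = np.random.rand(n)
        samples = np.concatenate([samples, x[u * box_height < f(x)]])
    return samples[:n]

print(stats.normaltest(rejection_normal(10 ** 5)))
```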
For a more rigorous justification, first consider the probability that a data point $x$ is accepted in this procedure. Since $u$ is uniformly distributed, the acceptance probability is
$$\begin{aligned}
P(\textrm{accept}) & = P\left(U < \frac{f(X)}{Mg(X)}\right) \\
&= \mathbb{E}\left[\frac{f(X)}{Mg(X)}\right] \\
&= \int \frac{f(X)}{Mg(X)} g(x) \, \mathrm{d}x \\
& = \frac{1}{M}\int f(x) \, \mathrm{d}x = \frac{1}{M}
\end{aligned}
$$
In other words, the probability of keeping a data point is $1/M$. Using Bayes' rule, the distribution obtained conditional on acceptance is
$$
\begin{aligned}
g(x|\textrm{accept}) &= \frac{P(\textrm{accept}|X=x)g(x)}{P(\textrm{accept})} \\
&= \frac{\frac{f(x)}{Mg(x)}g(x)}{1/M} = f(x)
\end{aligned}$$
This looks very nice and convenient, but note that across all draws only a fraction $1/M$ is accepted, which means that if $M$ is large, a lot of samples are wasted. This is especially painful for a long-tailed distribution like the normal: with a plain bounding box, think how many samples you would have to burn before hitting one beyond $5\sigma$. To improve the efficiency of the algorithm, $g(x)$ needs to follow $f(x)$ as closely as possible, and that is where the magical Ziggurat method comes in.
The Ziggurat Method
The idea behind the Ziggurat method is also quite intuitive: make $g(x)$ hug $f(x)$ as closely as possible. How? Like this:
End of explanation
"""
import numpy as np
import zignor
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
N = 10**7
%time x = zignor.randn(N)
plt.figure(figsize=(8,4))
plt.hist(x,100)
stats.normaltest(x)
"""
Explanation: Doesn't that look like a stepped pyramid? The word Ziggurat originally refers to the pyramids built by the Sumerians, although the Mayan pyramid at Chichen Itza looks rather similar too... When I visited a couple of years ago I even painted a picture of it, like this
But I digress... For computational convenience we generate $e^{-x^2/2}$ rather than the original normal density. First, divide the figure into a number of step-like rectangular blocks (128 or 256 are typically used in practice), each with the same area, which is also equal to the area of the bottom strip with the long tail. The positions of these points can only be obtained numerically. By convention the position of $x_0$ is called the parameter $r$, and the area $v$ of the bottom strip is the rectangle to the left of the dashed line plus the tail:
$$v = r\cdot f(r) + \int_r ^\infty f(x) \mathrm{d} x$$
Assume a value for $r$, compute $v$, and then work upwards block by block to the position of the topmost $x_{n-1}$; if the area of the top block does not equal $v$, adjust $r$ until all blocks have equal area.
Once the blocks are set up, what do we do with them? Ignoring the tail for the moment, the procedure works like this:
Randomly pick a layer $0 \leq i < n$;
Generate a random number $U_0$ uniformly distributed on $[0,1]$ and let $x = U_0x_i$, i.e. draw an $x$ uniformly inside the solid rectangle.
If $x < x_{i+1}$, i.e. it falls inside the dashed rectangle, it is certainly under the curve, so return $x$ directly.
Otherwise it falls in the region between the dashed and solid boundaries and a test is needed. Draw a random $y$ inside this small box: generate a uniform $U_1$ and let $y = y_i + U_1(y_{i+1}-y_i)$.
If $y < f(x)$, return $x$. Otherwise start over.
What if the tail happens to be picked? The algorithm uses a trick: it approximates the tail with an exponential, generating $x = -\ln(U_0) / r$ and $y = -\ln(U_1)$, and returns $x + r$ as soon as $2y > x^2$.
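A tiny sketch of just this tail trick, reusing the value of $r$ computed in the cell above (illustrative only):
```python
import numpy as np

def sample_tail(r, n):
    # Marsaglia's exponential trick for the tail x > r of exp(-x^2 / 2).
    out = []
    while len(out) < n:
        u0, u1 = np.random.rand(2)
        x = -np.log(u0) / r
        y = -np.log(u1)
        if 2 * y > x * x:
            out.append(r + x)
    return np.array(out)

tail = sample_tail(2.33837169825, 10000)
print(tail.min(), tail.mean())   # every sample lies beyond r
```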
The beauty of this method is that the number of blocks only affects speed, not accuracy: each block goes through its own accept-reject test, so the result is an exact normal distribution, even with only the 8 blocks used here.
The original code can be seen here; the basic idea is what was described above. The program uses the SHR3 random number generator to produce uniformly distributed 32-bit integers, precomputes and stores all the partition points needed for the comparisons, and uses a few bit tricks for speed. I ported it to Python to work together with NumPy; you can download it from GitHub or simply pip install zignor!
Let's take a look at the speed:
End of explanation
"""
|
marc-moreaux/Deep-Learning-classes
|
notebooks/Classification.ipynb
|
mit
|
import keras
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print "input of training set has shape {} and output has shape {}".format(x_train.shape, y_train.shape)
print "input of testing set has shape {} and output has shape {}".format(x_test.shape, y_test.shape)
"""
Explanation: What is classification?
Import the data you'll be using
Visualize/Analyze your dataset
Perform classification on it
1.a - We use the MNIST dataset
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
fig, axs = plt.subplots(2,5)
axs = [b for a in axs for b in a]
for i in range(2*5):
axs[i].imshow(x_train[i], cmap='gray')
axs[i].axis('off')
plt.show()
"""
Explanation: 1.b - What does MNIST look like?
End of explanation
"""
fig, axs = plt.subplots(2,2)
axs[0][0].hist(x_train.reshape([-1]), bins = 25)
axs[0][1].hist(y_train.reshape([-1]), bins = 10)
axs[1][0].hist(x_test.reshape([-1]), bins = 25)
axs[1][1].hist(y_test.reshape([-1]), bins = 10)
plt.show()
"""
Explanation: 1.c - Distribution of the MNIST dataset
End of explanation
"""
# Normalize the MNIST data
x_train = x_train/255.
x_test = x_test/255.
# Change the one-hot-encoding
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
"""
Explanation: 1.d - Normalize and change the encoding of the data
End of explanation
"""
sample = x_test[0]
plt.imshow(sample)
"""
Explanation: 2 - Classify our data
We are going to choose between 3 classifiers to classify our data:
SVM
Nearest Neighbor
Logistic Regression
End of explanation
"""
from sklearn import svm
from skimage.transform import resize
# 28*28 images would be too big, so we downsample them to 8*8
def to_svm_image(img):
img = resize(img, [8,8])
return img.reshape([-1])
x_train_svm = np.array([to_svm_image(img) for img in x_train])  # list comprehension works in both Python 2 and 3
# Train the classifier here
clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(x_train_svm, y_train.argmax(axis=1))
# Test the classifier
sample = to_svm_image(x_test[0])
sample = sample.reshape([1,-1])
prediction = clf.predict(sample)
print "With SVM, our sample is closest to class {}".format(prediction[0])
"""
Explanation: 2.a - SVM
https://www.youtube.com/watch?v=_PwhiWxHK8o
End of explanation
"""
sample = x_test[0]
def distance(tensor1, tensor2, norm='l1'):
if norm == "l1":
dist = np.abs(tensor1 - tensor2)
if norm == "l2":
dist = (tensor1 - tensor2) ** 2
dist = np.sum(dist)
return dist
def predict(sample, norm='l1'):
min_dist = 100000000000
min_idx = -1
for idx, im in enumerate(x_train):
d = distance(sample, im, norm)
if d < min_dist:
min_dist = d
min_idx = idx
y_pred = y_train[min_idx]
return y_pred
y = predict(sample, 'l1')
print "With NN, our sample is closest to class {}".format(y.argmax())
"""
Explanation: 2.b - Nearest neighbor
Browse through the entire dataset to find which entry is the closest "neighbor" to our current example.
End of explanation
"""
from sklearn import linear_model, datasets
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
# Train the classifier here
clf_sgd = SGDClassifier()
clf_sgd.fit(x_train_svm, y_train.argmax(axis=1))
# Test the classifier
sample = to_svm_image(x_test[0])
sample = sample.reshape([1,-1])
prediction = clf_sgd.predict(sample)  # use the SGD classifier trained above, not the SVM
print("With Softmax regression, our sample is closest to class {}".format(prediction[0]))
"""
Explanation: 2.c - Softmax regression
$ y = \sigma(W^T \cdot X + b) $
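A possible follow-up, not part of the original notebook: rather than predicting a single sample, the SVM and SGD classifiers could be scored on the whole (downsampled) test set. This sketch assumes clf, clf_sgd and to_svm_image from the cells above are still in scope.
```python
x_test_small = np.array([to_svm_image(img) for img in x_test])
y_true = y_test.argmax(axis=1)
print("SVM accuracy: {:.3f}".format((clf.predict(x_test_small) == y_true).mean()))
print("SGD accuracy: {:.3f}".format((clf_sgd.predict(x_test_small) == y_true).mean()))
```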
End of explanation
"""
|
qwertzuhr/2015_Data_Analyst_Project_3
|
Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data.ipynb
|
agpl-3.0
|
from Project.notebook_stub import project_coll
import pprint
# Query used - see function Project.audit_stats_map.stats_general
pipeline = [
{"$group": {"_id": "$type", "count": {"$sum": 1}}},
{"$match": {"_id": {"$in": ["node", "way"]}}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l)
"""
Explanation: Data Analyst Project 3
Data Wrangle (Retrieve, Analyze and Clean) OpenStreetMaps Data from the City of Dresden
by Benjamin Söllner, benjamin.soellner@gmail.com
based on the Udacity.com Data Wrangling With MongoDB
<img src="city_dresden_json.png" alt="The city of Dresden as a JSON object illustration" width="400" height="312" style="display: inline; margin: 6pt;" />
Abstract
This paper describes the process of downloading, analyzing and cleaning an OpenStreetMap data set of my former home town as a student: Dresden, a state capital in eastern Germany, a baroque town beautifully located on the banks of the river Elbe and home to a high-tech conglomerate from the micro-electronics sector called Silicon Saxony.
In this paper, first, the pipeline (and python script) to perform retrieval, analysis and cleaning of the data is introduced (chapters Approach) and results of the analysis stage are presented (chapter Overview of the Data). During the analysis, interesting facts of Dresden are uncovered, like the most popular religion, sport, beer, cuisine or leisure activity.
For the cleaning stage (chapter Problems Encountered in the Map), canonicalizing the phone numbers present in the data set and unifying the cuisine classifications were the challenges of choice. Some other cleaning techniques, like cleaning street names and post codes, were tried but proved not fruitful. The paper is finally concluded with some further ideas for data set cleaning (chapter Other Ideas about the Data Set).
The Approach
I implemented retrieving / storing / analysing and cleaning in a python script. The script can be used like this:
```
python project.py
Usage:
python project.py -d Download & unpack bz2 file to OSM file (experimental)
python project.py -p Process OSM file and write JSON file
python project.py -w Write JSON file to MongoDB
python project.py -z Download and install the zipcode helpers"
python project.py -f Audit format / structure of data
python project.py -s Audit statistics of data
python project.py -q Audit quality of data
python project.py -Z Audit quality of data: Zipcodes - (see -z option)
python project.py -c Clean data in MongoDB
python project.py -C Clean data debug mode - don't actually write to DB
```
Different options can be combined, so python project.py -dpwfsqc will do the whole round trip. During the process, I re-used most of the code and data format developed during the "Data Wrangling With MongoDB" Udacity course. For example, the data format used for storing the data (-p and -w option) is completely based on Lesson 6 - with some fine-tuning.
Some output of the script is shown on the terminal, some is written to local files. If a file is written, this is indicated in the terminal output. A sample of the script's terminal output is included in the output_*.txt files included in the submission.
Data Format
Try it out: Use python project.py -f to obtain the data for this chapter. This is a long-running process which might take a few hours to complete! There is an output file written to Project/data/audit_format_map.csv which can be beautified into an Excel spreadsheet.
First, the data format was audited, which consisted of going through all the documents and aggregating the occurrence of each attribute and the prevalence of its types (string, integer, float and other). For this, batches of 1000 documents each are retrieved from the collection and combed through by the Python code while a pandas DataFrame keeps track of the counters. Since there are 1,360,000 elements, this process takes many hours; an alternative would be to run the query natively in JavaScript code on the MongoDB shell or to issue the command as a BSON command.
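A condensed, Python-3-style sketch of what such an audit loop can look like (the project's own implementation differs in detail and also descends into nested attributes):
```python
import pandas as pd
from Project.notebook_stub import project_coll

type_counts = pd.DataFrame(columns=["str", "int", "float", "other"])

for doc in project_coll.find({}, batch_size=1000):   # fetch documents in batches of 1000
    for attr, value in doc.items():
        if attr not in type_counts.index:
            type_counts.loc[attr] = 0                # start a new counter row for this attribute
        if isinstance(value, str):
            type_counts.loc[attr, "str"] += 1
        elif isinstance(value, (int, float)):
            type_counts.loc[attr, "int" if isinstance(value, int) else "float"] += 1
        else:
            type_counts.loc[attr, "other"] += 1

type_counts.to_csv("Project/data/audit_format_map.csv")
```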
The overview of the format showed no obvious big problems with the data at first glance but provided some valuable insights:
One area of improvement could be the phone number, which is scattered across multiple data fields (address:phone, phone and phone_mobile) and was identified as a potential candidate for cleaning (see Auditing Phone Numbers and Cleaning Phone Numbers).
Some values are present in the dataset as sometimes string, othertimes numeric: The XML parsing process takes care that each value is, whenever parsable, stored as integer or float. For attributes like street numbers, mixed occurences may be in the data set.
This automatic parsing to int or float turned out to be not always useful: leading zeros are a problem because in certain cases they hold semantics. For German phone numbers, a leading zero signifies the start of an area code (0) or of a country code (00). For German postcodes, a leading zero represents the German state of Saxony. As an outcome of this insight, I changed the parsing routine of the XML data to only parse values as numeric if they do not start with a leading zero (not s.startswith("0")); a short sketch of such a helper follows after this list.
I checked some of the less common values for sanity. E.g., there is a parameter dogshit which appears three times. As it turns out, this is not a prank by map editors documenting dog feces they find in the area, but an indication of whether a public trash can contains a dispenser of plastic bags for the relevant situations.
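A short sketch of what such a parsing helper can look like (the actual routine lives in the OSM-to-JSON conversion code and is not reproduced here):
```python
def parse_value(s):
    # Keep values with a leading zero as strings: the zero carries meaning for
    # German phone numbers (area/country codes) and for Saxon postcodes.
    if s.startswith("0"):
        return s
    for cast in (int, float):
        try:
            return cast(s)
        except ValueError:
            pass
    return s

print([parse_value(v) for v in ("42", "3.5", "01099", "Elbe")])
```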
Overview of the Data
Try it out: Use python project.py -s to obtain the data for this chapter. See Sample Output in file Project/output_project.py_-s.txt.
A couple of basic MongoDB queries were run to explore the data set based on the knowledge of its format from the previous chapter. The queries produce mostly rankings of values for certain data fields. Some of them are subsequently also visualized in a ggplot graph (png file) relying on the skill set gained in Udacity's Intro to Data Science course, Lesson 4: Data Visualization while not too much effort was put in making the graphs look particularily beautiful. The graphs are located in Project/data/stats_*.png.
Filesize, Number of Nodes and Ways
The total file size of the OSM export is 281.778.428 Bytes, there are 208813 nodes and 1146807 ways in the dataset.
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_users(...):
pipeline = [
{"$match": {"created.user": {"$exists": True}}},
{"$group": {"_id": "$created.user", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
print str(len(l)) + " users were involved:"
pprint.pprint(l[1:5]+["..."]+l[-5:])
"""
Explanation: Users Involved
There were about 1634 users involved in creating the data set; the top 10 users account for 40% of the created data. There is no direct evidence from the user names that any of them are bot-like users; this could be determined by further research. Many users (over 60%) have made fewer than 10 entries.
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_amenities(...):
pipeline = [
{"$match": {"amenity": {"$exists": True}}},
{"$group": {"_id": "$amenity", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:10]+['...'])
"""
Explanation: Types of Amenities
The attribute amenity inspired me to do further research in which kind of buildings / objects / facilities are stored in the Open Street Map data in larger quantities in order to do more detailed research on those objects. Especially Restaurants, Pubs and Churches / Places of Worship were investigated further (as can be seen below).
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_amenities(...):
pipeline = [
{"$match": {"leisure": {"$exists": True}}},
{"$group": {"_id": "$leisure", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:10]+['...'])
"""
Explanation: Popular Leisure Activities
The attribute leisure shows the types of leisure activities one can do in Dresden and inspired me to investigate popular sports in the city further (leisure=sports_centre or leisure=stadium).
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_religions(...):
pipeline = [
{"$match": {"amenity":{"$in": ["place_of_worship","community_center"]}}},
{"$group": {"_id": "$religion", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l)
"""
Explanation: Religions in Places of Worship
Grouping and sorting by the occurrences of the religion attribute for all amenities classified as place_of_worship or community_center gives us an indication of how prevalent religions are in our city: obviously, christian is the most prevalent here.
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_cuisines(...):
pipeline = [
{"$match": {"amenity": "restaurant"}},
{"$group": {"_id": "$cuisine", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:10]+['...'])
"""
Explanation: Cuisines in Restaurants
We can list the types of cuisines in restaurants (elements with attribute amenity matching restaurant) and sort them in descending order. We notice certain inconsistencies or overlaps in the classifications of this data: e.g., a kebab cuisine may very well also be classified as an arab cuisine, or may in fact be a sub- or super-classification of it. One could, e.g., eliminate or cluster together especially the occurrences of less common cuisines, but without a formal taxonomy of all cuisines, I decided that it is probably best to leave the data as-is in order not to sacrifice preciseness for consistency.
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_beers(...):
pipeline = [
{"$match": {"amenity": {"$in":["pub","bar","restaurant"]}}},
{"$group": {"_id": "$brewery", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l)
"""
Explanation: Beers in Pubs
Germans do love their beers, and the dataset shows that certain pubs, restaurants or bars are sponsored by certain beer brands (often advertised at the pub's entrance). We can analyze the prevalence of beer brands by grouping and sorting by the occurrence of the attribute brewery for all the amenities classified as a respective establishment. Most popular are Radeberger, a very popular local beer, Feldschlösschen, a Swiss beer, and Dresdner Felsenkeller, a very local and niche beer.
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_sports(...):
pipeline = [
{"$match": {"leisure": {"$in": ["sports_centre","stadium"]}}},
{"$group": {"_id": "$sport", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:5]+['...'])
"""
Explanation: Popular Sports
To investigate which sports are popular, we can group and sort by the (occurrence of the) sport attribute for all elements classified as sports_centre or stadium in their leisure attribute. Unsurprisingly for a German city, we notice that 9pin (bowling) and soccer are the most popular sports, followed by climbing, an activity very much enjoyed by people in Dresden, presumably because of the nearby sandstone mountains of the national park Sächsische Schweiz.
End of explanation
"""
from Project.notebook_stub import project_coll
import pprint
# Query used - see function: Project.audit_stats_map.stats_dances(...):
l = list(project_coll.distinct("name", {"leisure": "dance"}))
pprint.pprint(l[1:10]+['...'])
"""
Explanation: Where to Dance in Dresden
I am a passionate social dancer, so a list of dance schools in Dresden should not be absent from this investigation. We can quickly grab all elements which have the leisure attribute set to dance.
End of explanation
"""
from Project.notebook_stub import project_coll
# Query used - see function: Project.audit_quality_map.audit_phone_numbers(...):
pipeline = [
{"$match": {"$or": [
{"phone": {"$exists": True}},
{"mobile_phone": {"$exists": True}},
{"address.phone": {"$exists": True}}
]}},
{"$project": {
"_id": 1,
"phone": {"$ifNull": ["$phone", {"$ifNull": ["$mobile_phone", "$address.phone"]}]}
}}
]
l = project_coll.aggregate(pipeline)
# Output too long... See the file Project/output_project.py_-q.txt
"""
Explanation: Problems Encountered in the Map / Data Quality
Try it out: Use python project.py -q to obtain the data from this chapter. See Sample Output in file Project/output_project.py_-q.txt. The script also writes a CSV file to Project/data/audit_buildings.csv, which is also beautified into an Excel File.
Leading Zeros
As already discussed, during the parsing stage, we are using an optimistic approach of parsing any numerical value as integer or float, if it is parsable as such. However, we noticed that we should not do this, if leading zeros are present as those hold semantics for phone numbers and zip codes. Otherwise, this cleaning approach gives us a much smaller representation of the data in MongoDB and in-memory.
Normalizing / Cleaning Cuisines
As hinted in section Cuisines in Restaurant, classification of cuisines is inconsistent. There are two problems with this value:
There are multiple values separated by ';' which makes the parameter hard to parse. We overcome this by creating a parameter cuisineTag which stores the cuisine classifications as an array:
python
db.eval('''db.osmnodes.find({
"cuisine": {"$exists": true},
"amenity": "restaurant"
}).snapshot().forEach(function(val, idx) {
val.cuisineTags = val.cuisine.split(';');
db.osmnodes.save(val)
})
''')
Some values are inconsistently used; therefore, we unify them with a mapping table and a subsequent MongoDB update:
```python
cuisines_synonyms = {
'german': ['regional', 'schnitzel', 'buschenschank'],
'portuguese': ['Portugiesisches_Restaurant_&_Weinbar'],
'italian': ['pizza', 'pasta'],
'mediterranean': ['fish', 'seafood'],
'japanese': ['sushi'],
'turkish': ['kebab'],
'american': ['steak_house']
}
# not mapped:
# greek, asian, chinese, indian, international, vietnamese, thai, spanish, arabic
# sudanese, russian, korean, hungarian, syrian, vegan, soup, croatian, african
# balkan, mexican, french, cuban, lebanese
for target in cuisines_synonyms:
db.osmnodes.update( {
"cuisine": {"$exists": True},
"amenity": "restaurant",
"cuisineTags": {"$in": cuisines_synonyms[target]}
}, {
"$pullAll": { "cusineTags": cuisines_synonyms[target] },
"$addToSet": { "cuisineTags": [ target ] }
}, multi=False )
```
This allows us to convert a restaurant with the MongoDB representation
{..., "cuisine": "pizza;kebab", ...}
to the alternative representation
{..., "cuisine": "pizza;kebab", "cuisineTag": ["italian", "turkish"], ...}
Auditing Phone Numbers
Phone numbers are scattered over different attributes (address.phone, phone and mobile_phone) and come in different styles of formatting (like +49 351 123 45 vs. 0049-351-12345). First, we retrieve a list of all phone numbers. With the goal in mind to later store the normalized phone number back into the attribute phone, this value has to be read first, and only if it is empty, mobile_phone or address.phone should be used.
End of explanation
"""
from Project.notebook_stub import project_coll
# Query used - see function: Project.audit_quality_map.audit_streets(...):
expectedStreetPattern = \
u"^.*(?<![Ss]tra\u00dfe)(?<![Ww]eg)(?<![Aa]llee)(?<![Rr]ing)(?<![Bb]erg)" + \
u"(?<![Pp]ark)(?<![Hh]\u00f6he)(?<![Pp]latz)(?<![Bb]r\u00fccke)(?<![Gg]rund)$"
l = list(project_coll.distinct("name", {
"type": "way",
"name": {"$regex": expectedStreetPattern}
}))
# Output too long... See the file Project/output_project.py_-q.txt
"""
Explanation: Cleaning Phone Numbers
Try it out: Use python project.py -C to clean in debug mode. See Sample Output in file Project/output_project.py_-C.txt. The script also writes a CSV file to Project/data/clean_phones.csv, which is also beautified into an Excel File.
Cleaning the phone numbers involves:
* unifying the different phone attributes (phone, address.phone and mobile_phone) - this is already taken care of by extracting the phone numbers during the audit stage
* if possible, canonicalizing the phone number notations by parsing them using a regular expression:
python
phone_regex = re.compile(ur'^(\(?([\+|\*]|00) *(?P<country>[1-9][0-9]*)\)?)?' + # country code
ur'[ \/\-\.]*\(?0?\)?[ \/\-\.]*' + # separator
ur'(\(0?(?P<area1>[1-9][0-9 ]*)\)|0?(?P<area2>[1-9][0-9]*))?' + # area code
ur'[ \/\-\.]*' + # separator
ur'(?P<number>([0-9]+ *[\/\-.]? *)*)$', # number
re.UNICODE)
The regular expression is resilient to various separators ("/", "-", " ", "(0)") and to bracket notation of phone numbers. It is not resilient to some Unicode characters or to written lists of phone numbers which are designed to be interpreted by humans (using separators like ",", "/-" or "oder", lit. or). During the cleaning stage, an output is written listing which phone numbers could not be parsed. This concerns only a tiny fraction of phone numbers (9, or 0.5%) which could easily be cleaned by hand.
The following objects couldn't be parsed:
normalized
55f57294b1c8a72c34523897 +49 35207 81429 or 81469
55f57299b1c8a72c345272cd +49 351 8386837, +49 176 67032256
55f572c2b1c8a72c34546689 0351 4810426
55f572c3b1c8a72c34546829 +49 351 8902284 or 2525375
55f572fdb1c8a72c34574963 +49 351 4706625, +49 351 0350602
55f573bdb1c8a72c3460bdb3 +49 351 87?44?44?00
55f573bdb1c8a72c3460c066 0162 2648953, 0162 2439168
55f573edb1c8a72c346304b1 03512038973, 03512015831
55f5740eb1c8a72c34649008 0351 4455193 / -118
If the phone number was parsable, the country code, area code and rest of the phone number are separated and subsequently strung together into a canonical form. The data to be transformed is stored in a pandas DataFrame. By using the option -C instead of -c, the execution of the transformation can be suppressed and the DataFrame instead be written to a CSV file, which might be further beautified into an Excel File in order to test or debug the transformation before writing it to the database with the -c option.
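A simplified sketch of that recombination step, reusing the regular expression quoted above (written here with plain r'...' literals instead of the ur'...' literals of the Python 2 script; the exact canonical format produced by the project may differ):
```python
import re

phone_regex = re.compile(r'^(\(?([\+|\*]|00) *(?P<country>[1-9][0-9]*)\)?)?'
                         r'[ \/\-\.]*\(?0?\)?[ \/\-\.]*'
                         r'(\(0?(?P<area1>[1-9][0-9 ]*)\)|0?(?P<area2>[1-9][0-9]*))?'
                         r'[ \/\-\.]*'
                         r'(?P<number>([0-9]+ *[\/\-.]? *)*)$')

def canonicalize(raw, default_country="49"):
    m = phone_regex.match(raw)
    if m is None or not m.group("number"):
        return None                                   # left for manual cleaning
    country = m.group("country") or default_country
    area = (m.group("area1") or m.group("area2") or "").replace(" ", "")
    number = re.sub(r"[ \/\-.]", "", m.group("number"))
    return "+{} {} {}".format(country, area, number)

print(canonicalize("0351 4810426"))        # -> +49 351 4810426
print(canonicalize("+49 (0)351 - 12345"))  # -> +49 351 12345
```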
Auditing Street Names (Spoiler Alert: No Cleaning Necessary)
Auditing the map's street names analogous to how it was done in the Data Wrangling course was done as follows: Check, whether 'weird' street names occur, which do not end on a suffix like street (in German -straße or Straße, depending on whether it is a compound word or not). It is assumed that then, they would most likely end in an abbreviation like str.. For this we use a regular expression querying all streets <u>not</u> ending with a particular suffix like [Ss]traße (street), [Ww]eg (way) etc. This is accomplished by a chain of "negative lookbehind" expressions ((?<!...)) which must all in sequence evaluate to "true" in order to flag a street name as non-conforming.
End of explanation
"""
from Project.notebook_stub import project_db
# Query used - see function: Project.audit_quality_map.audit_buildings(...):
buildings_with_streets = project_db.eval('''
db.osmnodes.ensureIndex({pos:"2dsphere"});
result = [];
db.osmnodes.find(
{"building": {"$exists": true}, "address.street": {"$exists": true}, "pos": {"$exists": true}},
{"address.street": "", "pos": ""}
).forEach(function(val, idx) {
val.nearby = db.osmnodes.distinct("address.street",
{"_id": {"$ne": val._id}, "pos": {"$near": {"$geometry": {"type": "Point", "coordinates": val.pos}, "$maxDistance": 50, "$minDistance": 0}}}
);
result.push(val);
})
return result;
''')
# Output too long... See the file Project/output_project.py_-q.txt
"""
Explanation: Skimming through the list, it was noticable that the nature of the german language (and how in Germany streetnames work) results in the fact, that there are many small places without a suffix like "street" but "their own thing" (like Am Hang lit. 'At The Slope', Beerenhut lit. 'Berry Hat', Im Grunde lit. 'In The Ground'). The street names can therefore not be processed just by looking at the suffixes - I tried something different...
Cross Auditing Street Names with Street Addresses (Spoiler Alert: No Cleaning Necessary)
I did not want to trust the street names of the data set fully yet. Next, I tried figuring out if street names of buildings were consistent with street names of objects in close proximity. Therefore, a JavaScript query is run directly on the database server returning all buildings with the objects nearby having an address.street parameter. This should allow us to cross-audit if objects in close proximity do have the same street names.
End of explanation
"""
from Project.audit_zipcode_map import audit_zipcode_map
from Project.notebook_stub import project_server, project_port
import pprint
zipcodeJoined = audit_zipcode_map(project_server, project_port, quiet=True)
pprint.pprint(zipcodeJoined[1:10]+['...'])
"""
Explanation: The resulting objects are then iterated through and the best and worst fitting nearby street name are identified each using the Levenshtein distance. For each object, a row is created in a DataFrame which is subsequently exported to a csv file Project/data/audit_buildings.csv that was manually beautified into an Excel File.
As can be seen, street names of nearby objects mostly match those of the building itself (Levenshtein distance is zero). If they deviate greatly, they are totally different street names in the same area and not just "typos" or non-conforming abbreviations.
Auditing Zip Codes (Spoiler Alert: No Cleaning Necessary)
Try it out: Use python project.py -Z which runs the auditing script for zipcodes. See Sample Output in file Project/output_project.py_-Z.txt. To be able to run this script correctly, the zipcode data from Geonames.org needs to be downloaded and installed first using the -z option.
This part of the auditing process makes use of additional data from Geonames.org to resolve and audit the zip codes in the data set. During the "installation process" (option -z) the zipcode data (provided as a tab-separated file) is downloaded and, line-by-line, stored to a (separate) MongoDB collection. However, we are only interested in the fields "zipcode" (2) and "place" (3).
During the auditing stage (option -Z) we first get a list of all used zipcode using the following query:
python
pipeline = [
{ "$match": {"address.postcode": {"$exists": 1}} },
{ "$group": {"_id": "$address.postcode", "count": {"$sum": 1}} },
{ "$sort": {"count": 1} }
]
The zipcodes are then all looked up in the zipcode collection using the $in-operator. The data obtained is joined back into the original result.
python
zipcodeObjects = zipcodeColl.find( {"zipcode": {"$in": [z["_id"] for z in zipcodeList]}} )
The following output shows that the lesser used zipcodes are from the Dresden metropolitan area, not Dresden itself:
End of explanation
"""
|
taspinar/siml
|
notebooks/WV1 - Using PyWavelets for Wavelet Analysis.ipynb
|
mit
|
import pywt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
"""
Explanation: This Jupyter notebook provides the code for an introduction to the PyWavelets library.
To get some more background information, please have a look at the accompanying blog-post:
http://ataspinar.com/2018/12/21/a-guide-for-using-the-wavelet-transform-in-machine-learning/
End of explanation
"""
wavelet_families = pywt.families(short=False)
discrete_mother_wavelets = pywt.wavelist(kind='discrete')
continuous_mother_wavelets = pywt.wavelist(kind='continuous')
print("PyWavelets contains the following families: ")
print(wavelet_families)
print()
print("PyWavelets contains the following Continuous families: ")
print(continuous_mother_wavelets)
print()
print("PyWavelets contains the following Discrete families: ")
print(discrete_mother_wavelets)
print()
for family in pywt.families():
print(" * The {} family contains: {}".format(family, pywt.wavelist(family)))
"""
Explanation: 1. Which Wavelets are present in PyWavelets?
End of explanation
"""
discrete_wavelets = ['db5', 'sym5', 'coif5', 'bior2.4']
continuous_wavelets = ['mexh', 'morl', 'cgau5', 'gaus5']
list_list_wavelets = [discrete_wavelets, continuous_wavelets]
list_funcs = [pywt.Wavelet, pywt.ContinuousWavelet]
fig, axarr = plt.subplots(nrows=2, ncols=4, figsize=(16,8))
for ii, list_wavelets in enumerate(list_list_wavelets):
func = list_funcs[ii]
row_no = ii
for col_no, waveletname in enumerate(list_wavelets):
wavelet = func(waveletname)
family_name = wavelet.family_name
biorthogonal = wavelet.biorthogonal
orthogonal = wavelet.orthogonal
symmetry = wavelet.symmetry
if ii == 0:
_ = wavelet.wavefun()
wavelet_function = _[0]
x_values = _[-1]
else:
wavelet_function, x_values = wavelet.wavefun()
if col_no == 0 and ii == 0:
axarr[row_no, col_no].set_ylabel("Discrete Wavelets", fontsize=16)
if col_no == 0 and ii == 1:
axarr[row_no, col_no].set_ylabel("Continuous Wavelets", fontsize=16)
axarr[row_no, col_no].set_title("{}".format(family_name), fontsize=16)
axarr[row_no, col_no].plot(x_values, wavelet_function)
axarr[row_no, col_no].set_yticks([])
axarr[row_no, col_no].set_yticklabels([])
plt.tight_layout()
plt.show()
"""
Explanation: 2. Visualizing several Discrete and Continuous wavelets
End of explanation
"""
fig, axarr = plt.subplots(ncols=5, nrows=5, figsize=(20,16))
fig.suptitle('Daubechies family of wavelets', fontsize=16)
db_wavelets = pywt.wavelist('db')[:5]
for col_no, waveletname in enumerate(db_wavelets):
wavelet = pywt.Wavelet(waveletname)
no_moments = wavelet.vanishing_moments_psi
family_name = wavelet.family_name
for row_no, level in enumerate(range(1,6)):
wavelet_function, scaling_function, x_values = wavelet.wavefun(level = level)
axarr[row_no, col_no].set_title("{} - level {}\n{} vanishing moments\n{} samples".format(
waveletname, level, no_moments, len(x_values)), loc='left')
axarr[row_no, col_no].plot(x_values, wavelet_function, 'bD--')
axarr[row_no, col_no].set_yticks([])
axarr[row_no, col_no].set_yticklabels([])
plt.tight_layout()
plt.subplots_adjust(top=0.9)
plt.show()
"""
Explanation: 3. Visualizing how the wavelet form depends on the order and decomposition level
End of explanation
"""
time = np.linspace(0, 1, num=2048)
chirp_signal = np.sin(250 * np.pi * time**2)
(cA1, cD1) = pywt.dwt(chirp_signal, 'db2', 'smooth')
(cA2, cD2) = pywt.dwt(cA1, 'db2', 'smooth')
(cA3, cD3) = pywt.dwt(cA2, 'db2', 'smooth')
(cA4, cD4) = pywt.dwt(cA3, 'db2', 'smooth')
(cA5, cD5) = pywt.dwt(cA4, 'db2', 'smooth')
coefficients_level1 = [cA1, cD1]
coefficients_level2 = [cA2, cD2, cD1]
coefficients_level3 = [cA3, cD3, cD2, cD1]
coefficients_level4 = [cA4, cD4, cD3, cD2, cD1]
coefficients_level5 = [cA5, cD5, cD4, cD3, cD2, cD1]
reconstructed_signal_level1 = pywt.waverec(coefficients_level1, 'db2', 'smooth')
reconstructed_signal_level2 = pywt.waverec(coefficients_level2, 'db2', 'smooth')
reconstructed_signal_level3 = pywt.waverec(coefficients_level3, 'db2', 'smooth')
reconstructed_signal_level4 = pywt.waverec(coefficients_level4, 'db2', 'smooth')
reconstructed_signal_level5 = pywt.waverec(coefficients_level5, 'db2', 'smooth')
fig, ax = plt.subplots(figsize=(12,4))
ax.plot(chirp_signal, label='signal')
ax.plot(reconstructed_signal_level1, label='reconstructed level 1', linestyle='--')
ax.plot(reconstructed_signal_level2, label='reconstructed level 2', linestyle='--')
ax.plot(reconstructed_signal_level3, label='reconstructed level 3', linestyle='--')
ax.plot(reconstructed_signal_level4, label='reconstructed level 4', linestyle='--')
ax.plot(reconstructed_signal_level5, label='reconstructed level 5', linestyle='--')
ax.legend(loc='upper right')
ax.set_title('single reconstruction', fontsize=20)
ax.set_xlabel('time axis', fontsize=16)
ax.set_ylabel('Amplitude', fontsize=16)
plt.show()
"""
Explanation: 4.A Using the pywt.dwt() for the decomposition of a signal into the frequency sub-bands
(and reconstructing it again)
End of explanation
"""
time = np.linspace(0, 1, num=2048)
chirp_signal = np.sin(250 * np.pi * time**2)
coefficients_level1 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=1)
coefficients_level2 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=2)
coefficients_level3 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=3)
coefficients_level4 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=4)
coefficients_level5 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=5)
reconstructed_signal_level1 = pywt.waverec(coefficients_level1, 'db2', 'smooth')
reconstructed_signal_level2 = pywt.waverec(coefficients_level2, 'db2', 'smooth')
reconstructed_signal_level3 = pywt.waverec(coefficients_level3, 'db2', 'smooth')
reconstructed_signal_level4 = pywt.waverec(coefficients_level4, 'db2', 'smooth')
reconstructed_signal_level5 = pywt.waverec(coefficients_level5, 'db2', 'smooth')
fig, ax = plt.subplots(figsize=(12,4))
ax.plot(chirp_signal, label='signal')
ax.plot(reconstructed_signal_level1, label='reconstructed level 1', linestyle='--')
ax.plot(reconstructed_signal_level2, label='reconstructed level 2', linestyle='--')
ax.plot(reconstructed_signal_level3, label='reconstructed level 3', linestyle='--')
ax.plot(reconstructed_signal_level4, label='reconstructed level 4', linestyle='--')
ax.plot(reconstructed_signal_level5, label='reconstructed level 5', linestyle='--')
ax.legend(loc='upper right')
ax.set_title('single reconstruction', fontsize=20)
ax.set_xlabel('time axis', fontsize=16)
ax.set_ylabel('Amplitude', fontsize=16)
plt.show()
"""
Explanation: 4.B Using the pywt.wavedec() for the decomposition of a signal into the frequency sub-bands
(and reconstructing it again)
End of explanation
"""
fig = plt.figure(figsize=(6,8))
spec = gridspec.GridSpec(ncols=2, nrows=6)
ax0 = fig.add_subplot(spec[0, 0:2])
ax1a = fig.add_subplot(spec[1, 0])
ax1b = fig.add_subplot(spec[1, 1])
ax2a = fig.add_subplot(spec[2, 0])
ax2b = fig.add_subplot(spec[2, 1])
ax3a = fig.add_subplot(spec[3, 0])
ax3b = fig.add_subplot(spec[3, 1])
ax4a = fig.add_subplot(spec[4, 0])
ax4b = fig.add_subplot(spec[4, 1])
ax5a = fig.add_subplot(spec[5, 0])
ax5b = fig.add_subplot(spec[5, 1])
axarr = np.array([[ax1a, ax1b],[ax2a, ax2b],[ax3a, ax3b],[ax4a, ax4b],[ax5a, ax5b]])
time = np.linspace(0, 1, num=2048)
chirp_signal = np.sin(250 * np.pi * time**2)
# First we reconstruct a signal using pywt.wavedec() as we have also done at #4.2
coefficients_level1 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=1)
coefficients_level2 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=2)
coefficients_level3 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=3)
coefficients_level4 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=4)
coefficients_level5 = pywt.wavedec(chirp_signal, 'db2', 'smooth', level=5)
# pywt.wavedec() returns a list of coefficients. Below we assign these lists of coefficients to variables explicitly.
[cA1_l1, cD1_l1] = coefficients_level1
[cA2_l2, cD2_l2, cD1_l2] = coefficients_level2
[cA3_l3, cD3_l3, cD2_l3, cD1_l3] = coefficients_level3
[cA4_l4, cD4_l4, cD3_l4, cD2_l4, cD1_l4] = coefficients_level4
[cA5_l5, cD5_l5, cD4_l5, cD3_l5, cD2_l5, cD1_l5] = coefficients_level5
# Since the lists of coefficients have been assigned explicitly to variables, we can set a few of them to zero.
approx_coeff_level1_only = [cA1_l1, None]
detail_coeff_level1_only = [None, cD1_l1]
approx_coeff_level2_only = [cA2_l2, None, None]
detail_coeff_level2_only = [None, cD2_l2, None]
approx_coeff_level3_only = [cA3_l3, None, None, None]
detail_coeff_level3_only = [None, cD3_l3, None, None]
approx_coeff_level4_only = [cA4_l4, None, None, None, None]
detail_coeff_level4_only = [None, cD4_l4, None, None, None]
approx_coeff_level5_only = [cA5_l5, None, None, None, None, None]
detail_coeff_level5_only = [None, cD5_l5, None, None, None, None]
# By reconstructing the signal from only one set of coefficients, we can see what
# the frequency sub-band for that specific set of coefficients looks like
rec_signal_cA_level1 = pywt.waverec(approx_coeff_level1_only, 'db2', 'smooth')
rec_signal_cD_level1 = pywt.waverec(detail_coeff_level1_only, 'db2', 'smooth')
rec_signal_cA_level2 = pywt.waverec(approx_coeff_level2_only, 'db2', 'smooth')
rec_signal_cD_level2 = pywt.waverec(detail_coeff_level2_only, 'db2', 'smooth')
rec_signal_cA_level3 = pywt.waverec(approx_coeff_level3_only, 'db2', 'smooth')
rec_signal_cD_level3 = pywt.waverec(detail_coeff_level3_only, 'db2', 'smooth')
rec_signal_cA_level4 = pywt.waverec(approx_coeff_level4_only, 'db2', 'smooth')
rec_signal_cD_level4 = pywt.waverec(detail_coeff_level4_only, 'db2', 'smooth')
rec_signal_cA_level5 = pywt.waverec(approx_coeff_level5_only, 'db2', 'smooth')
rec_signal_cD_level5 = pywt.waverec(detail_coeff_level5_only, 'db2', 'smooth')
ax0.set_title("Chirp Signal", fontsize=16)
ax0.plot(time, chirp_signal)
ax0.set_xticks([])
ax0.set_yticks([])
ax1a.plot(rec_signal_cA_level1, color='red')
ax1b.plot(rec_signal_cD_level1, color='green')
ax2a.plot(rec_signal_cA_level2, color='red')
ax2b.plot(rec_signal_cD_level2, color='green')
ax3a.plot(rec_signal_cA_level3, color='red')
ax3b.plot(rec_signal_cD_level3, color='green')
ax4a.plot(rec_signal_cA_level4, color='red')
ax4b.plot(rec_signal_cD_level4, color='green')
ax5a.plot(rec_signal_cA_level5, color='red')
ax5b.plot(rec_signal_cD_level5, color='green')
for ii in range(0,5):
axarr[ii,0].set_xticks([])
axarr[ii,0].set_yticks([])
axarr[ii,1].set_xticks([])
axarr[ii,1].set_yticks([])
axarr[ii,0].set_title("Approximation Coeff", fontsize=16)
axarr[ii,1].set_title("Detail Coeff", fontsize=16)
axarr[ii,0].set_ylabel("Level {}".format(ii+1), fontsize=16)
plt.tight_layout()
plt.show()
"""
Explanation: 5. Reconstructing a signal with only one level of coefficients
End of explanation
"""
|
tensorflow/lattice
|
docs/tutorials/custom_estimators.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install tensorflow-lattice
"""
Explanation: TF Lattice Custom Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/custom_estimators"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our [compatibility guarantees](https://tensorflow.org/guide/versions), but they will not receive any additional features, and there will be no fixes other than to security vulnerabilities. See the migration guide for details.
Overview
You can use custom estimators to create arbitrarily monotonic models using TFL layers. This guide outlines the steps needed to create such estimators.
Setup
Installing TF Lattice package:
End of explanation
"""
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
from tensorflow_estimator.python.estimator.canned import optimizers
from tensorflow_estimator.python.estimator.head import binary_class_head
logging.disable(sys.maxsize)
"""
Explanation: Importing required packages:
End of explanation
"""
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
target = df.pop('target')
train_size = int(len(df) * 0.8)
train_x = df[:train_size]
train_y = target[:train_size]
test_x = df[train_size:]
test_y = target[train_size:]
df.head()
"""
Explanation: Downloading the UCI Statlog (Heart) dataset:
End of explanation
"""
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 1000
"""
Explanation: Setting the default values used for training in this guide:
End of explanation
"""
# Feature columns.
# - age
# - sex
# - ca    number of major vessels (0-3) colored by fluoroscopy
# - thal  3 = normal; 6 = fixed defect; 7 = reversible defect
feature_columns = [
fc.numeric_column('age', default_value=-1),
fc.categorical_column_with_vocabulary_list('sex', [0, 1]),
fc.numeric_column('ca'),
fc.categorical_column_with_vocabulary_list(
'thal', ['normal', 'fixed', 'reversible']),
]
"""
Explanation: Feature Columns
As for any other TF estimator, data needs to be passed to the estimator, typically via an input_fn, and parsed using FeatureColumns.
End of explanation
"""
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=train_x,
y=train_y,
shuffle=True,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
num_threads=1)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=test_x,
y=test_y,
shuffle=False,
batch_size=BATCH_SIZE,
num_epochs=1,
num_threads=1)
"""
Explanation: Note that categorical features do not need to be wrapped in a dense feature column, since the tfl.layers.CategoricalCalibration layer can directly consume category indices.
Creating input_fn
As for any other estimator, you can use input_fn to feed data to the model for training and evaluation.
End of explanation
"""
def model_fn(features, labels, mode, config):
"""model_fn for the custom estimator."""
del config
input_tensors = tfl.estimators.transform_features(features, feature_columns)
inputs = {
key: tf.keras.layers.Input(shape=(1,), name=key) for key in input_tensors
}
lattice_sizes = [3, 2, 2, 2]
lattice_monotonicities = ['increasing', 'none', 'increasing', 'increasing']
lattice_input = tf.keras.layers.Concatenate(axis=1)([
tfl.layers.PWLCalibration(
input_keypoints=np.linspace(10, 100, num=8, dtype=np.float32),
# The output range of the calibrator should be the input range of
# the following lattice dimension.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
monotonicity='increasing',
)(inputs['age']),
tfl.layers.CategoricalCalibration(
# Number of categories including any missing/default category.
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
)(inputs['sex']),
tfl.layers.PWLCalibration(
input_keypoints=[0.0, 1.0, 2.0, 3.0],
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
# You can specify TFL regularizers as tuple
# ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4),
monotonicity='increasing',
)(inputs['ca']),
tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
# Categorical monotonicity can be partial order.
# (i, j) indicates that we must have output(i) <= output(j).
# Make sure to set the lattice monotonicity to 'increasing' for this
# dimension.
monotonicities=[(0, 1), (0, 2)],
)(inputs['thal']),
])
output = tfl.layers.Lattice(
lattice_sizes=lattice_sizes, monotonicities=lattice_monotonicities)(
lattice_input)
training = (mode == tf.estimator.ModeKeys.TRAIN)
model = tf.keras.Model(inputs=inputs, outputs=output)
logits = model(input_tensors, training=training)
if training:
optimizer = optimizers.get_optimizer_instance_v2('Adagrad', LEARNING_RATE)
else:
optimizer = None
head = binary_class_head.BinaryClassHead()
return head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
optimizer=optimizer,
logits=logits,
trainable_variables=model.trainable_variables,
update_ops=model.updates)
"""
Explanation: Creating model_fn
There are several ways to create a custom estimator. Here we will construct a model_fn that calls a Keras model on the parsed input tensors. To parse the input features, you can use tf.feature_column.input_layer, tf.keras.layers.DenseFeatures, or tfl.estimators.transform_features. If you use the latter, you will not need to wrap categorical features with dense feature columns, and the resulting tensors will not be concatenated, which makes it easier to use the features in the calibration layers.
To construct a model, you can mix and match TFL layers or any other Keras layers. Here we create a calibrated lattice Keras model out of TFL layers and impose several monotonicity constraints. We then use the Keras model to create the custom estimator.
End of explanation
"""
estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('AUC: {}'.format(results['auc']))
"""
Explanation: Training the Estimator
Using the model_fn we can create and train the estimator.
End of explanation
"""
|
modin-project/modin
|
examples/tutorial/jupyter/execution/omnisci_on_native/local/exercise_1.ipynb
|
apache-2.0
|
import modin.config as cfg
cfg.StorageFormat.put('omnisci')
# Note: Importing notebooks dependencies. Do not change this code!
import numpy as np
import pandas
import sys
import modin
pandas.__version__
modin.__version__
# Implement your answer here. You are also free to play with the size
# and shape of the DataFrame, but beware of exceeding your memory!
import pandas as pd
frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
# ***** Do not change the code below! It verifies that
# ***** the exercise has been done correctly. *****
try:
assert df is not None
assert frame_data is not None
assert isinstance(frame_data, np.ndarray)
except:
raise AssertionError("Don't change too much of the original code!")
assert "modin.pandas" in sys.modules, "Not quite correct. Remember the single line of code change (See above)"
import modin.pandas
assert pd == modin.pandas, "Remember the single line of code change (See above)"
assert hasattr(df, "_query_compiler"), "Make sure that `df` is a modin.pandas DataFrame."
print("Success! You only need to change one line of code!")
"""
Explanation: <center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 1: How to use Modin
GOAL: Learn how to import Modin to accelerate and scale pandas workflows.
Modin is a drop-in replacement for pandas that distributes the computation
across all of the cores in your machine or in a cluster.
In practical terms, this means that you can continue using the same pandas scripts
as before and expect the behavior and results to be the same. The only thing that needs
to change is the import statement. Normally, you would change:
python
import pandas as pd
to:
python
import modin.pandas as pd
Changing this line of code will allow you to use all of the cores in your machine to do computation on your data. One of the major performance bottlenecks of pandas is that it only uses a single core for any given computation. Modin exposes an API that is identical to pandas, allowing you to continue interacting with your data as you would with pandas. There are no additional commands required to use Modin locally. Partitioning, scheduling, data transfer, and other related concerns are all handled by Modin under the hood.
<p style="text-align:left;">
<h1>pandas on a multicore laptop
<span style="float:right;">
Modin on a multicore laptop
</span>
<div>
<img align="left" src="../../../img/pandas_multicore.png"><img src="../../../img/modin_multicore.png">
</div>
### Concept for exercise: Dataframe constructor
Often when playing around in pandas, it is useful to create a DataFrame with the constructor. That is where we will start.
```python
import numpy as np
import pandas as pd
frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
```
When creating a dataframe from a non-distributed object, it will take extra time to partition the data for Modin. When this is happening, you will see this message:
```
UserWarning: Distributing <class 'numpy.ndarray'> object. This may take some time.
```
Modin uses Ray as its execution engine by default. Since this notebook is about OmniSci, let's run the examples on the OmniSci engine. To do this, we need to activate OmniSci either via the Modin config or a Modin environment variable. See the [OmniSci usage](https://github.com/modin-project/modin/blob/master/docs/development/using_omnisci.rst) section for more details.
End of explanation
"""
# When working with non-string column labels, some backend logic may try to insert a column
# with a string name into the frame, so we call add_prefix()
df = df.add_prefix("col")
# Print the first 10 lines.
df.head(10)
df.count()
"""
Explanation: Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.
Concept for Exercise: Data Interaction and Printing
When interacting with data, it is very important to look at different parts of the data (e.g. df.head()). Here we will show that you can print a modin.pandas DataFrame in the same ways you would a pandas one.
End of explanation
"""
|
rriehle/Python300-2017q3
|
2017-07-19.ipynb
|
gpl-3.0
|
cond1 = True
def func1(): print("Hi I'm func1")
"""
Explanation: Functional Programming
Expression based flow control
Using the basic structure of an if/elif chain....
if <cond1>:
    func1()
elif <cond2>:
    func2()
else:
    func3()
...combined with what we know about logical truth tables....
```
AND | True False
True | true false
False | false false
OR | True False
True | true true
False | true false
XOR | True False
True | false true
False | true false
```
AND is Logical Conjunction:
https://en.wikipedia.org/wiki/Truth_table#Logical_conjunction_.28AND.29
OR is Logical Disjunction:
https://en.wikipedia.org/wiki/Truth_table#Logical_disjunction_.28OR.29
XOR is Exclusive Disjunction:
https://en.wikipedia.org/wiki/Truth_table#Exclusive_disjunction
... AND combined with what we know about short-circuiting logical operators....
https://docs.python.org/3/library/stdtypes.html#boolean-operations-and-or-not
https://en.wikipedia.org/wiki/Short-circuit_evaluation
... THEN how might we handle flow control with expressions rather than an if/elif chain?
End of explanation
"""
(cond1 and func1())
cond1 = False
(cond1 and func1())
cond1 = True
def func1(): return True
(True and func1())
"""
Explanation: Remember the general form:
if <cond1>:
func1()
End of explanation
"""
x = 1
cond1 = (3 > x)
cond1
(cond1 and func1())
((3 > x) and func1())
((3 < x) and func1())
x = 5
cond2 = (3 > x)
cond2
func1()
((cond1) and func1())
((cond2) and func1())
x = 5
cond1 = (3 > x)
(True and func1())
"""
Explanation: Remember to think of cond1 and cond2 as expressions even when using literals; in other words....
End of explanation
"""
cond1 = True
cond2 = False
def func1(): print("Hi im func1")
def func2(): print("Hi im func2")
((cond1 and func1()) or (cond2 and func2()))
ret = func1()
type(ret)
cond1 = False
cond2 = False
def func1(): print("Hi im func1")
def func2(): print("Hi im func2")
def func3(): print("Hi im func3")
"""
Explanation: Now let's try more of the general form:
if (cond1):
func1()
elif (cond2):
func2()
And its corollary as an expression:
(cond1 and func1()) or (cond2 and func2())
End of explanation
"""
((cond1 and func1()) or (cond2 and func2()) or func3())
"""
Explanation: Finally for the entire form:
if (cond1):
func1()
elif (cond2):
func2()
else:
func3()
And its corollary as an expression:
((cond1 and func1()) or (cond2 and func2()) or func3())
End of explanation
"""
my_list = list(range(1000))
print(*my_list)
def multiply_by_two(x):
return x * 2
my_doubled_list = map(multiply_by_two, my_list)
my_doubled_list
print(*my_doubled_list)
"""
Explanation: map, filter, reduce, and yes, lambda
Map/filter/reduce is generally categorized as belonging to a functional style for the reasons we discussed in class. They pass functions around to do their work and they perform transformations on data sets.
https://en.wikipedia.org/wiki/MapReduce
In some programming environments that offer immutable data structures, map/filter/reduce operate fully functionally, returning entirely new data sets/structures from their operations. This gives rise to embarrassingly parallel map/filter/reduce algorithms.
https://en.wikipedia.org/wiki/Embarrassingly_parallel
End of explanation
"""
my_doubled_list = map(lambda x: x*2, my_list)
"""
Explanation: Same thing with a lambda
End of explanation
"""
my_doubled_list = [i * 2 for i in my_list]
print(*my_doubled_list)
def my_filter(x):
return x > 900
my_filtered_list = filter(my_filter, my_list)
print(*my_filtered_list)
my_doubled_list = map(lambda x: x*2, filter(my_filter, my_list))
print(*my_doubled_list)
my_filtered_list = filter(my_filter, my_list)
type(my_filtered_list)
"""
Explanation: Same thing as a comprehension
End of explanation
"""
my_doubled_list = map(lambda x: x*2, my_filtered_list)
print(*my_doubled_list)
"""
Explanation: my_filtered_list is a filter
and a filter is an iterator
https://docs.python.org/3/library/functions.html#filter
so like any other iterator it can be used only once,
so don't spend it here in the print statement if you want to use it below
print(*my_filtered_list)
End of explanation
"""
from functools import reduce
my_list = list(range(100))
print(*my_list)
sum(my_list)
min(my_list)
max(my_list)
my_sum = reduce(lambda x, y: x+y, my_list)
my_sum
"""
Explanation: reduce
End of explanation
"""
import random
# https://github.com/mrocklin/multipledispatch/
from multipledispatch import dispatch
class Thing(object):
pass
class Paper(Thing):
pass
class Scissors(Thing):
pass
class Rock(Thing):
pass
options = [Paper, Scissors, Rock]
player1 = random.choice(options)
player2 = random.choice(options)
player1
player2
"""
Explanation: Multiple dispatch
There are several multiple dispatch libraries for python. I got confused as to which I had installed into which virtualenv during class. Sorry about that.
https://en.wikipedia.org/wiki/Multiple_dispatch
http://www.artima.com/weblogs/viewpost.jsp?thread=101605
End of explanation
"""
def draw(x, y):
if isinstance(x, Rock):
if isinstance(y, Rock):
return None # No winner
if isinstance(y, Paper):
return y
if isinstance(y, Scissors):
return x
else:
raise TypeError("Unknown type involved")
elif isinstance(x, Paper):
if isinstance(y, Rock):
return x
if isinstance(y, Paper):
return None # No winner
if isinstance(y, Scissors):
return y
else:
raise TypeError("Unknown type involved")
elif isinstance(x, Scissors):
""" This method left as a exercise for the reader. """
pass
draw(player1(), player2())
"""
Explanation: First, the non-functional approach. Notice the pattern from Guido's blog post?
End of explanation
"""
@dispatch(Rock, Rock)
def draw(x, y):
return None
@dispatch(Paper, Scissors)
def draw(x, y):
return(y)
# This is by no means all of the combinations. Here again they're left as an exercise for the reader.
winner = draw(player1(), player2())
(isinstance(winner, player1) and print("Player1 won... bam!"))
(isinstance(winner, player2) and print("Player2 won... bowzza!"))
"""
Explanation: Now the more functional, multi-dispatch method
End of explanation
"""
|
w4zir/ml17s
|
lectures/lec05-multivariate-regression.ipynb
|
mit
|
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
import matplotlib as mpl
# read data in pandas frame
dataframe = pd.read_csv('datasets/house_dataset2.csv', encoding='utf-8')
# check data by printing first few rows
dataframe.head()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
fig.set_size_inches(12.5, 7.5)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=dataframe['size'], ys=dataframe['bedrooms'], zs=dataframe['price'])
ax.set_ylabel('bedrooms'); ax.set_xlabel('size'); ax.set_zlabel('price')
# ax.view_init(10, -45)
plt.show()
"""
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 5: Multivariate Regression
Overview
Machine Learning pipeline
Linear Regression with one variable
Model Representation
Vectorize Model
Linear Regression with multiple variables
Cost Function
Gradient Descent
Speed up gradient descent
Feature Scaling
Mean Normalization
Combining Both
Learning Rate $\alpha$
Automatic Convergence Test
Linear Regression with Multiple Variables Example
Read data
Feature Scaling and Mean Normalization
Initialize Hyper Parameters
Model/Hypothesis Function
Cost Function
Gradient Descent
Run Gradient Descent
Plot Convergence
Predict output using trained model
Resources
Credits
<br>
<br>
Classification vs Regression
<img style="float: left;" src="images/05_05.jpg" width=300> <img style="float: center;" src="images/05_04.png" width=400>
<br>
Machine Learning pipeline
<img style="float: left;" src="images/model.png">
x is called input variables or input features.
y is called output or target variable. Also sometimes known as label.
h is called hypothesis or model.
pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
dataset of all training examples is called training set.
m is the number of samples in a dataset.
n is the number of features in a dataset excluding label.
<img style="float: left;" src="images/02_02.png", width=400>
<br>
<br>
Linear Regression with one variable
Model Representation
Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
<img style="float: left;" src="images/02_01.png">
$\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
Need to find $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximizes the performance of model.
<br>
Vectorize Model
<img style="float: right;" src="images/02_02.png" width=300>
Write model in form of matrix multiplication
$h(x)$ = $X \times \theta$
$X$ and $\theta$ are both matrices
$X = \left[ \begin{array}{cc}
x_1 \\
x_2 \\
x_3 \\
... \\
x_{m}
\end{array} \right]$
$h(x)$ = $\theta_0 + \theta_1 x$
= $X \times \theta$ = $\left[ \begin{array}{cc}
1 & x_i
\end{array} \right] \times \left[ \begin{array}{cc}
\theta_0 \\
\theta_1
\end{array} \right]$
$h(x)$ = $\left[ \begin{array}{cc}
\theta_0 + \theta_1 x_1 \\
\theta_0 + \theta_1 x_2 \\
\theta_0 + \theta_1 x_3 \\
... \\
\theta_0 + \theta_1 x_{m}
\end{array} \right] = \left[ \begin{array}{cc}
1 & x_1 \\
1 & x_2 \\
1 & x_3 \\
... \\
1 & x_{m}
\end{array} \right] \times \left[ \begin{array}{cc}
\theta_0 \\
\theta_1
\end{array} \right]$
In the given dataset, $X$ has dimensions $m \times 1$ because there is 1 input variable
$\theta$ has dimension $2\times 1$
Append a column vector of all 1's to X
New X has dimensions $m\times 2$
$h(x) = X \times \theta$ has dimensions $m\times 1$
<br>
Linear Regression with multiple variables
<img style="float: right;" src="images/02_03.png" width=300>
Model $h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 .... + \theta_n x_n$
Dimensions of $X$ is $m\times n$
$X = \left[ \begin{array}{cc}
x_1^1 & x_1^2 & .. & x_1^{n} \\
x_2^1 & x_2^2 & .. & x_2^{n} \\
x_3^1 & x_3^2 & .. & x_3^{n} \\
... \\
x_{m}^1 & x_{m}^2 & .. & x_{m}^{n}
\end{array} \right]$
$\theta$ has dimension $(n+1)\times 1$
$\theta = \left[ \begin{array}{cc}
\theta_0 \\
\theta_1 \\
\theta_2 \\
... \\
\theta_{n}
\end{array} \right]$
<br>
- Append a column vector of all 1's to X
- Now X has dimensions $m\times (n+1)$
$X = \left[ \begin{array}{cc}
1 & x_1^1 & x_1^2 & .. & x_1^{n} \\
1 & x_2^1 & x_2^2 & .. & x_2^{n} \\
1 & x_3^1 & x_3^2 & .. & x_3^{n} \\
... \\
1 & x_{m}^1 & x_{m}^2 & .. & x_{m}^{n}
\end{array} \right]$
where $x_i$ is $i^{th}$ sample, e.g.
$x_2 = [ \begin{array}{cc} 4.9 & 3.0 & 1.4 & 0.2 \end{array}]$
and $x_i^{j}$ is value of feature $j$ in the $i^{th}$ training example e.g. $x_2^3=1.4$
$h(x) = X \times \theta$ has dimensions $m\times 1$
<br>
<br>
<br>
Cost Function
Cost function = J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
where $h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 .... + \theta_n x_n$
<img style="float: center;" src="images/03_02.png", width=300>
<br>
<br>
Gradient Descent
Cost function:
J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Gradient descent equation:
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
<br>
Replacing J($\theta$) for each j
$\begin{align} & \text{repeat until convergence:} \; \lbrace \newline \; & \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^0_{i}\newline \; & \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^1_{i} \newline \; & \theta_2 := \theta_2 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^2_{i} \newline & \cdots \newline \rbrace \end{align}$
<br>
or more generally
$\begin{align}& \text{repeat until convergence:} \; \lbrace \newline \; & \theta_j := \theta_j - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^j_{i} \; & \text{for j := 0...n}\newline \rbrace\end{align}$
<br>
<img style="float: left;" src="images/03_04.gif">
Speed up gradient descent
Tricks to make gradient descent converge faster to the optimal value:
Keep each of our input values in roughly the same range.
$\theta$ will descend quickly on small ranges and slowly on large ranges.
$\theta$ will oscillate inefficiently down to the optimum when the variables are very uneven.
Aim is to have:
$-1 \le x^i \le 1$
or
$-0.5 \le x^i \le 0.5$
<br>
Feature Scaling
Divide the values of a feature by its range
$x^i = \frac{x^i}{\max(x^i) - \min(x^i)}$
<img style="float: center;" src="images/05_06.png">
Mean Normalization
Bring mean of each feature to zero
$x^i = x^i - \mu^i$
where $\mu^i$ is the mean of feature $i$
Combining both
$x^i = \frac{x^i - \mu^i}{\max(x^i) - \min(x^i)}$
or
$x^i = \frac{x^i - \mu^i}{\rho^i}$
where $\rho^i$ is standard deviation of feature $i$
<br>
Learning Rate $\alpha$
Appropriate $\alpha$ value will speed up gradient descent.
If $\alpha$ is too small: slow convergence.
If $\alpha$ is too large: may not decrease on every iteration and thus may not converge.
<img src="images/05_01.png">
For implementation purposes, try out different values of $\alpha$, e.g. 0.001, 0.003, 0.01, 0.03, 0.1, and plot $J(\theta)$ with respect to iterations.
Choose the one that makes gradient descent converge quickly.
<br>
Automatic Convergence Test
Plot $J(\theta)$ vs iterations.
$J(\theta)$ should decrease on each iteration.
If $J(\theta)$ decreases by only a very small value in an iteration, you may have reached the optimal value.
<img style="float: left;" src="images/05_02.png">
<br>
<br>
Linear Regression with Multiple Variables Example
Read data
End of explanation
"""
dataframe.describe()
#Quick visualize data
plt.grid(True)
plt.xlim([-1,5000])
dummy = plt.hist(dataframe["size"],label = 'Size')
dummy = plt.hist(dataframe["bedrooms"],label = 'Bedrooms')
plt.title('Clearly we need feature normalization.')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
mean_size = dataframe["size"].mean()
std_size = dataframe["size"].std()
mean_bed = dataframe["bedrooms"].mean()
std_bed = dataframe["bedrooms"].std()
dataframe["size"] = (dataframe["size"] - mean_size)/std_size
dataframe["bedrooms"] = (dataframe["bedrooms"] - mean_bed)/std_bed
dataframe.describe()
# reassign X
# assign X
X = np.array(dataframe[['size','bedrooms']])
X = np.insert(X,0,1,axis=1)
#Quick visualize data
plt.grid(True)
plt.xlim([-5,5])
dummy = plt.hist(dataframe["size"],label = 'Size')
dummy = plt.hist(dataframe["bedrooms"],label = 'Bedrooms')
plt.title('Feature scaled and normalization.')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
# assign X and y
X = np.array(dataframe[['size','bedrooms']])
y = np.array(dataframe[['price']])
m = y.size # number of training examples
# insert all 1's column for theta_0
X = np.insert(X,0,1,axis=1)
# initialize theta
# initial_theta = np.zeros((X.shape[1],1))
initial_theta = np.random.rand(X.shape[1],1)
initial_theta
X.shape
initial_theta.shape
"""
Explanation: Feature Scaling and Mean Normalization
End of explanation
"""
iterations = 1500
alpha = 0.1
"""
Explanation: Initialize Hyper Parameters
End of explanation
"""
def h(X, theta): #Linear hypothesis function
hx = np.dot(X,theta)
return hx
"""
Explanation: Model/Hypothesis Function
End of explanation
"""
def computeCost(theta,X,y): #Cost function
"""
theta_start is an n- dimensional vector of initial theta guess
X is matrix with n- columns and m- rows
y is a matrix with m- rows and 1 column
"""
#note to self: *.shape is (rows, columns)
return float((1./(2*m)) * np.dot((h(X,theta)-y).T,(h(X,theta)-y)))
#Test that running computeCost with 0's as theta returns 65591548106.45744:
initial_theta = np.zeros((X.shape[1],1)) # theta is a vector with n+1 rows and 1 column (if X has n features)
print (computeCost(initial_theta,X,y))
"""
Explanation: Cost Function
End of explanation
"""
#Actual gradient descent minimizing routine
def gradientDescent(X, theta_start = np.zeros(2)):
"""
theta_start is an n- dimensional vector of initial theta guess
X is matrix with n- columns and m- rows
"""
theta = theta_start
j_history = [] #Used to plot cost as function of iteration
theta_history = [] #Used to visualize the minimization path later on
for meaninglessvariable in range(iterations):
tmptheta = theta
# append for plotting
j_history.append(computeCost(theta,X,y))
theta_history.append(list(theta[:,0]))
#Simultaneously updating theta values
for j in range(len(tmptheta)):
tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(X,theta) - y)*np.array(X[:,j]).reshape(m,1))
theta = tmptheta
return theta, theta_history, j_history
"""
Explanation: Gradient Descent Function
End of explanation
"""
#Actually run gradient descent to get the best-fit theta values
theta, thetahistory, j_history = gradientDescent(X,initial_theta)
theta
"""
Explanation: Run Gradient Descent
End of explanation
"""
plt.plot(j_history)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
"""
Explanation: Plot Convergence
End of explanation
"""
dataframe.head()
x_test = np.array([1,0.130010,-0.22367])
print("$%0.2f" % float(h(x_test,theta)))
hx = h(X, theta)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
fig.set_size_inches(12.5, 7.5)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=dataframe['size'], ys=dataframe['bedrooms'], zs=dataframe['price'])
ax.set_ylabel('bedrooms'); ax.set_xlabel('size'); ax.set_zlabel('price')
# ax.plot(xs=np.array(X[:,0],dtype=object).reshape(-1,1), ys=np.array(X[:,1],dtype=object).reshape(-1,1), zs=hx, color='green')
ax.plot(X[:,0], X[:,1], np.array(hx[:,0]), label='fitted line', color='green')
# ax.view_init(20, -165)
plt.show()
"""
Explanation: Predict output using trained model
End of explanation
"""
|
arne-cl/alt-mulig
|
python/python-metaprogramming-david-beazley.ipynb
|
gpl-3.0
|
from functools import wraps
def debug(func):
msg = func.__name__
# wraps is used to keep the metadata of the original function
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
@debug
def add(x,y):
return x+y
add(2,3)
def add(x,y):
return x+y
debug(add)
debug(add)(2,3)
"""
Explanation: Notes from David Beazley's Python3 Metaprogramming tutorial (2013)
"ported" to Python 2.7, unless noted otherwise
A Debugging Decorator
End of explanation
"""
def debug_with_args(prefix=''):
def decorate(func):
msg = prefix + func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
return decorate
@debug_with_args(prefix='***')
def mul(x,y):
return x*y
mul(2,3)
def mul(x,y):
return x*y
debug_with_args(prefix='***')
debug_with_args(prefix='***')(mul)
debug_with_args(prefix='***')(mul)(2,3)
"""
Explanation: Decorators with arguments
Calling convention
python
@decorator(args)
def func():
pass
Evaluation
python
func = decorator(args)(func)
End of explanation
"""
from functools import wraps, partial
def debug_with_args2(func=None, prefix=''):
if func is None: # no function was passed
return partial(debug_with_args2, prefix=prefix)
msg = prefix + func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
@debug_with_args2(prefix='***')
def div(x,y):
return x / y
div(4,2)
def div(x,y):
return x / y
debug_with_args2(prefix='***')
debug_with_args2(prefix='***')(div)
debug_with_args2(prefix='***')(div)(4,2)
f = debug_with_args2(prefix='***')
def div(x,y):
return x / y
debug_with_args2(prefix='***')(div)
"""
Explanation: Decorators with arguments: a reformulation
TODO: show what happens without the partial application to itself!
End of explanation
"""
def debug_with_args_nonpartial(func, prefix=''):
msg = prefix + func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
def plus1(x):
return x+1
debug_with_args_nonpartial(plus1, prefix='***')(23)
@debug_with_args_nonpartial
def plus1(x):
return x+1
plus1(23)
@debug_with_args_nonpartial(prefix='***')
def plus1(x):
return x+1
"""
Explanation: Debug with arguments: without partial()
this won't work with arguments
End of explanation
"""
def debug_with_args3(*args, **kwargs):
def inner(func, **kwargs):
if 'prefix' in kwargs:
msg = kwargs['prefix'] + func.__name__
else:
msg = func.__name__
print(msg)
return func
# decorator without arguments
if len(args) == 1 and callable(args[0]):
func = args[0]
return inner(func)
# decorator with keyword arguments
else:
return partial(inner, prefix=kwargs['prefix'])
def plus2(x):
return x+2
debug_with_args3(plus2)(23)
debug_with_args3(prefix='***')(plus2)(23)
@debug_with_args3 # WRONG: this shouldn't print anything during creation
def plus2(x):
return x+2
plus2(12) # WRONG: this should print the function name and the prefix
@debug_with_args3(prefix='###') # WRONG: this shouldn't print anything during creation
def plus2(x):
return x+2
plus2(12) # WRONG: this should print the function name and the prefix
"""
Explanation: Decorators with arguments: memprof-style
this doesn't work at all
```python
def memprof(*args, **kwargs):
    def inner(func):
        return MemProf(func, *args, **kwargs)
    # To allow @memprof with parameters
    if len(args) and callable(args[0]):
        func = args[0]
        args = args[1:]
        return inner(func)
    else:
        return inner
```
End of explanation
"""
def debugmethods(cls):
for name, val in vars(cls).items():
if callable(val):
setattr(cls, name, debug(val))
return cls
@debugmethods
class Spam(object):
def foo(self):
pass
def bar(self):
pass
s = Spam()
s.foo()
s.bar()
"""
Explanation: Class decorators
decorate all methods of a class at once
NOTE: only instance methods will be wrapped, i.e. this won't work with static- or class methods
End of explanation
"""
def debugattr(cls):
orig_getattribute = cls.__getattribute__
def __getattribute__(self, name):
print('Get:', name)
return orig_getattribute(self, name)
cls.__getattribute__ = __getattribute__
return cls
@debugattr
class Ham(object):
def foo(self):
pass
def bar(self):
pass
h = Ham()
h.foo()
h.bar
"""
Explanation: Class decoration: debug access to attributes
End of explanation
"""
class debugmeta(type):
def __new__(cls, clsname, bases, clsdict):
        clsobj = super(debugmeta, cls).__new__(cls, clsname, bases, clsdict)
clsobj = debugmethods(clsobj)
return clsobj
# class Base(metaclass=debugmeta): # won't work in Python 2.7
# pass
# class Bam(Base):
# pass
# cf. minute 27
"""
Explanation: Debug all the classes?
TODO: this looks Python3-specific
Solution: A Metaclass
End of explanation
"""
class Spam:
pass
s = Spam()
from copy import deepcopy
current_vars = deepcopy(globals())
for var in current_vars:
if callable(current_vars[var]):
print var,
frozendict
for var in current_vars:
cls = getattr(current_vars[var], '__class__')
if cls:
print var, cls
print current_vars['Spam']
type(current_vars['Spam'])
callable(Spam)
callable(s)
isinstance(Spam, classobj)
__name__
sc = s.__class__
type('Foo', (), {})
"""
Explanation: Can we inject the debugging code into all known classes?
End of explanation
"""
|
daniel-koehn/Theory-of-seismic-waves-II
|
02_Mesh_generation/3_Quad_mesh_TFI_sea_dike.ipynb
|
gpl-3.0
|
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
"""
# Import Libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Here, I introduce a new library, which is useful
# to define the fonts and size of a figure in a notebook
from pylab import rcParams
# Get rid of a Matplotlib deprecation warning
import warnings
warnings.filterwarnings("ignore")
# Define number of grid points in x-direction and spatial vectors
NXtopo = 100
x_dike = np.linspace(0.0, 61.465, num=NXtopo)
z_dike = np.zeros(NXtopo)
# calculate dike topograpy
def dike_topo(x_dike, z_dike, NX1):
for i in range(NX1):
if(x_dike[i]<4.0):
z_dike[i] = 0.0
if(x_dike[i]>=4.0 and x_dike[i]<18.5):
z_dike[i] = (x_dike[i]-4) * 6.76/14.5
if(x_dike[i]>=18.5 and x_dike[i]<22.5):
z_dike[i] = 6.76
if(x_dike[i]>=22.5 and x_dike[i]<x_dike[-1]):
z_dike[i] = -(x_dike[i]-22.5) * 3.82/21.67 + 6.76
return x_dike, z_dike
# Define figure size
rcParams['figure.figsize'] = 10, 7
# Plot sea dike topography
dike_topo(x_dike,z_dike,NXtopo)
plt.plot(x_dike,z_dike)
plt.title("Sea dike topography" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
"""
Explanation: Mesh generation by Transfinite Interpolation applied to the sea dike problem
We have implemented and tested our mesh generation approach using Transfinite Interpolation (TFI) in the previous lesson. Now, let's apply it to the problem of the sea dike with strong topography.
Revisiting the sea dike problem
To generate a deformed quad mesh incorporating the strong topography of the sea dike, we only have to describe the topography by a parametrized curve. We can roughly describe it by the following equations:
$x = 0\; m - 4\; m\; \rightarrow\; z(x) = 0\; m$
$x = 4\; m - 18.5\; m\; \rightarrow\; z(x) = \frac{6.76}{14.5}(x-4)\; m$
$x = 18.5\; m - 22.5\; m\; \rightarrow\; z(x) = 6.76\; m$
$x = 22.5\; m - 44.17\; m\; \rightarrow\; z(x) = 6.76\; m - \frac{3.82}{21.67}(x-22.5)\; m$
This might be a somewhat rough approximation, because photos of the data acquisition show a smooth transition between the tilted and horizontal surfaces of the dike. Nevertheless, let's try to generate a mesh for this topography model.
End of explanation
"""
# Normalize sea dike topography
xmax_dike = np.max(x_dike)
zmax_dike = np.max(z_dike)
x_dike_norm = x_dike / xmax_dike
z_dike_norm = z_dike / zmax_dike + 1
# Plot normalized sea dike topography
plt.plot(x_dike_norm,z_dike_norm)
plt.title("Normalized sea dike topography" )
plt.xlabel("x []")
plt.ylabel("z []")
plt.axes().set_aspect('equal')
"""
Explanation: Unfortunately, the TFI is defined on the unit square, so we have to normalize the sea dike topography before applying the TFI.
End of explanation
"""
# Define parameters for deformed Cartesian mesh
NX = 80
NZ = 20
# Define parametric curves at model boundaries ...
# ... bottom boundary
def Xb(s):
x = s
z = 0.0
xzb = [x,z]
return xzb
# ... top boundary
def Xt(s):
x = s
# normalized x-coordinate s -> unnormalized x-coordinate x_d
x_d = xmax_dike * s
z_d = 0.0
if(x_d<4.0):
z_d = 0.0
if(x_d>=4.0 and x_d<18.5):
z_d = (x_d-4) * 6.76/14.5
if(x_d>=18.5 and x_d<22.5):
z_d = 6.76
if(x_d>=22.5 and x_d<xmax_dike):
z_d = -(x_d-22.5) * 3.82/21.67 + 6.76
# unnormalized z-coordinate z_d -> normalized z-coordinate z
z = z_d / zmax_dike + 1
xzt = [x,z]
return xzt
# ... left boundary
def Xl(s):
x = 0.0
z = s
xzl = [x,z]
return xzl
# ... right boundary
def Xr(s):
x = 1
z = s
xzr = [x,z]
return xzr
# Transfinite interpolation
# Discretize along xi and eta axis
xi = np.linspace(0.0, 1.0, num=NX)
eta = np.linspace(0.0, 1.0, num=NZ)
xi1, eta1 = np.meshgrid(xi, eta)
# Intialize matrices for x and z axis
X = np.zeros((NX,NZ))
Z = np.zeros((NX,NZ))
# loop over cells
for i in range(NX):
Xi = xi[i]
for j in range(NZ):
Eta = eta[j]
xb = Xb(Xi)
xb0 = Xb(0)
xb1 = Xb(1)
xt = Xt(Xi)
xt0 = Xt(0)
xt1 = Xt(1)
xl = Xl(Eta)
xr = Xr(Eta)
# Transfinite Interpolation (Gordon-Hall algorithm)
X[i,j] = (1-Eta) * xb[0] + Eta * xt[0] + (1-Xi) * xl[0] + Xi * xr[0] \
- (Xi * Eta * xt1[0] + Xi * (1-Eta) * xb1[0] + Eta * (1-Xi) * xt0[0] \
+ (1-Xi) * (1-Eta) * xb0[0])
Z[i,j] = (1-Eta) * xb[1] + Eta * xt[1] + (1-Xi) * xl[1] + Xi * xr[1] \
- (Xi * Eta * xt1[1] + Xi * (1-Eta) * xb1[1] + Eta * (1-Xi) * xt0[1] \
+ (1-Xi) * (1-Eta) * xb0[1])
"""
Explanation: OK, now we have the normalized dike topography on a unit square, so we can define the parametric curve for the topography.
End of explanation
"""
# Unnormalize the mesh
X = X * xmax_dike
Z = Z * zmax_dike
# Plot TFI mesh (physical domain)
plt.plot(X, Z, 'k')
plt.plot(X.T, Z.T, 'k')
plt.title("Sea dike TFI grid (physical domain)" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.savefig('sea_dike_TFI.pdf', bbox_inches='tight', format='pdf')
plt.show()
"""
Explanation: No error so far. Before plotting the generated mesh, we have to unnormalize the spatial coordinates.
End of explanation
"""
|
woters/ds101
|
Titanic_completed.ipynb
|
mit
|
# pandas
import pandas as pd
from pandas import DataFrame
import re
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve, train_test_split, GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score
"""
Explanation: Import all the required libraries
End of explanation
"""
train_df = pd.read_csv("titanic/train.csv")
test_df = pd.read_csv("titanic/test.csv")
test_df.head()
train_df.info()
print("----------------------------")
test_df.info()
"""
Explanation: Load our data and take a look at its state
End of explanation
"""
# Embarked
train_df[train_df.Embarked.isnull()]
"""
Explanation: It is easy to see that in the training dataset we are missing data about passenger age, cabin and port of embarkation.
In the test dataset we are missing data about age, cabin and fare.
Let's start with the Embarked field in the training dataset, which holds the port of embarkation.
Let's check which rows have missing values.
End of explanation
"""
# plot
#sns.factorplot('Embarked','Survived', data=train_df,size=4,aspect=3)
fig, (axis1,axis2,axis3) = plt.subplots(1,3,figsize=(15,5))
sns.countplot(x='Embarked', data=train_df, ax=axis1)
sns.countplot(x='Survived', hue="Embarked", data=train_df, order=[1,0], ax=axis2)
# group by embarked, and get the mean for survived passengers for each value in Embarked
embark_perc = train_df[["Embarked", "Survived"]].groupby(['Embarked'],as_index=False).mean()
sns.barplot(x='Embarked', y='Survived', data=embark_perc,order=['S','C','Q'],ax=axis3)
"""
Explanation: Let's look at the overall dependence of the chance of survival on the port of embarkation.
End of explanation
"""
train_df.loc[train_df.Ticket == '113572']
print( 'C == ' + str( len(train_df.loc[train_df.Pclass == 1].loc[train_df.Fare > 75].loc[train_df.Fare < 85].loc[train_df.Embarked == 'C']) ) )
print( 'S == ' + str( len(train_df.loc[train_df.Pclass == 1].loc[train_df.Fare > 75].loc[train_df.Fare < 85].loc[train_df.Embarked == 'S']) ) )
train_df = train_df.set_value(train_df.Embarked.isnull(), 'Embarked', 'C')
train_df.loc[train_df.Embarked.isnull()]
"""
Explanation: Let's look at other possible dependencies that could tell us where these passengers boarded the ship.
End of explanation
"""
test_df[test_df.Fare.isnull()]
"""
Explanation: Now let's fix the missing fare value in the test dataset.
End of explanation
"""
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111)
test_df[(test_df.Pclass==3)&(test_df.Embarked=='S')].Fare.hist(bins=100, ax=ax)
plt.xlabel('Fare')
plt.ylabel('Frequency')
plt.title('Histogram of Fare, Plcass 3 and Embarked S')
print ("The top 5 most common value of Fare")
test_df[(test_df.Pclass==3)&(test_df.Embarked=='S')].Fare.value_counts().head()
"""
Explanation: Let's look at all passengers with similar values of the other features.
End of explanation
"""
test_df.set_value(test_df.Fare.isnull(), 'Fare', 8.05)
test_df.loc[test_df.Fare.isnull()]
"""
Explanation: We conclude that this is the most likely fare.
End of explanation
"""
test_df.loc[test_df.Age.isnull()].head()
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,4))
axis1.set_title('Original Age values')
axis2.set_title('New Age values')
# mean, std and number of missing Age values in the training dataset
average_age_train = train_df["Age"].mean()
std_age_train = train_df["Age"].std()
count_nan_age_train = train_df["Age"].isnull().sum()
# mean, std and number of missing Age values in the test dataset
average_age_test = test_df["Age"].mean()
std_age_test = test_df["Age"].std()
count_nan_age_test = test_df["Age"].isnull().sum()
# generate random values in the range (mean - std, mean + std)
rand_1 = np.random.randint(average_age_train - std_age_train, average_age_train + std_age_train, size = count_nan_age_train)
rand_2 = np.random.randint(average_age_test - std_age_test, average_age_test + std_age_test, size = count_nan_age_test)
# plot the original Age histogram (drop NaNs and convert to int)
train_df['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
test_df['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
# fill the missing Age values with the random values
train_df["Age"][np.isnan(train_df["Age"])] = rand_1
test_df["Age"][np.isnan(test_df["Age"])] = rand_2
# convert floats to ints
train_df['Age'] = train_df['Age'].astype(int)
test_df['Age'] = test_df['Age'].astype(int)
# histogram of the new Age values
train_df['Age'].hist(bins=70, ax=axis2)
test_df['Age'].hist(bins=70, ax=axis2)
# A few more plots
# survival density by age
facet = sns.FacetGrid(train_df, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train_df['Age'].max()))
facet.add_legend()
# mean survival rate by age
fig, axis1 = plt.subplots(1,1,figsize=(18,4))
average_age = train_df[["Age", "Survived"]].groupby(['Age'],as_index=False).mean()
sns.barplot(x='Age', y='Survived', data=average_age)
train_df.info()
test_df.info()
"""
Explanation: Now let's deal with the Age field in the training dataset. It deserves more attention, since it is a very important feature that strongly affects passenger survival.
End of explanation
"""
Title_Dictionary = {
"Capt": "Officer",
"Col": "Officer",
"Major": "Officer",
"Jonkheer": "Nobel",
"Don": "Nobel",
"Sir" : "Nobel",
"Dr": "Officer",
"Rev": "Officer",
"the Countess":"Nobel",
"Dona": "Nobel",
"Mme": "Mrs",
"Mlle": "Miss",
"Ms": "Mrs",
"Mr" : "Mr",
"Mrs" : "Mrs",
"Miss" : "Miss",
"Master" : "Master",
"Lady" : "Nobel"
}
train_df['Title'] = train_df['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
test_df['Title'] = test_df['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
train_df.head(100)
"""
Explanation: The names contain titles, and we can make use of them as well, since social status may be an important predictor of survival.
End of explanation
"""
train_df['FamilySize'] = train_df['SibSp'] + train_df['Parch']
test_df['FamilySize'] = test_df['SibSp'] + test_df['Parch']
train_df.head()
"""
Explanation: Instead of the two fields that indicate relatives aboard, Parch (parents/children) and SibSp (siblings/spouses), let's create a single FamilySize field.
End of explanation
"""
def get_person(passenger):
age,sex = passenger
return 'child' if age < 16 else sex
train_df['Person'] = train_df[['Age','Sex']].apply(get_person,axis=1)
test_df['Person'] = test_df[['Age','Sex']].apply(get_person,axis=1)
train_df.head()
train_df.info()
print("----------------------------")
test_df.info()
"""
Explanation: Sex is also a very important feature, but if you have seen the movie Titanic you probably remember "Women and children first." So let's create a new feature that takes both sex and age into account.
End of explanation
"""
train_df.drop(labels=['PassengerId', 'Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
test_df.drop(labels=['Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
train_df.head()
"""
Explanation: Now that we have made sure our data is in order, let's drop the columns we no longer need.
End of explanation
"""
dummies_person_train = pd.get_dummies(train_df['Person'],prefix='Person')
dummies_embarked_train = pd.get_dummies(train_df['Embarked'], prefix= 'Embarked')
dummies_title_train = pd.get_dummies(train_df['Title'], prefix= 'Title')
dummies_pclass_train = pd.get_dummies(train_df['Pclass'], prefix= 'Pclass')
train_df = pd.concat([train_df, dummies_person_train, dummies_embarked_train, dummies_title_train, dummies_pclass_train], axis=1)
train_df = train_df.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
train_df.head()
dummies_person_test = pd.get_dummies(test_df['Person'],prefix='Person')
dummies_embarked_test = pd.get_dummies(test_df['Embarked'], prefix= 'Embarked')
dummies_title_test = pd.get_dummies(test_df['Title'], prefix= 'Title')
dummies_pclass_test = pd.get_dummies(test_df['Pclass'], prefix= 'Pclass')
test_df = pd.concat([test_df, dummies_person_test, dummies_embarked_test, dummies_title_test, dummies_pclass_test], axis=1)
test_df = test_df.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
test_df.head()
"""
Explanation: We have categorical variables and we should encode them. Pandas already provides the get_dummies function for this.
End of explanation
"""
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5), scoring='accuracy'):
plt.figure(figsize=(10,6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel(scoring)
train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, scoring=scoring,
n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
"""
Explanation: Let's create a function that plots the learning curve as a function of the number of training samples.
End of explanation
"""
X = train_df.drop(['Survived'], axis=1)
y = train_df.Survived
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size = 0.3)
"""
Explanation: We split our training dataset in two so that, before submitting the model, we can make sure it does not overfit our data (i.e. cross-validation).
End of explanation
"""
# Choose the type of classifier.
clf = RandomForestClassifier()
# Choose some parameter combinations to try
parameters = {'n_estimators': [4, 6, 9],
'max_features': ['log2', 'sqrt','auto'],
'criterion': ['entropy', 'gini'],
'max_depth': [2, 3, 5, 10],
'min_samples_split': [2, 3, 5],
'min_samples_leaf': [1,5,8]
}
# Type of scoring used to compare parameter combinations
acc_scorer = make_scorer(accuracy_score)
# Run the grid search
grid_obj = GridSearchCV(clf, parameters, scoring=acc_scorer)
grid_obj = grid_obj.fit(X_train, y_train)
# Set the clf to the best combination of parameters
clf = grid_obj.best_estimator_
# Fit the best algorithm to the data.
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(accuracy_score(y_test, predictions))
plot_learning_curve(clf, 'Random Forest', X, y, cv=4);
from sklearn.cross_validation import KFold
def run_kfold(clf):
kf = KFold(891, n_folds=10)
outcomes = []
fold = 0
for train_index, test_index in kf:
fold += 1
X_train, X_test = X.values[train_index], X.values[test_index]
y_train, y_test = y.values[train_index], y.values[test_index]
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
outcomes.append(accuracy)
print("Fold {0} accuracy: {1}".format(fold, accuracy))
mean_outcome = np.mean(outcomes)
print("Mean Accuracy: {0}".format(mean_outcome))
run_kfold(clf)
"""
Explanation: Let's look at a random forest model. We start with default parameters and then use GridSearchCV to pick the optimal ones. Finally, we take a look at what we got.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
lg = LogisticRegression(random_state=42, penalty='l1')
parameters = {'C':[0.5]}
# Type of scoring used to compare parameter combinations
acc_scorer_lg = make_scorer(accuracy_score)
# Run the grid search
grid_obj_lg = GridSearchCV(lg, parameters, scoring=acc_scorer_lg)
grid_obj_lg = grid_obj_lg.fit(X_train, y_train)
# Set the clf to the best combination of parameters
lg = grid_obj_lg.best_estimator_
# Fit the best algorithm to the data.
lg.fit(X_train, y_train)
predictions_lg = lg.predict(X_test)
print(accuracy_score(y_test, predictions_lg))
plot_learning_curve(lg, 'Logistic Regression', X, y, cv=4);
"""
Explanation: Let's repeat all the procedures described above for the random forest, now for logistic regression.
End of explanation
"""
ids = test_df['PassengerId']
predictions = clf.predict(test_df.drop('PassengerId', axis=1))
output = pd.DataFrame({ 'PassengerId' : ids, 'Survived': predictions })
output.to_csv('titanic-predictions.csv', index = False)
output.head()
"""
Explanation: We choose the model we like best and submit it to Kaggle.
End of explanation
"""
|
tsivula/BDA_py_demos
|
demos_ch4/demo4_1.ipynb
|
gpl-3.0
|
import numpy as np
from scipy import optimize, stats
%matplotlib inline
import matplotlib.pyplot as plt
import os, sys
# add utilities directory to path
util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))
if util_path not in sys.path and os.path.exists(util_path):
sys.path.insert(0, util_path)
# import from utilities
import plot_tools
# edit default plot settings
plt.rc('font', size=12)
# apply custom background plotting style
plt.style.use(plot_tools.custom_styles['gray_background'])
# Bioassay data, (BDA3 page 86)
x = np.array([-0.86, -0.30, -0.05, 0.73])
n = np.array([5, 5, 5, 5])
y = np.array([0, 1, 3, 5])
# compute the posterior density in grid
# - usually should be computed in logarithms!
# - with alternative prior, check that range and spacing of A and B
# are sensible
ngrid = 100
A = np.linspace(-4, 8, ngrid)
B = np.linspace(-10, 40, ngrid)
ilogit_abx = 1 / (np.exp(-(A[:,None] + B[:,None,None] * x)) + 1)
p = np.prod(ilogit_abx**y * (1 - ilogit_abx)**(n - y), axis=2)
"""
Explanation: Bayesian Data Analysis, 3rd ed
Chapter 4, demo 1
Normal approximation for the Bioassay model.
End of explanation
"""
# sample from the grid
nsamp = 1000
samp_indices = np.unravel_index(
np.random.choice(p.size, size=nsamp, p=p.ravel()/np.sum(p)),
p.shape
)
samp_A = A[samp_indices[1]]
samp_B = B[samp_indices[0]]
# add random jitter, see BDA3 p. 76
samp_A += (np.random.rand(nsamp) - 0.5) * (A[1]-A[0])
samp_B += (np.random.rand(nsamp) - 0.5) * (B[1]-B[0])
# samples of LD50
samp_ld50 = - samp_A / samp_B
"""
Explanation: The following demonstrates an alternative "bad" way of calcuting the posterior density p in a for loop. The vectorised statement above is numerically more efficient. In this small example however, it would not matter that much.
p = np.empty((len(B),len(A))) # allocate space
for i in range(len(A)):
for j in range(len(B)):
ilogit_abx_ij = (1 / (np.exp(-(A[i] + B[j] * x)) + 1))
p[j,i] = np.prod(ilogit_abx_ij**y * (1 - ilogit_abx_ij)**(n - y))
N.B. the vectorised expression can be made even more efficient, e.g. by optimising memory usage with in-place statements, but it would result in less readable code.
End of explanation
"""
# define the optimised function
def bioassayfun(w):
a = w[0]
b = w[1]
et = np.exp(a + b * x)
z = et / (1 + et)
e = - np.sum(y * np.log(z) + (n - y) * np.log(1 - z))
return e
# initial guess
w0 = np.array([0.0, 0.0])
# optimise
optim_res = optimize.minimize(bioassayfun, w0)
# extract desired results
w = optim_res['x']
S = optim_res['hess_inv']
# compute the normal approximation density in grid
# this is just for the illustration
# Construct a grid array of shape (ngrid, ngrid, 2) from A and B. Although
# Numpy's concatenation functions do not support broadcasting, a clever trick
# can be applied to overcome this without unnecessary memory copies
# (see Numpy's documentation for strides for more information):
A_broadcasted = np.lib.stride_tricks.as_strided(
A, shape=(ngrid,ngrid), strides=(0, A.strides[0]))
B_broadcasted = np.lib.stride_tricks.as_strided(
B, shape=(ngrid,ngrid), strides=(B.strides[0], 0))
grid = np.dstack((A_broadcasted, B_broadcasted))
p_norm = stats.multivariate_normal.pdf(x=grid, mean=w, cov=S)
# draw samples from the distribution
samp_norm = stats.multivariate_normal.rvs(mean=w, cov=S, size=1000)
# create figure
fig, axes = plt.subplots(3, 2, figsize=(9, 10))
# plot the posterior density
ax = axes[0, 0]
ax.imshow(
p,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
# plot the samples
ax = axes[1, 0]
ax.scatter(samp_A, samp_B, 5)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.text(0, -7, 'p(beta>0)={:.2f}'.format(np.mean(samp_B>0)))
# plot the histogram of LD50
ax = axes[2, 0]
ax.hist(samp_ld50, np.linspace(-0.8, 0.8, 31))
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.8, 0.8, 5))
# plot the posterior density for normal approx.
ax = axes[0, 1]
ax.imshow(
p_norm,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
# plot the samples from the normal approx.
ax = axes[1, 1]
ax.scatter(samp_norm[:,0], samp_norm[:,1], 5)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
# Normal approximation does not take into account that the posterior
# is not symmetric and that there is very low density for negative
# beta values. Based on the samples from the normal approximation
# it is estimated that there is about 4% probability that beta is negative!
ax.text(0, -7, 'p(beta>0)={:.2f}'.format(np.mean(samp_norm[:,1]>0)))
# Plot the histogram of LD50
ax = axes[2, 1]
# Since we have strong prior belief that beta should not be negative we can
# improve our normal approximation by conditioning on beta>0.
bpi = samp_norm[:,1] > 0
samp_ld50_norm = - samp_norm[bpi,0] / samp_norm[bpi,1]
ax.hist(samp_ld50_norm, np.linspace(-0.8, 0.8, 31))
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.8, 0.8, 5))
fig.tight_layout()
"""
Explanation: Find the mode by minimising negative log posterior. Compute gradients and Hessian analytically, and use Newton's method for optimisation. You may use optimisation routines below for checking your results. See help for scipy.optimize.minimize.
End of explanation
"""
|
hannorein/reboundx
|
ipython_examples/EccAndIncDamping.ipynb
|
gpl-3.0
|
import rebound
import reboundx
import numpy as np
sim = rebound.Simulation()
ainner = 1.
aouter = 10.
e0 = 0.1
inc0 = 0.1
sim.add(m=1.)
sim.add(m=1e-6,a=ainner,e=e0, inc=inc0)
sim.add(m=1e-6,a=aouter,e=e0, inc=inc0)
sim.move_to_com() # Moves to the center of momentum frame
ps = sim.particles
"""
Explanation: Eccentricity & Inclination Damping
For modifying orbital elements, REBOUNDx offers two implementations. modify_orbits_direct directly calculates orbital elements and modifies those, while modify_orbits_forces applies forces that when orbit-averaged yield the desired behavior. Let's set up a simple simulation of two planets on initially eccentric and inclined orbits:
End of explanation
"""
rebx = reboundx.Extras(sim)
mod = rebx.load_operator("modify_orbits_direct")
rebx.add_operator(mod)
"""
Explanation: As opposed to most of the other effects, modify_orbits_direct is an operator rather than a force, so we have to add it as such:
End of explanation
"""
tmax = 1.e3
ps[1].params["tau_e"] = -tmax/10.
ps[1].params["tau_inc"] = -tmax/10.
ps[2].params["tau_e"] = -tmax
ps[2].params["tau_inc"] = -tmax
"""
Explanation: Both modify_orbits_forces and modify_orbits_direct exponentially alter the eccentricities and inclinations, on e-folding timescales tau_e and tau_inc, respectively. Negative timescales yield exponential decay, while positive timescales give exponential growth:
\begin{equation}
e = e_0e^{t/\tau_e},\:\:i = i_0e^{t/\tau_i}
\end{equation}
In general, each body will have different damping timescales. By default all particles have timescales of infinity, i.e., no effect. The units of time are set by the units of time in your simulation.
Let's set a maximum time for our simulation, and give our two planets different (damping) timescales. This can simply be done through:
End of explanation
"""
Nout = 1000
e1,e2,inc1,inc2 = np.zeros(Nout), np.zeros(Nout), np.zeros(Nout), np.zeros(Nout)
times = np.linspace(0.,tmax,Nout)
for i,time in enumerate(times):
sim.integrate(time)
e1[i] = ps[1].e
e2[i] = ps[2].e
inc1[i] = ps[1].inc
inc2[i] = ps[2].inc
"""
Explanation: Now we simply run the simulation like we would normally with REBOUND. Here we store the eccentricities and inclinations at 1000 equally spaced intervals:
End of explanation
"""
e1pred = [e0*np.e**(t/ps[1].params["tau_e"]) for t in times]
e2pred = [e0*np.e**(t/ps[2].params["tau_e"]) for t in times]
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_yscale('log')
plt.plot(times,e1)
plt.plot(times,e1pred, 'r--')
plt.plot(times,e2)
plt.plot(times,e2pred, 'r--')
ax.set_xlabel("Time", fontsize=24)
ax.set_ylabel("Eccentricity", fontsize=24)
inc1pred = [inc0*np.e**(t/ps[1].params["tau_inc"]) for t in times]
inc2pred = [inc0*np.e**(t/ps[2].params["tau_inc"]) for t in times]
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_yscale('log')
plt.plot(times,inc1)
plt.plot(times,inc1pred, 'r--')
plt.plot(times,inc2)
plt.plot(times,inc2pred, 'r--')
ax.set_xlabel("Time", fontsize=24)
ax.set_ylabel("Inclination (rad)", fontsize=24)
"""
Explanation: Now let's plot them on a semi-log scale to check whether we get the expected exponential behavior. We'll also overplot the expected exponential decays for comparison.
End of explanation
"""
mod.params["p"] = 0.7
"""
Explanation: Eccentricity-semimajor axis coupling
Goldreich & Schlichting (2014) argue that a physical process that induces eccentricity damping should also induce semimajor axis damping at order $e^2$, e.g., tides. We follow Deck & Batygin (2015) in parametrizing this through a coefficient $p$ that varies between 0 and 1. p=0 corresponds to no coupling, while p=1 represents the limit of eccentricity damping at constant angular momentum, which to a good approximation is the case with tides (our p=1 therefore corresponds to Goldreich and Schlichting's p=3). We set effect parameters through the operator object returned when we add the effect, which we called mod above. To set p:
End of explanation
"""
mod.params["coordinates"] = reboundx.coordinates["BARYCENTRIC"]
"""
Explanation: The default is p = 0, i.e., no coupling, so for a single planet, if you don't set tau_a, the planet will not migrate. The current modify_orbits_forces implementation always damps eccentricity at constant angular momentum, i.e., p=1 (so you can't set it to an arbitrary value).
Coordinate Systems
Everything in REBOUND by default uses Jacobi coordinates. If you would like to change the reference relative to which the particles are damped:
End of explanation
"""
mod.params["coordinates"] = reboundx.coordinates["PARTICLE"]
ps[0].params["primary"] = 1
"""
Explanation: to reference orbits to the system's barycenter, or
End of explanation
"""
|
t-vi/candlegp
|
notebooks/gp_regression.ipynb
|
apache-2.0
|
from matplotlib import pyplot
%matplotlib inline
import IPython
import torch
import numpy
import sys, os
sys.path.append(os.path.join(os.getcwd(),'..'))
pyplot.style.use('ggplot')
import candlegp
import candlegp.training.hmc
"""
Explanation: Gaussian Process Regression in Pytorch
Thomas Viehmann, tv@lernapparat.de
Modelled after GPFlow Regression notebook by James Hensman
End of explanation
"""
N = 12
X = torch.rand(N,1).double()
Y = (torch.sin(12*X) + 0.6*torch.cos(25*X) + torch.randn(N,1).double()*0.1+3.0).squeeze(1)
pyplot.figure()
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
"""
Explanation: Let's have a regression example
End of explanation
"""
k = candlegp.kernels.Matern52(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m = candlegp.models.GPR(X, Y.unsqueeze(1), kern=k, mean_function=mean)
m.likelihood.variance.set(torch.tensor([0.01], dtype=torch.double))
m
xstar = torch.linspace(0,1,100).double()
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
"""
Explanation: Creating the model
Not adapted to the data yet...
End of explanation
"""
opt = torch.optim.LBFGS(m.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.item())
m
xstar = torch.linspace(0,1,100).double()
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
"""
Explanation: Maximum-A-Posteriori
One commonly used approach to model selection is to maximize the marginal log likelihood. This is the "gp" equivalent of a maximum-likelihood estimate.
End of explanation
"""
k2 = candlegp.kernels.RBF(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean2 = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m2 = candlegp.models.GPR(X, Y.unsqueeze(1), kern=k2, mean_function=mean2)
m2.load_state_dict(m.state_dict())
dt = torch.double
m2.likelihood.variance.prior = candlegp.priors.Gamma(1.0,1.0, dtype=dt)
m2.kern.variance.prior = candlegp.priors.Gamma(1.0,1.0, dtype=dt)
m2.kern.lengthscales.prior = candlegp.priors.Gamma(1.0,1.0,dtype=dt)
m2.mean_function.A.prior = candlegp.priors.Gaussian(0.0,10.0, dtype=dt)
m2.mean_function.b.prior = candlegp.priors.Gaussian(0.0,10.0, dtype=dt)
print("likelihood with priors",m2().item())
m2
# res = candlegp.training.hmc.hmc_sample(m2,500,0.2,burn=50, thin=10)
res = candlegp.training.hmc.hmc_sample(m2,50,0.2,burn=50, thin=10)
pyplot.plot(res[0]); pyplot.title("likelihood");
for (n,p0),p,c in zip(m.named_parameters(),res[1:],['r','g','b','y','b']):
pyplot.plot(torch.stack(p).squeeze().numpy(), c=c, label=n)
pyplot.plot((0,len(p)),(p0.data.view(-1)[0],p0.data.view(-1)[0]), c=c)
pyplot.legend();
"""
Explanation: Hamiltonian Monte Carlo
We can go more Bayesian by putting priors on the parameters and using Hamiltonian Monte Carlo to draw samples of them.
End of explanation
"""
xstar = torch.linspace(0,1,100).double()
mc_params = torch.stack([torch.cat(p, dim=0).view(-1) for p in res[1:]], dim=1)
allsims = []
for ps in mc_params[:50]:
for mp, p in zip(m2.parameters(), ps):
with torch.no_grad():
mp.set(p)
allsims.append(m2.predict_f_samples(xstar.unsqueeze(1), 1).squeeze(0).t())
allsims = torch.cat(allsims, dim=0)
pyplot.plot(xstar.numpy(),allsims.data.numpy().T, 'b', lw=2, alpha=0.1)
mu, var = m.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
"""
Explanation: Plotting simulated functions
(Note that the simulations are for the de-noised functions - i.e. without the noise contribution of the likelihood.)
End of explanation
"""
k3 = candlegp.kernels.RBF(1, lengthscales=torch.tensor([0.3], dtype=torch.double),
variance=torch.tensor([1.0], dtype=torch.double))
mean3 = candlegp.mean_functions.Linear(torch.tensor([1], dtype=torch.double), torch.tensor([0], dtype=torch.double))
m3 = candlegp.models.SGPR(X, Y.unsqueeze(1), k3, X[:7].clone(), mean_function=mean3)
m3.likelihood.variance.set(torch.tensor([0.01], dtype=torch.double))
m3
opt = torch.optim.LBFGS(m3.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m3()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m3()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.item())
m3
xstar = torch.linspace(0,1,100).double()
mu, var = m3.predict_y(xstar.unsqueeze(1))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
pyplot.plot(m3.Z.data.numpy(), torch.zeros(m3.Z.size(0)).numpy(),'o')
"""
Explanation: Sparse Regression
End of explanation
"""
|
peterwittek/ipython-notebooks
|
Unbounded_randomness.ipynb
|
gpl-3.0
|
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from itertools import product
from math import sqrt, sin, cos, pi, atan
from qutip import tensor, basis, sigmax, sigmaz, expect, qeye
from ncpol2sdpa import SdpRelaxation, flatten, generate_measurements, \
projective_measurement_constraints
π = pi
"""
Explanation: We spell out the computational details of the paper "A single entangled system is an unbounded source of nonlocal correlations and of certified random numbers," published in the Proceedings of TQC-17, which is an extended version of Unbounded randomness certification using sequences of measurements (arXiv:1510.03394). Calculations were performed at arbitrary precision with the SDP solver SDPA-GMP, and it is assumed that the executable binary is in the path. We import the rest of the dependencies:
End of explanation
"""
def get_moments(ξ, θ):
mu = atan(sin(2*θ))
psi = (cos(θ) * tensor(basis(2, 0),basis(2, 0)) +
sin(θ) * tensor(basis(2, 1),basis(2, 1)))
A_1 = cos(mu)*sigmaz() - sin(mu)*sigmax()
A_0 = cos(mu)*sigmaz() + sin(mu)*sigmax()
B_0 = sigmaz()
B_1 = (qeye(2) + cos(2*ξ)*sigmax())/2
A_00 = (qeye(2) + A_0)/2
A_10 = (qeye(2) + A_1)/2
B_00 = (qeye(2) + B_0)/2
B_10 = B_1
p = []
p.append(expect(tensor(A_00, qeye(2)), psi))
p.append(expect(tensor(A_10, qeye(2)), psi))
p.append(expect(tensor(qeye(2), B_00), psi))
p.append(expect(tensor(qeye(2), B_10), psi))
p.append(expect(tensor(A_00, B_00), psi))
p.append(expect(tensor(A_00, B_10), psi))
p.append(expect(tensor(A_10, B_00), psi))
p.append(expect(tensor(A_10, B_10), psi))
moments = ["-0[0,0]-1[0,0]+1"]
k = 0
for i in range(len(A_configuration)):
moments.append(P_0_A[i][0] + P_1_A[i][0] - p[k])
k += 1
for i in range(len(B_configuration)):
moments.append(P_0_B[i][0] + P_1_B[i][0] - p[k])
k += 1
for i in range(len(A_configuration)):
for j in range(len(B_configuration)):
moments.append(P_0_A[i][0]*P_0_B[j][0] + P_1_A[i][0]*P_1_B[j][0] - p[k])
k += 1
return moments
"""
Explanation: First we define the state and the measurements in the observed probability distribution. Here we work in the standard scenario with only one measurement $n = 1$ in the sequence. We used states of the form
$$
|\psi(\theta)\rangle = \cos(\theta)|00\rangle+\sin(\theta)|11\rangle
$$
and measurements:
$$
\mathbb{A}_0 = \cos\mu\,\sigma_z + \sin\mu\,\sigma_x, \hspace{1cm} \mathbb{B}_0 = \sigma_z, \\
\mathbb{A}_1 = \cos\mu\,\sigma_z - \sin\mu\,\sigma_x, \hspace{1cm} \mathbb{B}_1 = \hat{\sigma}_x(\xi)=\cos(2\xi)\sigma_x,
$$
that correspond to the ones in our scheme for an unbounded amount of randomness and where the second measurement $y = 1$ of $B$ is the tunable version $\hat{\sigma}_x(\xi)\equiv\{M_{+1}^{\dagger}M_{+1},M_{-1}^{\dagger}M_{-1}\}$:
$$
M_{\pm1}(\xi)=\cos\xi|\pm\rangle\!\langle\pm|+\sin\xi|\mp\rangle\!\langle\mp|.
$$
End of explanation
"""
level = 4
A_configuration = [2, 2]
B_configuration = [2, 2]
P_0_A = generate_measurements(A_configuration, 'P_0_A')
P_0_B = generate_measurements(B_configuration, 'P_0_B')
P_1_A = generate_measurements(A_configuration, 'P_1_A')
P_1_B = generate_measurements(B_configuration, 'P_1_B')
substitutions = projective_measurement_constraints(P_0_A, P_0_B)
substitutions.update(projective_measurement_constraints(P_1_A, P_1_B))
guessing_probability = - (P_0_B[1][0] - P_1_B[1][0])
sdp = SdpRelaxation([flatten([P_0_A, P_0_B]), flatten([P_1_A, P_1_B])],
verbose=0, normalized=False)
"""
Explanation: We initialize a level-4 SDP relaxation given the CHSH scenario, assuming that the operators in the two possible behaviours observe the algebra of projective measurements.
End of explanation
"""
def iterate_over_parameters(Ξ, Θ):
result = []
for ξ, θ in product(Ξ, Θ):
if sdp.block_struct == []:
sdp.get_relaxation(level, objective=guessing_probability,
momentequalities=get_moments(ξ, θ),
substitutions=substitutions,
extraobjexpr="-1[0,0]")
else:
sdp.process_constraints(momentequalities=get_moments(ξ, θ))
sdp.solve(solver='sdpa', solverparameters={"executable": "sdpa_gmp",
"paramsfile": "param.gmp.sdpa"})
result.append({"ξ": ξ, "θ": θ, "primal": sdp.primal, "dual": sdp.dual, "status": sdp.status})
return result
"""
Explanation: We defined a helper function to iterate over a range of different $\xi$ and $\theta$ values.
End of explanation
"""
def print_latex_table(results):
range_ = set([result["θ"] for result in results])
print("$\\xi$ & Bits \\\\")
for θ in range_:
print("$\\theta=%.3f$ & \\\\" % θ)
for result in results:
if result["θ"] == θ:
print("%.3f & %.3f\\\\" %
(result["ξ"], abs(np.log2(-result["primal"]))))
def plot_results(results, labels, filename=None):
domain = sorted(list(set(result["ξ"] for result in results)))
range_ = sorted(list(set(result["θ"] for result in results)))
fig, axes = plt.subplots(ncols=1)
for i, θ in enumerate(range_):
randomness = [abs(np.log2(-result["primal"]))
for result in results if result["θ"] == θ]
axes.plot(domain, randomness, label=labels[i])
axes.set_xlabel("$ξ$")
axes.set_ylabel("Randomness [bits]")
axes.legend()
plt.tight_layout()
if filename is not None:
plt.savefig(filename)
plt.show()
"""
Explanation: Finally we define two functions to print out the results as a table, and plot the figure.
End of explanation
"""
exponents = range(2, 6)
results = iterate_over_parameters(np.linspace(0, π/4, 60),
[π/2**i for i in exponents] + [0])
"""
Explanation: We can run the calculations now. If the parameter $\xi = 0$, the four (projective) measurements on any quantum state $|\psi(\theta)\rangle$ with angle $\theta$ generate a behavior $P_{obs}^{\theta}$ leading to the maximal violation of the inequality $I_{\theta}$ for the same value of $\theta$. This implies that extremal nonlocal correlations are generated and we know that a perfect random bit -- equivalently $G(y^{0}=1,P_{obs}) = \frac{1}{2}$ -- is produced. This corresponds to the strongest (projective) version of the measurements. Now, as we increase the parameter $\xi > 0$ of $B$'s $y = 1$ measurement, $\hat{\sigma}_x(\xi)$ gets weaker, the generated correlations cease to be extremal, and less than a random bit is produced. At some point, at a particular value $\xi^{\theta}_{\textrm{max}}$, the measurement of $B$ is so weak that we expect the generated correlations to become local. This exact value might depend on the amount of entanglement $\theta$ in the state. The bounds obtained by SDP indicate that this dependency of the maximal value $\xi^{\theta}_{\textrm{max}}$ on the angle $\theta$ is relatively small. As we vary the angle $\theta$, the minimal required strength of the measurement stays within a narrow interval: $\xi^{\theta}_{\textrm{max}} \in [0.519,0.576]$ for $\theta \in [\frac{\pi}{32},\frac{\pi}{4}]$.
End of explanation
"""
plot_results(results, ["$θ=0$"] + ["$θ=π/%d$" % 2**i for i in sorted(exponents, reverse=True)])
"""
Explanation: Lower bounds on the amount of randomness certified from the quantum state with angles $\theta = 0,\frac{\pi}{32},\frac{\pi}{16},\frac{\pi}{8},\frac{\pi}{4}$ as a function of the strength of the measurement $\xi$. The measurement is projective for $\xi = 0$ -- which certifies the maximal amount of randomness -- and is non-interacting with the system when $\xi = 1$. It is intriguing to see that for all states with $\theta > 0$ the generated behavior becomes local at almost the same $\xi_{\textrm{max}} \in [0.519,0.576]$.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_cluster_stats_evoked.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Permutation F-test on sensor data with 1D cluster level
One tests whether the evoked response is significantly different
between conditions. The multiple comparisons problem is addressed
with a cluster-level permutation test.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
channel = 'MEG 1332' # include only this channel in analysis
include = [channel]
"""
Explanation: Set parameters
End of explanation
"""
picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,
exclude='bads')
event_id = 1
reject = dict(grad=4000e-13, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition1 = epochs1.get_data() # as 3D matrix
event_id = 2
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition2 = epochs2.get_data() # as 3D matrix
condition1 = condition1[:, 0, :] # take only one channel to get a 2D array
condition2 = condition2[:, 0, :] # take only one channel to get a 2D array
"""
Explanation: Read epochs for the channel of interest
End of explanation
"""
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([condition1, condition2], n_permutations=1000,
threshold=threshold, tail=1, n_jobs=1)
"""
Explanation: Compute statistic
End of explanation
"""
times = epochs1.times
plt.close('all')
plt.subplot(211)
plt.title('Channel : ' + channel)
plt.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0),
label="ERF Contrast (Event 1 - Event 2)")
plt.ylabel("MEG (T / m)")
plt.legend()
plt.subplot(212)
for i_c, c in enumerate(clusters):
c = c[0]
if cluster_p_values[i_c] <= 0.05:
h = plt.axvspan(times[c.start], times[c.stop - 1],
color='r', alpha=0.3)
else:
plt.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3),
alpha=0.3)
hf = plt.plot(times, T_obs, 'g')
plt.legend((h, ), ('cluster p-value < 0.05', ))
plt.xlabel("time (ms)")
plt.ylabel("f-values")
plt.show()
"""
Explanation: Plot
End of explanation
"""
|
dietmarw/EK5312_ElectricalMachines
|
Chapman/Ch9-Problem_9-03.ipynb
|
unlicense
|
%pylab notebook
%precision %.4g
"""
Explanation: Excercises Electric Machinery Fundamentals
Chapter 9
Problem 9-3
End of explanation
"""
V = 120 # [V]
p = 4
R1 = 2.0 # [Ohm]
R2 = 2.8 # [Ohm]
X1 = 2.56 # [Ohm]
X2 = 2.56 # [Ohm]
Xm = 60.5 # [Ohm]
n = 400 # [r/min]
Prot = 51 # [W]
n_sync = 1800 # [r/min]
"""
Explanation: Description
Suppose that the motor in Problem 9-1 is started and the auxiliary winding fails open while the rotor is accelerating through 400 r/min.
How much induced torque will the motor be able to produce on its main winding alone?
Assuming that the rotational losses are still 51 W, will this motor continue accelerating or will it slow down again? Prove your answer.
End of explanation
"""
s = (n_sync - n) / n_sync
s
"""
Explanation: SOLUTION
At a speed of 400 r/min, the slip is:
$$s = \frac{n_\text{sync} - n}{n_\text{sync}}$$
End of explanation
"""
Zf = ((R2/s + X2*1j)*(Xm*1j)) / (R2/s + X2*1j + Xm*1j)
Zf
"""
Explanation: The impedances $Z_F$ and $Z_B$ are:
$$Z_F = \frac{(R_2/s + jX_2)(jX_M)}{R_2/s + jX_2 + jX_M}$$
End of explanation
"""
Zb = ((R2/(2-s) + X2*1j)*(Xm*1j)) / (R2/(2-s) + X2*1j + Xm*1j)
Zb
"""
Explanation: $$Z_B = \frac{(R_2/(2-s) + jX_2)(jX_M)}{R_2/(2-s) + jX_2 + jX_M}$$
End of explanation
"""
I1 = V / (R1 +X1*1j + 0.5*Zf + 0.5*Zb)
I1_angle = arctan(I1.imag/I1.real)
print('I1 = {:.2f} V ∠{:.1f}°'.format(abs(I1), I1_angle/pi*180))
"""
Explanation: The input current is:
$$\vec{I}_1 = \frac{\vec{V}}{R_1 + jX_1 + 0.5Z_F + 0.5Z_B}$$
End of explanation
"""
Pag_f = abs(I1)**2 * 0.5*Zf.real
Pag_f
Pag_b = abs(I1)**2 * 0.5*Zb.real
Pag_b
Pag = Pag_f - Pag_b
print('Pag = {:.1f} W'.format(Pag))
"""
Explanation: The air-gap power is:
End of explanation
"""
Pconv_f = (1-s)*Pag_f
Pconv_f
Pconv_b = (1-s)*Pag_b
Pconv_b
Pconv = Pconv_f - Pconv_b
print('Pconv = {:.1f} W'.format(Pconv))
"""
Explanation: The power converted from electrical to mechanical form is:
End of explanation
"""
Pout = Pconv - Prot
print('Pout = {:.1f} W'.format(Pout))
"""
Explanation: The output power is:
End of explanation
"""
w_sync = n_sync * (2.0*pi/1.0) * (1.0/60.0)
tau_ind = Pag / w_sync
print('''
τ_ind = {:.3f} Nm
================'''.format(tau_ind))
"""
Explanation: The induced torque is
$$\tau_\text{ind} = \frac{P_\text{AG}}{\omega_\text{sync}}$$
End of explanation
"""
|
kgrodzicki/machine-learning-specialization
|
course-3-classification/module-6-decision-tree-practical-assignment-blank.ipynb
|
mit
|
import graphlab
"""
Explanation: Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.
In this assignment you will:
Implement binary decision trees with different early stopping methods.
Compare models with different stopping parameters.
Visualize the concept of overfitting in decision trees.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
"""
loans = graphlab.SFrame('lending-club-data.gl/')
"""
Explanation: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
End of explanation
"""
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
"""
Explanation: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
End of explanation
"""
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
"""
Explanation: We will be using the same 4 categorical features as in the previous assignment:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
In the dataset, each of these features is a categorical feature. Since we are building a binary decision tree, we will have to convert this to binary data in a subsequent section using 1-hot encoding.
End of explanation
"""
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
"""
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed = 1 so everyone gets the same results.
End of explanation
"""
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
"""
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
Since we are implementing binary decision trees, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion:
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership': 'RENT'}
we want to turn this into three features:
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
End of explanation
"""
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
"""
Explanation: The feature columns now look like this:
End of explanation
"""
train_data, validation_set = loans_data.random_split(.8, seed=1)
"""
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
"""
def reached_minimum_node_size(data, min_node_size):
# Return True if the number of data points is less than or equal to the minimum node size.
## YOUR CODE HERE
    return len(data) <= min_node_size
"""
Explanation: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture:
Reached a maximum depth. (set by parameter max_depth).
Reached a minimum node size. (set by parameter min_node_size).
Don't split if the gain in error reduction is too small. (set by parameter min_error_reduction).
For the rest of this assignment, we will refer to these three as early stopping conditions 1, 2, and 3.
Early stopping condition 1: Maximum depth
Recall that we already implemented the maximum depth stopping condition in the previous assignment. In this assignment, we will experiment with this condition a bit more and also write code to implement the 2nd and 3rd early stopping conditions.
We will be reusing code from the previous assignment and then building upon this. We will alert you when you reach a function that was part of the previous assignment so that you can simply copy and paste your previous code.
Early stopping condition 2: Minimum node size
The function reached_minimum_node_size takes 2 arguments:
The data (from a node)
The minimum number of data points that a node is allowed to split on, min_node_size.
This function simply calculates whether the number of data points at a given node is less than or equal to the specified minimum node size. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
"""
print "stop it !"
assert reached_minimum_node_size([1], 1) == True
"""
Explanation: Quiz question: Given an intermediate node with 6 safe loans and 3 risky loans, if the min_node_size parameter is 10, what should the tree learning algorithm do next?
End of explanation
"""
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
## YOUR CODE HERE
return error_before_split - error_after_split
"""
Explanation: Early stopping condition 3: Minimum gain in error reduction
The function error_reduction takes 2 arguments:
The error before a split, error_before_split.
The error after a split, error_after_split.
This function computes the gain in error reduction, i.e., the difference between the error before the split and that after the split. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
"""
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
## YOUR CODE HERE
nr_of_safe_loans = 0
for e in labels_in_node:
if e == 1:
nr_of_safe_loans += 1
# Count the number of -1's (risky loans)
## YOUR CODE HERE
nr_of_risky_loans = 0
for e in labels_in_node:
if e == -1:
nr_of_risky_loans += 1
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
if nr_of_safe_loans > nr_of_risky_loans:
return nr_of_risky_loans
return nr_of_safe_loans
"""
Explanation: Quiz question: Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the minimum gain in error reduction parameter is set to 0.2, what should the tree learning algorithm do next?
Grabbing binary decision tree helper functions from past assignment
Recall from the previous assignment that we wrote a function intermediate_node_num_mistakes that calculates the number of misclassified examples when predicting the majority class. This is used to help determine which feature is best to split on at a given node of the tree.
Please copy and paste your code for intermediate_node_num_mistakes here.
End of explanation
"""
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should intialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_error = error
best_feature = feature
return best_feature # Return the best feature we found
"""
Explanation: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
End of explanation
"""
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True} ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
"""
Explanation: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
End of explanation
"""
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if reached_minimum_node_size(data, min_node_size): ## YOUR CODE HERE
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values)
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = intermediate_node_num_mistakes(left_split[target]) ## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target]) ## YOUR CODE HERE
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
print "Mistakes", left_mistakes, right_mistakes, error_after_split
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if error_reduction(error_before_split, error_after_split) <= min_error_reduction: ## YOUR CODE HERE
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values) ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
"""
Explanation: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assigment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2: minimum node size:
Step 1: Use the function reached_minimum_node_size that you implemented earlier to write an if condition to detect whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the min_node_size argument.
Step 2: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Implementing early stopping condition 3: minimum error reduction:
Note: This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction.
Step 1: Calculate the classification error before splitting. Recall that classification error is defined as:
$$
\text{classification error} = \frac{\text{# mistakes}}{\text{# total examples}}
$$
* Step 2: Calculate the classification error after splitting. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples.
* Step 3: Use the function error_reduction to that you implemented earlier to write an if condition to detect whether the reduction in error is less than the constant provided (min_error_reduction). Don't forget to use that argument.
* Step 4: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
End of explanation
"""
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
"""
Explanation: Here is a function to count the nodes in your tree:
End of explanation
"""
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
print 'Number of nodes that should be there : 7'
"""
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
"""
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
"""
Explanation: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning: This code block may take a minute to learn.
End of explanation
"""
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
"""
Explanation: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
End of explanation
"""
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
### YOUR CODE HERE
return classify(tree['right'], x, annotate)
"""
Explanation: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
End of explanation
"""
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
"""
Explanation: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
End of explanation
"""
classify(my_decision_tree_new, validation_set[0], annotate = True)
"""
Explanation: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class:
End of explanation
"""
classify(my_decision_tree_old, validation_set[0], annotate = True)
"""
Explanation: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
End of explanation
"""
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
    mistakes = data[data[target] != prediction]
    return len(mistakes) / float(len(data))
"""
Explanation: Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for validation_set[0] shorter, longer, or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for any point always shorter, always longer, always the same, shorter or the same, or longer or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For a tree trained on any dataset using max_depth = 6, min_node_size = 100, min_error_reduction=0.0, what is the maximum number of splits encountered while making a single prediction?
Evaluating the model
Now let us evaluate the model that we have trained. You implemented this evautation in the function evaluate_classification_error from the previous assignment.
Please copy and paste your evaluate_classification_error code here.
End of explanation
"""
evaluate_classification_error(my_decision_tree_new, validation_set)
"""
Explanation: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
End of explanation
"""
evaluate_classification_error(my_decision_tree_old, validation_set)
"""
Explanation: Now, evaluate the validation error using my_decision_tree_old.
End of explanation
"""
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
                               min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
                               min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
                               min_node_size = 0, min_error_reduction=-1)
"""
Explanation: Quiz question: Is the validation error of the new decision tree (using early stopping conditions 2 and 3) lower than, higher than, or the same as that of the old decision tree from the previous assignment?
Exploring the effect of max_depth
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
model_1: max_depth = 2 (too small)
model_2: max_depth = 6 (just right)
model_3: max_depth = 14 (may be too large)
For each of these three, we set min_node_size = 0 and min_error_reduction = -1.
Note: Each tree can take up to a few minutes to train. In particular, model_3 will probably take the longest to train.
End of explanation
"""
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
"""
Explanation: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data:
End of explanation
"""
print "Validation set, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "Validation set, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "Validation set, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
"""
Explanation: Now evaluate the classification error on the validation data.
End of explanation
"""
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
"""
Explanation: Quiz Question: Which tree has the smallest error on the validation data?
Quiz Question: Does the tree with the smallest error in the training data also have the smallest error in the validation data?
Quiz Question: Is it always true that the tree with the lowest classification error on the training set will result in the lowest classification error in the validation set?
Measuring the complexity of the tree
Recall in the lecture that we talked about deeper trees being more complex. We will measure the complexity of the tree as
complexity(T) = number of leaves in the tree T
Here, we provide a function count_leaves that counts the number of leaves in a tree. Using this implementation, compute the number of nodes in model_1, model_2, and model_3.
End of explanation
"""
model_1_complexity = count_leaves(model_1)
model_2_complexity = count_leaves(model_2)
model_3_complexity = count_leaves(model_3)
print model_1_complexity, model_2_complexity, model_3_complexity
"""
Explanation: Compute the number of nodes in model_1, model_2, and model_3.
End of explanation
"""
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1.)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=0.0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=5.0)
"""
Explanation: Quiz question: Which tree has the largest complexity?
Quiz question: Is it always true that the most complex tree will result in the lowest classification error in the validation_set?
Exploring the effect of min_error
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (negative, just right, and too positive).
Train three models with these parameters:
1. model_4: min_error_reduction = -1 (ignoring this early stopping condition)
2. model_5: min_error_reduction = 0 (just right)
3. model_6: min_error_reduction = 5 (too positive)
For each of these three, we set max_depth = 6, and min_node_size = 0.
Note: Each tree can take up to 30 seconds to train.
End of explanation
"""
print "Validation set, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation set, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation set, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
"""
Explanation: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
End of explanation
"""
model_4_complexity = count_leaves(model_4)
model_5_complexity = count_leaves(model_5)
model_6_complexity = count_leaves(model_6)
print model_4_complexity, model_5_complexity, model_6_complexity
"""
Explanation: Using the count_leaves function, compute the number of leaves in each of each models in (model_4, model_5, and model_6).
End of explanation
"""
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1.)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 2000, min_error_reduction=-1.)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 50000, min_error_reduction=-1.)
"""
Explanation: Quiz Question: Using the complexity definition above, which model (model_4, model_5, or model_6) has the largest complexity?
Did this match your expectation?
Quiz Question: model_4 and model_5 have similar classification error on the validation set, but model_5 has lower complexity. Should you pick model_5 over model_4?
Exploring the effect of min_node_size
We will compare three models trained with different values of the stopping criterion. Again, we intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
1. model_7: min_node_size = 0 (too small)
2. model_8: min_node_size = 2000 (just right)
3. model_9: min_node_size = 50000 (too large)
For each of these three, we set max_depth = 6, and min_error_reduction = -1.
Note: Each tree can take up to 30 seconds to train.
End of explanation
"""
print "Validation data, classification error (model 7):", evaluate_classification_error(model_7, validation_set)
print "Validation data, classification error (model 8):", evaluate_classification_error(model_8, validation_set)
print "Validation data, classification error (model 9):", evaluate_classification_error(model_9, validation_set)
"""
Explanation: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
End of explanation
"""
model_7_complexity = count_leaves(model_7)
model_8_complexity = count_leaves(model_8)
model_9_complexity = count_leaves(model_9)
"""
Explanation: Using the count_leaves function, compute the number of leaves in each of each models (model_7, model_8, and model_9).
End of explanation
"""
print model_7_complexity, model_8_complexity, model_9_complexity
"""
Explanation: Quiz Question: Using the results obtained in this section, which model (model_7, model_8, or model_9) would you choose to use?
End of explanation
"""
|
Heerozh/deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 7
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    x = x / 255.0  # pixel values are in [0, 255]; dividing by 255 maps them to [0, 1] inclusive
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn import preprocessing
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
lb = preprocessing.LabelBinarizer()
lb.fit(range(10))
return lb.transform(x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
return tf.placeholder(tf.float32, [None, *image_shape], name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32, [None, n_classes], name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
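To make the contrast concrete, here is a hedged sketch of the two call styles (TF 1.x argument names; the placeholder tensor and filter shape below are only for illustration):
python
import tensorflow as tf
images = tf.placeholder(tf.float32, [None, 32, 32, 3])
# Shortcut option: tf.layers.conv2d creates and tracks the filter weights for you
high_level = tf.layers.conv2d(images, filters=32, kernel_size=(3, 3),
                              strides=(1, 1), padding='same', activation=tf.nn.relu)
# Lower-level option: with tf.nn.conv2d you create and manage the filter Variable yourself
W = tf.Variable(tf.truncated_normal([3, 3, 3, 32], stddev=0.05))
low_level = tf.nn.relu(tf.nn.conv2d(images, W, strides=[1, 1, 1, 1], padding='SAME'))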
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
"""
# my debug code
from IPython.display import display, HTML
TREE = []
def add_display_tree(net, add=''):
TREE.append((net.name + ' ' + str(add), str(net.get_shape().as_list()[1:])))
def print_tree():
html = ["<table width=50%>"]
for row in TREE:
html.append("<tr>")
html.append("<td>{0}</td> <td>{1}</td>".format(*row))
html.append("</tr>")
html.append("</table>")
display(HTML(''.join(html)))
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
weight = tf.Variable(tf.truncated_normal(
(*conv_ksize, x_tensor.get_shape().as_list()[-1], conv_num_outputs), stddev=0.02))
biases = tf.Variable(tf.zeros(conv_num_outputs))
conv = tf.nn.conv2d(x_tensor, weight, strides=[1, *conv_strides, 1], padding='SAME')
add_display_tree(conv, str(conv_ksize) + (conv_strides[0] > 1 and ' /' + str(conv_strides[0]) or ''))
conv = tf.nn.relu(conv + biases)
add_display_tree(conv)
conv = tf.nn.max_pool(conv, ksize=[1, *pool_ksize, 1], strides=[1, *pool_strides, 1], padding='SAME')
add_display_tree(conv, '/' + str(pool_strides[0]))
TREE.append(('---', '---'))
return conv
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
shape = x_tensor.get_shape().as_list()
size = shape[1] * shape[2] * shape[3]
reshape = tf.reshape(x_tensor, [-1, size])
add_display_tree(reshape)
return reshape
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs, activation=tf.nn.relu):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
weight = tf.Variable(tf.truncated_normal((x_tensor.get_shape().as_list()[-1], num_outputs), stddev=0.02))
biases = tf.Variable(tf.zeros(num_outputs))
net = tf.matmul(x_tensor, weight) + biases
if activation is not None:
net = activation(net)
add_display_tree(net)
return net
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return fully_conn(x_tensor, num_outputs, activation=None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# net = conv2d_maxpool(x, conv_num_outputs=64, conv_ksize=(3, 3), conv_strides=(1, 1),
# pool_ksize=(2, 2), pool_strides=(2, 2))
# net = conv2d_maxpool(net, conv_num_outputs=128, conv_ksize=(3, 3), conv_strides=(1, 1),
# pool_ksize=(2, 2), pool_strides=(2, 2))
# net = conv2d_maxpool(net, conv_num_outputs=256, conv_ksize=(3, 3), conv_strides=(1, 1),
# pool_ksize=(2, 2), pool_strides=(2, 2))
#net = tf.nn.dropout(net, keep_prob)
# normal net
net = conv2d_maxpool(x, conv_num_outputs=64, conv_ksize=(3, 3), conv_strides=(1, 1),
pool_ksize=(1, 1), pool_strides=(1, 1))
net = conv2d_maxpool(net, conv_num_outputs=64, conv_ksize=(3, 3), conv_strides=(1, 1),
pool_ksize=(2, 2), pool_strides=(2, 2))
net = conv2d_maxpool(net, conv_num_outputs=128, conv_ksize=(3, 3), conv_strides=(1, 1),
pool_ksize=(1, 1), pool_strides=(1, 1))
net = conv2d_maxpool(net, conv_num_outputs=128, conv_ksize=(3, 3), conv_strides=(1, 1),
pool_ksize=(2, 2), pool_strides=(2, 2))
net = conv2d_maxpool(net, conv_num_outputs=256, conv_ksize=(3, 3), conv_strides=(1, 1),
pool_ksize=(1, 1), pool_strides=(1, 1))
net = conv2d_maxpool(net, conv_num_outputs=256, conv_ksize=(3, 3), conv_strides=(1, 1),
pool_ksize=(2, 2), pool_strides=(2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
net = flatten(net)
net = tf.nn.dropout(net, keep_prob)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
net = fully_conn(net, 2048)
net = fully_conn(net, 1024)
net = fully_conn(net, 256)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
net = output(net, 10)
# TODO: return output
return net
TREE = []
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
print('my debug info, net struct table:')
print_tree()
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability,
})
pass
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
tc, ta = session.run([cost, accuracy], feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1,
})
valid_accuracy_avg, valid_loss_avg = 0, 0
for batch_tx, batch_ty in helper.batch_features_labels(valid_features, valid_labels, 128):
vc, va = session.run([cost, accuracy], feed_dict={
x: batch_tx,
y: batch_ty,
keep_prob: 1,
})
valid_accuracy_avg += va
valid_loss_avg += vc
valid_accuracy_avg /= (len(valid_features) / 128)
valid_loss_avg /= (len(valid_features) / 128)
print('train: {:.2%} (loss: {:.4}), valid: {:.2%} (loss: {:.4})'.format(
ta, tc, valid_accuracy_avg, valid_loss_avg))
pass
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 15
batch_size = 128
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.15/_downloads/plot_movement_compensation.ipynb
|
bsd-3-clause
|
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
from os import path as op
import mne
from mne.preprocessing import maxwell_filter
print(__doc__)
data_path = op.join(mne.datasets.misc.data_path(verbose=True), 'movement')
pos = mne.chpi.read_head_pos(op.join(data_path, 'simulated_quats.pos'))
raw = mne.io.read_raw_fif(op.join(data_path, 'simulated_movement_raw.fif'))
raw_stat = mne.io.read_raw_fif(op.join(data_path,
'simulated_stationary_raw.fif'))
"""
Explanation: Maxwell filter data with movement compensation
Demonstrate movement compensation on simulated data. The simulated data
contains bilateral activation of auditory cortices, repeated over 14
different head rotations (head center held fixed). See the following for
details:
https://github.com/mne-tools/mne-misc-data/blob/master/movement/simulate.py
End of explanation
"""
mne.viz.plot_head_positions(pos, mode='traces')
"""
Explanation: Visualize the "subject" head movements (traces)
End of explanation
"""
# extract our resulting events
events = mne.find_events(raw, stim_channel='STI 014')
events[:, 2] = 1
raw.plot(events=events)
topo_kwargs = dict(times=[0, 0.1, 0.2], ch_type='mag', vmin=-500, vmax=500)
# 0. Take average of stationary data (bilateral auditory patterns)
evoked_stat = mne.Epochs(raw_stat, events, 1, -0.2, 0.8).average()
evoked_stat.plot_topomap(title='Stationary', **topo_kwargs)
# 1. Take a naive average (smears activity)
evoked = mne.Epochs(raw, events, 1, -0.2, 0.8).average()
evoked.plot_topomap(title='Moving: naive average', **topo_kwargs)
# 2. Use raw movement compensation (restores pattern)
raw_sss = maxwell_filter(raw, head_pos=pos)
evoked_raw_mc = mne.Epochs(raw_sss, events, 1, -0.2, 0.8).average()
evoked_raw_mc.plot_topomap(title='Moving: movement compensated', **topo_kwargs)
"""
Explanation: Process our simulated raw data (taking into account head movements)
End of explanation
"""
|
fullmetalfelix/ML-CSC-tutorial
|
LMBTR.ipynb
|
gpl-3.0
|
# --- INITIAL DEFINITIONS ---
from dscribe.descriptors import LMBTR
import numpy as np
from visualise import view
from ase import Atoms
import ase.data
import matplotlib.pyplot as mpl
"""
Explanation: Local Many Body Tensor Representation
LMBTR is a local descriptor for an atom in a molecule/unit cell. It eliminates rotational and translational variances for the central atom by gathering information about different configurations of $K$ atoms into tensors that are stratified by the involved chemical elements. All element combinations have an associated gaussian-smeared exponentially-weighted histogram. It is essentially the same as the regular MBTR, but calculated for only atom combinations including the central atom.
The Tensor
The tensor comprises combinations of elements in different numbers. So, K1 is the atom, K2 is the atom with all elements, and so on. These K's represent different expressions of the molecule/unit-cell.
K1
As LMBTR encodes information about a local region, smoothly encoding the presence of different atomic species in that environment is problematic (and is already included in the other terms). For this reason the K1 term in LMBTR is not used.
K2
K2 represents the gaussian-smeared, exponentially-weighted histogram of inverse distances between pairs formed by the central atom and each element. So, this becomes a matrix of size MxN, where M is the number of elements and N is the number of bins.
K3
K3 represents the gaussian-smeared, exponentially-weighted histogram of angles in triplets formed by the central atom and two elements. So, this becomes a tensor of size MxMxN, where M is the number of elements and N is the number of bins.
Weighting
The distributions for K2 and K3 are weighted. This ensures that contributions from nearby elements are higher than those from farther ones.
For more info about MBTR see:
Huo, Haoyan, and Matthias Rupp. arXiv preprint arXiv:1704.06439 (2017)
For calculating LMBTR, we use the DScribe package as developed by Surfaces and Interfaces at the Nanoscale, Aalto
Example
We are going to see MBTR in action for a simple molecule system.
End of explanation
"""
# atomic positions as matrix
molxyz = np.load("./data/molecule.coords.npy")
# atom types
moltyp = np.load("./data/molecule.types.npy")
atoms_sys = Atoms(positions=molxyz, numbers=moltyp)
view(atoms_sys)
"""
Explanation: Atom description
We'll make a ase.Atoms class for our molecule
End of explanation
"""
# Create the MBTR desciptor for the system
mbtr = LMBTR(
species=['H', 'C', 'N', 'O', 'F'],
periodic=False,
k2={
"geometry": {"function": "distance"},
"grid": { "min": 0.0, "max": 2.0, "sigma": 0.1, "n": 100 },
"weighting": {"function": "unity"}
},
k3={
"geometry": {"function": "cosine"},
"grid": { "min": -1.0, "max": 1.0, "sigma": 0.05, "n": 100 },
"weighting": {"function": "unity"}
},
flatten=True,
sparse=False
)
print("Number of features: {}".format(mbtr.get_number_of_features()))
"""
Explanation: Setting LMBTR hyper-parameters
Next we set up the hyper-parameters:
1. species, the chemical elements to include in the MBTR; this helps when comparing two structures with missing elements
2. k, list/set of K's to be computed
3. grid: dictionary for K1, K2, K3 with
min, max: the min and max values for each distribution
sigma, the width of the gaussian smearing
n, number of bins.
4. weights: dictionary of weighting functions to be used.
Note: The dscribe package has an implementation of LMBTR up to K3
End of explanation
"""
#Create Descriptor
desc = mbtr.create(atoms_sys, positions=[0])
print("shape of descriptor: ", desc.shape)
"""
Explanation: Calculate LMBTR
We call the create function of the LMBTR class over our Atoms object. The calculation will be done only for one atom:
End of explanation
"""
# Plot K2
x2 = mbtr.get_k2_axis() # this is the x axis of the histogram
# create some dictionaries to make atom Z <-- type index --> type name
imap = mbtr.index_to_atomic_number
smap = {}
for index, number in imap.items():
smap[index] = ase.data.chemical_symbols[number]
# make the plots
for i in range(1, mbtr.n_elements): # avoid showing type 0 = X (the central atom)
# this is the slice of the flattened MBTR tensor that contains the histogram
# for X-type_i - X is the central atom of the LMBTR expansion
slc = mbtr.get_location(('X',smap[i]))
# this is the slice
y2 = desc[0][slc]
mpl.plot(x2, y2, label="{}".format(smap[i]))
mpl.ylabel("$\phi$ (arbitrary units)", size=14)
mpl.xlabel("Distance (angstrom)", size=14)
mpl.title("Distance distribution", size=20)
mpl.legend()
mpl.show()
# Plot K3
x3 = mbtr.get_k3_axis()
for i in range(1, mbtr.n_elements):
for j in range(1, mbtr.n_elements):
if j <= i:
slc = mbtr.get_location(('X',smap[i],smap[j]))
mpl.plot(x3, desc[0][slc], label="{}, {}".format(smap[i], smap[j]))
mpl.xlim(left=-2)
mpl.ylabel("$\phi$ (arbitrary units)", size=14)
mpl.xlabel("cos(angle)", size=14)
mpl.title("Angle distribution", size=20)
mpl.legend(loc=3)
mpl.show()
"""
Explanation: Plotting
We will now plot all the tensors, in the same plot for K2, and K3.
End of explanation
"""
|
UltronAI/Deep-Learning
|
CS231n/reference/CS231n-master/assignment2/ConvolutionalNetworks.ipynb
|
mit
|
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
"""
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
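If you are unsure where to start, a minimal (and deliberately slow) sketch of the idea is shown below; it assumes the x, w, b, conv_param interface described in layers.py, and your own implementation may differ:
python
import numpy as np
def conv_forward_naive_sketch(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                  # each image
        for f in range(F):              # each filter
            for i in range(H_out):      # each output row
                for j in range(W_out):  # each output column
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out, (x, w, b, conv_param)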
You can test your implementation by running the following:
End of explanation
"""
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
"""
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
"""
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
"""
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
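One possible approach (a sketch only, assuming the (x, w, b, conv_param) cache layout from the forward pass) mirrors the forward loops and accumulates gradients window by window:
python
import numpy as np
def conv_backward_naive_sketch(dout, cache):
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))  # bias gradient: sum over images and spatial positions
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:pad+H, pad:pad+W]  # strip the zero padding back off
    return dx, dw, db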
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
"""
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
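For reference, one unoptimized sketch (assuming the pool_param keys used elsewhere in this notebook) looks like this:
python
import numpy as np
def max_pool_forward_naive_sketch(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max over each pooling window
    return out, (x, pool_param)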
Check your implementation by running the following:
End of explanation
"""
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
"""
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
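The key idea is to route each upstream gradient back to the input position that achieved the maximum in the forward pass. A sketch, assuming the (x, pool_param) cache from the forward pass (ties are handled naively here):
python
import numpy as np
def max_pool_backward_naive_sketch(dout, cache):
    x, pool_param = cache
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    window = x[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw]
                    mask = (window == window.max())  # 1 at the argmax, 0 elsewhere
                    dx[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw] += mask * dout[n, c, i, j]
    return dx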
Check your implementation with numeric gradient checking by running the following:
End of explanation
"""
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
"""
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
"""
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
"""
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
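For intuition, such a sandwich is just a composition of the primitives above. A sketch of what a conv-relu-pool forward pass might look like (reusing the fast layers and the relu_forward function from earlier in the assignment):
python
from cs231n.fast_layers import conv_forward_fast, max_pool_forward_fast
from cs231n.layers import relu_forward
def conv_relu_pool_forward_sketch(x, w, b, conv_param, pool_param):
    # Convolve, apply ReLU, then max-pool; keep every intermediate cache for the backward pass
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)
    s, relu_cache = relu_forward(a)
    out, pool_cache = max_pool_forward_fast(s, pool_param)
    return out, (conv_cache, relu_cache, pool_cache)
The corresponding backward function would simply unwind these three steps in reverse order using the stored caches.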
End of explanation
"""
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
"""
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
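As a quick arithmetic check (CIFAR-10 has C = 10 classes), the expected unregularized loss with random weights is about:
python
import numpy as np
# Random weights give roughly uniform class probabilities 1/C,
# so the expected softmax loss is -log(1/C) = log(C).
print(np.log(10))  # ~2.3026; the "no regularization" loss should be close to this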
End of explanation
"""
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
"""
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
End of explanation
"""
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
"""
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
"""
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
"""
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
"""
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
"""
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
"""
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
"""
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
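One common implementation strategy (a sketch, assuming the batchnorm_forward function you wrote earlier in this assignment) is to fold the N, H, and W axes together so that vanilla batch normalization sees one row per spatial location:
python
from cs231n.layers import batchnorm_forward
def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    # (N, C, H, W) -> (N, H, W, C) -> (N*H*W, C): each channel becomes one column
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache
The backward pass can apply the same transpose-and-reshape to dout before calling the vanilla batch normalization backward function.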
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
"""
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
"""
# Train a really good model on CIFAR-10
"""
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]XN - [affine]XM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation
"""
|
liufuyang/ManagingBigData_MySQL_DukeUniv
|
notebooks/MySQL_Exercise_03_Formatting_Selected_Data.ipynb
|
mit
|
%load_ext sql
%sql mysql://studentuser:studentpw@mysqlserver/dognitiondb
%sql USE dognitiondb
%config SqlMagic.displaylimit=25
"""
Explanation: Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
MySQL Exercise 3: Formatting Selected Data
In this lesson, we are going to learn about three SQL clauses or functionalities that will help you format and edit the output of your queries. We will also learn how to export the results of your formatted queries to a text file so that you can analyze them in other software packages such as Tableau or Excel.
Begin by loading the SQL library into Jupyter, connecting to the Dognition database, and setting Dognition as the default database.
python
%load_ext sql
%sql mysql://studentuser:studentpw@mysqlserver/dognitiondb
%sql USE dognitiondb
End of explanation
"""
%%sql
SELECT start_time as 'exam start time'
FROM exam_answers
LIMIT 0, 5;
"""
Explanation: 1. Use AS to change the titles of the columns in your output
The AS clause allows you to assign an alias (a temporary name) to a table or a column in a table. Aliases can be useful for increasing the readability of queries, for abbreviating long names, and for changing column titles in query outputs. To implement the AS clause, include it in your query code immediately after the column or table you want to rename. For example, if you wanted to change the name of the time stamp field of the complete_tests table from "created_at" to "time_stamp" in your output, you could take advantage of the AS clause and execute the following query:
mySQL
SELECT dog_guid, created_at AS time_stamp
FROM complete_tests
Note that if you use an alias that includes a space, the alias must be surrounded in quotes:
mySQL
SELECT dog_guid, created_at AS "time stamp"
FROM complete_tests
You could also make an alias for a table:
mySQL
SELECT dog_guid, created_at AS "time stamp"
FROM complete_tests AS tests
Since aliases are strings, again, MySQL accepts both double and single quotation marks, but some database systems only accept single quotation marks. It is good practice to avoid using SQL keywords in your aliases, but if you have to use an SQL keyword in your alias for some reason, the string must be enclosed in backticks instead of quotation marks.
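For instance, if you really wanted to use the reserved word "order" as an alias (not recommended, and shown here purely as an illustration), you would have to write something like:
mySQL
SELECT dog_guid, created_at AS `order`
FROM complete_tests
The backticks tell MySQL to treat the alias as an identifier rather than as the start of an ORDER BY clause.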
Question 1: How would you change the title of the "start_time" field in the exam_answers table to "exam start time" in a query output? Try it below:
End of explanation
"""
%%sql
SELECT DISTINCT breed
FROM dogs;
"""
Explanation: 2. Use DISTINCT to remove duplicate rows
Especially in databases like the Dognition database where no primary keys were declared in each table, sometimes entire duplicate rows can be entered in error. Even with no duplicate rows present, sometimes your queries correctly output multiple instances of the same value in a column, but you are interested in knowing what the different possible values in the column are, not what each value in each row is. In both of these cases, the best way to arrive at the clean results you want is to instruct the query to return only values that are distinct, or different from all the rest. The SQL keyword that allows you to do this is called DISTINCT. To use it in a query, place it directly after the word SELECT in your query.
For example, if we wanted a list of all the breeds of dogs in the Dognition database, we could try the following query from a previous exercise:
mySQL
SELECT breed
FROM dogs;
However, the output of this query would not be very helpful, because it would output the entry for every single row in the breed column of the dogs table, regardless of whether it duplicated the breed of a previous entry. Fortunately, we could arrive at the list we want by executing the following query with the DISTINCT modifier:
mySQL
SELECT DISTINCT breed
FROM dogs;
Try it yourself (If you do not limit your output, you should get 2006 rows in your output):
End of explanation
"""
%%sql
SELECT DISTINCT state, city
FROM users;
"""
Explanation: If you scroll through the output, you will see that no two entries are the same. Of note, if you use the DISTINCT clause on a column that has NULL values, MySQL will include one NULL value in the DISTINCT output from that column.
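For example, assuming the breed_group column of the dogs table contains some NULL entries (an assumption made here purely for illustration), the following query would output each distinct breed_group value once, plus a single row containing NULL:
mySQL
SELECT DISTINCT breed_group
FROM dogs;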
<mark> When the DISTINCT clause is used with multiple columns in a SELECT statement, the combination of all the columns together is used to determine the uniqueness of a row in a result set.</mark>
For example, if you wanted to know all the possible combinations of states and cities in the users table, you could query:
mySQL
SELECT DISTINCT state, city
FROM users;
Try it (if you don't limit your output you'll see 3999 rows in the query result, of which the first 1000 are displayed):
End of explanation
"""
%%sql
SELECT DISTINCT test_name, subcategory_name
FROM complete_tests;
"""
Explanation: If you examine the query output carefully, you will see that there are many rows with California (CA) in the state column and four rows that have Gainesville in the city column (Georgia, Arkansas, Florida, and Virginia all have cities named Gainesville in our user table), but no two rows have the same state and city combination.
When you use the DISTINCT clause together with the LIMIT clause in a statement, MySQL stops searching as soon as it has found the number of unique rows specified in the LIMIT clause, not after it has scanned that many rows of the table.
For example, if the first 6 entries of the breed column in the dogs table were:
Labrador Retriever
Shetland Sheepdog
Golden Retriever
Golden Retriever
Shih Tzu
Siberian Husky
The output of the following query:
mySQL
SELECT DISTINCT breed
FROM dogs LIMIT 5;
would be the first 5 different breeds:
Labrador Retriever
Shetland Sheepdog
Golden Retriever
Shih Tzu
Siberian Husky
not the distinct breeds in the first 5 rows:
Labrador Retriever
Shetland Sheepdog
Golden Retriever
Shih Tzu
Question 2: How would you list all the possible combinations of test names and subcategory names in complete_tests table? (If you do not limit your output, you should retrieve 45 possible combinations)
End of explanation
"""
%%sql
SELECT DISTINCT breed
FROM dogs
ORDER BY breed
"""
Explanation: 3. Use ORDER BY to sort the output of your query
As you might have noticed already when examining the output of the queries you have executed thus far, databases do not have built-in sorting mechanisms that automatically sort the output of your query. However, SQL permits the use of the powerful ORDER BY clause to allow you to sort the output according to your own specifications. Let's look at how you would implement a simple ORDER BY clause.
Recall our query outline:
<img src="https://duke.box.com/shared/static/l9v2khefe7er98pj1k6oyhmku4tz5wpf.jpg" width=400 alt="SELECT FROM WHERE ORDER BY" />
Your ORDER BY clause will come after everything else in the main part of your query, but before a LIMIT clause.
If you wanted the breeds of dogs in the dog table sorted in alphabetical order, you could query:
mySQL
SELECT DISTINCT breed
FROM dogs
ORDER BY breed
Try it yourself:
End of explanation
"""
%%sql
SELECT DISTINCT user_guid, state, membership_type
FROM users
WHERE country="US"
ORDER BY state ASC, membership_type ASC
"""
Explanation: (You might notice that some of the breeds start with a hyphen; we'll come back to that later.)
The default is to sort the output in ascending order. However, you can tell SQL to sort the output in descending order as well:
mySQL
SELECT DISTINCT breed
FROM dogs
ORDER BY breed DESC
Combining ORDER BY with LIMIT gives you an easy way to select the "top 10" and "last 10" in a list or column. For example, you could select the User IDs and median inter-test intervals of the 5 customer-dog pairs who spent the least median amount of time between their Dognition tests:
mySQL
SELECT DISTINCT user_guid, median_ITI_minutes
FROM dogs
ORDER BY median_ITI_minutes
LIMIT 5
or the greatest median amount of time between their Dognition tests:
mySQL
SELECT DISTINCT user_guid, median_ITI_minutes
FROM dogs
ORDER BY median_ITI_minutes DESC
LIMIT 5
You can also sort your output based on a derived field. If you wanted your inter-test interval to be expressed in seconds instead of minutes, you could incorporate a derived column and an alias into your last query to get the 5 customer-dog pairs who spent the greatest median amount of time between their Dognition tests in seconds:
mySQL
SELECT DISTINCT user_guid, (median_ITI_minutes * 60) AS median_ITI_sec
FROM dogs
ORDER BY median_ITI_sec DESC
LIMIT 5
Note that the parentheses are important in that query; without them, the database would try to make an alias for 60 instead of median_ITI_minutes * 60.
SQL queries also allow you to sort by multiple fields in a specified order, similar to how Excel allows you to include multiple levels in a sort (see image below):
<img src="https://duke.box.com/shared/static/lbubaw9rkqoyv5xd61y57o3lpqkvrj10.jpg" width=600 alt="SELECT FROM WHERE" />
To achieve this in SQL, you include all the fields (or aliases) by which you want to sort the results after the ORDER BY clause, separated by commas, in the order you want them to be used for sorting. You can then specify after each field whether you want the sort using that field to be ascending or descending.
If you wanted to select all the distinct User IDs of customers in the United States (abbreviated "US") and sort them according to the states they live in in alphabetical order first, and membership type second, you could query:
mySQL
SELECT DISTINCT user_guid, state, membership_type
FROM users
WHERE country="US"
ORDER BY state ASC, membership_type ASC
Go ahead and try it yourself (if you do not limit the output, you should get 9356 rows in your output):
End of explanation
"""
%%sql
SELECT DISTINCT user_guid, state, membership_type
FROM users
WHERE country="US"
ORDER BY membership_type DESC, state ASC
"""
Explanation: You might notice that some of the rows have null values in the state field. You could revise your query to only select rows that do not have null values in either the state or membership_type column:
mySQL
SELECT DISTINCT user_guid, state, membership_type
FROM users
WHERE country="US" AND state IS NOT NULL and membership_type IS NOT NULL
ORDER BY state ASC, membership_type ASC
Question 3: Below, try executing a query that would sort the same output as described above by membership_type first in descending order, and state second in ascending order:
End of explanation
"""
breed_list = %sql SELECT DISTINCT breed FROM dogs ORDER BY breed;
"""
Explanation: 4. Export your query results to a text file
Next week, we will learn how to complete some basic forms of data analysis in SQL. However, if you know how to use other analysis or visualization software like Excel or Tableau, you can implement these analyses with the SQL skills you have gained already, as long as you can export the results of your SQL queries in a format other software packages can read. Almost every database interface has a different method for exporting query results, so you will need to look up how to do it every time you try a new interface (another place where having a desire to learn new things will come in handy!).
There are two ways to export your query results using our Jupyter interface.
You can select and copy the output you see in an output window, and paste it into another program. Although this strategy is very simple, it only works if your output is very limited in size (since you can only paste 1000 rows at a time).
You can tell MySQL to put the results of a query into a variable (for our purposes consider a variable to be a temporary holding place), and then use Python code to format the data in the variable as a CSV file (comma separated value file, a .CSV file) that can be downloaded. When you use this strategy, all of the results of a query will be saved into the variable, not just the first 1000 rows as displayed in Jupyter, even if we have set up Jupyter to only display 1000 rows of the output.
Let's see how we could export query results using the second method.
To tell MySQL to put the results of a query into a variable, use the following syntax:
python
variable_name_of_your_choice = %sql [your full query goes here];
In this case, you must execute your SQL query all on one line. So if you wanted to export the list of dog breeds in the dogs table, you could begin by executing:
python
breed_list = %sql SELECT DISTINCT breed FROM dogs ORDER BY breed;
Go ahead and try it:
End of explanation
"""
breed_list.csv('breed_list.csv')
"""
Explanation: Once your variable is created using the command above, tell Jupyter to format the variable as a csv file using the following syntax:
python
variable_name_of_your_choice.csv('the_output_name_you_want.csv')
Since this line is being run in Python, do NOT include the %sql prefix when trying to execute the line. We could therefore export the breed list by executing:
python
breed_list.csv('breed_list.csv')
When you do this, all of the results of the query will be saved in the text file but the results will not be displayed in your notebook. This is a convenient way to retrieve large amounts of data from a query without taxing your browser or the server.
Try it yourself:
End of explanation
"""
%%sql
SELECT DISTINCT breed,
REPLACE(breed,'-','') AS breed_fixed
FROM dogs
ORDER BY breed_fixed
LIMIT 0, 5;
"""
Explanation: You should see a link in the output line that says "CSV results." You can click on this link to see the text file in a tab in your browser or to download the file to your computer (exactly how this works will differ depending on your browser and settings, but your options will be the same as if you were trying to open or download a file from any other website.)
You can also open the file directly from the home page of your Jupyter account. Behind the scenes, your csv file was written to your directory on the Jupyter server, so you should now see this file listed in your Jupyter account landing page along with the list of your notebooks. Just like a notebook, you can copy it, rename it, or delete it from your directory by clicking on the check box next to the file and clicking the "duplicate," "rename," or trash can buttons at the top of the page.
<img src="https://duke.box.com/shared/static/0k33vrxct1k03iz5u0cunfzf81vyn3ns.jpg" width=400 alt="JUPYTER SCREEN SHOT" />
5. A Bird's Eye View of Other Functions You Might Want to Explore
When you open your breed list results file, you will notice the following:
1) All of the rows of the output are included, even though you can only see 1000 of those rows when you run the query through the Jupyter interface.
2) There are some strange values in the breed list. Some of the entries in the breed column seem to have a dash included before the name. This is an example of what real business data sets look like...they are messy! We will use this as an opportunity to highlight why it is so important to be curious and explore MySQL functions on your own.
If you needed an accurate list of all the dog breeds in the dogs table, you would have to find some way to "clean up" the breed list you just made. Let's examine some of the functions that could help you achieve this cleaning using SQL syntax rather than another program or language outside of the database.
I included these links to MySQL functions in an earlier notebook:
http://dev.mysql.com/doc/refman/5.7/en/func-op-summary-ref.html
http://www.w3resource.com/mysql/mysql-functions-and-operators.php
The following description of a function called REPLACE is included in that resource:
"REPLACE(str,from_str,to_str)
Returns the string str with all occurrences of the string from_str replaced by the string to_str. REPLACE() performs a case-sensitive match when searching for from_str."
One thing we could try is using this function to replace any dashes included in the breed names with no character:
mySQL
SELECT DISTINCT breed,
REPLACE(breed,'-','') AS breed_fixed
FROM dogs
ORDER BY breed_fixed
In this query, we put the field/column name in the REPLACE function where the syntax instructions listed "str" in order to tell the REPLACE function to act on the entire column. The "-" was the "from_str", which is the string we wanted to replace. The "" (an empty string) was the "to_str", which is the string with which we wanted to replace every occurrence of the "from_str".
Try looking at the output:
End of explanation
"""
%%sql
SELECT DISTINCT breed, TRIM(LEADING '-' FROM breed) AS breed_fixed
FROM dogs
ORDER BY breed_fixed
"""
Explanation: That was helpful, but you'll still notice some issues with the output.
First, the leading dashes are indeed removed in the breed_fixed column, but now the dashes used to separate breeds in entries like 'French Bulldog-Boston Terrier Mix' are missing as well. So REPLACE isn't the right choice to selectively remove leading dashes.
Perhaps we could try using the TRIM function:
http://www.w3resource.com/mysql/string-functions/mysql-trim-function.php
sql
SELECT DISTINCT breed, TRIM(LEADING '-' FROM breed) AS breed_fixed
FROM dogs
ORDER BY breed_fixed
Try the query written above yourself, and inspect the output carefully:
End of explanation
"""
%%sql
SELECT DISTINCT subcategory_name
FROM complete_tests
ORDER BY subcategory_name;
"""
Explanation: That certainly gets us a lot closer to the list we might want, but there are still some entries in the breed_fixed column that are conceptual duplicates of each other, due to poor consistency in how the breed names were entered. For example, one entry is "Beagle Mix" while another is "Beagle- Mix". These entries are clearly meant to refer to the same breed, but they will be counted as separate breeds as long as their breed names are different.
Cleaning up all of the entries in the breed column would take quite a bit of work, so we won't go through more details about how to do it in this lesson. Instead, use this exercise as a reminder for why it's so important to always look at the details of your data, and as motivation to explore the MySQL functions we won't have time to discuss in the course. If you push yourself to learn new SQL functions and embrace the habit of getting to know your data by exploring its raw values and outputs, you will find that SQL provides very efficient tools to clean real-world messy data sets, and you will arrive at the correct conclusions about what your data indicate your company should do.
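If you would like a taste of what one further cleaning step might look like, the sketch below combines the TRIM and REPLACE approaches from above so that entries like "Beagle- Mix" collapse into "Beagle Mix" as well. This is only a partial illustration, not a complete cleaning solution, and it assumes the stray dash in such entries is always followed by a space:
mySQL
SELECT DISTINCT breed,
REPLACE(TRIM(LEADING '-' FROM breed),'- ',' ') AS breed_fixed
FROM dogs
ORDER BY breed_fixed;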
Now it's time to practice using AS, DISTINCT, and ORDER BY in your own queries.
Question 4: How would you get a list of all the subcategories of Dognition tests, in alphabetical order, with no test listed more than once (if you do not limit your output, you should retrieve 16 rows)?
End of explanation
"""
%%sql
SELECT DISTINCT country
FROM users
WHERE country != 'US'
ORDER BY country;
%%sql
Describe users
"""
Explanation: Question 5: How would you create a text file with a list of all the non-United States countries of Dognition customers with no country listed more than once?
End of explanation
"""
%%sql
SELECT user_guid, dog_guid, test_name, created_at
FROM complete_tests
ORDER BY created_at
LIMIT 0, 10;
"""
Explanation: Question 6: How would you find the User ID, Dog ID, and test name of the first 10 tests to ever be completed in the Dognition database?
End of explanation
"""
%%sql
Describe users
%%sql
SELECT user_guid, state, created_at
FROM users
WHERE membership_type=2 AND state='NC' AND created_at >= '2014-03-01'
ORDER BY created_at DESC;
"""
Explanation: Question 7: How would you create a text file with a list of all the customers with yearly memberships who live in the state of North Carolina (USA) and joined Dognition after March 1, 2014, sorted so that the most recent member is at the top of the list?
End of explanation
"""
%%sql
SELECT DISTINCT breed,
UPPER(TRIM(LEADING '-' FROM breed)) AS breed_fixed
FROM dogs
ORDER BY breed_fixed;
"""
Explanation: Question 8: See if you can find an SQL function from the list provided at:
http://www.w3resource.com/mysql/mysql-functions-and-operators.php
that would allow you to output all of the distinct breed names in UPPER case. Create a query that would output a list of these names in upper case, sorted in alphabetical order.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/sandbox-2/toplevel.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-2', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
agile-geoscience/xlines
|
notebooks/13_Physical_units_with_pint.ipynb
|
apache-2.0
|
#!pip install pint
#!pip install git+https://github.com/hgrecco/pint-pandas#egg=Pint-Pandas-0.1.dev0
"""
Explanation: X LINES OF PYTHON
Physical units with pint
This notebook goes with a blog post on the same subject.
Have you ever wished you could carry units around with your quantities — and have the computer figure out the best units and multipliers to use?
pint is a nice, compact library for doing just this, handling all your dimensional analysis needs. It can also detect units from strings. We can define our own units, it knows about multipliers (kilo, mega, etc.), and it even works with NumPy and pandas.
Install pint with pip or conda, e.g.
pip install pint
NB If you are running this on Google Colaboratory, you must uncomment these lines (delete the initial #) and run this first:
End of explanation
"""
import pint
units = pint.UnitRegistry()
pint.__version__
"""
Explanation: To use it in its typical mode, we import the library then instantiate a UnitRegistry object. The registry contains lots of physical units.
End of explanation
"""
thickness = 68 * units.m
thickness
"""
Explanation: Attaching and printing units
End of explanation
"""
thickness.magnitude, thickness.units, thickness.dimensionality
"""
Explanation: In a Jupyter Notebook you see a 'pretty' version of the quantity. In the interpreter, you'll see something slightly different (the so-called repr of the class):
>>> thickness
<Quantity(68, 'meter')>
We can get at the magnitude, the units, and the dimensionality of this quantity:
End of explanation
"""
f'{thickness**2}'
"""
Explanation: You can also use the following abbreviations for magnitude and units:
thickness.m, thickness.u
For printing, we can use Python's string formatting:
End of explanation
"""
print(f'{thickness**2:P}')
print(f'{thickness**2:~P}')
print(f'{thickness**2:~L}')
print(f'{thickness**2:~H}')
"""
Explanation: But pint extends the string formatting options to include special options for Quantity objects. The most useful option is P for 'pretty', but there's also L for $\LaTeX$ and H for HTML. Adding a ~ (tilde) before the option tells pint to use unit abbreviations instead of the full names:
End of explanation
"""
thickness * 2
"""
Explanation: Doing maths
If we multiply by a scalar, pint produces the result you'd expect:
End of explanation
"""
thickness + 10
# This is meant to produce an error...
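# A minimal fix (a sketch, not in the original notebook): give the scalar its own
# units before adding, and the operation succeeds.
thickness + 10 * units.m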
"""
Explanation: Note that you must use units when you need them:
End of explanation
"""
area = 60 * units.km**2
n2g = 0.5 * units.dimensionless # Optional dimensionless 'units'...
phi = 0.2 # ... but you can just do this.
sat = 0.7
volume = area * thickness * n2g * phi * sat
volume
"""
Explanation: Let's try defining an area of $60\ \mathrm{km}^2$, then multiplying it by our thickness. To make it more like a hydrocarbon volume, I'll also multiply by net:gross n2g, porosity phi, and saturation sat, all of which are dimensionless:
End of explanation
"""
volume.to_compact()
"""
Explanation: We can convert to something more compact:
End of explanation
"""
volume.to('m**3') # Or use m^3
"""
Explanation: Or be completely explicit about the units and multipliers we want:
End of explanation
"""
volume.to_compact('L')
"""
Explanation: The to_compact() method can also take units, if you want to be more explicit; it applies multipliers automatically:
End of explanation
"""
volume.to_compact('oil_barrel')
"""
Explanation: Oil barrels are already defined (careful: the abbreviation is oil_bbl, not bbl; bbl is a 31.5 gallon barrel, about the same as a beer barrel).
End of explanation
"""
f"The volume is {volume.to_compact('oil_barrel'):~0.2fL}"
"""
Explanation: If we use string formatting (see above), we can get pretty specific:
End of explanation
"""
units.define('barrel_of_oil_equivalent = 6000 ft**3 = boe')
"""
Explanation: Defining new units
pint defines hundreads of units (here's the list), and it knows about tonnes of oil equivalent... but it doesn't know about barrels of oil equivalent (for more on conversion to BOE). So let's define a custom unit, using the USGS's conversion factor:
End of explanation
"""
volume.to('boe')
volume.to_compact('boe')
"""
Explanation: Let's suspend reality for a moment and imagine we now want to compute our gross rock volume in BOEs...
End of explanation
"""
units('2.34 km')
"""
Explanation: Getting units from strings
pint can also parse strings and attempt to convert them to Quantity instances:
End of explanation
"""
units('2.34*10^3 km')
units('-12,000.ft')
units('3.2 m')
"""
Explanation: This looks useful! Let's try something less nicely formatted.
End of explanation
"""
from uncertainties import ufloat
area = ufloat(64, 5) * units.km**2 # 64 +/- 5 km**2
(thickness * area).to('Goil_bbl')
"""
Explanation: You can also use the Quantity constructor, like this:
>>> qty = pint.Quantity
>>> qty('2.34 km')
2.34 kilometer
But the UnitRegistry seems to do the same things and might be more convenient.
pint with uncertainties
Conveniently, pint works well with uncertainties. Maybe I'll do an X lines on that package in the future. Install it with conda or pip, e.g.
pip install uncertainties
End of explanation
"""
import numpy as np
vp = np.array([2300, 2400, 2550, 3200]) * units.m/units.s
rho = np.array([2400, 2550, 2500, 2650]) * units.kg/units.m**3
z = vp * rho
z
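# A quick dimensional check (a sketch, not in the original notebook): acoustic
# impedance is usually quoted in Pa·s/m, so pint should be able to express z that way.
z.to('MPa * s / m')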
"""
Explanation: pint with numpy
pint works fine with NumPy arrays:
End of explanation
"""
print(z)
"""
Explanation: For some reason, this sometimes doesn't render properly. But we can always do this:
End of explanation
"""
z.m
"""
Explanation: As expected, the magnitude of this quantity is just a NumPy array:
End of explanation
"""
pint._HAS_PINTPANDAS
"""
Explanation: pint with pandas
Note that this functionality is fairly new and is still settling down. YMMV.
To use pint (version 0.9 and later) with pandas (version 0.24.2 works; 0.25.0 does not work at the time of writing), we must first install pint-pandas, which must be done from source; get the code from GitHub. Here's how I do it:
cd pint-pandas
python setup.py sdist
pip install dist/Pint-Pandas-0.1.dev0.tar.gz
You could also do:
pip install git+https://github.com/hgrecco/pint-pandas#egg=Pint-Pandas-0.1.dev0
Once you have done that, the following should evaluate to True:
End of explanation
"""
import pandas as pd
df = pd.DataFrame({
"Vp": pd.Series(vp.m, dtype="pint[m/s]"),
"Vs": pd.Series([1200, 1200, 1250, 1300], dtype="pint[m/s]"),
"rho": pd.Series(rho.m, dtype="pint[kg/m**3]"),
})
df
import bruges as bg
df['E'] = bg.rockphysics.moduli.youngs(df.Vp, df.Vs, df.rho)
df.E
"""
Explanation: To use this integration, we pass special pint data types to the pd.Series() object:
End of explanation
"""
df.loc[0, 'E'].to('GPa')
"""
Explanation: We can't convert the units of a whole Series but we can do one:
End of explanation
"""
df.E.apply(lambda x: x.to('GPa'))
"""
Explanation: So to convert a whole series, we can use Series.apply():
End of explanation
"""
class UnitDataFrame(pd.DataFrame):
def _repr_html_(self):
"""New repr for Jupyter Notebook."""
html = super()._repr_html_() # Get the old repr string.
units = [''] + [f"{dtype.units:~H}" for dtype in self.dtypes]
style = "text-align: right; color: gray;"
new = f'<tr style="{style}"><th>' + "</th><th>".join(units) + "</th></tr></thead>"
return html.replace('</thead>', new)
df = UnitDataFrame({
"Vp": pd.Series(vp.m, dtype="pint[m/s]"),
"Vs": pd.Series([1200, 1200, 1250, 1300], dtype="pint[m/s]"),
"rho": pd.Series(rho.m, dtype="pint[kg/m**3]"),
})
df
"""
Explanation: Bonus: dataframe display with units
We could subclass dataframes to tweak their _repr_html_() method, which would allow us to make units show up in the Notebook representation of the dataframe...
End of explanation
"""
|
obulpathi/datascience
|
scikit/Chapter 2/Linear models.ipynb
|
apache-2.0
|
# numpy and matplotlib are needed for the array and plotting calls below
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
"""
Explanation: Linear models for regression
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
End of explanation
"""
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
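# A quick sanity check (a sketch, not in the original notebook): predictions from the
# fitted model are just X_test times coef_ plus intercept_, as in the formula above.
manual_pred = np.dot(X_test, linear_regression.coef_) + linear_regression.intercept_
print(np.allclose(manual_pred, linear_regression.predict(X_test)))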
"""
Explanation: Linear Regression
End of explanation
"""
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
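# Optional (a sketch, not in the original notebook): scikit-learn's RidgeCV can pick
# alpha by cross-validation instead of the manual loop above.
from sklearn.linear_model import RidgeCV
ridge_cv = RidgeCV(alphas=[100, 10, 1, .01]).fit(X_train, y_train)
print(ridge_cv.alpha_, ridge_cv.score(X_test, y_test))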
"""
Explanation: Ridge Regression (L2 penalty)
End of explanation
"""
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
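# A quick check (a sketch, not in the original notebook): the L1 penalty drives some
# coefficients exactly to zero, so count the nonzeros for each alpha.
for alpha in [30, 10, 1, .01]:
    print(alpha, np.sum(lasso_models[alpha].coef_ != 0))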
"""
Explanation: Lasso (L1 penalty)
End of explanation
"""
from figures import plot_linear_svc_regularization
plot_linear_svc_regularization()
"""
Explanation: Linear models for classification
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_ > 0
The influence of C in LinearSVC
End of explanation
"""
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
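# A quick check (a sketch, not in the original notebook): in this one-vs-rest setup
# the predicted class should agree with the argmax over the per-class decision values.
scores = linear_svm.decision_function(X[:5])
print(scores.shape)
print(scores.argmax(axis=1), linear_svm.predict(X[:5]))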
"""
Explanation: Multi-Class linear classification
End of explanation
"""
|
tuanavu/coursera-university-of-washington
|
machine_learning/2_regression/lecture/week5/.ipynb_checkpoints/Overfitting_Demo_Ridge_Lasso-checkpoint.ipynb
|
mit
|
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: Overfitting demo
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
End of explanation
"""
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
"""
Explanation: Create random values for x in interval [0,1)
End of explanation
"""
y = x.apply(lambda x: math.sin(4*x))
"""
Explanation: Compute y
End of explanation
"""
random.seed(1)
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
"""
Explanation: Add random Gaussian noise to y
End of explanation
"""
data = graphlab.SFrame({'X1':x,'Y':y})
data
"""
Explanation: Put data into an SFrame to manipulate later
End of explanation
"""
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
"""
Explanation: Create a function to plot the data, since we'll do it many times
End of explanation
"""
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
"""
Explanation: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree:
End of explanation
"""
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
"""
Explanation: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
End of explanation
"""
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
y_pred = model.predict(polynomial_features(x_pred,deg))
# plot predictions
plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
"""
Explanation: Define function to plot data and predictions made, since we are going to use it many times.
End of explanation
"""
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
"""
Explanation: Create a function that prints the polynomial coefficients in a pretty way :)
End of explanation
"""
model = polynomial_regression(data, deg=2)
"""
Explanation: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above:
End of explanation
"""
print_coefficients(model)
"""
Explanation: Inspect learned parameters
End of explanation
"""
plot_poly_predictions(data,model)
"""
Explanation: Form and plot our predictions along a grid of x values:
End of explanation
"""
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Fit a degree-4 polynomial
End of explanation
"""
model = polynomial_regression(data, deg=16)
print_coefficients(model)
"""
Explanation: Fit a degree-16 polynomial
End of explanation
"""
plot_poly_predictions(data,model)
"""
Explanation: Woah!!!! Those coefficients are crazy! On the order of 10^6.
End of explanation
"""
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
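# For intuition only (a NumPy sketch, not part of the GraphLab workflow): ignoring the
# intercept, ridge regression has the closed-form solution w = (X'X + lambda*I)^-1 X'y.
# X_mat and y_vec are hypothetical NumPy arrays, not variables defined above.
def ridge_closed_form(X_mat, y_vec, l2_penalty):
    n_features = X_mat.shape[1]
    A = X_mat.T.dot(X_mat) + l2_penalty * numpy.eye(n_features)
    return numpy.linalg.solve(A, X_mat.T.dot(y_vec))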
"""
Explanation: Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
#
#
Ridge Regression
Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\|w\|_2$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
Define our function to solve the ridge objective for a polynomial regression model of any degree:
End of explanation
"""
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
End of explanation
"""
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
End of explanation
"""
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print 'lambda = %.2e' % l2_penalty
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('Ridge, lambda = %.2e' % l2_penalty)
"""
Explanation: Let's look at fits for a sequence of increasing lambda values
End of explanation
"""
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
# Create polynomial features
    data = polynomial_features(data, deg)
# Create as many folds for cross validatation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
"""
Explanation: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
End of explanation
"""
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
"""
Explanation: Run LOO cross validation for "num" values of lambda, on a log scale
End of explanation
"""
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel('$\lambda$ (L2 penalty)')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
"""
Explanation: Plot results of estimating LOO for each value of lambda
End of explanation
"""
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
End of explanation
"""
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=3000, convergence_threshold=1e-10)
return model
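# For intuition only (a NumPy sketch, not part of the GraphLab workflow): the L1 penalty
# leads to soft-thresholding of coefficients, which is what sets some of them exactly to 0.
def soft_threshold(w, threshold):
    return numpy.sign(w) * numpy.maximum(numpy.abs(w) - threshold, 0.0)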
"""
Explanation: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\|w\|_1$.
Define our function to solve the lasso objective for a polynomial regression model of any degree:
End of explanation
"""
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
"""
Explanation: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty"
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.21/_downloads/ef89d1f7daeb4e357098461753c3af0f/plot_source_alignment.ipynb
|
bsd-3-clause
|
import os.path as op
import numpy as np
import nibabel as nib
from scipy import linalg
import mne
from mne.io.constants import FIFF
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
trans_fname = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
raw = mne.io.read_raw_fif(raw_fname)
trans = mne.read_trans(trans_fname)
src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem',
'sample-oct-6-src.fif'))
# load the T1 file and change the header information to the correct units
t1w = nib.load(op.join(data_path, 'subjects', 'sample', 'mri', 'T1.mgz'))
t1w = nib.Nifti1Image(t1w.dataobj, t1w.affine)
t1w.header['xyzt_units'] = np.array(10, dtype='uint8')
t1_mgh = nib.MGHImage(t1w.dataobj, t1w.affine)
"""
Explanation: Source alignment and coordinate frames
This tutorial shows how to visually assess the spatial alignment of MEG sensor
locations, digitized scalp landmark and sensor locations, and MRI volumes. This
alignment process is crucial for computing the forward solution, as is
understanding the different coordinate frames involved in this process.
:depth: 2
Let's start out by loading some data.
End of explanation
"""
fig = mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
subjects_dir=subjects_dir, surfaces='head-dense',
show_axes=True, dig=True, eeg=[], meg='sensors',
coord_frame='meg')
mne.viz.set_3d_view(fig, 45, 90, distance=0.6, focalpoint=(0., 0., 0.))
print('Distance from head origin to MEG origin: %0.1f mm'
% (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3])))
print('Distance from head origin to MRI origin: %0.1f mm'
% (1000 * np.linalg.norm(trans['trans'][:3, 3])))
dists = mne.dig_mri_distances(raw.info, trans, 'sample',
subjects_dir=subjects_dir)
print('Distance from %s digitized points to head surface: %0.1f mm'
% (len(dists), 1000 * np.mean(dists)))
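# For intuition (a NumPy sketch, not an MNE call): a 4x4 affine such as trans['trans']
# maps a point as p_new = R @ p + t, with R the 3x3 rotation and t the translation.
# point_head is a hypothetical point in head coordinates (meters).
point_head = np.array([0.01, 0.02, 0.03])
R, t = trans['trans'][:3, :3], trans['trans'][:3, 3]
print(R @ point_head + t)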
"""
Explanation: .. raw:: html
<style>
.pink {color:DarkSalmon; font-weight:bold}
.blue {color:DeepSkyBlue; font-weight:bold}
.gray {color:Gray; font-weight:bold}
.magenta {color:Magenta; font-weight:bold}
.purple {color:Indigo; font-weight:bold}
.green {color:LimeGreen; font-weight:bold}
.red {color:Red; font-weight:bold}
</style>
.. role:: pink
.. role:: blue
.. role:: gray
.. role:: magenta
.. role:: purple
.. role:: green
.. role:: red
Understanding coordinate frames
For M/EEG source imaging, there are three coordinate frames that must be
brought into alignment using two 3D transformation matrices <wiki_xform_>_
that define how to rotate and translate points in one coordinate frame
to their equivalent locations in another. The three main coordinate frames
are:
:blue:"meg": the coordinate frame for the physical locations of MEG
sensors
:gray:"mri": the coordinate frame for MRI images, and scalp/skull/brain
surfaces derived from the MRI images
:pink:"head": the coordinate frame for digitized sensor locations and
scalp landmarks ("fiducials")
Each of these are described in more detail in the next section.
A good way to start visualizing these coordinate frames is to use the
mne.viz.plot_alignment function, which is used for creating or inspecting
the transformations that bring these coordinate frames into alignment, and
displaying the resulting alignment of EEG sensors, MEG sensors, brain
sources, and conductor models. If you provide subjects_dir and
subject parameters, the function automatically loads the subject's
Freesurfer MRI surfaces. Important for our purposes, passing
show_axes=True to ~mne.viz.plot_alignment will draw the origin of each
coordinate frame in a different color, with axes indicated by different sized
arrows:
shortest arrow: (R)ight / X
medium arrow: forward / (A)nterior / Y
longest arrow: up / (S)uperior / Z
Note that all three coordinate systems are RAS coordinate frames and
hence are also right-handed_ coordinate systems. Finally, note that the
coord_frame parameter sets which coordinate frame the camera
should initially be aligned with. Let's take a look:
End of explanation
"""
mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src,
subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
"""
Explanation: Coordinate frame definitions
Neuromag/Elekta/MEGIN head coordinate frame ("head", :pink:pink axes)
The head coordinate frame is defined through the coordinates of
anatomical landmarks on the subject's head: usually the Nasion (NAS),
and the left and right preauricular points (LPA and RPA).
Different MEG manufacturers may have different definitions of the head
coordinate frame. A good overview can be seen in the
FieldTrip FAQ on coordinate systems.
For Neuromag/Elekta/MEGIN, the head coordinate frame is defined by the
intersection of
the line between the LPA (:red:red sphere) and RPA
(:purple:purple sphere), and
the line perpendicular to this LPA-RPA line that goes through
the Nasion (:green:green sphere).
The axes are oriented as X origin→RPA, Y origin→NAS,
Z origin→upward (orthogonal to X and Y).
.. note:: The required 3D coordinates for defining the head coordinate
frame (NAS, LPA, RPA) are measured at a stage separate from
the MEG data recording. There exist numerous devices to
perform such measurements, usually called "digitizers". For
example, see the devices by the company Polhemus_.
MEG device coordinate frame ("meg", :blue:blue axes)
The MEG device coordinate frame is defined by the respective MEG
manufacturers. All MEG data is acquired with respect to this coordinate
frame. To account for the anatomy and position of the subject's head, we
use so-called head position indicator (HPI) coils. The HPI coils are
placed at known locations on the scalp of the subject and emit
high-frequency magnetic fields used to coregister the head coordinate
frame with the device coordinate frame.
From the Neuromag/Elekta/MEGIN user manual:
The origin of the device coordinate system is located at the center
of the posterior spherical section of the helmet with X axis going
from left to right and Y axis pointing front. The Z axis is, again
normal to the plane with positive direction up.
.. note:: The HPI coils are shown as :magenta:magenta spheres.
Coregistration happens at the beginning of the recording and
the head↔meg transformation matrix is stored in
raw.info['dev_head_t'].
MRI coordinate frame ("mri", :gray:gray axes)
Defined by Freesurfer, the "MRI surface RAS" coordinate frame has its
origin at the center of a 256×256×256 1mm isotropic volume (though the
center may not correspond to the anatomical center of the subject's
head).
.. note:: We typically align the MRI coordinate frame to the head
coordinate frame through a
rotation and translation matrix <wiki_xform_>_,
that we refer to in MNE as trans.
A bad example
Let's try using ~mne.viz.plot_alignment with trans=None, which
(incorrectly!) equates the MRI and head coordinate frames.
End of explanation
"""
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
src=src, subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
"""
Explanation: A good example
Here is the same plot, this time with the trans properly defined
(using a precomputed transformation matrix).
End of explanation
"""
# the head surface is stored in "mri" coordinate frame
# (origin at center of volume, units=mm)
seghead_rr, seghead_tri = mne.read_surface(
op.join(subjects_dir, 'sample', 'surf', 'lh.seghead'))
# to put the scalp in the "head" coordinate frame, we apply the inverse of
# the precomputed `trans` (which maps head → mri)
mri_to_head = linalg.inv(trans['trans'])
scalp_pts_in_head_coord = mne.transforms.apply_trans(
mri_to_head, seghead_rr, move=True)
# to put the scalp in the "meg" coordinate frame, we use the inverse of
# raw.info['dev_head_t']
head_to_meg = linalg.inv(raw.info['dev_head_t']['trans'])
scalp_pts_in_meg_coord = mne.transforms.apply_trans(
head_to_meg, scalp_pts_in_head_coord, move=True)
# The "mri_voxel"→"mri" transform is embedded in the header of the T1 image
# file. We'll invert it and then apply it to the original `seghead_rr` points.
# No unit conversion necessary: this transform expects mm and the scalp surface
# is defined in mm.
vox_to_mri = t1_mgh.header.get_vox2ras_tkr()
mri_to_vox = linalg.inv(vox_to_mri)
scalp_points_in_vox = mne.transforms.apply_trans(
mri_to_vox, seghead_rr, move=True)
"""
Explanation: Visualizing the transformations
Let's visualize these coordinate frames using just the scalp surface; this
will make it easier to see their relative orientations. To do this we'll
first load the Freesurfer scalp surface, then apply a few different
transforms to it. In addition to the three coordinate frames discussed above,
we'll also show the "mri_voxel" coordinate frame. Unlike MRI Surface RAS,
"mri_voxel" has its origin in the corner of the volume (the left-most,
posterior-most coordinate on the inferior-most MRI slice) instead of at the
center of the volume. "mri_voxel" is also not an RAS coordinate system:
rather, its XYZ directions are based on the acquisition order of the T1 image
slices.
End of explanation
"""
def add_head(renderer, points, color, opacity=0.95):
renderer.mesh(*points.T, triangles=seghead_tri, color=color,
opacity=opacity)
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(600, 600), bgcolor='w', scene=False)
add_head(renderer, seghead_rr, 'gray')
add_head(renderer, scalp_pts_in_meg_coord, 'blue')
add_head(renderer, scalp_pts_in_head_coord, 'pink')
add_head(renderer, scalp_points_in_vox, 'green')
mne.viz.set_3d_view(figure=renderer.figure, distance=800,
focalpoint=(0., 30., 30.), elevation=105, azimuth=180)
renderer.show()
"""
Explanation: Now that we've transformed all the points, let's plot them. We'll use the
same colors used by ~mne.viz.plot_alignment and use :green:green for the
"mri_voxel" coordinate frame:
End of explanation
"""
# get the nasion
nasion = [p for p in raw.info['dig'] if
p['kind'] == FIFF.FIFFV_POINT_CARDINAL and
p['ident'] == FIFF.FIFFV_POINT_NASION][0]
assert nasion['coord_frame'] == FIFF.FIFFV_COORD_HEAD
nasion = nasion['r'] # get just the XYZ values
# transform it from head to MRI space (recall that `trans` is head → mri)
nasion_mri = mne.transforms.apply_trans(trans, nasion, move=True)
# then transform to voxel space, after converting from meters to millimeters
nasion_vox = mne.transforms.apply_trans(
mri_to_vox, nasion_mri * 1e3, move=True)
# plot it to make sure the transforms worked
renderer = mne.viz.backends.renderer.create_3d_figure(
size=(400, 400), bgcolor='w', scene=False)
add_head(renderer, scalp_points_in_vox, 'green', opacity=1)
renderer.sphere(center=nasion_vox, color='orange', scale=10)
mne.viz.set_3d_view(figure=renderer.figure, distance=600.,
focalpoint=(0., 125., 250.), elevation=45, azimuth=180)
renderer.show()
"""
Explanation: The relative orientations of the coordinate frames can be inferred by
observing the direction of the subject's nose. Notice also how the origin of
the :green:mri_voxel coordinate frame is in the corner of the volume
(above, behind, and to the left of the subject), whereas the other three
coordinate frames have their origin roughly in the center of the head.
Example: MRI defacing
For a real-world example of using these transforms, consider the task of
defacing the MRI to preserve subject anonymity. If you know the points in
the "head" coordinate frame (as you might if you're basing the defacing on
digitized points) you would need to transform them into "mri" or "mri_voxel"
in order to apply the blurring or smoothing operations to the MRI surfaces or
images. Here's what that would look like (we'll use the nasion landmark as a
representative example):
End of explanation
"""
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
"""
Explanation: Defining the head↔MRI trans using the GUI
You can try creating the head↔MRI transform yourself using
:func:mne.gui.coregistration.
First you must load the digitization data from the raw file
(Head Shape Source). The MRI data is already loaded if you provide the
subject and subjects_dir. Toggle Always Show Head Points to see
the digitization points.
To set the landmarks, toggle Edit radio button in MRI Fiducials.
Set the landmarks by clicking the radio button (LPA, Nasion, RPA) and then
clicking the corresponding point in the image.
After doing this for all the landmarks, toggle Lock radio button. You
can omit outlier points, so that they don't interfere with the finetuning.
.. note:: You can save the fiducials to a file and pass
mri_fiducials=True to plot them in
:func:mne.viz.plot_alignment. The fiducials are saved to the
subject's bem folder by default.
* Click Fit Head Shape. This will align the digitization points to the
head surface. Sometimes the fitting algorithm doesn't find the correct
alignment immediately. You can try first fitting using LPA/RPA or fiducials
and then align according to the digitization. You can also finetune
manually with the controls on the right side of the panel.
* Click Save As... (lower right corner of the panel), set the filename
and read it with :func:mne.read_trans.
For more information, see step by step instructions
in these slides
<https://www.slideshare.net/mne-python/mnepython-coregistration>_.
Uncomment the following line to align the data yourself.
End of explanation
"""
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')
src = mne.setup_volume_source_space(sphere=sphere, pos=10.)
mne.viz.plot_alignment(
raw.info, eeg='projected', bem=sphere, src=src, dig=True,
surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True)
"""
Explanation: Alignment without MRI
The surface alignments above are possible if you have the surfaces available
from Freesurfer. :func:mne.viz.plot_alignment automatically searches for
the correct surfaces from the provided subjects_dir. Another option is
to use a spherical conductor model <eeg_sphere_model>. It is
passed through bem parameter.
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/tutorials/keras/classification.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
# TensorFlow and tf.keras
import tensorflow as tf
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
"""
Explanation: Your first neural network: the basics of classification
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/keras/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This guide trains a neural network model to classify images of clothing, such as sneakers and shirts. It's fine if you don't understand every detail; this is a fast-paced overview of a complete TensorFlow program, with the details explained as you go.
This guide uses tf.keras, a high-level API for building and training models in TensorFlow.
End of explanation
"""
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
"""
Explanation: Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td> <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> </td></tr>
<tr><td align="center"> <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License)<br> </td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset, which is often used as the "Hello, World" of machine learning for image processing. The MNIST dataset contains handwritten digits (0, 1, 2, and so on) in a format identical to the Fashion MNIST used here.
This guide uses Fashion MNIST partly for variety, and partly because it is slightly more challenging than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They are good starting points for testing and debugging code.
Here, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:
End of explanation
"""
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
"""
Explanation: Loading the dataset returns NumPy arrays.
The train_images and train_labels arrays are the training set, the data the model uses to learn.
The model is tested against the test set: the test_images and test_labels arrays.
The images are 28×28 NumPy arrays, with pixel values ranging from 0 to 255. The labels are an array of integers, ranging from 0 to 9. Each integer corresponds to a class of clothing, as shown in the table below.
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images.
End of explanation
"""
train_images.shape
"""
Explanation: Explore the data
Let's explore the format of the dataset before training the model. The following shows that there are 60,000 images in the training set, each 28 x 28 pixels.
End of explanation
"""
len(train_labels)
"""
Explanation: Likewise, there are 60,000 labels in the training set.
End of explanation
"""
train_labels
"""
Explanation: Each label is an integer between 0 and 9.
End of explanation
"""
test_images.shape
"""
Explanation: There are 10,000 images in the test set, each again 28 x 28 pixels.
End of explanation
"""
len(test_labels)
"""
Explanation: And the test set contains 10,000 labels.
End of explanation
"""
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
"""
Explanation: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image, you will see that the pixel values fall in the range 0 to 255.
End of explanation
"""
train_images = train_images / 255.0
test_images = test_images / 255.0
"""
Explanation: Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the training set and the test set are preprocessed in the same way.
End of explanation
"""
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
"""
Explanation: Let's display the first 25 images from the training set with their class names, to verify that the data is in the correct format before building and training the network.
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
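# Optional quick check (a sketch, not in the original text): list the layers and
# their parameter counts.
model.summary()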
"""
Explanation: Build the model
Building a neural network requires configuring the layers of the model, then compiling the model.
Set up the layers
The basic building block of a neural network is the layer. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most deep learning models consist of chaining together simple layers. Most layers, such as tf.keras.layers.Dense, have parameters that are learned during training.
End of explanation
"""
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
"""
Explanation: The first layer in this network, tf.keras.layers.Flatten, transforms the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array of 28*28=784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of two tf.keras.layers.Dense layers. These are densely connected, or fully connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer returns a logits array of length 10. Each node outputs a score for the current image belonging to one of the 10 classes.
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's compile step.
Loss function: This measures how accurate the model is during training. You want to minimize this function to "steer" the model in the right direction.
Optimizer: This is how the model is updated based on the data it sees and its loss function.
Metrics: Used to monitor the training and testing steps. The following example uses accuracy, the fraction of images that are correctly classified.
End of explanation
"""
model.fit(train_images, train_labels, epochs=10)
"""
Explanation: Train the model
Training the neural network model requires the following steps.
Feed the training data to the model. In this example, the training data is in the train_images and train_labels arrays.
The model learns to associate images and labels.
Ask the model to make predictions (classifications) about a test set, in this example the test_images array, then compare the predictions against the test_labels array.
Verify that the predictions match the labels in the test_labels array.
Feed the model
To start training, call the model.fit method.
End of explanation
"""
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
"""
Explanation: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.91 (or 91%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset.
End of explanation
"""
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
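# A sanity check (a sketch, not in the original text): applying a softmax to the raw
# logits of the first test image should reproduce the first row of `predictions`.
logits = model.predict(test_images[:1])
print(np.allclose(tf.nn.softmax(logits).numpy(), predictions[:1]))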
"""
Explanation: As you can see, the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of overfitting. Overfitting happens when a machine learning model performs worse on new data than it did during training. An overfitted model "memorizes" the noise and details of the training dataset, which hurts the model's performance on new data. For more information, see the following:
Demonstrate overfitting
Strategies to prevent overfitting
Make predictions
With the model trained, you can use it to make predictions about some images. Attach a softmax layer to convert the model's linear outputs (logits) to probabilities, which are easier to interpret.
End of explanation
"""
predictions[0]
"""
Explanation: Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction.
End of explanation
"""
np.argmax(predictions[0])
"""
Explanation: A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. Let's see which label has the highest confidence value.
End of explanation
"""
test_labels[0]
"""
Explanation: So the model is most confident that this image is an ankle boot, class_names[9]. Let's check the test label to see whether that is correct.
End of explanation
"""
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
"""
Explanation: Graph this to look at the full set of 10 class predictions.
End of explanation
"""
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
"""
Explanation: Verify predictions
With the model trained, you can use it to make predictions about some images.
Let's look at the 0th image, its prediction, and the prediction array. Correct prediction labels are blue, and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
End of explanation
"""
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
"""
Explanation: Let's plot several images with their predictions. Note that the model can be wrong even when it is very confident.
End of explanation
"""
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
"""
Explanation: Use the trained model
Finally, use the trained model to make a prediction about a single image.
End of explanation
"""
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
"""
Explanation: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list.
End of explanation
"""
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
plt.show()
"""
Explanation: Now make a prediction for this image.
End of explanation
"""
np.argmax(predictions_single[0])
"""
Explanation: tf.keras.Model.predict returns a list of lists, one for each image in the batch of data. Grab the prediction for our (only) image in the batch.
End of explanation
"""
|
saudijack/unfpyboot
|
Day_00/02_Strings_and_FileIO/00 Strings in Python.ipynb
|
mit
|
s1 = 'Godzilla'
print s1, s1.upper(), s1
"""
Explanation: Strings in Python
What is a string?
A "string" is a series of characters of arbitrary length.
Strings are immutable - they cannot be changed once created. When you modify a string, you automatically make a copy and modify the copy.
End of explanation
"""
"Godzilla"
"""
Explanation: String literals
A "literal" is essentially a string constant, already spelled out for you. Python uses either on output, but that's just for formatting simplicity.
End of explanation
"""
"Godzilla's a kaiju."
'Godzilla\'s a kaiju.'
'We call him... "Godzilla".'
"""
Explanation: Single and double quotes
Generally, a string literal can be in single ('), double ("), or triple (''') quotes. Single and double quotes are equivalent - use whichever you prefer (but be consistent). If you need to have a single or double quote in your literal, surround your literal with the other type, or use the backslash to escape the quote.
End of explanation
"""
print('This is a\ncomplicated string with newline escapes in it.')
print(r'This is a\ncomplicated string with newline escapes in it.')
"""
Explanation: Triple quotes (''')
Triple quotes are a special form of quoting used for documenting your Python files (docstrings). We won't discuss that type here.
Raw strings
Raw strings don't apply any escape-character interpretation. Use them when you have a complicated string that you don't want to clutter with lots of backslashes: prefix the literal with r and the backslashes are left exactly as typed.
End of explanation
"""
x=int('122', 3)
x+1
"""
Explanation: Strings and numbers
End of explanation
"""
kaiju = 'Godzilla'
print(kaiju)
kaiju
"""
Explanation: String objects
String objects are just the string variables you create in Python.
End of explanation
"""
repr(kaiju)
print(repr(kaiju))
"""
Explanation: Note the print() call shows no quotes, while the simple variable name did. That is a Python output convention. Just entering the name calls the repr() method, which displays the value of the argument as Python would need to read it back in, not necessarily as the user wants to see it.
End of explanation
"""
one = 1
two = '2'
print one, two, one + two
one = 1
two = int('2')
print one, two, one + two
num1 = 1.1
num2 = float('2.2')
print num1, num2, num1 + num2
"""
Explanation: String operators
When you read text from a file, it's just that - text. No matter what the data represents, it's still text. To use it as a number, you have to explicitly convert it to a number.
End of explanation
"""
print int('FF', 16)
print int('0xff', 16)
print int('777', 8)
print int('0777', 8)
print int('222', 7)
print int('110111001', 2)
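# Going the other way (a sketch, not in the original notebook): format integers as
# hexadecimal, octal, or binary strings.
print(hex(255))          # 0xff
print(oct(255))          # 0377 in Python 2, 0o377 in Python 3
print(format(255, 'b'))  # 11111111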
"""
Explanation: You can also do this with hexadecimal and octal numbers, or any other base, for that matter.
End of explanation
"""
print int('0xGG', 16)
"""
Explanation: If the conversion cannot be done, an exception is thrown.
End of explanation
"""
kaiju1 = 'Godzilla'
kaiju2 = 'Mothra'
kaiju1 + ' versus ' + kaiju2
"""
Explanation: Concatenation
End of explanation
"""
'Run away! ' * 3
"""
Explanation: Repetition
End of explanation
"""
'Godzilla' in 'Godzilla vs Gamera'
"""
Explanation: String keywords
in()
NOTE: This particular statement is false regardless of how the statement is evaluated! :^)
End of explanation
"""
len(kaiju)
"""
Explanation: String functions
len()
End of explanation
"""
kaiju.capitalize()
kaiju.lower()
kaiju.upper()
kaiju.swapcase()
'godzilla, king of the monsters'.title()
"""
Explanation: String methods
Remember - methods are functions attached to objects, accessed via the 'dot' notation.
Basic formatting and manipulation
capitalize()/lower()/upper()/swapcase()/title()
End of explanation
"""
kaiju.center(20, '*')
kaiju.ljust(20, '*')
kaiju.rjust(20, '*')
"""
Explanation: center()/ljust()/rjust()
End of explanation
"""
tabbed_kaiju = '\tGodzilla'
print('[' + tabbed_kaiju + ']')
print('[' + tabbed_kaiju.expandtabs(16) + ']')
"""
Explanation: expandtabs()
End of explanation
"""
' vs '.join(['Godzilla', 'Hedorah'])
','.join(['Godzilla', 'Mothra', 'King Ghidorah'])
"""
Explanation: join()
End of explanation
"""
' Godzilla '.strip()
'xxxGodzillayyy'.strip('xy')
' Godzilla '.lstrip()
' Godzilla '.rstrip()
"""
Explanation: strip()/lstrip()/rstrip()
End of explanation
"""
battle = 'Godzilla x Gigan'
battle.partition(' x ')
battle = 'Godzilla and Jet Jaguar vs. Gigan and Megalon'
battle.partition(' vs. ')
battle = 'Godzilla vs Megalon vs Jet Jaguar'
battle.partition('vs')
battle = 'Godzilla vs Megalon vs Jet Jaguar'
battle.rpartition('vs')
"""
Explanation: partition()/rpartition()
End of explanation
"""
battle = 'Godzilla vs Mothra'
battle.replace('Mothra', 'Anguiras')
battle = 'Godzilla vs a monster and another monster'
battle.replace('monster', 'kaiju', 2)
battle = 'Godzilla vs a monster and another monster and yet another monster'
battle.replace('monster', 'kaiju', 2)
"""
Explanation: replace()
End of explanation
"""
battle = 'Godzilla vs King Ghidorah vs Mothra'
battle.split(' vs ')
kaijus = 'Godzilla,Mothra,King Ghidorah'
kaijus.split(',')
kaijus = 'Godzilla Mothra King Ghidorah'
kaijus.split()
kaijus = 'Godzilla,Mothra,King Ghidorah,Megalon'
kaijus.rsplit(',', 2)
"""
Explanation: split()/rsplit()
End of explanation
"""
kaijus_in_lines = 'Godzilla\nMothra\nKing Ghidorah\nEbirah'
print(kaijus_in_lines)
kaijus_in_lines.splitlines()
kaijus_in_lines.splitlines(True)
"""
Explanation: splitlines()
End of explanation
"""
age_of_Godzilla = 60
age_string = str(age_of_Godzilla)
print(age_string, age_string.zfill(5))
"""
Explanation: zfill()
End of explanation
"""
print('Godzilla'.isalnum())
print('*Godzilla*'.isalnum())
print('Godzilla123'.isalnum())
print('Godzilla'.isalpha())
print('Godzilla123'.isalpha())
print('Godzilla'.isdigit())
print('60'.isdigit())
print('SpaceGodzilla'.isspace())
print(' '.isspace())
print('Godzilla'.islower())
print('godzilla'.islower())
print('Godzilla'.isupper())
print('GODZILLA'.isupper())
print('Godzilla vs Mothra'.istitle())
print('Godzilla X Mothra'.istitle())
"""
Explanation: String information
isXXX()
End of explanation
"""
monsters = 'Godzilla and Space Godzilla and MechaGodzilla'
print 'There are ', monsters.count('Godzilla'), ' Godzillas.'
print 'There are ', monsters.count('Godzilla', len('Godzilla')), ' pseudo-Godzillas.'
"""
Explanation: count()
End of explanation
"""
king_kaiju = 'Godzilla'
print king_kaiju.startswith('God')
print king_kaiju.endswith('lla')
print king_kaiju.startswith('G')
print king_kaiju.endswith('amera')
"""
Explanation: startswith()/endswith()
End of explanation
"""
kaiju_string = 'Godzilla,Gamera,Gorgo,Space Godzilla'
print 'The first Godz is at position', kaiju_string.find('Godz')
print 'The second Godz is at position', kaiju_string.find('Godz', len('Godz'))
kaiju_string.index('Minilla')
kaiju_string.rindex('Godzilla')
"""
Explanation: find()/index()/rfind()/rindex()
End of explanation
"""
kaiju = 'Godzilla'
age = 60
print '%s is %d years old.' % (kaiju, age)
"""
Explanation: Advanced features
decode()/encode()/translate()
Used to convert strings to/from Unicode and other systems. Rarely used in science code.
String formatting
Similar to formatting in C, FORTRAN, etc.. There is a lot more to this than I am showing here.
End of explanation
"""
import string
print string.ascii_letters
print string.ascii_lowercase
print string.ascii_uppercase
print string.digits
print string.hexdigits
print string.octdigits
print string.letters
print string.lowercase
print string.uppercase
print string.printable
print string.punctuation
print string.whitespace
"""
Explanation: The string module
The string module is the Python equivalent of "junk DNA" in living organisms. It's been around since the beginning, but many of its functions have been superseded by evolution. But some ancient code still relies on it, so they leave the old parts in....
For modern code, the string module does have some useful constants and functions.
End of explanation
"""
import re
"""
Explanation: The string module also provides the Formatter class, which can be useful for sophisticated text formatting.
Regular Expressions
What is a regular expression?
Regular expressions ('regexps') are essentially a mini-language for describing string operations. Everything shown above with string methods and operators can be done with regular expressions. Most of the time, the regular expression version is more concise - but not always more readable....
To use regular expressions, you have to import the 're' module.
End of explanation
"""
kaiju_truth = 'Godzilla is the King of the Monsters. Ebirah is also a monster, but looks like a giant lobster.'
re.findall('Godz', kaiju_truth)
print re.findall('(^.+) is the King', kaiju_truth)
"""
Explanation: A very short, whirlwind tour of regular expressions
Scanning
End of explanation
"""
print re.findall('\. (.+) is also', kaiju_truth)
print re.findall('(.+) is also a (.+)', kaiju_truth)[0]
print re.findall('\. (.+) is also a (.+),', kaiju_truth)[0]
"""
Explanation: For simple searches like this, using the in operator is typically easier.
Regexps are by default case-sensitive.
End of explanation
"""
some_kaiju = 'Godzilla, Space Godzilla, Mechagodzilla'
print re.sub('Godzilla', 'Gamera', some_kaiju)
print re.sub('(?i)Godzilla', 'Gamera', some_kaiju)
"""
Explanation: Changing
End of explanation
"""
|
matthiaskoenig/sbmlutils
|
docs_builder/notebooks/sbml_distrib.ipynb
|
lgpl-3.0
|
%load_ext autoreload
%autoreload 2
from notebook_utils import print_xml
from sbmlutils.factory import *
from sbmlutils.validation import validate_doc
"""
Explanation: SBML distrib
The following examples demonstrate the creation of SBML models with SBML distrib information.
End of explanation
"""
class U(Units):
"""UnitDefinitions."""
hr = UnitDefinition("hr")
m2 = UnitDefinition("m2", "meter^2")
mM = UnitDefinition("mM", "mmole/liter")
# model definition
model = Model(
'distrib_assignment',
packages= ['distrib'],
units=U,
model_units= ModelUnits(
time=U.hr, extent=U.mole, substance=U.mole,
length=U.meter, area=U.m2, volume=U.liter),
parameters= [
Parameter(sid="p1", value=0.0, unit=U.mM)
],
assignments= [
InitialAssignment('p1', 'normal(0 mM, 1 mM)'),
]
)
# create model and print SBML
doc = Document(model)
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Assigning a distribution to a parameter
Here we create a parameter $$p_1 = 0.0$$ and assign its initial value from a normal distribution with mean 0 and standard deviation 1:
$$p_1 \sim \mathcal{N}(0, 1)$$
End of explanation
"""
model = Model(
'normal',
packages=['distrib'],
objects=[
Parameter('y', value=1.0),
Parameter('z', value=1.0),
InitialAssignment('y', 'normal(z, 10)'),
]
)
# create model and print SBML
doc = Document(model)
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Using a normal distribution
In this example, the initial value of y is set as a draw from the normal distribution normal(z,10):
End of explanation
"""
model = Model(
'truncated_normal',
packages = ['distrib'],
objects = [
Parameter('y', value=1.0),
Parameter('z', value=1.0),
InitialAssignment('y', 'normal(z, 10, z-2, z+2)'),
]
)
# create model and print SBML
doc = Document(model)
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Defining a truncated normal distribution
When used with four arguments instead of two, the normal distribution is truncated to normal(z, 10, z-2, z+2). This use would apply a draw from a normal distribution with mean z, standard deviation 10, lower bound z-2 (inclusive) and upper bound z+2 (not inclusive) to the SBML symbol y.
End of explanation
"""
model = Model(
'conditional_events',
packages=['distrib'],
objects=[
Parameter('x', value=1.0, constant=False),
Event(
"E0",
trigger="time>2 && x<1",
priority="uniform(0, 1)",
trigger_initialValue=True, trigger_persistent=False,
assignments={"x": "3"}
),
Event(
"E1",
trigger="time>2 && x<1",
priority="uniform(0, 2)",
trigger_initialValue=True, trigger_persistent=False,
assignments={"x": "5"}
)
]
)
# create model and print SBML
doc = Document(model)
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Defining conditional events
Simultaneous events in SBML are ordered based on their Priority values, with higher values being executed first and potentially cancelling events that would fire after them. In this example, two simultaneous events have priorities set with csymbols defined in distrib. The event E0 has a priority of uniform(0,1), while the event E1 has a priority of uniform(0,2). E1's draw beats E0's 75% of the time: half the time E1's draw falls in (1,2), where it always wins, and in the other half both draws lie in (0,1) and E1 wins half of those, giving 1/2 + 1/2*1/2 = 3/4. In those cases E1 fires first and assigns a value of 5 to parameter x. Because this negates the trigger condition of E0, whose trigger is declared persistent="false", E0 never fires and the value of x remains at 5. The remaining 25% of the time the reverse happens, with E0 setting the value of x to 3 instead.
End of explanation
"""
model = Model(
'all_distributions',
packages = ['distrib'],
objects = [
InitialAssignment('p_normal_1', 'normal(0, 1)'),
InitialAssignment('p_normal_2', 'normal(0, 1, 0, 10)'),
InitialAssignment('p_uniform', 'uniform(5, 10)'),
InitialAssignment('p_bernoulli', 'bernoulli(0.4)'),
InitialAssignment('p_binomial_1', 'binomial(100, 0.3)'),
InitialAssignment('p_binomial_2', 'binomial(100, 0.3, 0, 2)'),
InitialAssignment('p_cauchy_1', 'cauchy(0, 1)'),
InitialAssignment('p_cauchy_2', 'cauchy(0, 1, 0, 5)'),
InitialAssignment('p_chisquare_1', 'chisquare(10)'),
InitialAssignment('p_chisquare_2', 'chisquare(10, 0, 10)'),
InitialAssignment('p_exponential_1', 'exponential(1.0)'),
InitialAssignment('p_exponential_2', 'exponential(1.0, 0, 10)'),
InitialAssignment('p_gamma_1', 'gamma(0, 1)'),
InitialAssignment('p_gamma_2', 'gamma(0, 1, 0, 10)'),
InitialAssignment('p_laplace_1', 'laplace(0, 1)'),
InitialAssignment('p_laplace_2', 'laplace(0, 1, 0, 10)'),
InitialAssignment('p_lognormal_1', 'lognormal(0, 1)'),
InitialAssignment('p_lognormal_2', 'lognormal(0, 1, 0, 10)'),
InitialAssignment('p_poisson_1', 'poisson(0.5)'),
InitialAssignment('p_poisson_2', 'poisson(0.5, 0, 10)'),
InitialAssignment('p_raleigh_1', 'rayleigh(0.5)'),
InitialAssignment('p_raleigh_2', 'rayleigh(0.5, 0, 10)'),
]
)
# create model and print SBML
doc = Document(model)
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Overview of all distributions
The following gives an example how to use all of the various distributions
End of explanation
"""
import libsbml
md: ModelDict = {
'sid': 'basic_example_1',
'packages': ['distrib'],
'compartments': [
Compartment("C", value=1.0)
],
'species': [
Species(sid="s1", compartment="C", initialAmount=3.22,
uncertainties=[
Uncertainty(uncertParameters=[
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_STANDARDDEVIATION, value=0.3)
])
])
],
}
# create model and print SBML
doc = Document(Model(**md))
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Basic uncertainty example
Here, the species with an initial amount of 3.22 is described as having a standard deviation of 0.3, a value that might
be written as 3.22 +- 0.3.
End of explanation
"""
import libsbml
md: ModelDict = {
'sid': 'basic_example_2',
'packages': ['distrib'],
'compartments': [
Compartment("C", value=1.0)
],
'species': [
Species(sid="s1", compartment="C", initialAmount=3.22,
uncertainties=[
Uncertainty(uncertParameters=[
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_MEAN, value=3.2),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_STANDARDDEVIATION, value=0.3),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_VARIANCE, value=0.09),
])
])
],
}
# create model and print SBML
doc = Document(Model(**md))
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: It is also possible to include additional information about the species, should more be known. In this example, the initial amount of 3.22 is noted as having a mean of 3.2, a standard deviation of 0.3, and a variance
of 0.09.
End of explanation
"""
import libsbml
class U(Units):
hr = UnitDefinition("hr")
m2 = UnitDefinition("m2", "meter^2")
mM = UnitDefinition("mM", "mmole/liter")
md: ModelDict = {
'sid': 'multiple_uncertainties',
'packages': ['distrib'],
'units': U,
'model_units': ModelUnits(time=U.hr, extent=U.mole, substance=U.mole,
length=U.meter, area=U.m2, volume=U.liter),
'parameters': [
Parameter(sid="p1", value=5.0, unit=U.mM,
uncertainties=[
Uncertainty('p1_uncertainty_1', uncertParameters=[
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_MEAN, value=5.0, unit=U.mM),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_STANDARDDEVIATION, value=0.3, unit=U.mM),
UncertSpan(type=libsbml.DISTRIB_UNCERTTYPE_RANGE, valueLower=2.0, valueUpper=8.0, unit=U.mM),
]),
Uncertainty('p1_uncertainty_2', uncertParameters=[
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_MEAN, value=4.5, unit=U.mM),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_STANDARDDEVIATION, value=1.1, unit=U.mM),
UncertSpan(type=libsbml.DISTRIB_UNCERTTYPE_RANGE, valueLower=1.0, valueUpper=10.0, unit=U.mM),
])
])
],
'assignments': [
InitialAssignment('p1', 'normal(0 mM, 1 mM)'),
]
}
# create model and print SBML
doc = Document(Model(**md))
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Multiple uncertainties
The following gives an example how to encode multiple uncertainties for a parameter.
Here the two uncertainties
5.0 (mean) +- 0.3 (std) [2.0 - 8.0]
and
4.5 (mean) +- 1.1 (std) [1.0 - 10.0]
are set.
End of explanation
"""
import libsbml
md: ModelDict = {
'sid': 'random_variable',
'packages': ['distrib'],
'parameters': [
Parameter("shape_Z", value=10.0),
Parameter("scale_Z", value=0.1),
Parameter("Z", value=0.1,
uncertainties=[
Uncertainty(formula="gamma(shape_Z, scale_Z)",
uncertParameters=[
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_MEAN, value=1.03),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_VARIANCE, value=0.97),
])
])
]
}
# create model and print SBML
doc = Document(Model(**md))
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Defining a random variable
In addition to describing the uncertainty about an experimental observation one can also use this mechanism
to describe a parameter as a random variable.
End of explanation
"""
import libsbml
md: ModelDict = {
'sid': 'parameters_spans',
'packages': ['distrib'],
'parameters': [
Parameter("p",
uncertainties=[
Uncertainty(
formula="normal(0, 1)", # distribution
uncertParameters=[
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_COEFFIENTOFVARIATION, value=1.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_KURTOSIS, value=2.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_MEAN, value=3.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_MEDIAN, value=4.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_MODE, value=5.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_SAMPLESIZE, value=6.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_SKEWNESS, value=7.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_STANDARDDEVIATION, value=8.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_STANDARDERROR, value=9.0),
UncertParameter(type=libsbml.DISTRIB_UNCERTTYPE_VARIANCE, value=10.0),
UncertSpan(type=libsbml.DISTRIB_UNCERTTYPE_CONFIDENCEINTERVAL, valueLower=1.0, valueUpper=2.0),
UncertSpan(type=libsbml.DISTRIB_UNCERTTYPE_CREDIBLEINTERVAL, valueLower=2.0, valueUpper=3.0),
UncertSpan(type=libsbml.DISTRIB_UNCERTTYPE_INTERQUARTILERANGE, valueLower=3.0, valueUpper=4.0),
UncertSpan(type=libsbml.DISTRIB_UNCERTTYPE_RANGE, valueLower=4.0, valueUpper=5.0),
])
])
]
}
# create model and print SBML
doc = Document(Model(**md))
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Overview over UncertParameters and UncertSpans
The following example provides an overview over the available fields.
End of explanation
"""
import libsbml
from sbmlutils.metadata import *
class U(Units):
hr = UnitDefinition("hr")
m2 = UnitDefinition("m2", "meter^2")
mM = UnitDefinition("mM", "mmole/liter")
md: ModelDict = {
'sid': 'sabiork_parameter',
'packages': ['distrib'],
'units': U,
'model_units': ModelUnits(time=U.hr, extent=U.mole,
substance=U.mole,
length=U.meter, area=U.m2,
volume=U.liter),
'parameters': [
Parameter(
sid="Km_glc", name="Michelis-Menten constant glucose",
value=5.0, unit=U.mM, sboTerm=SBO.MICHAELIS_CONSTANT,
uncertainties=[
Uncertainty(
sid="uncertainty1",
uncertParameters=[
UncertParameter(
type=libsbml.DISTRIB_UNCERTTYPE_MEAN,
value=5.07),
UncertParameter(
type=libsbml.DISTRIB_UNCERTTYPE_STANDARDDEVIATION,
value=0.97),
], annotations=[
(BQB.IS, "sabiork.kineticrecord/793"), # entry in SABIO-RK
(BQB.HAS_TAXON, "taxonomy/9606"), # homo sapiens
(BQB.IS, "ec-code/2.7.1.2"), # glucokinase
(BQB.IS, "uniprot/P35557"), # Glucokinase homo sapiens
(BQB.IS, "bto/BTO:0000075"), # liver
]),
Uncertainty(
sid="uncertainty2",
uncertParameters=[
UncertParameter(
type=libsbml.DISTRIB_UNCERTTYPE_MEAN,
value=2.7),
UncertParameter(
type=libsbml.DISTRIB_UNCERTTYPE_STANDARDDEVIATION,
value=0.11),
], annotations=[
(BQB.IS, "sabiork.kineticrecord/2581"),
# entry in SABIO-RK
(BQB.HAS_TAXON, "taxonomy/9606"), # homo sapiens
(BQB.IS, "ec-code/2.7.1.2"), # glucokinase
(BQB.IS, "uniprot/P35557"), # Glucokinase homo sapiens
(BQB.IS, "bto/BTO:0000075"), # liver
]),
])
]
}
# create model and print SBML
doc = Document(Model(**md))
print_xml(doc.get_sbml())
# validate model
validate_doc(doc.doc, units_consistency=False);
"""
Explanation: Information on experimental parameters (SABIO-RK)
In the following example we store the experimental information which was used for setting the parameter in the model.
End of explanation
"""
|
ellisztamas/faps
|
docs/tutorials/00_quickstart_guide.ipynb
|
mit
|
import faps as fp
import numpy as np
"""
Explanation: Quickstart guide to FAPS
Tom Ellis, May 2020.
If you are impatient to do an analysis as quickly as possible without reading the rest of the documentation, this page provides a minimal example. The workflow is as follows:
Import marker data on offspring and parents
Create a matrix of paternity of each individual offspring
Cluster offspring into full sibships.
????
Profit.
It goes without saying that to understand what the code is doing and get the most out of the data, you should read the tutorials.
Import the package.
End of explanation
"""
adults = fp.read_genotypes('../data/parents_2012_genotypes.csv', genotype_col=1)
progeny = fp.read_genotypes('../data/offspring_2012_genotypes.csv', genotype_col=2, mothers_col=1)
# Mothers are a subset of the adults.
mothers = adults.subset(individuals=np.unique(progeny.mothers))
"""
Explanation: Import genotype data. These are CSV files with:
A column giving the name of each individual
For the offspring, the second column gives the name of the known mother.
Subsequent columns give genotype data for each marker, with column headers giving marker names.
End of explanation
"""
progeny = progeny.split(progeny.mothers)
mothers = mothers.split(mothers.names)
"""
Explanation: In this example, the data are for multiple maternal families, each containing a mixture of full- and half-siblings. We need to divide the offspring and mothers into maternal families.
End of explanation
"""
patlik = fp.paternity_array(progeny, mothers, adults, mu = 0.0015)
"""
Explanation: I expect that multiple maternal families will be the most common scenario, but if you happen to have only a single maternal family, you can skip this.
Calculate paternity of individuals. This is equivalent to the G matrix in Ellis et al (2018).
End of explanation
"""
sibships = fp.sibship_clustering(patlik)
"""
Explanation: Cluster offspring in each family into full-sibling families.
End of explanation
"""
sibships["J1246"].mean_nfamilies()
"""
Explanation: You can pull out various kinds of information about the each clustered maternal family. For example, get the most-likely number of full-sib families in maternal family J1246.
End of explanation
"""
{k: v.mean_nfamilies() for k,v in sibships.items()}
"""
Explanation: Or do this for all families with a dict comprehension:
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.2/examples/extinction_wd_subdwarf.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Extinction: White Dwarf - Subdwarf Binary
In this example, we'll reproduce Figure 4 in the extinction release paper (Jones et al. 2020).
"SDSS J2355 is a short-period post-CE binary comprising a relatively cool white dwarf (Teff∼13,250 K) and a low-mass, metal-poor, sub-dwarf star (spectral type ∼sdK7). As before, calculating synthetic light curves for the system with no extinction and then with extinction consistent with the Galactic bulge, we now see significant deviations between the two models in u, g and r bands" (Jones et al. 2020)
<img src="jones+20_fig4.png" alt="Figure 4" width="600px"/>
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
from matplotlib import gridspec
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.set_value('period', component='binary', value=0.0897780065*u.d)
b.set_value('teff', component='primary', value=13247*u.K)
b.set_value('teff', component='secondary', value=3650*u.K)
b.set_value('requiv', component='primary', value=0.0160*u.solRad)
b.set_value('requiv', component='secondary', value=0.1669*u.solRad)
b.flip_constraint('mass@primary', solve_for='sma@binary')
b.set_value('mass', component='primary', value=0.4477*u.solMass)
b.flip_constraint('mass@secondary', solve_for='q')
b.set_value('mass', component='secondary', value=0.1501*u.solMass)
"""
Explanation: Adopt system parameters from Rebassa-Mansergas+ 2019.
End of explanation
"""
period = b.get_value('period', component='binary')
times=phoebe.linspace(-0.1*period, 0.6*period, 501)
b.add_dataset('lc', times=times, dataset='u', passband="LSST:u")
b.add_dataset('lc', times=times, dataset='g', passband="LSST:g")
b.add_dataset('lc', times=times, dataset='r', passband="LSST:r")
b.add_dataset('lc', times=times, dataset='i', passband="LSST:i")
"""
Explanation: Now we'll create datasets for LSST u,g,r, and i bands.
End of explanation
"""
b.set_value_all('atm', component='primary', value='blackbody')
b.set_value_all('ld_mode', component='primary', value='manual')
b.set_value_all('ld_func', component='primary', value='quadratic')
b.set_value('ld_coeffs', component='primary', dataset='u', value=[0.2665,0.2544])
b.set_value('ld_coeffs', component='primary', dataset='g', value=[0.1421,0.3693])
b.set_value('ld_coeffs', component='primary', dataset='r', value=[0.1225,0.3086])
b.set_value('ld_coeffs', component='primary', dataset='i', value=[0.1063,0.2584])
b.set_value_all('ld_mode_bol@primary','manual')
b.set_value_all('ld_func_bol@primary','quadratic')
b.set_value('ld_coeffs_bol', component='primary', value=[0.1421,0.3693])
b.set_value_all('atm', component='secondary', value='phoenix')
b.set_value('abun', component='secondary', value=-1.55)
"""
Explanation: And set options for the atmospheres and limb-darkening.
End of explanation
"""
b.set_value('incl', component='binary', value=90.0*u.deg)
b.set_value_all('ntriangles', value=10000)
b.set_value_all('intens_weighting', value='photon')
b.set_value_all('Rv', value=2.5)
"""
Explanation: We'll set the inclination to 90 degrees and set some compute options.
End of explanation
"""
b.set_value_all('Av', value=0.0)
b.run_compute(model='noext',overwrite=True)
"""
Explanation: For comparison, we'll first compute a model with zero extinction.
End of explanation
"""
b.set_value_all('Av',2.0)
b.run_compute(model='ext',overwrite=True)
"""
Explanation: And then a second model with extinction.
End of explanation
"""
uextmags=-2.5*np.log10(b['value@fluxes@u@ext@model'])
unoextmags=-2.5*np.log10(b['value@fluxes@u@noext@model'])
uextmags_norm=uextmags-uextmags.min()+1
unoextmags_norm=unoextmags-unoextmags.min()+1
uresid=uextmags_norm-unoextmags_norm
gextmags=-2.5*np.log10(b['value@fluxes@g@ext@model'])
gnoextmags=-2.5*np.log10(b['value@fluxes@g@noext@model'])
gextmags_norm=gextmags-gextmags.min()+1
gnoextmags_norm=gnoextmags-gnoextmags.min()+1
gresid=gextmags_norm-gnoextmags_norm
rextmags=-2.5*np.log10(b['value@fluxes@r@ext@model'])
rnoextmags=-2.5*np.log10(b['value@fluxes@r@noext@model'])
rextmags_norm=rextmags-rextmags.min()+1
rnoextmags_norm=rnoextmags-rnoextmags.min()+1
rresid=rextmags_norm-rnoextmags_norm
iextmags=-2.5*np.log10(b['value@fluxes@i@ext@model'])
inoextmags=-2.5*np.log10(b['value@fluxes@i@noext@model'])
iextmags_norm=iextmags-iextmags.min()+1
inoextmags_norm=inoextmags-inoextmags.min()+1
iresid=iextmags_norm-inoextmags_norm
fig=plt.figure(figsize=(12,12))
gs=gridspec.GridSpec(4,2,height_ratios=[4,1,4,1],width_ratios=[1,1])
ax=plt.subplot(gs[0,0])
ax.plot(b['value@times@u@noext@model']/b['period@orbit'].quantity,unoextmags_norm,color='k',linestyle="--")
ax.plot(b['value@times@u@ext@model']/b['period@orbit'].quantity,uextmags_norm,color='k',linestyle="-")
ax.set_ylabel('Magnitude')
ax.set_xticklabels([])
ax.set_ylim([6.2,0.95])
ax.set_title('(a) LSST u')
ax2=plt.subplot(gs[0,1])
ax2.plot(b['value@times@g@noext@model']/b['period@orbit'].quantity,gnoextmags_norm,color='k',linestyle="--")
ax2.plot(b['value@times@g@ext@model']/b['period@orbit'].quantity,gextmags_norm,color='k',linestyle="-")
ax2.set_ylabel('Magnitude')
ax2.set_xticklabels([])
ax2.set_ylim([3.2,0.95])
ax2.set_title('(b) LSST g')
ax_1=plt.subplot(gs[1,0])
ax_1.plot(b['value@times@u@noext@model']/b['period@orbit'].quantity,uresid,color='k',linestyle='-')
ax_1.set_ylabel(r'$\Delta m$')
ax_1.set_xlabel('Phase')
ax_1.set_ylim([0.05,-0.3])
ax_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax2_1=plt.subplot(gs[1,1])
ax2_1.plot(b['value@times@g@noext@model']/b['period@orbit'].quantity,gresid,color='k',linestyle='-')
ax2_1.set_ylabel(r'$\Delta m$')
ax2_1.set_xlabel('Phase')
ax2_1.set_ylim([0.05,-0.3])
ax2_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax3=plt.subplot(gs[2,0])
ax3.plot(b['value@times@r@noext@model']/b['period@orbit'].quantity,rnoextmags_norm,color='k',linestyle="--")
ax3.plot(b['value@times@r@ext@model']/b['period@orbit'].quantity,rextmags_norm,color='k',linestyle="-")
ax3.set_ylabel('Magnitude')
ax3.set_xticklabels([])
ax3.set_ylim([2.0,0.95])
ax3.set_title('(c) LSST r')
ax4=plt.subplot(gs[2,1])
ax4.plot(b['value@times@i@noext@model']/b['period@orbit'].quantity,inoextmags_norm,color='k',linestyle="--")
ax4.plot(b['value@times@i@ext@model']/b['period@orbit'].quantity,iextmags_norm,color='k',linestyle="-")
ax4.set_ylabel('Magnitude')
ax4.set_xticklabels([])
ax4.set_ylim([1.6,0.95])
ax4.set_title('(d) LSST i')
ax3_1=plt.subplot(gs[3,0])
ax3_1.plot(b['value@times@r@noext@model']/b['period@orbit'].quantity,rresid,color='k',linestyle='-')
ax3_1.set_ylabel(r'$\Delta m$')
ax3_1.set_xlabel('Phase')
ax3_1.set_ylim([0.01,-0.03])
ax3_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax4_1=plt.subplot(gs[3,1])
ax4_1.plot(b['value@times@i@noext@model']/b['period@orbit'].quantity,iresid,color='k',linestyle='-')
ax4_1.set_ylabel(r'$\Delta m$')
ax4_1.set_xlabel('Phase')
ax4_1.set_ylim([0.01,-0.03])
ax4_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax_1.axhspan(-0.0075,0.0075,color='lightgray')
ax2_1.axhspan(-0.005,0.005,color='lightgray')
ax3_1.axhspan(-0.005,0.005,color='lightgray')
ax4_1.axhspan(-0.005,0.005,color='lightgray')
plt.tight_layout()
fig.canvas.draw()
"""
Explanation: Finally we'll convert the output fluxes to magnitudes and format the figure.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_cluster_stats_spatio_temporal.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
"""
Explanation: .. _tut_stats_cluster_source_1samp:
Permutation t-test on source data with spatio-temporal clustering
Tests if the evoked response is significantly different between
conditions across subjects (simulated here using one subject's data).
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
"""
Explanation: Set parameters
End of explanation
"""
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
"""
Explanation: Read epochs for all channels, removing a bad one
End of explanation
"""
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50)
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50)
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
"""
Explanation: Transform to source space
End of explanation
"""
# Normally you would read in estimates across several subjects and morph
# them to the same cortical space (e.g. fsaverage). For example purposes,
# we will simulate this by just having each "subject" have the same
# response (just noisy in source space) here. Note that for 7 subjects
# with a two-sided statistical test, the minimum significance under a
# permutation test is only p = 1/(2 ** 6) = 0.015, which is large.
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
# It's a good idea to spatially smooth the data, and for visualization
# purposes, let's morph these to fsaverage, which is a grade 5 source space
# with vertices 0:10242 for each hemisphere. Usually you'd have to morph
# each subject's data separately (and you might want to use morph_data
# instead), but here since all estimates are on 'sample' we can use one
# morph matrix for all the heavy lifting.
fsave_vertices = [np.arange(10242), np.arange(10242)]
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
# Finally, we want to compare the overall activity levels in each condition,
# the diff is taken along the last axis (condition). The negative sign makes
# it so condition1 > condition2 shows up as "red blobs" (instead of blue).
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
"""
Explanation: Transform to common cortical space
End of explanation
"""
# To use an algorithm optimized for spatio-temporal clustering, we
# just pass the spatial connectivity matrix (instead of spatio-temporal)
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=2,
threshold=t_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
"""
Explanation: Compute statistic
End of explanation
"""
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(hemi='both', subjects_dir=subjects_dir,
time_label='Duration significant (ms)')
brain.set_data_time_index(0)
brain.show_view('lateral')
brain.save_image('clusters.png')
"""
Explanation: Visualize the clusters
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.22/_downloads/ad79868fcd6af353ce922b8a3a2fc362/plot_30_info.ipynb
|
bsd-3-clause
|
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
"""
Explanation: The Info data structure
This tutorial describes the :class:mne.Info data structure, which keeps track
of various recording details, and is attached to :class:~mne.io.Raw,
:class:~mne.Epochs, and :class:~mne.Evoked objects.
We'll begin by loading the Python modules we need, and loading the same
example data <sample-dataset> we used in the introductory tutorial
<tut-overview>:
End of explanation
"""
print(raw.info)
"""
Explanation: As seen in the introductory tutorial <tut-overview>, when a
:class:~mne.io.Raw object is loaded, an :class:~mne.Info object is
created automatically, and stored in the raw.info attribute:
End of explanation
"""
info = mne.io.read_info(sample_data_raw_file)
print(info)
"""
Explanation: However, it is not strictly necessary to load the :class:~mne.io.Raw object
in order to view or edit the :class:~mne.Info object; you can extract all
the relevant information into a stand-alone :class:~mne.Info object using
:func:mne.io.read_info:
End of explanation
"""
print(info.keys())
print() # insert a blank line
print(info['ch_names'])
"""
Explanation: As you can see, the :class:~mne.Info object keeps track of a lot of
information about:
the recording system (gantry angle, HPI details, sensor digitizations,
channel names, ...)
the experiment (project name and ID, subject information, recording date,
experimenter name or ID, ...)
the data (sampling frequency, applied filter frequencies, bad channels,
projectors, ...)
The complete list of fields is given in :class:the API documentation
<mne.Info>.
Querying the Info object
The fields in a :class:~mne.Info object act like Python :class:dictionary
<dict> keys, using square brackets and strings to access the contents of a
field:
End of explanation
"""
print(info['chs'][0].keys())
"""
Explanation: Most of the fields contain :class:int, :class:float, or :class:list
data, but the chs field bears special mention: it contains a list of
dictionaries (one :class:dict per channel) containing everything there is
to know about a channel other than the data it recorded. Normally it is not
necessary to dig into the details of the chs field — various MNE-Python
functions can extract the information more cleanly than iterating over the
list of dicts yourself — but it can be helpful to know what is in there. Here
we show the keys for the first channel's :class:dict:
End of explanation
"""
print(mne.pick_channels(info['ch_names'], include=['MEG 0312', 'EEG 005']))
print(mne.pick_channels(info['ch_names'], include=[],
exclude=['MEG 0312', 'EEG 005']))
"""
Explanation: Obtaining subsets of channels
It is often useful to convert between channel names and the integer indices
identifying rows of the data array where those channels' measurements are
stored. The :class:~mne.Info object is useful for this task; two
convenience functions that rely on the :class:mne.Info object for picking
channels are :func:mne.pick_channels and :func:mne.pick_types.
:func:~mne.pick_channels minimally takes a list of all channel names and a
list of channel names to include; it is also possible to provide an empty
list to include and specify which channels to exclude instead:
End of explanation
"""
print(mne.pick_types(info, meg=False, eeg=True, exclude=[]))
"""
Explanation: :func:~mne.pick_types works differently, since channel type cannot always
be reliably determined from channel name alone. Consequently,
:func:~mne.pick_types needs an :class:~mne.Info object instead of just a
list of channel names, and has boolean keyword arguments for each channel
type. Default behavior is to pick only MEG channels (and MEG reference
channels if present) and exclude any channels already marked as "bad" in the
bads field of the :class:~mne.Info object. Therefore, to get all and
only the EEG channel indices (including the "bad" EEG channels) we must
pass meg=False and exclude=[]:
End of explanation
"""
print(mne.pick_channels_regexp(info['ch_names'], '^E.G'))
"""
Explanation: Note that the meg and fnirs parameters of :func:~mne.pick_types
accept strings as well as boolean values, to allow selecting only
magnetometer or gradiometer channels (via meg='mag' or meg='grad') or
to pick only oxyhemoglobin or deoxyhemoglobin channels (via fnirs='hbo'
or fnirs='hbr', respectively).
A third way to pick channels from an :class:~mne.Info object is to apply
regular expression_ matching to the channel names using
:func:mne.pick_channels_regexp. Here the ^ represents the beginning of
the string and . character matches any single character, so both EEG and
EOG channels will be selected:
End of explanation
"""
print(mne.channel_type(info, 25))
"""
Explanation: :func:~mne.pick_channels_regexp can be especially useful for channels named
according to the 10-20 <ten-twenty_>_ system (e.g., to select all channels
ending in "z" to get the midline, or all channels beginning with "O" to get
the occipital channels). Note that :func:~mne.pick_channels_regexp uses the
Python standard module :mod:re to perform regular expression matching; see
the documentation of the :mod:re module for implementation details.
<div class="alert alert-danger"><h4>Warning</h4><p>Both :func:`~mne.pick_channels` and :func:`~mne.pick_channels_regexp`
operate on lists of channel names, so they are unaware of which channels
(if any) have been marked as "bad" in ``info['bads']``. Use caution to
avoid accidentally selecting bad channels.</p></div>
Obtaining channel type information
Sometimes it can be useful to know channel type based on its index in the
data array. For this case, use :func:mne.channel_type, which takes
an :class:~mne.Info object and a single integer channel index:
End of explanation
"""
picks = (25, 76, 77, 319)
print([mne.channel_type(info, x) for x in picks])
print(raw.get_channel_types(picks=picks))
"""
Explanation: To obtain several channel types at once, you could embed
:func:~mne.channel_type in a :term:list comprehension, or use the
:meth:~mne.io.Raw.get_channel_types method of a :class:~mne.io.Raw,
:class:~mne.Epochs, or :class:~mne.Evoked instance:
End of explanation
"""
ch_idx_by_type = mne.channel_indices_by_type(info)
print(ch_idx_by_type.keys())
print(ch_idx_by_type['eog'])
"""
Explanation: Alternatively, you can get the indices of all channels of all channel types
present in the data, using :func:~mne.channel_indices_by_type,
which returns a :class:dict with channel types as keys, and lists of
channel indices as values:
End of explanation
"""
print(info['nchan'])
eeg_indices = mne.pick_types(info, meg=False, eeg=True)
print(mne.pick_info(info, eeg_indices)['nchan'])
"""
Explanation: Dropping channels from an Info object
If you want to modify an :class:~mne.Info object by eliminating some of the
channels in it, you can use the :func:mne.pick_info function to pick the
channels you want to keep and omit the rest:
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp
|
day03/3.1 AutoEncoders and Embeddings.ipynb
|
mit
|
# based on: https://blog.keras.io/building-autoencoders-in-keras.html
encoding_dim = 32
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input=input_img, output=decoded)
encoder = Model(input=input_img, output=encoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
#note: x_train, x_train :)
autoencoder.fit(x_train, x_train,
nb_epoch=50,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test))
"""
Explanation: Unsupervised learning
AutoEncoders
An autoencoder, is an artificial neural network used for learning efficient codings.
The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.
<img src="imgs/autoencoder.png" width="25%">
Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data.
End of explanation
"""
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Testing the Autoencoder
End of explanation
"""
encoded_imgs = np.random.rand(10,32)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# generation
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Sample generation with Autoencoder
End of explanation
"""
# Use the encoder to pretrain a classifier
"""
Explanation: Pretraining encoders
One of the powerful features of auto-encoders is using the trained encoder to generate meaningful representations of the input feature vectors.
End of explanation
"""
from gensim.models import word2vec
from gensim.models.word2vec import Word2Vec
"""
Explanation: Natural Language Processing using Artificial Neural Networks
“In God we trust. All others must bring data.” – W. Edwards Deming, statistician
Word Embeddings
What?
Convert words to vectors in a high dimensional space. Each dimension denotes an aspect like gender, type of object / word.
"Word embeddings" are a family of natural language processing techniques aiming at mapping semantic meaning into a geometric space. This is done by associating a numeric vector to every word in a dictionary, such that the distance (e.g. L2 distance or more commonly cosine distance) between any two vectors would capture part of the semantic relationship between the two associated words. The geometric space formed by these vectors is called an embedding space.
Why?
By converting words to vectors we build relations between words. More similar the words in a dimension, more closer their scores are.
Example
W(green) = (1.2, 0.98, 0.05, ...)
W(red) = (1.1, 0.2, 0.5, ...)
Here the vector values of green and red are very similar in one dimension because both are colours. The values in the second dimension differ greatly because red might denote something negative in the training data while green is used for something positive.
By vectorizing we are indirectly building different kinds of relations between words.
Example of word2vec using gensim
End of explanation
"""
import os
import pickle
DATA_DIRECTORY = os.path.join(os.path.abspath(os.path.curdir), 'data', 'word_embeddings')
male_posts = []
female_post = []
with open(os.path.join(DATA_DIRECTORY,"male_blog_list.txt"),"rb") as male_file:
male_posts= pickle.load(male_file)
with open(os.path.join(DATA_DIRECTORY,"female_blog_list.txt"),"rb") as female_file:
female_posts = pickle.load(female_file)
print(len(female_posts))
print(len(male_posts))
filtered_male_posts = list(filter(lambda p: len(p) > 0, male_posts))
filtered_female_posts = list(filter(lambda p: len(p) > 0, female_posts))
posts = filtered_female_posts + filtered_male_posts
print(len(filtered_female_posts), len(filtered_male_posts), len(posts))
"""
Explanation: Reading blog post from data directory
End of explanation
"""
w2v = Word2Vec(size=200, min_count=1)
w2v.build_vocab(map(lambda x: x.split(), posts[:100]), )
w2v.vocab
w2v.similarity('I', 'My')
print(posts[5])
w2v.similarity('ring', 'husband')
w2v.similarity('ring', 'housewife')
w2v.similarity('women', 'housewife') # Diversity friendly
"""
Explanation: Word2Vec
End of explanation
"""
import numpy as np
# 0 for male, 1 for female
y_posts = np.concatenate((np.zeros(len(filtered_male_posts)),
np.ones(len(filtered_female_posts))))
len(y_posts)
"""
Explanation: Doc2Vec
The same technique as word2vec, extended to documents: in addition to learning word vectors, we also learn a vector for each document.
End of explanation
"""
import numpy as np
import word_embedding
from word2vec import train_word2vec
from keras.models import Sequential, Model
from keras.layers import (Activation, Dense, Dropout, Embedding,
Flatten, Input,
Conv1D, MaxPooling1D)
from keras.layers.merge import Concatenate
np.random.seed(2)
"""
Explanation: Convolutional Neural Networks for Sentence Classification
Train convolutional network for sentiment analysis.
Based on
"Convolutional Neural Networks for Sentence Classification" by Yoon Kim
http://arxiv.org/pdf/1408.5882v2.pdf
For 'CNN-non-static' gets to 82.1% after 61 epochs with following settings:
embedding_dim = 20
filter_sizes = (3, 4)
num_filters = 3
dropout_prob = (0.7, 0.8)
hidden_dims = 100
For 'CNN-rand' gets to 78-79% after 7-8 epochs with following settings:
embedding_dim = 20
filter_sizes = (3, 4)
num_filters = 150
dropout_prob = (0.25, 0.5)
hidden_dims = 150
For 'CNN-static' gets to 75.4% after 7 epochs with following settings:
embedding_dim = 100
filter_sizes = (3, 4)
num_filters = 150
dropout_prob = (0.25, 0.5)
hidden_dims = 150
it turns out that such a small data set as "Movie reviews with one
sentence per review" (Pang and Lee, 2005) requires much smaller network
than the one introduced in the original article:
embedding dimension is only 20 (instead of 300; 'CNN-static' still requires ~100)
2 filter sizes (instead of 3)
higher dropout probabilities and
3 filters per filter size is enough for 'CNN-non-static' (instead of 100)
embedding initialization does not require prebuilt Google Word2Vec data.
Training Word2Vec on the same "Movie reviews" data set is enough to
achieve performance reported in the article (81.6%)
** Another distinct difference is the sliding MaxPooling window of length=2
instead of MaxPooling over the whole feature map as in the article
End of explanation
"""
model_variation = 'CNN-rand' # CNN-rand | CNN-non-static | CNN-static
print('Model variation is %s' % model_variation)
# Model Hyperparameters
sequence_length = 56
embedding_dim = 20
filter_sizes = (3, 4)
num_filters = 150
dropout_prob = (0.25, 0.5)
hidden_dims = 150
# Training parameters
batch_size = 32
num_epochs = 100
val_split = 0.1
# Word2Vec parameters, see train_word2vec
min_word_count = 1 # Minimum word count
context = 10 # Context window size
"""
Explanation: Parameters
Model Variations. See Kim Yoon's Convolutional Neural Networks for
Sentence Classification, Section 3 for detail.
End of explanation
"""
# Load data
print("Loading data...")
x, y, vocabulary, vocabulary_inv = word_embedding.load_data()
if model_variation=='CNN-non-static' or model_variation=='CNN-static':
embedding_weights = train_word2vec(x, vocabulary_inv,
embedding_dim, min_word_count,
context)
if model_variation=='CNN-static':
x = embedding_weights[0][x]
elif model_variation=='CNN-rand':
embedding_weights = None
else:
raise ValueError('Unknown model variation')
# Shuffle data
shuffle_indices = np.random.permutation(np.arange(len(y)))
x_shuffled = x[shuffle_indices]
y_shuffled = y[shuffle_indices].argmax(axis=1)
print("Vocabulary Size: {:d}".format(len(vocabulary)))
"""
Explanation: Data Preparation
End of explanation
"""
graph_in = Input(shape=(sequence_length, embedding_dim))
convs = []
for fsz in filter_sizes:
conv = Conv1D(filters=num_filters,
kernel_size=fsz,
padding='valid',
activation='relu',
strides=1)(graph_in)
pool = MaxPooling1D(pool_size=2)(conv)
flatten = Flatten()(pool)
convs.append(flatten)
if len(filter_sizes)>1:
out = Concatenate()(convs)
else:
out = convs[0]
graph = Model(input=graph_in, output=out)
# main sequential model
model = Sequential()
if not model_variation=='CNN-static':
model.add(Embedding(len(vocabulary), embedding_dim, input_length=sequence_length,
weights=embedding_weights))
model.add(Dropout(dropout_prob[0], input_shape=(sequence_length, embedding_dim)))
model.add(graph)
model.add(Dense(hidden_dims))
model.add(Dropout(dropout_prob[1]))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
# Training model
# ==================================================
model.fit(x_shuffled, y_shuffled, batch_size=batch_size,
nb_epoch=num_epochs, validation_split=val_split, verbose=2)
"""
Explanation: Building CNN Model
End of explanation
"""
|
nwjs/chromium.src
|
third_party/tensorflow-text/src/docs/tutorials/nmt_with_attention.ipynb
|
bsd-3-clause
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow_text
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
"""
Explanation: Neural machine translation with attention
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/nmt_with_attention">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/nmt_with_attention.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on Effective Approaches to Attention-based Neural Machine Translation. This is an advanced example that assumes some knowledge of:
Sequence to sequence models
TensorFlow fundamentals below the keras layer:
Working with tensors directly
Writing custom keras.Models and keras.layers
While this architecture is somewhat outdated, it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to Transformers).
After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?"
The resulting model is exportable as a tf.saved_model, so it can be used in other TensorFlow environments.
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
Setup
End of explanation
"""
use_builtins = True
"""
Explanation: This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and builtin implementations.
End of explanation
"""
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
"""
Explanation: This tutorial uses a lot of low-level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
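As a quick illustrative sketch (with made-up shapes), the checker caches each axis name's size the first time it sees it and raises on later mismatches:
checker = ShapeChecker()
checker(tf.zeros([64, 10]), ('batch', 's'))
checker(tf.zeros([64, 10, 256]), ('batch', 's', 'embed_dim'))  # 'batch' and 's' must match the cached sizes
checker(tf.zeros([32, 10]), ('batch', 's'))  # raises ValueError: 'batch' was cached as 64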
End of explanation
"""
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
"""
Explanation: The data
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
They have a variety of languages available, but we'll use the English-Spanish dataset.
Download and prepare the dataset
For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
Add a start and end token to each sentence.
Clean the sentences by removing special characters.
Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
Pad each sentence to a maximum length.
End of explanation
"""
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
"""
Explanation: Create a tf.data dataset
From these arrays of strings you can create a tf.data.Dataset of strings that shuffles and batches them efficiently:
End of explanation
"""
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
"""
Explanation: Text preprocessing
One of the goals of this tutorial is to build a model that can be exported as a tf.saved_model. To make that exported model useful it should take tf.string inputs and return tf.string outputs: all the text processing happens inside the model.
Standardization
The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text.
The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.
The tensorflow_text package contains a unicode normalize operation:
End of explanation
"""
def tf_lower_and_split_punct(text):
# Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
"""
Explanation: Unicode normalization will be the first step in the text standardization function:
End of explanation
"""
max_vocab_size = 5000
input_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
"""
Explanation: Text Vectorization
This standardization function will be wrapped up in a preprocessing.TextVectorization layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
End of explanation
"""
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
"""
Explanation: The TextVectorization layer and many other experimental.preprocessing layers have an adapt method. This method reads one epoch of the training data, and works a lot like Model.fit: it initializes the layer based on the data. Here it determines the vocabulary:
End of explanation
"""
output_text_processor = preprocessing.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
"""
Explanation: That's the Spanish TextVectorization layer, now build and .adapt() the English one:
End of explanation
"""
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
"""
Explanation: Now these layers can convert a batch of strings into a batch of token IDs:
End of explanation
"""
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
"""
Explanation: The get_vocabulary method can be used to convert token IDs back to text:
End of explanation
"""
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
"""
Explanation: The returned token IDs are zero-padded. This can easily be turned into a mask:
End of explanation
"""
embedding_dim = 256
units = 1024
"""
Explanation: The encoder/decoder model
The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from Luong's paper.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
Before getting into it define a few constants for the model:
End of explanation
"""
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
"""
Explanation: The encoder
Start by building the encoder, the blue part of the diagram above.
The encoder:
Takes a list of token IDs (from input_text_processor).
Looks up an embedding vector for each token (Using a layers.Embedding).
Processes the embeddings into a new sequence (Using a layers.GRU).
Returns:
The processed sequence. This will be passed to the attention head.
The internal state. This will be used to initialize the decoder
End of explanation
"""
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
"""
Explanation: Here is how it fits together so far:
End of explanation
"""
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
"""
Explanation: The encoder returns its internal state so that its state can be used to initialize the decoder.
It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder.
The attention head
The decoder uses attention to selectively focus on parts of the input sequence.
The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a layers.GlobalAveragePooling1D but the attention layer performs a weighted average.
Let's look at how this works:
<img src="images/attention_equation_1.jpg" alt="attention equation 1" width="800">
<img src="images/attention_equation_2.jpg" alt="attention equation 2" width="800">
Where:
$s$ is the encoder index.
$t$ is the decoder index.
$\alpha_{ts}$ is the attention weights.
$h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).
$h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology).
$c_t$ is the resulting context vector.
$a_t$ is the final output combining the "context" and "query".
The equations:
Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.
Calculates the context vector as the weighted sum of the encoder outputs.
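Restated in LaTeX (in case the equation images above don't render), using the notation above:
$$\alpha_{ts} = \frac{\exp\big(\mathrm{score}(h_t, h_s)\big)}{\sum_{s'}\exp\big(\mathrm{score}(h_t, h_{s'})\big)}, \qquad c_t = \sum_{s}\alpha_{ts}\, h_s$$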
Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:
<img src="images/attention_equation_4.jpg" alt="attention equation 4" width="800">
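In LaTeX, the two common forms are the multiplicative (Luong) and additive (Bahdanau) scores:
$$\mathrm{score}(h_t, h_s) = h_t^{\top} W h_s \qquad \text{or} \qquad \mathrm{score}(h_t, h_s) = v_a^{\top}\tanh\big(W_1 h_t + W_2 h_s\big)$$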
This tutorial uses Bahdanau's additive attention. TensorFlow includes implementations of both as layers.Attention and
layers.AdditiveAttention. The class below handles the weight matrices in a pair of layers.Dense layers, and calls the builtin implementation.
End of explanation
"""
attention_layer = BahdanauAttention(units)
"""
Explanation: Test the Attention layer
Create a BahdanauAttention layer:
End of explanation
"""
(example_tokens != 0).shape
"""
Explanation: This layer takes 3 inputs:
The query: This will be generated by the decoder, later.
The value: This will be the output of the encoder.
The mask: To exclude the padding, example_tokens != 0
End of explanation
"""
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
"""
Explanation: The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequence of value vectors. The result is:
A batch of sequences of result vectors the size of the queries.
A batch attention maps, with size (query_length, value_length).
End of explanation
"""
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
"""
Explanation: The attention weights should sum to 1.0 for each sequence.
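A quick numerical check with the tensors computed above (the sum over the value axis should be approximately 1.0 for every query position):
print(tf.reduce_sum(attention_weights, axis=-1).numpy())  # ~1.0 everywhere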
Here are the attention weights across the sequences at t=0:
End of explanation
"""
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
"""
Explanation: Because of the small-random initialization the attention weights are all close to 1/(sequence_length). If you zoom in on the weights for a single sequence, you can see that there is some small variation that the model can learn to expand, and exploit.
End of explanation
"""
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
# For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
"""
Explanation: The decoder
The decoder's job is to generate predictions for the next output token.
The decoder receives the complete encoder output.
It uses an RNN to keep track of what it has generated so far.
It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.
It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".
It generates logit predictions for the next token based on the "attention vector".
<img src="images/attention_equation_3.jpg" alt="attention equation 3" width="800">
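For reference, Equation (3) from the image above (it also appears in the code comments below) is:
$$a_t = \tanh\big(W_c\,[\,c_t\,;\,h_t\,]\big)$$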
Here is the Decoder class and its initializer. The initializer creates all the necessary layers.
End of explanation
"""
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
"""
Explanation: The call method for this layer takes and returns multiple tensors. Organize those into simple container classes:
End of explanation
"""
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
"""
Explanation: Here is the implementation of the call method:
End of explanation
"""
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
"""
Explanation: The encoder processes its full input sequence with a single call to its RNN. This implementation of the decoder can do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:
Flexibility: Writing the loop gives you direct control over the training procedure.
Clarity: It's possible to do masking tricks and use layers.RNN, or tfa.seq2seq APIs to pack this all into a single call. But writing it out as a loop may be clearer.
Loop-free training is demonstrated in the Text generation tutorial.
Now try using this decoder.
End of explanation
"""
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
"""
Explanation: The decoder takes 4 inputs.
new_tokens - The last token generated. Initialize the decoder with the "[START]" token.
enc_output - Generated by the Encoder.
mask - A boolean tensor indicating where tokens != 0
state - The previous state output from the decoder (the internal state
of the decoder's RNN). Pass None to zero-initialize it. The original
paper initializes it from the encoder's final RNN state.
End of explanation
"""
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
"""
Explanation: Sample a token according to the logits:
End of explanation
"""
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
"""
Explanation: Decode the token as the first word of the output:
End of explanation
"""
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
"""
Explanation: Now use the decoder to generate a second set of logits.
Pass the same enc_output and mask; these haven't changed.
Pass the sampled token as new_tokens.
Pass the decoder_state the decoder returned last time, so the RNN continues with a memory of where it left off last time.
End of explanation
"""
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
"""
Explanation: Training
Now that you have all the model components, it's time to start training the model. You'll need:
A loss function and optimizer to perform the optimization.
A training step function defining how to update the model for each input/target batch.
A training loop to drive the training and save checkpoints.
Define the loss function
End of explanation
"""
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
"""
Explanation: Implement the training step
Start with a model class, the training process will be implemented as the train_step method on this model. See Customizing fit for details.
Here the train_step method is a wrapper around the _train_step implementation which will come later. This wrapper includes a switch to turn on and off tf.function compilation, to make debugging easier.
End of explanation
"""
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
"""
Explanation: Overall the implementation for the Model.train_step method is as follows:
Receive a batch of input_text, target_text from the tf.data.Dataset.
Convert those raw text inputs to token-embeddings and masks.
Run the encoder on the input_tokens to get the encoder_output and encoder_state.
Initialize the decoder state and loss.
Loop over the target_tokens:
Run the decoder one step at a time.
Calculate the loss for each step.
Accumulate the average loss.
Calculate the gradient of the loss and use the optimizer to apply updates to the model's trainable_variables.
The _preprocess method, added below, implements steps #1 and #2:
End of explanation
"""
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
"""
Explanation: The _train_step method, added below, handles the remaining steps except for actually running the decoder:
End of explanation
"""
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
"""
Explanation: The _loop_step method, added below, executes the decoder and calculates the incremental loss and new decoder state (dec_state).
End of explanation
"""
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
"""
Explanation: Test the training step
Build a TrainTranslator, and configure it for training using the Model.compile method:
End of explanation
"""
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
"""
Explanation: Test out the train_step. For a text model like this the loss should start near the value below, which is the cross-entropy of a uniform distribution over the vocabulary (roughly what an untrained model predicts):
End of explanation
"""
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
"""
Explanation: While it's easier to debug without a tf.function it does give a performance boost. So now that the _train_step method is working, try the tf.function-wrapped _tf_train_step, to maximize performance while training:
End of explanation
"""
translator.train_step([example_input_batch, example_target_batch])
"""
Explanation: The first call will be slow, because it traces the function.
End of explanation
"""
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
"""
Explanation: But after that it's usually 2-3x faster than the eager train_step method:
End of explanation
"""
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
"""
Explanation: A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero:
End of explanation
"""
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
"""
Explanation: Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
End of explanation
"""
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
"""
Explanation: Train the model
While there's nothing wrong with writing your own custom training loop, implementing the Model.train_step method, as in the previous section, allows you to run Model.fit and avoid rewriting all that boilerplate code.
This tutorial only trains for a couple of epochs, so use a callbacks.Callback to collect the history of batch losses, for plotting:
End of explanation
"""
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
"""
Explanation: The visible jumps in the plot are at the epoch boundaries.
Translate
Now that the model is trained, implement a function to execute the full text => text translation.
For this the model needs to invert the text => token IDs mapping provided by the output_text_processor. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.
Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
End of explanation
"""
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
"""
Explanation: Convert token IDs to text
The first method to implement is tokens_to_text which converts from token IDs to human readable text.
End of explanation
"""
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
"""
Explanation: Input some random token IDs and see what it generates:
End of explanation
"""
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
"""
Explanation: Sample from the decoder's predictions
This function takes the decoder's logit outputs and samples token IDs from that distribution:
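With a temperature $T > 0$ the sampling distribution is the softened softmax over the logits $z_i$,
$$p_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)},$$
and as $T \to 0$ this approaches greedy argmax decoding, which is why the code below special-cases temperature == 0.0.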
End of explanation
"""
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
"""
Explanation: Test run this function on some random inputs:
End of explanation
"""
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
# Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
"""
Explanation: Implement the translation loop
Here is a complete implementation of the text to text translation loop.
This implementation collects the results into python lists, before using tf.concat to join them into tensors.
This implementation statically unrolls the graph out to max_length iterations.
This is okay with eager execution in python.
End of explanation
"""
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
"""
Explanation: Run it on a simple input:
End of explanation
"""
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
"""
Explanation: If you want to export this model you'll need to wrap this method in a tf.function. This basic implementation has a few issues if you try to do that:
The resulting graphs are very large and take a few seconds to build, save or load.
You can't break from a statically unrolled loop, so it will always run max_length iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
End of explanation
"""
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
"""
Explanation: Run the tf.function once to compile it:
End of explanation
"""
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
"""
Explanation: The initial implementation used python lists to collect the outputs. This uses tf.range as the loop iterator, allowing tf.autograph to convert the loop. The biggest change in this implementation is the use of tf.TensorArray instead of python list to accumulate tensors. tf.TensorArray is required to collect a variable number of tensors in graph mode.
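For reference, a minimal standalone tf.TensorArray pattern (note that .write returns a new handle that must be captured):
ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
for t in tf.range(3):
    ta = ta.write(t, tf.fill([2, 1], t))  # reassign: write() returns the updated array
stacked = ta.stack()                      # shape (3, 2, 1)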
With eager execution this implementation performs on par with the original:
End of explanation
"""
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
"""
Explanation: But when you wrap it in a tf.function you'll notice two differences.
End of explanation
"""
%%time
result = translator.tf_translate(
input_text = input_text)
"""
Explanation: First: Graph creation is much faster (~10x), since it doesn't create max_iterations copies of the model.
End of explanation
"""
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
"""
Explanation: Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
End of explanation
"""
a = result['attention'][0]
print(np.sum(a, axis=-1))
"""
Explanation: Visualize the process
The attention weights returned by the translate method show where the model was "looking" when it generated each output token.
So the sum of the attention over the input should return all ones:
End of explanation
"""
_ = plt.bar(range(len(a[0, :])), a[0, :])
"""
Explanation: Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
End of explanation
"""
plt.imshow(np.array(a), vmin=0.0)
"""
Explanation: Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
End of explanation
"""
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
"""
Explanation: Here is some code to make a better attention plot:
End of explanation
"""
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
'¿Todavía están en casa?',
# Try to find out.'
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
"""
Explanation: Translate a few more sentences and plot them:
End of explanation
"""
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
"""
Explanation: The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:
The model was trained with teacher forcing, feeding the correct token at each step regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions (see the sketch below).
The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. Transformers solve this by using self-attention in the encoder and decoder.
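A rough, untested sketch of the first idea (scheduled sampling) is shown here; the name _loop_step_scheduled and the sampling_prob argument are hypothetical additions, not part of this tutorial, and _train_step would also have to thread next_input back into the loop instead of always slicing target_tokens:
def _loop_step_scheduled(self, new_tokens, input_mask, enc_output, dec_state,
                         sampling_prob=0.25):
    input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
    dec_result, dec_state = self.decoder(
        DecoderInput(new_tokens=input_token, enc_output=enc_output, mask=input_mask),
        state=dec_state)
    step_loss = self.loss(target_token, dec_result.logits)
    # With probability `sampling_prob`, feed the model's own (greedy) prediction forward
    # instead of the ground-truth token, so training looks more like inference.
    predicted = tf.argmax(dec_result.logits, axis=-1)               # (batch, 1), int64
    use_own = tf.random.uniform(tf.shape(predicted)) < sampling_prob
    next_input = tf.where(use_own, predicted, target_token)
    return step_loss, dec_state, next_input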
End of explanation
"""
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
"""
Explanation: Export
Once you have a model you're satisfied with you might want to export it as a tf.saved_model for use outside of this python program that created it.
Since the model is a subclass of tf.Module (through keras.Model), and all the functionality for export is compiled in a tf.function the model should export cleanly with tf.saved_model.save:
Now that the function has been traced it can be exported using saved_model.save:
End of explanation
"""
|
ernestyalumni/MLgrabbag
|
SVM_theano.ipynb
|
mit
|
%matplotlib inline
import theano
from theano import function, config, sandbox, shared
import theano.tensor as T
print( theano.config.device )
print( theano.config.lib.cnmem) # cf. http://deeplearning.net/software/theano/library/config.html
print( theano.config.print_active_device)# Print active device at when the GPU device is initialized.
print(theano.config.allow_gc)
print(theano.config.optimizer_excluding)
import numpy as np
import scipy
import sys
sys.path.append( './ML' )
from SVM import SVM, SVM_serial, SVM_parallel
import pandas as pd
"""
Explanation: Support Vector Machines (SVM)
I ran this at the command prompt
THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=1,allow_gc=False' jupyter notebook
End of explanation
"""
X = np.random.randn(300,2)
y = np.logical_xor(X[:,0] > 0, X[:,1] > 0)
"""
Explanation: Dataset examples
from scikit-learn (sklearn)
End of explanation
"""
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
"""
Explanation: cf. RBF SVM parameters
End of explanation
"""
iris = load_iris()
X = iris.data
y = iris.target
# Dataset for decision function visualization: we only keep the first two
# features in X and sub-sample the dataset to keep only 2 classes and
# make it a binary classification problem
X_2d = X[:,:2]
X_2d=X_2d[y>0]
y_2d=y[y>0]
y_2d -= 1
# It is usually a good idea to scale the data for SVM training.
# We are cheating a bit in this example in scaling all of the data,
# instead of fitting the transformation on the training set and
# just applying it on the test set.
scaler = StandardScaler()
X= scaler.fit_transform(X)
X_2d=scaler.fit_transform(X_2d)
print(type(X)); print(X.shape); print(type(X_2d));print(X_2d.shape);print(type(y));print(y.shape);
print(type(y_2d));print(y_2d.shape)
ratio_of_train_to_total = 0.6
numberofexamples = len(y_2d)
numberoftrainingexamples = int(numberofexamples*ratio_of_train_to_total)
numbertovalidate = (numberofexamples - numberoftrainingexamples)//2 # integer division, since this is used for slicing
numbertotest= numberofexamples - numberoftrainingexamples - numbertovalidate
print(numberofexamples);print(numbertotest);print(numberoftrainingexamples);print(numbertovalidate)
shuffledindices = np.random.permutation( numberofexamples)
# apply the shuffled indices so the train/validation/test split is randomized
X_2d_train = X_2d[shuffledindices[:numberoftrainingexamples]]
y_2d_train = y_2d[shuffledindices[:numberoftrainingexamples]]
X_2d_valid = X_2d[shuffledindices[numberoftrainingexamples:numberoftrainingexamples + numbertovalidate]]
y_2d_valid = y_2d[shuffledindices[numberoftrainingexamples:numberoftrainingexamples + numbertovalidate]]
X_2d_test = X_2d[shuffledindices[numberoftrainingexamples + numbertovalidate:]]
y_2d_test = y_2d[shuffledindices[numberoftrainingexamples + numbertovalidate:]]
"""
Explanation: Load and prepare data set
dataset for grid search
End of explanation
"""
y_2d_train
y_2d_train[y_2d_train < 1] = -1
print(y_2d_train.shape);print(y_2d_train)
y_2d_valid[y_2d_valid < 1] = -1
y_2d_test[y_2d_test < 1] = -1
"""
Explanation: Clarke, Fokoue, and Zhang in Principles and Theory for Data Mining and Machine Learning (2009) and Bishop, Pattern Recognition and Machine Learning (2007) both use $y\in \lbrace -1, 1\rbrace$ for binary classification with support vector machines, as opposed to $y\in \lbrace 0,1 \rbrace$ for the case of $K=2$ total classes that the outcome $y$ could belong to. Should this be made more explicit and noted more prominently in practice?
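For reference, the conversion applied in the next cells is equivalent to $y' = 2y - 1$, which maps $\lbrace 0, 1\rbrace$ to $\lbrace -1, 1\rbrace$.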
End of explanation
"""
where_ex6_is_str = './coursera_Ng/machine-learning-ex6/ex6/'
ex6data1_mat_data = scipy.io.loadmat( where_ex6_is_str + "ex6data1.mat")
"""
Explanation: from Coursera's Machine Learning Introduction by Andrew Ng, Ex. 6, i.e. Programming Exercise 6
End of explanation
"""
SVM_iris = SVM(X_2d_train,y_2d_train,len(y_2d_train),1.0,1,0.001)
SVM_iris.build_W();
"""
Explanation: Using SVM
End of explanation
"""
SVM_iris.build_update();
SVM_iris.train_model_full();
SVM_iris.build_b();
SVM_iris.make_predict(X_2d_valid[0])
SVM_iris.make_predictions(X_2d_valid)
X_2d_test
y_2d_test
y_test_pred= SVM_iris.make_predictions(X_2d_test)
np.array( [np.array(yhat) for yhat in y_test_pred] )
y_valid_pred = [ SVM_iris.make_predict(X_2d_valid_ele) for X_2d_valid_ele in X_2d_valid ]
y_valid_pred = [y_valid_pred_ele[0] for y_valid_pred_ele in y_valid_pred]
y_valid_pred = np.array( y_valid_pred).flatten()
#y_valid_pred[ y_valid_pred>0 ] = 1
#y_valid_pred[ y_valid_pred<0 ] = -1
y_valid_pred = np.sign( y_valid_pred)
(y_2d_valid == y_valid_pred).astype(theano.config.floatX).sum()/len(y_valid_pred)
y_valid_pred
y_2d_valid
SVM_iris_X
SVM_iris = SVM(X_2d_train,y_2d_train,len(y_2d_train),0.1,1.0,0.001)
SVM_iris.build_W();
SVM_iris.build_update();
SVM_iris.train_model_full();
SVM_iris.build_b();
y_valid_pred = np.array( [ SVM_iris.make_predict(X_2d_valid_ele)[0] for X_2d_valid_ele in X_2d_valid ] ).flatten()
y_valid_pred[ y_valid_pred>0 ] = 1
y_valid_pred[ y_valid_pred<0 ] = -1
(y_2d_valid == y_valid_pred).astype(theano.config.floatX).sum()/len(y_valid_pred)
SVM_iris = SVM(X_2d_train,y_2d_train,len(y_2d_train),0.1,0.1,0.001)
SVM_iris.build_W();
SVM_iris.build_update();
SVM_iris.train_model_full();
SVM_iris.build_b();
y_valid_pred = np.array( [ SVM_iris.make_predict(X_2d_valid_ele)[0] for X_2d_valid_ele in X_2d_valid ] ).flatten()
y_valid_pred[ y_valid_pred>0 ] = 1
y_valid_pred[ y_valid_pred<0 ] = -1
(y_2d_valid == y_valid_pred).astype(theano.config.floatX).sum()/len(y_valid_pred)
SVM_iris = SVM(X_2d_train,y_2d_train,len(y_2d_train),0.01,0.1,0.001)
SVM_iris.build_W();
SVM_iris.build_update();
SVM_iris.train_model_full();
SVM_iris.build_b();
y_valid_pred = np.array( [ SVM_iris.make_predict(X_2d_valid_ele)[0] for X_2d_valid_ele in X_2d_valid ] ).flatten()
y_valid_pred[ y_valid_pred>0 ] = 1
y_valid_pred[ y_valid_pred<0 ] = -1
(y_2d_valid == y_valid_pred).astype(theano.config.floatX).sum()/len(y_valid_pred)
m_val = np.cast["int32"](X.shape[0])
Xi = theano.shared( np.zeros_like(X[0],dtype=theano.config.floatX) )
X = theano.shared( np.zeros_like(X,dtype=theano.config.floatX) )
y = theano.shared( np.random.randint(2,size=m_val))
yi = theano.shared( np.cast["int32"]( np.random.randint(2)) )
m = theano.shared( m_val )
lambda_mult = theano.shared( np.zeros(m_val).astype(theano.config.floatX) ) # lambda Lagrange multipliers
Xi.set_value( X.get_value()[np.int32(1)] ) # X is now a shared variable, so index into its numpy value
np.random.randint(2,size=4)
np.random.randint(2)
X = np.random.randn(300,2)
y = np.logical_xor(X[:,0] > 0, X[:,1] > 0)
def rbf(Xi,Xj,sigma):
""" rbf - radial basis function"""
kernel_result = T.exp( -( (Xi-Xj)**2).sum()/ ( np.float32(2*sigma) )
return kernel_result
class SVM(object):
""" SVM - Support Vector Machines
"""
def __init__(self,X,y,m,C,sigma,alpha):
assert m == X.shape[0] and m == y.shape[0]
self.C = np.float32(C)
self.sigma = np.float32(sigma)
self.alpha = np.float32(alpha)
self._m = theano.shared( np.int32(m))
# self._Xi = theano.shared( X[0].astype(theano.config.floatX) )
self.X = theano.shared( X.astype(theano.config.floatX) )
self.y = theano.shared( y.astype(theano.config.floatX) )
# self._yi = theano.shared( y[0].astype(theano.config.floatX) )
self.lambda_mult = theano.shared( np.random.rand(m).astype(theano.config.floatX) ) # lambda Lagrange multipliers
def build_W(self):
m = self._m.get_value()
X = self.X
y = self.y
lambda_mult = self.lambda_mult
def dual_step(Xj,yj,lambdaj, # input sequences we iterate over j=0,1,...m-1
cumulative_sum, # previous iteration
prodi,Xi,sigma): # non-sequences that aren't iterated over
prodj = prodi*lambdaj*yj*rbf(Xi,Xj,sigma)
return cumulative_sum + prodj
for i in range(m):
Xi = self.X[i]
yi = self.y[i]
lambdai = self.lambda_mult[i]
prodi = lambdai*yi
# exploratory draft: accumulate the inner sum over j for this i
# (a worked-out, tested version appears further below using theano.reduce)
W_i_partials, updates_i = theano.scan(fn=dual_step,
sequences=[X,y,lambda_mult],
outputs_info=[theano.shared(np.float32(0.))],
non_sequences=[prodi,Xi,self.sigma])
y[0].astype(theano.config.floatX)
test_SVM = SVM(X,y,len(y),1.,0.1,0.01)
range(test_SVM._m.get_value());
np.random.rand(4)
test_SVM.X
"""
Explanation: .build_update might take a while under FAST_COMPILE (the mode set by the THEANO_FLAGS command typed in before starting the notebook)
End of explanation
"""
m=4
d=2
X_val=np.arange(2,m*d+2).reshape(m,d).astype(theano.config.floatX)
X=theano.shared( X_val)
y_val=np.random.randint(2,size=m).astype(theano.config.floatX)
y=theano.shared( y_val )
lambda_mult_val = np.random.rand(m).astype(theano.config.floatX)
lambda_mult = theano.shared( lambda_mult_val ) # lambda Lagrange multipliers
sigma_val = 2.0
sigma = theano.shared( np.float32(sigma_val))
np.random.randint(2,size=4)
X[1]
np.random.rand(4)
#lambda_mult = theano.shared( np.zeros(m_val).astype(theano.config.floatX) ) # lambda Lagrange multipliers
prodi = lambda_mult[1]*y[1]
sigma=0.5
def step(Xj,Xi):
rbf = T.exp(-(Xj-Xi)**2/(np.float32(2.*sigma**2)))
return sandbox.cuda.basic_ops.gpu_from_host(rbf)
output,update=theano.scan(fn=step, sequences=[X,],non_sequences=[X[1],])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
print(test_rbf().shape)
test_rbf()
#Check
prodi_val = lambda_mult_val[1]*y_val[1]
for j in range(4):
print( np.exp(-((X_val[j]-X_val[1])**2).sum(0)/(np.float32(2.*sigma**2))) )
X_val
X_val[3]
prodi = lambda_mult[0]*y[0]
sigma=0.5
def step(Xj,yj,lambda_multj,Xi):
rbf = lambda_multj*yj*T.exp(-((Xj-Xi)**2).sum()/(np.float32(2.*sigma**2)))
return sandbox.cuda.basic_ops.gpu_from_host(rbf)
output,update=theano.scan(fn=step, sequences=[X,y,lambda_mult],non_sequences=[X[0],])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
print(test_rbf().shape)
test_rbf()
sigma=0.5
def rbf(Xj,Xi,sigma):
rbf = T.exp(-((Xj-Xi)**2).sum()/(np.float32(2.*sigma**2)))
return rbf
def step(Xj,yj,lambda_multj,Xi,yi,lambda_multi):
# W_i = lambda_multi*yi*lambda_multj*yj*T.exp(-((Xj-Xi)**2).sum()/(np.float32(2.*sigma**2)))
W_i = lambda_multi*yi*lambda_multj*yj*rbf(Xj,Xi,sigma)
return W_i
output,update=theano.scan(fn=step, sequences=[X,y,lambda_mult],non_sequences=[X[0],y[0],lambda_mult[0]])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
test_rbf()
output1,update1=theano.scan(fn=step, sequences=[X,y,lambda_mult],non_sequences=[X[1],y[1],lambda_mult[1]])
test_rbf1 = theano.function(inputs=[],outputs=output1,updates=update1 )
test_rbf1()
test_rbf = theano.function(inputs=[],outputs=output+output1 )
test_rbf()
output,update=theano.scan(fn=step, sequences=[X,y,lambda_mult],non_sequences=[X[0],y[0],lambda_mult[0]])
updates=[update,]
for i in range(1,4):
outputi,updatei=theano.scan(fn=step, sequences=[X,y,lambda_mult],non_sequences=[X[i],y[i],lambda_mult[i]])
output += outputi
updates.append(update)
test_rbf = theano.function(inputs=[],outputs=output )
test_rbf()
sigma=1.
for j in range(4):
print( np.exp(-((X_val[j]-X_val[0])**2).sum()/(np.float32(2.*sigma**2))) )
X_val
np.sum( [ np.exp(-((X_val[j]-X_val[0])**2).sum()/(np.float32(2.*sigma**2))) for j in range(4)])
def step(Xj,Xi):
rbf = T.exp(-((Xj-Xi)**2).sum()/(np.float32(2.*sigma**2)))
return rbf
output,update=theano.scan(fn=step, sequences=[X,],non_sequences=[X[0],])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
test_rbf()
def step(Xj,Xi):
rbf = T.exp(-((Xj-Xi)**2).sum()/(np.float32(2.*sigma**2)))
return rbf
output,update=theano.scan(fn=step, sequences=[X],outputs_info=[None,],non_sequences=[X[0]])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
test_rbf()
output,update=theano.reduce(fn=step, sequences=[X],outputs_info=[None,],non_sequences=[X[0]])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
test_rbf()
def step(Xj,cumulative_sum,Xi):
rbf = T.exp(-((Xj-Xi)**2).sum()/(np.float32(2.*sigma**2)))
return cumulative_sum + rbf
W_i0 = theano.shared( np.float32(0.))
output,update=theano.scan(fn=step, sequences=[X],outputs_info=[W_i0,],non_sequences=[X[0]])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
test_rbf()
# Also this works:
output,update=theano.reduce(fn=step, sequences=[X],outputs_info=[W_i0,],non_sequences=[X[0]])
test_rbf = theano.function(inputs=[],outputs=output,updates=update )
test_rbf()
sigma=0.5
def rbf(Xj,Xi,sigma):
rbf = T.exp(-((Xj-Xi)**2).sum()/(np.float32(2.*sigma**2)))
return rbf
def step(Xj,yj,lambda_multj,cumulative_sum, Xi,yi,lambda_multi):
W_i = lambda_multi*yi*lambda_multj*yj*rbf(Xj,Xi,sigma)
return cumulative_sum + W_i
W_00 = theano.shared( np.float32(0.))
output,update=theano.reduce(fn=step, sequences=[X,y,lambda_mult],outputs_info=[W_00],
non_sequences=[X[0],y[0],lambda_mult[0]])
updates=[update,]
for i in range(1,m):
W_i0 = theano.shared( np.float32(0.))
outputi,updatei=theano.reduce(fn=step, sequences=[X,y,lambda_mult],
outputs_info=[W_i0],
non_sequences=[X[i],y[i],lambda_mult[i]])
output += outputi
updates.append(updatei)  # append the update produced by this reduce, not the first one
test_rbf = theano.function(inputs=[],outputs=output )
test_rbf()
#sanity check
cum_sum_val=0.
for i in range(m):
toadd=np.sum([lambda_mult_val[i]*y_val[i]*lambda_mult_val[j]*y_val[j]*np.exp(-((X_val[j]-X_val[i])**2).sum()/(np.float32(2.*sigma**2))) for j in range(4)])
cum_sum_val += toadd
print(cum_sum_val)
test_SVM=SVM(X_val,y_val,m,1.0,2.0,0.01)
test_f= theano.function( inputs=[], outputs=T.dot( test_SVM.y, test_SVM.lambda_mult))
test_f()
test_f= theano.function( inputs=[], outputs=T.dot( test_SVM.y, test_SVM.y ))
test_f()
test_SVM.y.get_value()
theano.ifelse.ifelse( T.lt(test_SVM.y, np.float32(0)), np.float32(0), test_SVM.y )  # note: ifelse expects a scalar condition, which is why T.switch is used below
lower_bound = theano.shared( np.float32(0.) )
theano.ifelse.ifelse( T.lt(test_SVM.y, lower_bound), lower_bound, test_SVM.y )
lower_bound = theano.shared( np.float32(0.5) )
#lower_bound_check=T.switch( T.lt(test_SVM.y, lower_bound), lower_bound, test_SVM.y )
lower_bound_check=T.switch( T.lt(test_SVM.y, lower_bound), test_SVM.y, lower_bound )
test_f=theano.function(inputs=[],outputs=lower_bound_check)
test_f()
np.ndarray(5)
dir(scipy);
"""
Explanation: Test values
End of explanation
"""
with open("./Data/train.1",'rb') as f:
train_1_lst = f.readlines()
f.close()
# strip of '\n'
train_1_lst = [x.strip() for x in train_1_lst]
print(len(train_1_lst))
train_1_lst=[line.replace('1:','').replace('2:','').replace('3:','').replace('4:','') for line in train_1_lst]
train_1_lst=[line.split() for line in train_1_lst]
train_1_arr=np.array( [[float(ele) for ele in line] for line in train_1_lst] )
train_1_y=train_1_arr[:,0]
train_1_X=train_1_arr[:,1:]
print(train_1_y.shape)
print(train_1_X.shape)
with open("./Data/test.1",'rb') as f:
test_1_lst = f.readlines()
f.close()
# strip of '\n'
test_1_lst = [x.strip() for x in test_1_lst]
print(len(test_1_lst))
test_1_lst=[line.replace('1:','').replace('2:','').replace('3:','').replace('4:','') for line in test_1_lst]
test_1_lst=[line.split() for line in test_1_lst]
test_1_arr=np.array( [[float(ele) for ele in line] for line in test_1_lst] )
test_1_y=test_1_arr[:,0]
test_1_X=test_1_arr[:,1:]
with open("./Data/train.3",'rb') as f:
train_3_lst = f.readlines()
f.close()
# strip of '\n'
train_3_lst = [x.strip() for x in train_3_lst]
print(len(train_3_lst))
import re
# strip the libsvm-style "<index>:" prefixes in a single pass; the original chain of str.replace calls
# corrupted two-digit indices (e.g. '11:' lost only the inner '1:'), leaving stray digits in the values
train_3_lst=[re.sub(r'\d+:', '', line) for line in train_3_lst]
train_3_lst=[line.split() for line in train_3_lst]
train_3_DF=pd.DataFrame( train_3_lst)
train_3_y = train_3_DF[0].as_matrix().astype(theano.config.floatX)
train_3_X = train_3_DF.ix[:,1:].as_matrix().astype(theano.config.floatX)
print(train_3_X.shape)
ratiotraintotot = 0.2
numberofexamples1 = len(train_1_y)
numberoftrain1 = int( numberofexamples1 * ratiotraintotot )
numberofvalid1 = numberofexamples1 - numberoftrain1
shuffled_idx = np.random.permutation(numberofexamples1)
train1_idx = shuffled_idx[:numberoftrain1]
valid1_idx = shuffled_idx[numberoftrain1:]
from sklearn.svm import SVC
clf=SVC()
clf.fit(train_1_X[train1_idx],train_1_y[train1_idx])
(clf.predict(train_1_X[valid1_idx]) == train_1_y[valid1_idx]).astype(theano.config.floatX).sum()/len(valid1_idx)
(clf.predict(test_1_X) == test_1_y).astype(theano.config.floatX).sum()/float(len(test_1_y))
pd.DataFrame(train_1_X).describe()
scaler = StandardScaler()
train_1_X_scaled = scaler.fit_transform(train_1_X)
pd.DataFrame(train_1_X_scaled).describe()
pd.DataFrame(train_1_y).describe()
train_1_y[ train_1_y < 1] = -1
len(train1_idx)
SVM_1 = SVM_parallel(train_1_X_scaled[train1_idx],train_1_y[train1_idx],len(train_1_y[train1_idx]),1.0,1.,0.001)
SVM_1.build_W();
SVM_1.build_update()
SVM_1.train_model_full()
SVM_1.build_b()
#yhat_parallel = SVM_1.make_predictions(train_1_X_scaled[valid1_idx]) ;
yhat_parallel = SVM_1.make_predictions_parallel(train_1_X_scaled[valid1_idx[:300]]) ;
yhat_parallel_2 = SVM_1.make_predictions_parallel(train_1_X_scaled[valid1_idx[:100]]) ;
yhat_parallel[0].shape
yhat_parallel_2
yhat = np.sign( yhat_parallel[0])
#(yhat == train_1_y[valid1_idx[:100]]).sum()/float(len(train_1_y[valid1_idx[:100]]))
(yhat == train_1_y[valid1_idx[:300]]).sum()/float(len(train_1_y[valid1_idx[:300]]))
len(valid1_idx)
yhat_1000 = SVM_1.make_predictions_parallel(train_1_X_scaled[valid1_idx[:1000]]) ;
yhat_1000 = np.sign( yhat_1000[0])
(yhat_1000 == train_1_y[valid1_idx[:1000]]).sum()/float(len(train_1_y[valid1_idx[:1000]]))
test_1_X_scaled = scaler.transform(test_1_X)
yhat_test = SVM_1.make_predictions_parallel(test_1_X_scaled) ;
yhat_test = np.sign( yhat_test[0])
(yhat_test == test_1_y).sum()/float(len(test_1_y))
train_1_y[valid1_idx[:100]]
"""
Explanation: cf. A Practical Guide to Support Vector Classification, Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin
End of explanation
"""
import sys
sys.getrecursionlimit()
sys.setrecursionlimit(50000)
sys.getrecursionlimit()
yhat_valid = SVM_1.make_predictions(train_1_X_scaled[valid1_idx])
SVM_1 = SVM_parallel(train_1_X_scaled,train_1_y,len(train_1_y),2.0,1.,0.01)
SVM_1.build_W();
SVM_1.build_update();
SVM_1.train_model_full(100) # 8 hours
SVM_1.build_b()
yhat_test = SVM_1.make_predictions_parallel(test_1_X_scaled) ;
yhat_test = np.sign( yhat_test[0])
(yhat_test == test_1_y).sum()/float(len(test_1_y))
test_1_y
test_1_y[ test_1_y < 1] = -1
yhat_test  # inspect the sign predictions after relabelling the test targets
# SVC
clf=SVC(C=2.0,gamma=2.0)
clf.fit(train_1_X_scaled,train_1_y)
(clf.predict(test_1_X_scaled) == test_1_y).sum()/float(len(test_1_y))
SVM_1_C2 = SVM_1
SVM_1 = SVM_parallel(train_1_X_scaled,train_1_y,len(train_1_y),2.0,0.25,0.001)
SVM_1.build_W();
SVM_1.build_update();
%time SVM_1.train_model_full(10) # CPU times: user 43min 45s, sys: 1min 10s, total: 44min 56s
#Wall time: 44min 54s
SVM_1.build_b()
yhat_test = SVM_1.make_predictions_parallel(test_1_X_scaled) ;
yhat_test = np.sign( yhat_test[0]);
(yhat_test == test_1_y).sum()/float(len(test_1_y))
SVM_1_C2 = SVM_1
SVM_1 = SVM_parallel(train_1_X_scaled,train_1_y,len(train_1_y),2.0,0.20,0.001) # sigma=0.2
SVM_1.build_W();
SVM_1.build_update();
%time SVM_1.train_model_full(20)
SVM_1.build_b()
yhat_test = SVM_1.make_predictions_parallel(test_1_X_scaled) ;
yhat_test = np.sign( yhat_test[0]);
(yhat_test == test_1_y).sum()/float(len(test_1_y)) # sigma = 0.2
SVM_1 = SVM_parallel(train_1_X_scaled,train_1_y,len(train_1_y),2.0,0.30,0.001)
SVM_1.build_W();
SVM_1.build_update();
%time SVM_1.train_model_full(15)
SVM_1.build_b()
yhat_test = SVM_1.make_predictions_parallel(test_1_X_scaled) ;
yhat_test = np.sign( yhat_test[0]);
(yhat_test == test_1_y).sum()/float(len(test_1_y))
"""
Explanation: Other people have run into this same recursion-limit problem, which is inherent to Python itself: https://github.com/Theano/Theano/issues/689
End of explanation
"""
with open("./Data/test.3",'rb') as f:
test_3_lst = f.readlines()
f.close()
# strip of '\n'
test_3_lst = [x.strip() for x in test_3_lst]
print(len(test_3_lst))
# strip the libsvm-style "<index>:" prefixes in a single pass (see the note on the training data above)
test_3_lst=[re.sub(r'\d+:', '', line) for line in test_3_lst]
test_3_lst=[line.split() for line in test_3_lst]
test_3_DF=pd.DataFrame( test_3_lst)
test_3_y = test_3_DF[0].as_matrix().astype(theano.config.floatX)
test_3_X = test_3_DF.ix[:,1:].as_matrix().astype(theano.config.floatX)
print(test_3_X.shape)
print(test_3_y.shape)
"""
Explanation: Vehicle data set from Anonymous user from Germany
cf. A Practical Guide to Support Vector Classification, Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin
http://www.csie.ntu.edu.tw/~cjlin/papers/guide/data/
Get and data clean/data wrangle/preprocess the test data, test_3 for vehicle data set
End of explanation
"""
scaler = StandardScaler()
train_3_X_scaled = scaler.fit_transform(train_3_X)
train_3_X
"""
Explanation: Scale the train.3 Vehicle data
End of explanation
"""
train_3_X_pd = pd.DataFrame(train_3_X)
train_3_X_pd_cleaned = train_3_X_pd.where( pd.notnull( train_3_X_pd ), train_3_X_pd.mean(), axis='columns')
train_3_X_pd.describe()
train_3_X_pd_cleaned.describe()
train_3_X_scaled = scaler.fit_transform( train_3_X_pd_cleaned.as_matrix() )
train_3_y
SVM_3 = SVM_parallel(train_3_X_scaled,train_3_y,len(train_3_y),128.0,2.0,0.001) # sigma=2.0
SVM_3.build_W();
SVM_3.build_update();
%time SVM_3.train_model_full(20)
SVM_3.build_b()
print(test_3_y.shape)
test_3_y
print(test_3_X.shape)
test_3_X_scaled = scaler.transform( test_3_X)
pd.DataFrame( train_3_X_scaled).describe()
pd.DataFrame( test_3_X_scaled).describe()
%time yhat_test3 = SVM_3.make_predictions_parallel( test_3_X_scaled)
yhat_test3 = np.sign( yhat_test3[0]);
(yhat_test3 == test_3_y).sum()/float(len(test_3_y))
"""
Explanation: Clean the data by filling in missing (NaN) values with the column mean, a choice motivated by the distribution of the data
End of explanation
"""
SVM_3._yhat.get_value()
yhat_test3[0]
np.sign( yhat_test3[0])
test_3_y
yhat_test3
np.place( yhat_test3, yhat_test3 < 0., 0.)
yhat_test3
yPratt_test_results = SVM_3.make_prob_Pratt(yhat_test3)
alpha = np.float32(0.01)
yhat = SVM_3._yhat
y_sh = theano.shared( yhat_test3.astype(theano.config.floatX ) )
A = theano.shared( np.float32( np.random.rand() ) )
B = theano.shared( np.float32( np.random.rand() ) )
Prob_1_given_yhat = np.float32(1.)/(np.float32(1.)+ T.exp(A*yhat +B))
costfunctional = T.nnet.binary_crossentropy( Prob_1_given_yhat, y_sh).mean()
DA, DB = T.grad(costfunctional, [A,B])
train = theano.function(inputs=[],outputs=[Prob_1_given_yhat, costfunctional],
updates=[(A,A-alpha*DA),(B,B-alpha*DB)],name="train")
probabilities = theano.function(inputs=[], outputs=Prob_1_given_yhat,name="probabilities")
training_steps=10000
for i in range(training_steps):
pred,err = train()
probabilities_vals = probabilities()
print(len(yhat_test3))
print(len(probabilities_vals))
probabilities_vals
(probabilities_vals > 0.5).astype(theano.config.floatX)
np.place( yhat_test3, yhat_test3 < 0., 0.)
%time yPratt_test_results = SVM_3.make_prob_Pratt(yhat_test3)
yPratt_test_results[0]
(yPratt_test_results[0] > 0.7).astype(theano.config.floatX)
"""
Explanation: Developing Platt scaling functionality (called "Pratt" in the code) to make ad hoc probability estimates from the SVM decision values
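For reference, Platt scaling fits a sigmoid to the SVM decision values $\hat{f}$,
$$ P(y=1 \mid \hat{f}) = \frac{1}{1 + e^{A\hat{f} + B}}, $$
with $A$ and $B$ estimated by minimizing the binary cross-entropy, which is exactly what the small gradient-descent loop above does.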
End of explanation
"""
|
SJSlavin/phys202-2015-work
|
assignments/assignment04/TheoryAndPracticeEx01.ipynb
|
mit
|
from IPython.display import Image
"""
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
"""
Image(filename='silver-feature-unpredictable-21.png')
Image(filename='silver-feature-unpredictable-1.png')
"""
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation
"""
|
risantos/schoolwork
|
Física Computacional/Ficha 1 - Interpolacao.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Computational Physics - Worksheet 1 - Interpolation
Rafael Isaque Santos - 2012144694 - BSc in Physics
End of explanation
"""
func_x = lambda x: 1/ (1 + x**2) # Função dada
def xinterval(x_i, x_f, n):
" Generates an array of 'n' equally spaced points between 'x_i' and 'x_f' "
xn = np.linspace(x_i, x_f, num = n)
return xn
def newtoninterp(x, y, x_new):
""" Given a set of points 'x', their respective images 'y', and
an interval of points 'x_new' over which the interpolating
polynomial will be evaluated """
n = len(x)
def difdiv(xi):
"Compute the coefficients using the divided-differences method"
d = list(y)
for j in range(1, n):
for i in range (n-1, j-1, -1): # loop range chosen so as to avoid overwriting values still needed in the computation
d[i] = (d[i]-d[i-1]) / (xi[i] - xi[i-j])
return d
def interpol(coef, x_pts, x_new):
y_new = []
for pt in x_new:
co = coef[len(coef)-1] # último coeficiente
for i in range(n-2, -1, -1): # intervalo para multiplicar do último ponto para o primeiro, sem dependências
co *= pt - x_pts[i]
co += coef[i]
y_new.append(co)
return y_new
coef_l = difdiv(x)
ypol = interpol(coef_l, x, x_new)
return ypol, coef_l
"""
Explanation: Given a set of points x, their respective images y, and an interval of points x_new:
+ The coefficients are computed with the divided-differences method;
+ For each point of x_new, the polynomial is evaluated, yielding the values y_new (the Newton form is written out below).
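For reference, the Newton form being evaluated is
$$ p(x) = c_0 + c_1(x - x_0) + c_2(x - x_0)(x - x_1) + \dots + c_{n-1}\prod_{i=0}^{n-2}(x - x_i), $$
with coefficients $c_j = f[x_0,\dots,x_j]$ given by the divided differences $f[x_i,\dots,x_{i+j}] = \dfrac{f[x_{i+1},\dots,x_{i+j}] - f[x_i,\dots,x_{i+j-1}]}{x_{i+j} - x_i}$.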
End of explanation
"""
test_x, test_y = [1, 2, 4, 5, 8], [10, 5, 2, 4, 14]
plot_x_range = np.linspace(1, 8, num = 150)
pol_teste, pol_list = newtoninterp(test_x, test_y, plot_x_range)
pol_dado = lambda x: (1042/63) - (146/21)*x + (7/36)*x**2 + (5/21)*x**3 - (5/252)*x**4
plt.figure(figsize=(12, 5))
plt.scatter(test_x, test_y)
plt.plot(plot_x_range, pol_teste, 'c''-', label = 'interpolado')
plt.plot(plot_x_range, pol_dado(plot_x_range), 'r' '--', label = 'polinómio dado')
plt.title('Teste de implementação do método de interpolação polinomial de Newton')
plt.xlabel('$x_{i}$', size=20)
plt.ylabel('$f(x_{i})$', size = 16)
plt.legend(loc='upper center')
plt.show()
"""
Explanation: This figure demonstrates that the routine works, comparing the interpolated polynomial with the one given in the handout:
$p(x) = \frac{1042}{63} - \frac{146}{21} x + \frac{7}{36} x^{2} + \frac{5}{21} x^{3} - \frac{5}{252} x^{4}$
The method is correctly implemented, since the two curves are identical.
End of explanation
"""
fig = plt.figure(figsize= (20, 12))
x_span = np.linspace(-5, 5, num = 200)
x_set = [-4.92, -2.67, -1.58, 0.88, 2.22, 3.14, 4.37]
y_set = list(map(func_x, x_set))
for i in range(1, 6):
x_p = xinterval(-5, 5, i+1)
y_p = func_x(x_p)
y_gen, y_poli = newtoninterp(x_p, y_p, x_span)
ip = fig.add_subplot(3, 2, i)
ip.set_title('Interpolação de grau: ' + str(i))
print('Coeficientes da Interpolação de grau ' + str(i) + ':')
s=''
for p in range(0, len(y_poli)): s += 'C' + str(p) + ': ' + str(y_poli[p]) + '; '
print(s)
#plt.scatter(x_p, y_p)
plt.plot(x_span, y_gen, 'c' , label = 'polinómio')
plt.plot(x_span, func_x(x_span), 'r', label = r'$\frac{1}{1+x^{2}}$') # original function
plt.scatter(x_set, y_set)
if i == 4 or i == 5: plt.xlabel('$x_{i}$', size=20)
if i %2 != 0: plt.ylabel('$f(x_{i})$', size = 16)
plt.legend(loc='best')
# plt.plot(x_span, exact_gen)
plt.show()
"""
Explanation: Since the method works, the interpolation is now carried out for degrees $n \in [1, 2, 3, 4, 5]$ with $n+1$ equally spaced points in the interval $x \in [-5, 5]$.
The original function, $f(x) =\frac{1}{1+x^{2}}$, is plotted in red, and the polynomial obtained for each case in cyan.
The marked points are the ones requested:
[-4.92, -2.67, -1.58, 0.88, 2.22, 3.14, 4.37]
End of explanation
"""
x_20 = xinterval(-5, 5, 21)
y_20 = list(map(func_x, x_20))
evalx_20 = [-4.75, 4.8]
evaly_20 = list(map(func_x, evalx_20))
pol_20, lpol_20 = newtoninterp(x_20, y_20, x_span)
print('Coeficientes da Interpolação:')
for p in range(0, len(lpol_20)): print('C' + str(p) + ': ' + str(lpol_20[p]))
fig = plt.figure(figsize = (10, 4))
i20 = fig.add_subplot(111)
i20.set_title('Interpolação de grau 20')
plt.plot(x_span, pol_20, 'c', label = 'interpolado')
plt.plot(x_span, func_x(x_span), 'r' '--' , label = r'$\frac{1}{1+x^{2}}$')
plt.scatter(evalx_20, evaly_20)
plt.xlabel('$x_{i}$', size=20)
plt.ylabel('$f(x_{i})$', size = 16)
plt.legend(loc = 'best')
plt.show()
"""
Explanation: For a degree-20 polynomial we observe an excellent fit around the central points,
and enormous oscillations near the ends of the interval, which is effectively a poor result.
This confirms, as expected, the Runge phenomenon.
At the points $x = -4.75$ and $x = 4.8$ we find oscillations that depart strongly from the true values.
End of explanation
"""
wl = [4358, 4861, 5896, 6563, 7679] # wavelength
n_wl = [1.6174, 1.6062, 1.5923, 1.5870, 1.5812] # n for each wavelength
"""
Explanation: 2.
The refractive index of polystyrene, measured for different
wavelengths $\lambda$ (corresponding to the intense lines of the
sodium spectrum), is given by the following table:
| $\lambda (\mathring A)$ | 4358 | 4861 | 5896 | 6563 | 7679 |
|---------|--------|--------|--------|--------|--------|
| n | 1.6174 | 1.6062 | 1.5923 | 1.5870 | 1.5812 |
End of explanation
"""
def Lagrangepolinterp(x, y, x_new):
n = len(x)
y_new = []
for x_n in x_new:
lag_pols = []
for i in range(n):
l = 1
for k in range(n):
if k != i:
l *= (x_n - x[k]) / (x[i] - x[k])
lag_pols.append(l)
point = 0
for i, j in zip(y, lag_pols):
point += i * j
y_new.append(point)
return y_new
"""
Explanation: We now implement the Lagrange interpolating polynomial method (the formula is written out below):
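For reference, the Lagrange form computed by the routine is
$$ p(x) = \sum_{i=0}^{n-1} y_i\,\ell_i(x), \qquad \ell_i(x) = \prod_{k \neq i} \frac{x - x_k}{x_i - x_k}. $$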
End of explanation
"""
lag_pol_test = Lagrangepolinterp(test_x, test_y, plot_x_range)
plt.figure(figsize=(12, 5))
plt.plot(plot_x_range, lag_pol_test, 'r' '--', label = 'interpolação Lagrange')
plt.plot(plot_x_range, pol_dado(plot_x_range), 'c' '-', label = 'polinómio dado')
plt.scatter(test_x, test_y)
plt.legend(loc='upper center')
plt.title('Teste de implementação do método polinomial de Lagrange')
plt.xlabel('$x_{i}$', size=20)
plt.ylabel('$f(x_{i})$', size = 16)
plt.show()
x_in, x_fi, dx = 3500, 8500, 0.01 # range of points between 3500 and 8500 in steps of 0.01
wl_range = np.arange(x_in, x_fi, dx)
n_refr = Lagrangepolinterp(wl, n_wl, wl_range)
plt.figure(figsize = (12,5))
plt.scatter(wl, n_wl, color = 'red', label = '$\lambda (n)$ da tabela')
plt.plot(wl_range, n_refr, 'c', label = '$\lambda (n)$ interpolado')
plt.title('Interpolação de $\lambda (n)$ utilizando o método polinomial de Lagrange')
plt.xlabel('$\lambda$', size=16)
plt.ylabel('$n$', size=18)
plt.legend(loc='upper right')
plt.show()
"""
Explanation: Just as for the Newton interpolation method, we confirm that the implemented (polynomial) Lagrange interpolation method behaves as expected.
End of explanation
"""
n_5000 = n_refr[int((5000 - x_in) / dx)]
print(n_5000)
"""
Explanation: Using the Lagrange interpolating polynomial, the refractive index at $\lambda = 5000 \mathring A$ is determined as:
End of explanation
"""
|
olavurmortensen/gensim
|
docs/notebooks/word2vec.ipynb
|
lgpl-2.1
|
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
"""
Explanation: Word2Vec Tutorial
This tutorial follows a blog post written by the creator of gensim.
Preparing the Input
Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):
End of explanation
"""
# create some toy data to use with the following example
import smart_open, os
if not os.path.exists('./data/'):
os.makedirs('./data/')
filenames = ['./data/f1.txt', './data/f2.txt']
for i, fname in enumerate(filenames):
with smart_open.smart_open(fname, 'w') as fout:
for line in sentences[i]:
fout.write(line + '\n')
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
for line in open(os.path.join(self.dirname, fname)):
yield line.split()
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)
print(model)
print(model.vocab)
"""
Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
End of explanation
"""
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training
new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator
new_model.train(sentences) # can be a non-repeatable, 1-pass generator
print(new_model)
print(model.vocab)
"""
Explanation: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users: calling Word2Vec(sentences) will run two passes over the sentences iterator.
1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
2. The second pass trains the neural model.
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way:
End of explanation
"""
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'
class MyText(object):
def __iter__(self):
for line in open(lee_train_file):
# assume there's one document per line, tokens separated by whitespace
yield line.lower().split()
sentences = MyText()
print(sentences)
"""
Explanation: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):
End of explanation
"""
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)
"""
Explanation: Training
Word2Vec accepts several parameters that affect both training speed and quality.
One of them is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them:
End of explanation
"""
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
"""
Explanation: Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
The last of the major parameters (full list here) is for training parallelization, to speed up training:
End of explanation
"""
model.accuracy('./datasets/questions-words.txt')
"""
Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
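A quick back-of-the-envelope helper (a sketch of my own, not part of gensim) makes the estimate above concrete:
def estimate_word2vec_memory_mb(vocab_size, size, n_matrices=3, bytes_per_float=4):
    # rough RAM estimate in MB for the word2vec weight matrices
    return vocab_size * size * bytes_per_float * n_matrices / (1024.0 ** 2)
estimate_word2vec_memory_mb(100000, 200)  # ~229 MB, matching the figure above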
Evaluating
Word2Vec training is an unsupervised task, there’s no good way to objectively evaluate the result. Evaluation depends on your end application.
Google have released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
Gensim support the same evaluation set, in exactly the same format:
End of explanation
"""
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
new_model = gensim.models.Word2Vec.load(temp_path) # open the model
"""
Explanation: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
Once again, good performance on this test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task.
Storing and loading models
You can store/load models using the standard gensim methods:
End of explanation
"""
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue',
'training', 'it', 'with', 'more', 'sentences']]  # an iterable of sentences, each a list of words
model.train(more_sentences)
# cleaning up temp
os.close(fs)
os.remove(temp_path)
"""
Explanation: which uses pickle internally, optionally mmap'ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats:
model = gensim.models.Word2Vec.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.Word2Vec.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
Online training / Resuming training
Advanced users can load a model and continue training it with more sentences:
End of explanation
"""
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)
model.doesnt_match("input is lunch he sentence cat".split())
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))
"""
Explanation: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it’s not possible to resume training with models generated by the C tool, load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box:
End of explanation
"""
model['tree'] # raw NumPy vector of a word
"""
Explanation: If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
End of explanation
"""
|
Diyago/Machine-Learning-scripts
|
clustering/ods_unsupervised_learning.ipynb
|
apache-2.0
|
import numpy as np
import pandas as pd
import seaborn as sns
from tqdm import tqdm_notebook
%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use(['seaborn-darkgrid'])
plt.rcParams['figure.figsize'] = (12, 9)
plt.rcParams['font.family'] = 'DejaVu Sans'
from sklearn import metrics
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
RANDOM_STATE = 17
X_train = np.loadtxt("../../data/samsung_HAR/samsung_train.txt")
y_train = np.loadtxt("../../data/samsung_HAR/samsung_train_labels.txt").astype(int)
X_test = np.loadtxt("../../data/samsung_HAR/samsung_test.txt")
y_test = np.loadtxt("../../data/samsung_HAR/samsung_test_labels.txt").astype(int)
# Check the dimensions
assert(X_train.shape == (7352, 561) and y_train.shape == (7352,))
assert(X_test.shape == (2947, 561) and y_test.shape == (2947,))
"""
Explanation: <center>
<img src="../../img/ods_stickers.jpg">
Open Machine Learning Course. Session № 3
<center>
Authors: Olga Daykhovskaya (@aiho), Yury Kashnitsky (@yorko).
This material is distributed under the Creative Commons CC BY-NC-SA 4.0 license. It may be used for any purpose (editing, correcting, building upon it) except commercial ones, with mandatory attribution of the author.
<center>Homework assignment № 7
<center> Unsupervised learning
In this assignment we will work through how dimensionality reduction and clustering methods work, and practise the classification task once more.
We will work with the Samsung Human Activity Recognition dataset. Download the data from here. The data come from the accelerometers and gyroscopes of Samsung Galaxy S3 phones (see the UCI link above for details on the features), and the type of activity of the person carrying the phone is also known – whether they were walking, standing, lying, sitting, or walking up/down the stairs.
First we will pretend that the type of activity is unknown to us and try to cluster people purely from the available features. Then we will solve the identification of the type of physical activity as a classification task.
Fill in the code in the cells (where it says "Your code here") and answer the questions in the web form.
End of explanation
"""
X = np.vstack([X_train, X_test])
y = np.hstack([y_train, y_test])
"""
Explanation: For clustering we do not need the target vector, so we will work with the union of the training and test sets. Combine X_train with X_test, and y_train with y_test.
End of explanation
"""
np.unique(y)
n_classes = np.unique(y).size
"""
Explanation: Let's determine the number of unique target class labels.
End of explanation
"""
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
"""
Explanation: These labels correspond to:
- 1 - walking
- 2 - walking upstairs
- 3 - walking downstairs
- 4 - sitting
- 5 - standing
- 6 - lying
Scale the sample with StandardScaler using the default parameters.
End of explanation
"""
pca = PCA(0.90, random_state=RANDOM_STATE)
X_pca = pca.fit_transform(X_scaled)
"""
Explanation: We reduce the dimensionality with PCA, keeping as many components as are needed to explain at least 90% of the variance of the original (scaled) data. Use the scaled sample and fix random_state (the RANDOM_STATE constant).
End of explanation
"""
len(pca.explained_variance_ratio_)
"""
Explanation: Question 1:<br>
What is the minimum number of principal components needed to explain 90% of the variance of the original (scaled) data?
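One way to read this off directly, assuming the pca object fitted above (a minimal sketch):
print(pca.n_components_)  # number of components kept by PCA(0.90)
print(np.argmax(np.cumsum(pca.explained_variance_ratio_) >= 0.9) + 1)  # same count, computed from the ratios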
End of explanation
"""
round(pca.explained_variance_ratio_[0] * 100)
"""
Explanation: Options:
- 56
- <b>65</b>
- 66
- 193
Question 2:<br>
What percentage of the variance is explained by the first principal component? Round to the nearest whole percent.
Options:
- 45
- <b>51</b>
- 56
- 61
End of explanation
"""
plt.figure(figsize=(13,10))
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, s=20, cmap='viridis');
plt.colorbar()
"""
Explanation: Visualize the data projected onto the first two principal components.
End of explanation
"""
k_means = KMeans(n_clusters=n_classes, n_init=100, random_state=RANDOM_STATE).fit(X_pca)
cluster_labels = k_means.predict(X_pca)
"""
Explanation: Question 3:<br>
If everything worked correctly, you will see a number of clusters that are almost perfectly separated from each other. Which types of activity fall into these clusters?<br>
Answer:
- 1 cluster: all 6 activities
- <b>2 clusters: (walking, walking upstairs, walking downstairs) and (sitting, standing, lying)</b>
- 3 clusters: (walking), (walking upstairs, walking downstairs) and (sitting, standing, lying)
- 6 clusters
Cluster the data with KMeans, fitting the model on the PCA-reduced data. In this case we hint that exactly 6 clusters should be sought, but in general we would not know how many clusters to look for.
Parameters:
n_clusters = n_classes (the number of unique target class labels)
n_init = 100
random_state = RANDOM_STATE (for reproducibility)
All other parameters at their default values.
End of explanation
"""
plt.figure(figsize=(13,10))
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=cluster_labels, s=20, cmap='viridis')
plt.colorbar()
"""
Explanation: Visualize the data projected onto the first two principal components. Color the points according to the obtained cluster labels.
End of explanation
"""
tab = pd.crosstab(y, cluster_labels, margins=True)
tab.index = ['ходьба', 'подъем вверх по лестнице',
'спуск по лестнице', 'сидение', 'стояние', 'лежание', 'все']
tab.columns = ['cluster' + str(i + 1) for i in range(6)] + ['все']
tab
"""
Explanation: Look at the correspondence between the cluster labels and the original class labels, and at which activities the KMeans algorithm confuses.
End of explanation
"""
# Ваш код здесь
inertia = []
for k in tqdm_notebook(range(1, n_classes + 1)):
kmeans = KMeans(n_clusters=k, n_init=100, random_state=RANDOM_STATE).fit(X_pca)
inertia.append(np.sqrt(kmeans.inertia_))
for i in range(1, len(inertia) - 1):
D = abs(inertia[i] - inertia[i + 1]) / abs(inertia[i - 1] - inertia[i])
print(D)
"""
Explanation: We see that each class (i.e. each activity) corresponds to several clusters. Let's look at the maximum share of a class's objects assigned to a single cluster. This will be a simple metric characterizing how easily a class separates from the others under clustering (a small sketch computing it is given below).
Example: if for the class "walking downstairs", which has 1406 objects, the cluster distribution is:
- cluster 1 – 900
- cluster 3 – 500
- cluster 6 – 6,
then this share will be 900 / 1406 $\approx$ 0.64.
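A minimal sketch of this metric, using the crosstab tab built above (margins in the last row and column):
(tab.iloc[:-1, :-1].max(axis=1) / tab.iloc[:-1, -1]).sort_values(ascending=False)  # max cluster share per activity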
Question 4:<br>
Which type of activity separated from the rest best of all in terms of the simple metric described above?<br>
Answer:
- walking
- standing
- walking downstairs
- <b>no correct answer listed</b>
We can see that kMeans does not distinguish the activities from one another very well. Use the elbow method to choose the optimal number of clusters. We use the same algorithm parameters and data as before, changing only n_clusters (the elbow criterion used in the code above is written out below).
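The criterion computed in the loop above is
$$ D(k) = \frac{|J(k) - J(k+1)|}{|J(k-1) - J(k)|}, $$
where $J(k)$ is the (square-rooted) inertia for $k$ clusters; the $k$ with the smallest $D(k)$ is taken as the elbow.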
End of explanation
"""
ag = AgglomerativeClustering(n_clusters=n_classes,
linkage='ward').fit_predict(X_pca)
"""
Explanation: Question 5:<br>
According to the elbow method, what number of clusters is optimal to choose?<br>
Answer:
- 1
- <b>2</b>
- 3
- 4
Let's try one more clustering method that was described in the article – agglomerative clustering.
End of explanation
"""
metrics.adjusted_rand_score(y, ag)
metrics.adjusted_rand_score(y, cluster_labels)
"""
Explanation: Compute the Adjusted Rand Index (sklearn.metrics) for the resulting cluster partition and for KMeans with the parameters from the task for Question 4.
End of explanation
"""
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
svc = LinearSVC(random_state=RANDOM_STATE)
svc_params = {'C': [0.001, 0.01, 0.1, 1, 10]}
grid = GridSearchCV(svc, param_grid=svc_params, cv=3).fit(X_train_scaled, y_train)
best_svc = grid.best_estimator_
best_svc.C
"""
Explanation: Question 6:<br>
Mark all the correct statements.<br>
Options:
- <b>According to ARI, KMeans handled the clustering worse than Agglomerative Clustering</b>
- <b>For ARI it does not matter which labels are assigned to the clusters, only the partition of objects into clusters matters (a quick check of this is sketched below)</b>
- <b>For a random partition into clusters, ARI will be close to zero</b>
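A minimal check of the label-invariance statement (a sketch, not part of the assignment):
from sklearn.metrics import adjusted_rand_score
adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])  # 1.0 – same partition, labels merely permuted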
You may notice that the problem is not solved very well specifically as a clustering task when more than 2 clusters are sought. Let's now solve it as a classification problem, remembering that our data are labelled.
For classification, use a support vector machine – the sklearn.svm.LinearSVC class. We did not cover this algorithm separately in the course, but it is very well known; you can read about it, for example, in Evgeny Sokolov's materials – here.
Tune the hyperparameter C for LinearSVC with GridSearchCV.
Fit a new StandardScaler on the training set (with all the original features) and apply the scaling to the test set.
Specify cv=3 in GridSearchCV.
End of explanation
"""
y_predicted = best_svc.predict(X_test_scaled)
print(metrics.classification_report(y_test, y_predicted, target_names=tab.index[:6]))
"""
Explanation: Question 7<br>
Which value of the hyperparameter C was chosen as the best by cross-validation?<br>
Answer:
- 0.001
- 0.01
- <b>0.1</b>
- 1
- 10
End of explanation
"""
X_train_scaled_pca = pca.fit_transform(X_train_scaled)
X_test_scaled_pca = pca.transform(X_test_scaled)
svc = LinearSVC(random_state=RANDOM_STATE)
svc_params = {'C': [0.001, 0.01, 0.1, 1, 10]}
grid_2 = GridSearchCV(svc, param_grid=svc_params, cv=3).fit(X_train_scaled_pca, y_train)
round((grid.best_score_ - grid_2.best_score_) * 100)
"""
Explanation: Question 8:<br>
Which type of activity does the SVM identify worst in terms of precision? In terms of recall? <br>
Answer:
- precision – walking upstairs, recall – lying
- precision – lying, recall – sitting
- precision – walking, recall – walking
- <b>precision – standing, recall – sitting</b>
Finally, do the same as in Question 7, only adding PCA.
Use the X_train_scaled and X_test_scaled samples.
Fit the same PCA as before on the scaled training set and apply the transformation to the test set.
Tune the hyperparameter C with cross-validation on the PCA-transformed training set. You will notice how much faster this runs than before.
Question 9:<br>
What is the difference between the best quality (accuracy) on cross-validation with all 561 original features and in the second case, when principal component analysis was applied? Round to the nearest whole percent.<br>
Options:
- The quality is the same
- 2%
- <b>4% </b>
- 10%
- 20%
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_forward_sensitivity_maps.ipynb
|
bsd-3-clause
|
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
# Read the forward solutions with surface orientation
fwd = mne.read_forward_solution(fwd_fname, surf_ori=True)
leadfield = fwd['sol']['data']
print("Leadfield size : %d x %d" % leadfield.shape)
"""
Explanation: Display sensitivity maps for EEG and MEG sensors
Sensitivity maps can be produced from forward operators that
indicate how well different sensor types will be able to detect
neural currents from different regions of the brain.
To get started with forward modeling see the tut_forward tutorial.
End of explanation
"""
grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')
mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
"""
Explanation: Compute sensitivity maps
End of explanation
"""
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
cmap='RdBu_r')
ax.set_title(ch_type.upper())
ax.set_xlabel('sources')
ax.set_ylabel('sensors')
plt.colorbar(im, ax=ax, cmap='RdBu_r')
plt.show()
plt.figure()
plt.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
color=['c', 'b', 'k'])
plt.legend()
plt.title('Normal orientation sensitivity')
plt.xlabel('sensitivity')
plt.ylabel('count')
plt.show()
grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[0, 50, 100]))
"""
Explanation: Show gain matrix a.k.a. leadfield matrix with sensitivity map
End of explanation
"""
|
pyoceans/erddapy
|
notebooks/searchfor.ipynb
|
bsd-3-clause
|
from erddapy import ERDDAP
e = ERDDAP(
server="https://upwell.pfeg.noaa.gov/erddap",
protocol="griddap"
)
"""
Explanation: Searching datasets
erddapy can wrap the same form-like search capabilities of ERDDAP with the search_for keyword.
End of explanation
"""
import pandas as pd
search_for = "HFRadar"
url = e.get_search_url(search_for=search_for, response="csv")
pd.read_csv(url)["Dataset ID"]
"""
Explanation: Single word search.
End of explanation
"""
search_for = "HFRadar 2km"
url = e.get_search_url(search_for=search_for, response="csv")
pd.read_csv(url)["Dataset ID"]
"""
Explanation: Filtering the search with extra words.
End of explanation
"""
search_for = "HFRadar -EXPERIMENTAL"
url = e.get_search_url(search_for=search_for, response="csv")
pd.read_csv(url)["Dataset ID"]
"""
Explanation: Filtering the search with words that should not be found.
End of explanation
"""
search_for = "wind speed"
url = e.get_search_url(search_for=search_for, response="csv")
len(pd.read_csv(url)["Dataset ID"])
"""
Explanation: Quoted search, or "phrase search." First, let us try the unquoted search.
End of explanation
"""
search_for = '"wind speed"'
url = e.get_search_url(search_for=search_for, response="csv")
len(pd.read_csv(url)["Dataset ID"])
"""
Explanation: Too many datasets because wind, speed, and wind speed are matched.
Now let's use the quoted search to reduce the number of results to only wind speed.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/03_tensorflow/a_tfstart.ipynb
|
apache-2.0
|
import tensorflow as tf
import numpy as np
print(tf.__version__)
"""
Explanation: <h1> Getting started with TensorFlow </h1>
In this notebook, you play around with the TensorFlow Python API.
End of explanation
"""
a = np.array([5, 3, 8])
b = np.array([3, -1, 2])
c = np.add(a, b)
print(c)
"""
Explanation: <h2> Adding two tensors </h2>
First, let's try doing this using numpy, the Python numeric package. numpy code is immediately evaluated.
End of explanation
"""
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
print(c)
"""
Explanation: The equivalent code in TensorFlow consists of two steps:
<p>
<h3> Step 1: Build the graph </h3>
End of explanation
"""
with tf.Session() as sess:
result = sess.run(c)
print(result)
"""
Explanation: c is the output of an "Add" op: a tensor of shape (3,) holding int32 values, with the shape inferred from the computation graph.
Try the following in the cell above:
<ol>
<li> Change the 5 to 5.0, and similarly the other five numbers. What happens when you run this cell? </li>
<li> Add an extra number to a, but leave b at the original (3,) shape. What happens when you run this cell? </li>
<li> Change the code back to a version that works </li>
</ol>
<p/>
<h3> Step 2: Run the graph </h3>
End of explanation
"""
a = tf.placeholder(dtype=tf.int32, shape=(None,)) # batchsize x scalar
b = tf.placeholder(dtype=tf.int32, shape=(None,))
c = tf.add(a, b)
with tf.Session() as sess:
result = sess.run(c, feed_dict={
a: [3, 4, 5],
b: [-1, 2, 3]
})
print(result)
"""
Explanation: <h2> Using a feed_dict </h2>
Same graph, but without hardcoding inputs at build stage
End of explanation
"""
def compute_area(sides):
# slice the input to get the sides
a = sides[:,0] # 5.0, 2.3
b = sides[:,1] # 3.0, 4.1
c = sides[:,2] # 7.1, 4.8
# Heron's formula
s = (a + b + c) * 0.5 # (a + b) is a short-cut to tf.add(a, b)
areasq = s * (s - a) * (s - b) * (s - c) # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
return tf.sqrt(areasq)
with tf.Session() as sess:
# pass in two triangles
area = compute_area(tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]))
result = sess.run(area)
print(result)
"""
Explanation: <h2> Heron's Formula in TensorFlow </h2>
The area of triangle whose three sides are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$ where $s=\frac{a+b+c}{2}$
Look up the available operations at https://www.tensorflow.org/api_docs/python/tf
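As a quick sanity check with plain numpy (a sketch, independent of TensorFlow), the first triangle gives:
import numpy as np
a, b, c = 5.0, 3.0, 7.1
s = (a + b + c) / 2
print(np.sqrt(s * (s - a) * (s - b) * (s - c)))  # ~6.28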
End of explanation
"""
with tf.Session() as sess:
sides = tf.placeholder(tf.float32, shape=(None, 3)) # batchsize number of triangles, 3 sides
area = compute_area(sides)
result = sess.run(area, feed_dict = {
sides: [
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]
})
print(result)
"""
Explanation: <h2> Placeholder and feed_dict </h2>
More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through which inputs will be passed in at run-time.
End of explanation
"""
import tensorflow as tf
tf.enable_eager_execution()
def compute_area(sides):
# slice the input to get the sides
a = sides[:,0] # 5.0, 2.3
b = sides[:,1] # 3.0, 4.1
c = sides[:,2] # 7.1, 4.8
# Heron's formula
s = (a + b + c) * 0.5 # (a + b) is a short-cut to tf.add(a, b)
areasq = s * (s - a) * (s - b) * (s - c) # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
return tf.sqrt(areasq)
area = compute_area(tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]))
print(area)
"""
Explanation: tf.eager
tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm because the lazy evaluation paradigm is what allows for multi-device support and distribution.
<p>
One thing you could do is to develop using tf.eager and then comment out the eager execution and add in the session management code.
<b>You may need to restart the session to try this out.</b>
End of explanation
"""
|
henriquepgomide/caRtola
|
src/python/desafio_valorizacao/.ipynb_checkpoints/Descobrindo o algoritmo de valorização do Cartola FC - Parte I-checkpoint.ipynb
|
mit
|
# Importar bibliotecas
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
pd.options.mode.chained_assignment = None # default='warn'
# Abrir banco de dados
dados = pd.read_csv('~/caRtola/data/desafio_valorizacao/valorizacao_cartola_2018.csv')
# Listar nome das variáveis
str(list(dados))
# Selecionar variáveis para análise
dados = dados[['slug', 'rodada', 'posicao',
'status', 'variacao_preco', 'pontos',
'preco', 'media_pontos']]
# Explorar dados de apenas um jogador
paqueta = dados[dados.slug == 'lucas-paqueta']
paqueta.head(n=15)
"""
Explanation: Discovering the Cartola FC valuation algorithm - Part I
Exploring Cartola's valuation algorithm.
Hello! This is the first tutorial in the series that will try to uncover the Cartola FC valuation algorithm. In this first study we will:
Evaluate the valuation system across the rounds;
Study the distribution of the price variation for each round;
Carry out a case study of a specific player, studying his valuation and building a player-specific valuation model.
In addition, you will practise data analysis in Python with Pandas, Seaborn and Sklearn. You are expected to have some notion of:
Linear models
Time series analysis
Basic knowledge of Cartola FC.
End of explanation
"""
# Criar coluna variacao_preco_lag e pontos_lag
paqueta['variacao_preco_lag'] = paqueta['variacao_preco'].shift(1)
paqueta['pontos_lag'] = paqueta['pontos'].shift(1)
paqueta['media_lag'] = paqueta['media_pontos'].shift(-1)
paqueta[['slug', 'rodada', 'status',
'pontos_lag', 'variacao_preco_lag',
'preco', 'media_pontos']].head(n=15)
"""
Explanation: A few observations about the structure of the data. In row '21136', Paquetá is listed as doubtful and scored 0 points. In the row below ('21137'), he is suspended, yet he scored.
The explanation for this inconsistency lies in how the Globo API data are organized. Although the data are correct for the Cartola front-end, they are inadequate for our analysis. Why?
Imagine you are picking your team for round 38. For this round the player's score is not yet available, only his price variation, his average and his price up to round 38. So we need to adjust the 'pontos' column using a simple technique: shifting (lagging) the column's values. We will also need to apply the same process to the 'variacao_preco' column, which is likewise tied to the previous round's data.
Thus, the 'variacao_preco' and 'pontos' columns are shifted upwards and need to be corrected.
End of explanation
"""
# Transformar dados para plotar resultados
paqueta_plot = pd.melt(paqueta,
id_vars=['slug','rodada'],
value_vars=['variacao_preco_lag', 'pontos_lag', 'preco'])
# Plotar gráfico com variacao_preco_lag, pontos_lag e preco
plt.figure(figsize=(16, 6))
g = sns.lineplot(x='rodada', y='value', hue='variable', data=paqueta_plot)
"""
Explanation: As we can see in the table above, the new attributes we created are now aligned with the athlete's status and will help us in the modelling step. Before modelling, let's explore our data a bit more.
A first observation to understand the model: when the player is suspended (row 21137) or his status is null, there was no price variation. Another point to note: when the athlete's score is positive, there is a tendency for the price to rise. Let's analyse this in the two plots below.
End of explanation
"""
plt.figure(figsize=(16, 6))
g = sns.scatterplot(x='pontos_lag', y='variacao_preco_lag', hue='status', data=paqueta)
"""
Explanation: In this plot we can see that the athlete's price was reasonably stable over time. Watching the behaviour of the blue and orange lines, we notice that when one line slopes downwards the other seems to follow. This leads us to the obvious conclusion: the athlete's score is directly linked to his price variation.
End of explanation
"""
paqueta[['pontos_lag','variacao_preco_lag','preco','media_pontos']].corr()
"""
Explanation: There apparently is a relationship between the points and the price variation. Let's look at the correlation matrix.
End of explanation
"""
# Set predictors and dependent variable
paqueta_complete = paqueta[(~paqueta.status.isin(['Nulo', 'Suspenso'])) & (paqueta.rodada > 5)]
paqueta_complete = paqueta_complete.dropna()
predictors = paqueta_complete[['pontos_lag','preco','media_lag']]
outcome = paqueta_complete['variacao_preco_lag']
regr = linear_model.LinearRegression()
regr.fit(predictors, outcome)
paqueta_complete['predictions'] = regr.predict(paqueta_complete[['pontos_lag', 'preco', 'media_lag']])
print('Intercept: \n', regr.intercept_)
print('Coefficients: \n', regr.coef_)
print("Mean squared error: %.2f"
% mean_squared_error(paqueta_complete['variacao_preco_lag'], paqueta_complete['predictions']))
print('Variance score: %.2f' % r2_score(paqueta_complete['variacao_preco_lag'], paqueta_complete['predictions']))
"""
Explanation: We got some useful information from the correlation matrix. First, the score is positively correlated with the price variation, while the athlete's price is negatively correlated with it. These two variables can already help us build a model.
End of explanation
"""
# Plotar variação do preço por valor previsto do modelo linear.
plt.figure(figsize=(8, 8))
g = sns.regplot(x='predictions',y='variacao_preco_lag', data=paqueta_complete)
# Plotar linhas com rodadas para avaliar se estamos errando alguma rodada específica
for line in range(0, paqueta_complete.shape[0]):
g.text(paqueta_complete.iloc[line]['predictions'],
paqueta_complete.iloc[line]['variacao_preco_lag']-0.25,
paqueta_complete.iloc[line]['rodada'],
horizontalalignment='right',
size='medium',
color='black',
weight='semibold')
"""
Explanation: Good news! We are predicting the player's results very well. The value is approximate, but not bad at all! The player's valuation formula for a given round is:
$$ Variacao = 16.12 + (pontos \times 0.174) - (preco \times 0.824) + (media \times 0.108) $$
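For instance, the fitted model can be applied to a hypothetical round (the numbers below are made up for illustration): 8.0 points in the previous round, a price of 10.0 and a lagged average of 5.0:
regr.predict([[8.0, 10.0, 5.0]])  # order: pontos_lag, preco, media_lag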
Let's check below to what extent our predictions match the player's actual performance.
End of explanation
"""
|
tensorflow/docs-l10n
|
site/zh-cn/guide/keras/functional.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
"""
Explanation: The Functional API
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/guide/keras/functional" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/functional.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/functional.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/functional.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">下载笔记本</a></td>
</table>
Setup
End of explanation
"""
inputs = keras.Input(shape=(784,))
"""
Explanation: Introduction
The Keras functional API is a way to create models that are more flexible than the tf.keras.Sequential API. The functional API can handle models with non-linear topology, shared layers, and even multiple inputs or outputs.
A deep learning model is usually a directed acyclic graph (DAG) of layers. So the functional API is a way to build graphs of layers.
Consider the following model:
(input: 784-dimensional vectors) ↧ [Dense (64 units, relu activation)] ↧ [Dense (64 units, relu activation)] ↧ [Dense (10 units, softmax activation)] ↧ (output: logits of a probability distribution over 10 classes)
This is a basic graph with three layers. To build this model using the functional API, start by creating an input node:
End of explanation
"""
# Just for demonstration purposes.
img_inputs = keras.Input(shape=(32, 32, 3))
"""
Explanation: The shape of the data is set as a 784-dimensional vector. Since only the per-sample shape is specified, the batch size is always omitted.
If, for example, you have an image input with a shape of (32, 32, 3), you would use:
End of explanation
"""
inputs.shape
"""
Explanation: The returned inputs contains information about the shape and dtype of the input data fed to your model. Here's the shape:
End of explanation
"""
inputs.dtype
"""
Explanation: Here's the dtype:
End of explanation
"""
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
"""
Explanation: You create a new node in the graph of layers by calling a layer on this inputs object:
End of explanation
"""
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
"""
Explanation: The "layer call" action is like drawing an arrow from the "inputs" to the layer you created. You're "passing" the inputs to the dense layer, and you get x out.
Let's add a few more layers to the graph of layers:
End of explanation
"""
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
"""
Explanation: At this point, you can create a Model by specifying its inputs and outputs in the graph of layers:
End of explanation
"""
model.summary()
"""
Explanation: Let's check out what the model summary looks like:
End of explanation
"""
keras.utils.plot_model(model, "my_first_model.png")
"""
Explanation: You can also plot the model as a graph:
End of explanation
"""
keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True)
"""
Explanation: And, optionally, display the input and output shapes of each layer in the plotted graph:
End of explanation
"""
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.RMSprop(),
metrics=["accuracy"],
)
history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
"""
Explanation: This figure and the code are almost identical. In the code version, the connection arrows are replaced by the call operations.
A "graph of layers" is an intuitive mental image for a deep learning model, and the functional API is a way to create models that closely mirror this image.
Training, evaluation, and inference
Training, evaluation, and inference work exactly the same way for models built using the functional API as for Sequential models.
As shown below, load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), then evaluate the model on the test data:
End of explanation
"""
model.save("path_to_my_model")
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model("path_to_my_model")
"""
Explanation: For further reading, see the training and evaluation guide.
Save and serialize
Saving the model and serialization work the same way for models built using the functional API as they do for Sequential models. The standard way to save a functional model is to call model.save() to save the entire model as a single file. You can later recreate the same model from this file, even if the code that built the model is no longer available.
This saved file includes the:
model architecture
model weight values (learned during training)
model training config, if any (as passed to compile)
optimizer and its state, if any (to restart training where you left off)
End of explanation
"""
encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.summary()
"""
Explanation: For details, read the model serialization and saving guide.
Use the same graph of layers to define multiple models
In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means that a single graph of layers can be used to generate multiple models.
In the example below, you use the same stack of layers to instantiate two models: an encoder model that turns image inputs into 16-dimensional vectors, and an end-to-end autoencoder model for training.
End of explanation
"""
encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
decoder_input = keras.Input(shape=(16,), name="encoded_img")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
decoder = keras.Model(decoder_input, decoder_output, name="decoder")
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
autoencoder.summary()
"""
Explanation: In the example above, the decoding architecture is strictly symmetrical to the encoding architecture, so the output shape is the same as the input shape (28, 28, 1).
The reverse of a Conv2D layer is a Conv2DTranspose layer, and the reverse of a MaxPooling2D layer is an UpSampling2D layer.
All models are callable, just like layers
You can treat any model as if it were a layer by invoking it on an Input or on the output of another layer. By calling a model you aren't just reusing its architecture, you're also reusing its weights.
To see this in action, here's a different take on the autoencoder example that creates an encoder model, a decoder model, and chains them in two calls to obtain the autoencoder model:
End of explanation
"""
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1)(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
"""
Explanation: As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is ensembling. For example, here's how to ensemble a set of models into a single model that averages their predictions:
End of explanation
"""
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
shape=(None,), name="title"
) # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints
tags_input = keras.Input(
shape=(num_tags,), name="tags"
) # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred],
)
"""
Explanation: Manipulate complex graph topologies
Models with multiple inputs and outputs
The functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API.
For example, if you're building a system for ranking custom issue tickets by priority and routing them to the correct department, the model will have three inputs:
the title of the ticket (text input),
the text body of the ticket (text input), and
any tags added by the user (categorical input)
This model will have two outputs:
the priority score between 0 and 1 (scalar sigmoid output), and
the department that should handle the ticket (softmax output over the set of departments).
You can build this model in a few lines with the functional API:
End of explanation
"""
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
"""
Explanation: Now plot the model:
End of explanation
"""
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
keras.losses.CategoricalCrossentropy(from_logits=True),
],
loss_weights=[1.0, 0.2],
)
"""
Explanation: When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss, to modulate its contribution to the total training loss.
End of explanation
"""
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"priority": keras.losses.BinaryCrossentropy(from_logits=True),
"department": keras.losses.CategoricalCrossentropy(from_logits=True),
},
loss_weights=[1.0, 0.2],
)
"""
Explanation: Since the output layers have different names, you could also specify the losses like this:
End of explanation
"""
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32")
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit(
{"title": title_data, "body": body_data, "tags": tags_data},
{"priority": priority_targets, "department": dept_targets},
epochs=2,
batch_size=32,
)
"""
Explanation: Train the model by passing lists of NumPy arrays of inputs and targets:
End of explanation
"""
inputs = keras.Input(shape=(32, 32, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_2_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation="relu")(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs, outputs, name="toy_resnet")
model.summary()
"""
Explanation: When calling fit with a Dataset object, it should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dictionaries like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}).
For a more detailed explanation, refer to the training and evaluation guide.
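As a minimal sketch (an addition, not part of the original guide), the dictionary form could be produced by wrapping the same dummy NumPy arrays defined above in a tf.data.Dataset; fit then needs no separate target argument:
import tensorflow as tf

# Tuple of (input dict, target dict); from_tensor_slices handles the nested structure
train_ds = tf.data.Dataset.from_tensor_slices(
    (
        {"title": title_data, "body": body_data, "tags": tags_data},
        {"priority": priority_targets, "department": dept_targets},
    )
).batch(32)
# model.fit(train_ds, epochs=2)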
A toy ResNet model
In addition to models with multiple inputs and outputs, the functional API also makes it easy to manipulate non-linear connectivity topologies, that is, models whose layers are not connected sequentially. This is something the Sequential API cannot handle.
A common use case for this is residual connections. Let's build a toy ResNet model for CIFAR10 to demonstrate this:
End of explanation
"""
keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True)
"""
Explanation: Plot the model:
End of explanation
"""
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["acc"],
)
# We restrict the data to the first 1000 samples so as to limit execution time
# on Colab. Try to train on the entire dataset until convergence!
model.fit(x_train[:1000], y_train[:1000], batch_size=64, epochs=1, validation_split=0.2)
"""
Explanation: Now train the model:
End of explanation
"""
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype="int32")
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype="int32")
# Reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
"""
Explanation: Shared layers
Another good use for the functional API is models that use shared layers. Shared layers are layer instances that are reused multiple times in the same model; they learn features that correspond to multiple paths in the graph of layers.
Shared layers are often used to encode inputs from similar spaces (say, two different pieces of text that feature similar vocabulary). They enable sharing of information across these different inputs, and they make it possible to train such a model on less data. If a given word is seen in one of the inputs, that will benefit the processing of all inputs that pass through the shared layer.
To share a layer in the functional API, call the same layer instance multiple times. For instance, here's an Embedding layer shared across two different text inputs:
End of explanation
"""
vgg19 = tf.keras.applications.VGG19()
"""
Explanation: Extract and reuse nodes in the graph of layers
Because the graph of layers you are manipulating is a static data structure, it can be accessed and inspected. This is how you are able to plot functional models as images.
It also means that you can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere, which is very useful for something like feature extraction.
Let's look at an example. This is a VGG19 model with weights pretrained on ImageNet:
End of explanation
"""
features_list = [layer.output for layer in vgg19.layers]
"""
Explanation: And these are the intermediate activations of the model, obtained by querying the graph data structure:
End of explanation
"""
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype("float32")
extracted_features = feat_extraction_model(img)
"""
Explanation: Use these features to create a new feature-extraction model that returns the values of the intermediate layer activations:
End of explanation
"""
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
"""
Explanation: This comes in handy for tasks like neural style transfer, among other things.
Extend the API using custom layers
tf.keras includes a wide range of built-in layers, for example:
Convolutional layers: Conv1D, Conv2D, Conv3D, Conv2DTranspose
Pooling layers: MaxPooling1D, MaxPooling2D, MaxPooling3D, AveragePooling1D
RNN layers: GRU, LSTM, ConvLSTM2D
BatchNormalization, Dropout, Embedding, etc.
But if you don't find what you need, it's easy to extend the API by creating your own layers. All layers subclass the Layer class and implement:
a call method, which specifies the computation done by the layer.
a build method, which creates the weights of the layer (this is just a style convention, since you can also create weights in __init__).
To learn more about creating layers from scratch, read the custom layers and models guide.
The following is a basic implementation of tf.keras.layers.Dense:
End of explanation
"""
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config, custom_objects={"CustomDense": CustomDense})
"""
Explanation: For serialization support in your custom layer, define a get_config method that returns the constructor arguments of the layer instance:
End of explanation
"""
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
"""
Explanation: Optionally, you can also implement the class method from_config(cls, config), which is used to recreate a layer instance given its config dictionary. The default implementation of from_config is:
python
def from_config(cls, config): return cls(**config)
When to use the functional API
When should you use the Keras functional API to create a new model, and when should you subclass the Model class directly? In general, the functional API is higher-level, easier to use, and safer, and it has a number of features that subclassed models do not support.
However, model subclassing provides greater flexibility when building models that are not easily expressible as directed acyclic graphs of layers. For example, you could not implement a Tree-RNN with the functional API; you would have to subclass Model directly.
For an in-depth look at the differences between the functional API and model subclassing, read What are Symbolic and Imperative APIs in TensorFlow 2.0?.
Functional API strengths:
The following properties also hold for Sequential models (which are also data structures), but not for subclassed models (which are Python bytecode rather than data structures).
Less verbose
There is no super(MyClass, self).__init__(...), no def call(self, ...):, and so on.
Compare:
python
inputs = keras.Input(shape=(32,)) x = layers.Dense(64, activation='relu')(inputs) outputs = layers.Dense(10)(x) mlp = keras.Model(inputs, outputs)
With the subclassed version:
python
class MLP(keras.Model): def __init__(self, **kwargs): super(MLP, self).__init__(**kwargs) self.dense_1 = layers.Dense(64, activation='relu') self.dense_2 = layers.Dense(10) def call(self, inputs): x = self.dense_1(inputs) return self.dense_2(x) # Instantiate the model. mlp = MLP() # Necessary to create the model's state. # The model doesn't have a state until it's called at least once. _ = mlp(tf.zeros((1, 32)))
Model validation while defining its connectivity graph
In the functional API, the input specification (shape and dtype) is created in advance (using Input). Every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it raises a helpful error message if not.
This guarantees that any model you can build with the functional API will run. All debugging, other than convergence-related debugging, happens statically during model construction rather than at execution time. This is similar to type checking in a compiler.
A functional model is plottable and inspectable
You can plot the model as a graph, and you can easily access intermediate nodes in this graph. For example, to extract and reuse the activations of intermediate layers (as seen in a previous example), run:
python
features_list = [layer.output for layer in vgg19.layers] feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
A functional model can be serialized or cloned
Because a functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file, allowing you to recreate the exact same model without having access to any of the original code. See the serialization and saving guide.
To serialize a subclassed model, the implementer must specify get_config() and from_config() methods at the model level.
Functional API weakness:
It does not support dynamic architectures
The functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all: for example, recursive networks or Tree RNNs do not follow this assumption and cannot be implemented in the functional API.
Mix-and-match API styles
Choosing between the functional API and model subclassing isn't a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they're Sequential models, functional models, or subclassed models written from scratch.
You can always use a functional model or a Sequential model as part of a subclassed model or layer:
End of explanation
"""
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
self.classifier = layers.Dense(1)
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that you specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when you create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
"""
Explanation: You can use any subclassed layer or model in the functional API, as long as it implements a call method that follows one of the following patterns:
call(self, inputs, **kwargs) - where inputs is a tensor or a nested structure of tensors (e.g. a list of tensors), and where **kwargs are non-tensor arguments (non-inputs).
call(self, inputs, training=None, **kwargs) - where training is a boolean indicating whether the layer should behave in training mode or in inference mode.
call(self, inputs, mask=None, **kwargs) - where mask is a boolean mask tensor (useful for RNNs, for instance).
call(self, inputs, training=None, mask=None, **kwargs) - of course, you can have both masking and training-specific behavior at the same time.
Additionally, if you implement the get_config method on your custom layer or model, the functional models you create will still be serializable and cloneable.
Here's a quick example of a custom RNN, written from scratch, being used in a functional model:
End of explanation
"""
|
AllenDowney/ThinkBayes2
|
soln/chap04.ipynb
|
mit
|
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
"""
Explanation: Estimating Proportions
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
"""
from scipy.stats import binom
n = 2
p = 0.5
k = 1
binom.pmf(k, n, p)
"""
Explanation: In the previous chapter we solved the 101 Bowls Problem, and I admitted that it is not really about guessing which bowl the cookies came from; it is about estimating proportions.
In this chapter, we take another step toward Bayesian statistics by solving the Euro problem.
We'll start with the same prior distribution, and we'll see that the update is the same, mathematically.
But I will argue that it is a different problem, philosophically, and use it to introduce two defining elements of Bayesian statistics: choosing prior distributions, and using probability to represent the unknown.
The Euro Problem
In Information Theory, Inference, and Learning Algorithms, David MacKay poses this problem:
"A statistical statement appeared in The Guardian on Friday January 4, 2002:
When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110. `It looks very suspicious to me,' said Barry Blight, a statistics lecturer at the London School of Economics. `If the coin were unbiased, the chance of getting a result as extreme as that would be less than 7%.'
"But [MacKay asks] do these data give evidence that the coin is biased rather than fair?"
To answer that question, we'll proceed in two steps.
First we'll use the binomial distribution to see where that 7% came from; then we'll use Bayes's Theorem to estimate the probability that this coin comes up heads.
The Binomial Distribution
Suppose I tell you that a coin is "fair", that is, the probability of heads is 50%. If you spin it twice, there are four outcomes: HH, HT, TH, and TT. All four outcomes have the same probability, 25%.
If we add up the total number of heads, there are three possible results: 0, 1, or 2. The probabilities of 0 and 2 are 25%, and the probability of 1 is 50%.
More generally, suppose the probability of heads is $p$ and we spin the coin $n$ times. The probability that we get a total of $k$ heads is given by the binomial distribution:
$$\binom{n}{k} p^k (1-p)^{n-k}$$
for any value of $k$ from 0 to $n$, including both.
The term $\binom{n}{k}$ is the binomial coefficient, usually pronounced "n choose k".
We could evaluate this expression ourselves, but we can also use the SciPy function binom.pmf.
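As a quick sketch (an addition, not from the book), we can also evaluate the formula directly with Python's math.comb (Python 3.8+) and confirm it agrees with binom.pmf:
from math import comb

def binomial_pmf(k, n, p):
    # Direct evaluation of the binomial PMF formula above
    return comb(n, k) * p**k * (1 - p)**(n - k)

binomial_pmf(1, 2, 0.5)  # 0.5, matching binom.pmf(1, 2, 0.5)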
For example, if we flip a coin n=2 times and the probability of heads is p=0.5, here's the probability of getting k=1 heads:
End of explanation
"""
import numpy as np
ks = np.arange(n+1)
ps = binom.pmf(ks, n, p)
ps
"""
Explanation: Instead of providing a single value for k, we can also call binom.pmf with an array of values.
End of explanation
"""
from empiricaldist import Pmf
pmf_k = Pmf(ps, ks)
pmf_k
"""
Explanation: The result is a NumPy array with the probability of 0, 1, or 2 heads.
If we put these probabilities in a Pmf, the result is the distribution of k for the given values of n and p.
Here's what it looks like:
End of explanation
"""
def make_binomial(n, p):
"""Make a binomial Pmf."""
ks = np.arange(n+1)
ps = binom.pmf(ks, n, p)
return Pmf(ps, ks)
"""
Explanation: The following function computes the binomial distribution for given values of n and p and returns a Pmf that represents the result.
End of explanation
"""
pmf_k = make_binomial(n=250, p=0.5)
from utils import decorate
pmf_k.plot(label='n=250, p=0.5')
decorate(xlabel='Number of heads (k)',
ylabel='PMF',
title='Binomial distribution')
"""
Explanation: Here's what it looks like with n=250 and p=0.5:
End of explanation
"""
pmf_k.max_prob()
"""
Explanation: The most likely quantity in this distribution is 125:
End of explanation
"""
pmf_k[125]
"""
Explanation: But even though it is the most likely quantity, the probability that we get exactly 125 heads is only about 5%.
End of explanation
"""
pmf_k[140]
"""
Explanation: In MacKay's example, we got 140 heads, which is even less likely than 125:
End of explanation
"""
def prob_ge(pmf, threshold):
"""Probability of quantities greater than threshold."""
ge = (pmf.qs >= threshold)
total = pmf[ge].sum()
return total
"""
Explanation: In the article MacKay quotes, the statistician says, "If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%."
We can use the binomial distribution to check his math. The following function takes a PMF and computes the total probability of quantities greater than or equal to threshold.
End of explanation
"""
prob_ge(pmf_k, 140)
"""
Explanation: Here's the probability of getting 140 heads or more:
End of explanation
"""
pmf_k.prob_ge(140)
"""
Explanation: Pmf provides a method that does the same computation.
End of explanation
"""
import matplotlib.pyplot as plt
def fill_below(pmf):
qs = pmf.index
ps = pmf.values
plt.fill_between(qs, ps, 0, color='C5', alpha=0.4)
qs = pmf_k.index
fill_below(pmf_k[qs>=140])
fill_below(pmf_k[qs<=110])
pmf_k.plot(label='n=250, p=0.5')
decorate(xlabel='Number of heads (k)',
ylabel='PMF',
title='Binomial distribution')
"""
Explanation: The result is about 3.3%, which is less than the quoted 7%. The reason for the difference is that the statistician includes all outcomes "as extreme as" 140, which includes outcomes less than or equal to 110.
To see where that comes from, recall that the expected number of heads is 125. If we get 140, we've exceeded that expectation by 15.
And if we get 110, we have come up short by 15.
7% is the sum of both of these "tails", as shown in the following figure.
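As a one-line check (an addition, using the Pmf methods shown in the surrounding cells), the two tails indeed add up to about the quoted figure:
pmf_k.prob_ge(140) + pmf_k.prob_le(110)  # roughly 0.066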
End of explanation
"""
pmf_k.prob_le(110)
"""
Explanation: Here's how we compute the total probability of the left tail.
End of explanation
"""
hypos = np.linspace(0, 1, 101)
prior = Pmf(1, hypos)
"""
Explanation: The probability of outcomes less than or equal to 110 is also 3.3%,
so the total probability of outcomes "as extreme" as 140 is 6.6%.
The point of this calculation is that these extreme outcomes are unlikely if the coin is fair.
That's interesting, but it doesn't answer MacKay's question. Let's see if we can.
Bayesian Estimation
Any given coin has some probability of landing heads up when spun
on edge; I'll call this probability x.
It seems reasonable to believe that x depends
on physical characteristics of the coin, like the distribution
of weight.
If a coin is perfectly balanced, we expect x to be close to 50%, but
for a lopsided coin, x might be substantially different.
We can use Bayes's theorem and the observed data to estimate x.
For simplicity, I'll start with a uniform prior, which assumes that all values of x are equally likely.
That might not be a reasonable assumption, so we'll come back and consider other priors later.
We can make a uniform prior like this:
End of explanation
"""
likelihood_heads = hypos
likelihood_tails = 1 - hypos
"""
Explanation: hypos is an array of equally spaced values between 0 and 1.
We can use the hypotheses to compute the likelihoods, like this:
End of explanation
"""
likelihood = {
'H': likelihood_heads,
'T': likelihood_tails
}
"""
Explanation: I'll put the likelihoods for heads and tails in a dictionary to make it easier to do the update.
End of explanation
"""
dataset = 'H' * 140 + 'T' * 110
"""
Explanation: To represent the data, I'll construct a string with H repeated 140 times and T repeated 110 times.
End of explanation
"""
def update_euro(pmf, dataset):
"""Update pmf with a given sequence of H and T."""
for data in dataset:
pmf *= likelihood[data]
pmf.normalize()
"""
Explanation: The following function does the update.
End of explanation
"""
posterior = prior.copy()
update_euro(posterior, dataset)
"""
Explanation: The first argument is a Pmf that represents the prior.
The second argument is a sequence of strings.
Each time through the loop, we multiply pmf by the likelihood of one outcome, H for heads or T for tails.
Notice that normalize is outside the loop, so the posterior distribution only gets normalized once, at the end.
That's more efficient than normalizing it after each spin (although we'll see later that it can also cause problems with floating-point arithmetic).
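As a brief aside (a sketch, not from the book), the floating-point problem is underflow: the product of many small unnormalized likelihoods can eventually drop below the smallest positive value a 64-bit float can represent.
# 0.5**250 is tiny but still representable; with enough factors the product underflows to 0
0.5 ** 250, 0.5 ** 2000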
Here's how we use update_euro.
End of explanation
"""
def decorate_euro(title):
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title=title)
posterior.plot(label='140 heads out of 250', color='C4')
decorate_euro(title='Posterior distribution of x')
"""
Explanation: And here's what the posterior looks like.
End of explanation
"""
posterior.max_prob()
"""
Explanation: This figure shows the posterior distribution of x, which is the proportion of heads for the coin we observed.
The posterior distribution represents our beliefs about x after seeing the data.
It indicates that values less than 0.4 and greater than 0.7 are unlikely; values between 0.5 and 0.6 are the most likely.
In fact, the most likely value for x is 0.56, which is the proportion of heads in the dataset, 140/250.
End of explanation
"""
uniform = Pmf(1, hypos, name='uniform')
uniform.normalize()
"""
Explanation: Triangle Prior
So far we've been using a uniform prior:
End of explanation
"""
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)
a = np.append(ramp_up, ramp_down)
triangle = Pmf(a, hypos, name='triangle')
triangle.normalize()
"""
Explanation: But that might not be a reasonable choice based on what we know about coins.
I can believe that if a coin is lopsided, x might deviate substantially from 0.5, but it seems unlikely that the Belgian Euro coin is so imbalanced that x is 0.1 or 0.9.
It might be more reasonable to choose a prior that gives
higher probability to values of x near 0.5 and lower probability
to extreme values.
As an example, let's try a triangle-shaped prior.
Here's the code that constructs it:
End of explanation
"""
uniform.plot()
triangle.plot()
decorate_euro(title='Uniform and triangle prior distributions')
"""
Explanation: arange returns a NumPy array, so we can use np.append to append ramp_down to the end of ramp_up.
Then we use a and hypos to make a Pmf.
The following figure shows the result, along with the uniform prior.
End of explanation
"""
update_euro(uniform, dataset)
update_euro(triangle, dataset)
"""
Explanation: Now we can update both priors with the same data:
End of explanation
"""
uniform.plot()
triangle.plot()
decorate_euro(title='Posterior distributions')
"""
Explanation: Here are the posteriors.
End of explanation
"""
from scipy.stats import binom
def update_binomial(pmf, data):
"""Update pmf using the binomial distribution."""
k, n = data
xs = pmf.qs
likelihood = binom.pmf(k, n, xs)
pmf *= likelihood
pmf.normalize()
"""
Explanation: The differences between the posterior distributions are barely visible, and so small they would hardly matter in practice.
And that's good news.
To see why, imagine two people who disagree angrily about which prior is better, uniform or triangle.
Each of them has reasons for their preference, but neither of them can persuade the other to change their mind.
But suppose they agree to use the data to update their beliefs.
When they compare their posterior distributions, they find that there is almost nothing left to argue about.
This is an example of swamping the priors: with enough
data, people who start with different priors will tend to
converge on the same posterior distribution.
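As a quick numerical check (an addition, using the ps attribute of the Pmf objects above), we can measure just how close the two posteriors are:
# Largest pointwise difference between the two posterior distributions
np.max(np.abs(uniform.ps - triangle.ps))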
The Binomial Likelihood Function
So far we've been computing the updates one spin at a time, so for the Euro problem we have to do 250 updates.
A more efficient alternative is to compute the likelihood of the entire dataset at once.
For each hypothetical value of x, we have to compute the probability of getting 140 heads out of 250 spins.
Well, we know how to do that; this is the question the binomial distribution answers.
If the probability of heads is $p$, the probability of $k$ heads in $n$ spins is:
$$\binom{n}{k} p^k (1-p)^{n-k}$$
And we can use SciPy to compute it.
The following function takes a Pmf that represents a prior distribution and a tuple of integers that represent the data:
End of explanation
"""
uniform2 = Pmf(1, hypos, name='uniform2')
data = 140, 250
update_binomial(uniform2, data)
"""
Explanation: The data are represented with a tuple of values for k and n, rather than a long string of outcomes.
Here's the update.
End of explanation
"""
uniform.plot()
uniform2.plot()
decorate_euro(title='Posterior distributions computed two ways')
"""
Explanation: And here's what the posterior looks like.
End of explanation
"""
np.allclose(uniform, uniform2)
"""
Explanation: We can use allclose to confirm that the result is the same as in the previous section except for a small floating-point round-off.
End of explanation
"""
hypos = np.linspace(0.1, 0.4, 101)
prior = Pmf(1, hypos)
"""
Explanation: But this way of doing the computation is much more efficient.
Bayesian Statistics
You might have noticed similarities between the Euro problem and the 101 Bowls Problem in <<_101Bowls>>.
The prior distributions are the same, the likelihoods are the same, and with the same data the results would be the same.
But there are two differences.
The first is the choice of the prior.
With 101 bowls, the uniform prior is implied by the statement of the problem, which says that we choose one of the bowls at random with equal probability.
In the Euro problem, the choice of the prior is subjective; that is, reasonable people could disagree, maybe because they have different information about coins or because they interpret the same information differently.
Because the priors are subjective, the posteriors are subjective, too.
And some people find that problematic.
The other difference is the nature of what we are estimating.
In the 101 Bowls problem, we choose the bowl randomly, so it is uncontroversial to compute the probability of choosing each bowl.
In the Euro problem, the proportion of heads is a physical property of a given coin.
Under some interpretations of probability, that's a problem because physical properties are not considered random.
As an example, consider the age of the universe.
Currently, our best estimate is 13.80 billion years, but it might be off by 0.02 billion years in either direction (see here).
Now suppose we would like to know the probability that the age of the universe is actually greater than 13.81 billion years.
Under some interpretations of probability, we would not be able to answer that question.
We would be required to say something like, "The age of the universe is not a random quantity, so it has no probability of exceeding a particular value."
Under the Bayesian interpretation of probability, it is meaningful and useful to treat physical quantities as if they were random and compute probabilities about them.
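For instance, as a minimal sketch (purely illustrative, and not from the book): if we chose to represent our uncertainty about the age as a Gaussian with mean 13.80 and standard deviation 0.02 billion years, we could compute that probability directly:
from scipy.stats import norm

# Probability that the age exceeds 13.81 billion years under this assumed Gaussian
norm.sf(13.81, loc=13.80, scale=0.02)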
In the Euro problem, the prior distribution represents what we believe about coins in general and the posterior distribution represents what we believe about a particular coin after seeing the data.
So we can use the posterior distribution to compute probabilities about the coin and its proportion of heads.
The subjectivity of the prior and the interpretation of the posterior are key differences between using Bayes's Theorem and doing Bayesian statistics.
Bayes's Theorem is a mathematical law of probability; no reasonable person objects to it.
But Bayesian statistics is surprisingly controversial.
Historically, many people have been bothered by its subjectivity and its use of probability for things that are not random.
If you are interested in this history, I recommend Sharon Bertsch McGrayne's book, The Theory That Would Not Die.
Summary
In this chapter I posed David MacKay's Euro problem and we started to solve it.
Given the data, we computed the posterior distribution for x, the probability a Euro coin comes up heads.
We tried two different priors, updated them with the same data, and found that the posteriors were nearly the same.
This is good news, because it suggests that if two people start with different beliefs and see the same data, their beliefs tend to converge.
This chapter introduces the binomial distribution, which we used to compute the posterior distribution more efficiently.
And I discussed the differences between applying Bayes's Theorem, as in the 101 Bowls problem, and doing Bayesian statistics, as in the Euro problem.
However, we still haven't answered MacKay's question: "Do these data give evidence that the coin is biased rather than fair?"
I'm going to leave this question hanging a little longer; we'll come back to it in <<_Testing>>.
In the next chapter, we'll solve problems related to counting, including trains, tanks, and rabbits.
But first you might want to work on these exercises.
Exercises
Exercise: In Major League Baseball, most players have a batting average between .200 and .330, which means that their probability of getting a hit is between 0.2 and 0.33.
Suppose a player appearing in their first game gets 3 hits out of 3 attempts. What is the posterior distribution for their probability of getting a hit?
For this exercise, I'll construct the prior distribution by starting with a uniform distribution and updating it with imaginary data until it has a shape that reflects my background knowledge of batting averages.
Here's the uniform prior:
End of explanation
"""
likelihood = {
'Y': hypos,
'N': 1-hypos
}
"""
Explanation: And here is a dictionary of likelihoods, with Y for getting a hit and N for not getting a hit.
End of explanation
"""
dataset = 'Y' * 25 + 'N' * 75
"""
Explanation: Here's a dataset that yields a reasonable prior distribution.
End of explanation
"""
for data in dataset:
prior *= likelihood[data]
prior.normalize()
"""
Explanation: And here's the update with the imaginary data.
End of explanation
"""
prior.plot(label='prior')
decorate(xlabel='Probability of getting a hit',
ylabel='PMF')
"""
Explanation: Finally, here's what the prior looks like.
End of explanation
"""
# Solution
posterior = prior.copy()
for data in 'YYY':
posterior *= likelihood[data]
posterior.normalize()
# Solution
prior.plot(label='prior')
posterior.plot(label='posterior')
decorate(xlabel='Probability of getting a hit',
ylabel='PMF')
# Solution
prior.max_prob()
# Solution
posterior.max_prob()
"""
Explanation: This distribution indicates that most players have a batting average near .250, with only a few players below .175 or above .350. I'm not sure how accurately this prior reflects the distribution of batting averages in Major League Baseball, but it is good enough for this exercise.
Now update this distribution with the data and plot the posterior. What is the most likely quantity in the posterior distribution?
End of explanation
"""
# Solution
# I'll use a uniform distribution again, although there might
# be background information we could use to choose a more
# specific prior.
hypos = np.linspace(0, 1, 101)
prior = Pmf(1, hypos)
# Solution
# If the actual fraction of cheaters is `x`, the number of
# YESes is (0.5 + x/2), and the number of NOs is (1-x)/2
likelihood = {
'Y': 0.5 + hypos/2,
'N': (1-hypos)/2
}
# Solution
dataset = 'Y' * 80 + 'N' * 20
posterior = prior.copy()
for data in dataset:
posterior *= likelihood[data]
posterior.normalize()
# Solution
posterior.plot(label='80 YES, 20 NO')
decorate(xlabel='Proportion of cheaters',
ylabel='PMF')
# Solution
posterior.idxmax()
"""
Explanation: Exercise: Whenever you survey people about sensitive issues, you have to deal with social desirability bias, which is the tendency of people to adjust their answers to show themselves in the most positive light.
One way to improve the accuracy of the results is randomized response.
As an example, suppose you want to know how many people cheat on their taxes.
If you ask them directly, it is likely that some of the cheaters will lie.
You can get a more accurate estimate if you ask them indirectly, like this: Ask each person to flip a coin and, without revealing the outcome,
If they get heads, they report YES.
If they get tails, they honestly answer the question "Do you cheat on your taxes?"
If someone says YES, we don't know whether they actually cheat on their taxes; they might have flipped heads.
Knowing this, people might be more willing to answer honestly.
Suppose you survey 100 people this way and get 80 YESes and 20 NOs. Based on this data, what is the posterior distribution for the fraction of people who cheat on their taxes? What is the most likely quantity in the posterior distribution?
End of explanation
"""
# Solution
def update_unreliable(pmf, dataset, y):
likelihood = {
'H': (1-y) * hypos + y * (1-hypos),
'T': y * hypos + (1-y) * (1-hypos)
}
for data in dataset:
pmf *= likelihood[data]
pmf.normalize()
# Solution
hypos = np.linspace(0, 1, 101)
prior = Pmf(1, hypos)
dataset = 'H' * 140 + 'T' * 110
posterior00 = prior.copy()
update_unreliable(posterior00, dataset, 0.0)
posterior02 = prior.copy()
update_unreliable(posterior02, dataset, 0.2)
posterior04 = prior.copy()
update_unreliable(posterior04, dataset, 0.4)
# Solution
posterior00.plot(label='y = 0.0')
posterior02.plot(label='y = 0.2')
posterior04.plot(label='y = 0.4')
decorate(xlabel='Proportion of heads',
ylabel='PMF')
# Solution
posterior00.idxmax(), posterior02.idxmax(), posterior04.idxmax()
"""
Explanation: Exercise: Suppose you want to test whether a coin is fair, but you don't want to spin it hundreds of times.
So you make a machine that spins the coin automatically and uses computer vision to determine the outcome.
However, you discover that the machine is not always accurate. Specifically, suppose the probability is y=0.2 that an actual heads is reported as tails, or actual tails reported as heads.
If we spin a coin 250 times and the machine reports 140 heads, what is the posterior distribution of x?
What happens as you vary the value of y?
End of explanation
"""
# Solution
hypos = np.linspace(0.1, 0.4, 101)
prior = Pmf(1, hypos)
# Solution
# Here's a specific version for n=2 shots per test
x = hypos
likes = [(1-x)**4, (2*x*(1-x))**2, x**4]
likelihood = np.sum(likes, axis=0)
# Solution
# Here's a more general version for any n shots per test
from scipy.stats import binom
n = 2
likes2 = [binom.pmf(k, n, x)**2 for k in range(n+1)]
likelihood2 = np.sum(likes2, axis=0)
# Solution
# Here are the likelihoods, computed both ways
import matplotlib.pyplot as plt
plt.plot(x, likelihood, label='special case')
plt.plot(x, likelihood2, label='general formula')
decorate(xlabel='Probability of hitting the target',
ylabel='Likelihood',
title='Likelihood of getting the same result')
# Solution
posterior = prior * likelihood
posterior.normalize()
# Solution
posterior.plot(label='Two tests, two shots, same outcome',
color='C4')
decorate(xlabel='Probability of hitting the target',
ylabel='PMF',
title='Posterior distribution',
ylim=[0, 0.015])
# Solution
# Getting the same result in both tests is more likely for
# extreme values of `x` and least likely when `x=0.5`.
# In this example, the prior indicates that `x` is less than 0.5,
# and the update gives more weight to extreme values.
# So the dataset makes lower values of `x` more likely.
"""
Explanation: Exercise: In preparation for an alien invasion, the Earth Defense League (EDL) has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, x.
Based on previous tests, the distribution of x in the population of designs is approximately uniform between 0.1 and 0.4.
Now suppose the new ultra-secret Alien Blaster 9000 is being tested. In a press conference, an EDL general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."
Is this data good or bad?
That is, does it increase or decrease your estimate of x for the Alien Blaster 9000?
Hint: If the probability of hitting each target is $x$, the probability of hitting one target in both tests
is $\left[2x(1-x)\right]^2$.
End of explanation
"""
|
sysid/nbs
|
lstm/nietzsche.ipynb
|
mit
|
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read().lower()
print('corpus length:', len(text))
path
!tail {path} -n25
#path = 'data/wiki/'
#text = open(path+'small.txt').read().lower()
#print('corpus length:', len(text))
#text = text[0:1000000]
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
chars.insert(0, "\0")
''.join(chars[1:-6])
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
idx = [char_indices[c] for c in text]
idx[:20]
''.join(indices_char[i] for i in idx[:70])
cs = 3
[idx[i] for i in range(0, len(idx)-1-cs, cs)]
"""
Explanation: Setup
We haven't really looked into the detail of how this works yet - so this is provided for self-study for those who are interested. We'll look at it closely next week.
End of explanation
"""
cs = 40
c1 = [idx[i:i+cs] for i in range(0, len(idx)-1, cs)]
c2 = [idx[i:i+cs] for i in range(1, len(idx), cs)]
"".join([indices_char[i] for i in c1[0]])
"".join([indices_char[i] for i in c2[0]])
"".join([indices_char[i] for i in c1[0:-2][0]])
x = np.stack(c1[:-2])
y = np.stack(c2[:-2])
y = np.expand_dims(y, -1)
x.shape, y.shape
n_fac = 42
pmodel=Sequential([
#Embedding(vocab_size, n_fac, input_length=maxlen),
Embedding(vocab_size, n_fac, input_length=1, batch_input_shape=(1,1)),
BatchNormalization(),
LSTM(512, return_sequences=True, stateful=True, dropout_U=0.2, dropout_W=0.2, consume_less='gpu'),
LSTM(512, return_sequences=True, stateful=True, dropout_U=0.2, dropout_W=0.2, consume_less='gpu'),
TimeDistributed(Dense(512, activation='relu')),
Dropout(0.1),
TimeDistributed(Dense(vocab_size, activation='softmax'))
])
pmodel.summary()
batch_size = 64
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(batch_size, 40)),
BatchNormalization(),
LSTM(512, return_sequences=True, stateful=True, dropout_U=0.2, dropout_W=0.2, consume_less='gpu'),
LSTM(512, return_sequences=True, stateful=True, dropout_U=0.2, dropout_W=0.2, consume_less='gpu'),
TimeDistributed(Dense(512, activation='relu')),
Dropout(0.1),
TimeDistributed(Dense(vocab_size, activation='softmax'))
])
model.summary()
for l in model.layers:
print(l.name)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
"""
Explanation: Preprocess and create model
End of explanation
"""
# In a stateful network, you should only pass inputs with a number of samples that can be divided by the batch size.
mx = len(x)//64*64
mx
import time
import tensorflow as tf
def run_epochs(n):
keras.backend.get_session().run(tf.global_variables_initializer()) ## bug in keras/TF, new version
#keras.backend.get_session().run(tf.initialize_all_variables()) ## bug, old version
for i in range(n):
start = time.time()
print("-- Epoch: {}".format(i))
model.reset_states()
h = model.fit(x[:mx], y[:mx], batch_size=batch_size, nb_epoch=1, shuffle=False, verbose=0)
print("-- duration: {}, loss: {}".format(time.time()-start, h.history['loss']))
run_epochs(1)
model.save_weights('data/nietzsche_ep1_TF.h5')
model.load_weights('data/nietzsche_ep1_TF.h5')
def print_example2(ln=160):
for l1, l2 in zip(model.layers, pmodel.layers):
if l1.name != "batchnormalization_1":
#Tracer()() #this one triggers debugger
#print("layer: {}, len: {}".format(l1.name, len(l1.get_weights())))
l2.set_weights(l1.get_weights())
pmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
offset = 10
seed_string=text[offset:ln+offset//4]
pmodel.reset_states()
# build context??
for s in seed_string:
x = np.array([char_indices[s]])[np.newaxis,:]
preds = pmodel.predict(x, verbose=0)[0][0]
#print("pred.shape:{}, pred:{}:{}".format(preds.shape, np.argmax(preds), choice(chars, p=preds)))
s = choice(chars, p=preds)
res=seed_string+s+'...\n\n'
for i in range(ln):
x = np.array([char_indices[s]])[np.newaxis,:]
preds = pmodel.predict(x, verbose=0)[0][0]
pres = choice(chars, p=preds)
res = res+pres
print(res)
print_example2()
run_epochs(10)
print_example2()
# not working for the stateful model: input dim (1, 40) not matching (64, 40)
def print_example():
seed_string="ethics is a basic foundation of all that"
for i in range(320):
x=np.array([char_indices[c] for c in seed_string[-40:]])[np.newaxis,:]
preds = model.predict(x, verbose=0)[0][-1]
preds = preds/np.sum(preds)
next_char = choice(chars, p=preds)
seed_string = seed_string + next_char
print(seed_string)
#print_example()
%%capture output
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)
output.show()
%%capture output
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)
output.show()
print_example()
model.optimizer.lr=0.001
%%capture output
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=2)
output.show()
print_example()
model.optimizer.lr=0.0001
%%capture output
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=3)
output.show()
print_example()
model.save_weights('data/char_rnn.h5')
model.optimizer.lr=0.00001
%%capture output
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)
model.save_weights('data/char_rnn.h5')
output.show()
print_example()
%%capture output
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)
output.show()
print_example()
print_example()
model.save_weights('data/char_rnn.h5')
"""
Explanation: Train
End of explanation
"""
|
pastas/pasta
|
examples/notebooks/04_adding_rivers.ipynb
|
mit
|
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions()
ps.set_log_level("INFO")
"""
Explanation: Adding river levels
Developed by R.A. Collenteur & D. Brakenhoff
In this example it is shown how to create a Pastas model that not only includes precipitation and evaporation, but also observed river levels. We will consider observed heads that are strongly influenced by river level, based on a visual interpretation of the raw data.
End of explanation
"""
oseries = pd.read_csv("../data/nb5_head.csv", parse_dates=True,
squeeze=True, index_col=0)
rain = pd.read_csv("../data/nb5_prec.csv", parse_dates=True, squeeze=True,
index_col=0)
evap = pd.read_csv("../data/nb5_evap.csv", parse_dates=True, squeeze=True,
index_col=0)
waterlevel = pd.read_csv("../data/nb5_riv.csv", parse_dates=True,
squeeze=True, index_col=0)
ps.plots.series(oseries, [rain, evap, waterlevel], figsize=(10, 5), hist=False);
"""
Explanation: 1. Import and plot data
Before a model is created, it is generally a good idea to try and visually interpret the raw data and think about possible relationships between the time series and hydrological variables. Below, the different time series are plotted.
The top plot shows the observed heads, with different observation frequencies and some gaps in the data. Below that, the observed river levels, precipitation and evaporation are shown. The river levels especially show a clear relationship with the observed heads. Note, however, that the range in the river levels is about twice the range in the heads. Based on these observations, we would expect the final step response of the head to the river level to be around 0.5 [m/m].
End of explanation
"""
ml = ps.Model(oseries.resample("D").mean().dropna(), name="River")
sm = ps.RechargeModel(rain, evap, rfunc=ps.Exponential, name="recharge")
ml.add_stressmodel(sm)
ml.solve(tmin="2000", tmax="2019-10-29")
ml.plots.results(figsize=(12, 8));
"""
Explanation: 2. Create a timeseries model
First we create a model with precipitation and evaporation as explanatory time series. The results show that precipitation and evaporation can explain part of the fluctuations in the observed heads, but not all of them.
End of explanation
"""
w = ps.StressModel(waterlevel, rfunc=ps.One, name="waterlevel",
settings="waterlevel")
ml.add_stressmodel(w)
ml.solve(tmin="2000", tmax="2019-10-29")
axes = ml.plots.results(figsize=(12, 8));
axes[-1].set_xlim(0,10); # By default, the axes between responses are shared.
"""
Explanation: 3. Adding river water levels
Based on the analysis of the raw data, we expect that the river levels can help to explain the fluctuations in the observed heads. Here, we add a stress model (ps.StressModel) to include the river levels as an explanatory time series in the model. The model fit is greatly improved, showing that the river levels help in explaining the fluctuations in the observed heads. It can also be observed that the response of the head to the river levels is a lot faster than the response to precipitation and evaporation.
End of explanation
"""
|
gidden/aneris
|
doc/source/tutorial.ipynb
|
apache-2.0
|
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import aneris
from aneris.tutorial import load_data
%matplotlib inline
"""
Explanation: Getting Started
This is a simple example of the basic capabilities of aneris.
First, model and history data are read in. The model is then harmonized. Finally, output is analyzed.
End of explanation
"""
model, hist, driver = load_data()
for scenario in driver.scenarios():
driver.harmonize(scenario)
harmonized, metadata, diagnostics = driver.harmonized_results()
"""
Explanation: The driver is used to execute the harmonization. It handles the data formatting needed to execute the harmonization operation and stores the harmonized results until they are needed.
Some logging output is provided. It can be suppressed with
aneris.logger().setLevel('WARN')
End of explanation
"""
data = pd.concat([hist, model, harmonized])
df = data[data.Region.isin(['World'])]
df = pd.melt(df, id_vars=aneris.iamc_idx, value_vars=aneris.numcols(df),
var_name='Year', value_name='Emissions')
df['Label'] = df['Model'] + ' ' + df['Variable']
df.head()
sns.lineplot(x=df.Year.astype(int), y=df.Emissions, hue=df.Label)
plt.legend(bbox_to_anchor=(1.05, 1))
"""
Explanation: All data of interest is combined in order to easily view it. We will specifically investigate output for the World in this example. A few operations are performed in order to get the data into a plotting-friendly format.
End of explanation
"""
|
IBMDecisionOptimization/docplex-examples
|
examples/cp/jupyter/sched_square.ipynb
|
apache-2.0
|
import sys
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
"""
Explanation: Sched Square
This tutorial includes everything you need to set up decision optimization engines, build constraint programming models.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Download the library
Step 2: Model the Data
Step 3: Set up the prescriptive model
Define the decision variables
Express the business constraints
Express the search phase
Solve with Decision Optimization solve service
Step 4: Investigate the solution and run an example analysis
Summary
Describe the business problem
The aim of the square example is to place a set of small squares of different sizes into a large square.
How decision optimization can help
Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
For example:
Automate complex decisions and trade-offs to better manage limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Download the library
Run the following code to install Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
End of explanation
"""
from docplex.cp.model import *
"""
Explanation: Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization.
Step 2: Model the data
End of explanation
"""
SIZE_SQUARE = 112
"""
Explanation: Size of the enclosing square
End of explanation
"""
SIZE_SUBSQUARE = [50, 42, 37, 35, 33, 29, 27, 25, 24, 19, 18, 17, 16, 15, 11, 9, 8, 7, 6, 4, 2]
"""
Explanation: Sizes of the sub-squares
End of explanation
"""
mdl = CpoModel(name="SchedSquare")
"""
Explanation: Step 3: Set up the prescriptive model
End of explanation
"""
x = []
y = []
rx = pulse((0, 0), 0)
ry = pulse((0, 0), 0)
for i in range(len(SIZE_SUBSQUARE)):
sq = SIZE_SUBSQUARE[i]
vx = interval_var(size=sq, name="X" + str(i))
vx.set_end((0, SIZE_SQUARE))
x.append(vx)
rx += pulse(vx, sq)
vy = interval_var(size=sq, name="Y" + str(i))
vy.set_end((0, SIZE_SQUARE))
y.append(vy)
ry += pulse(vy, sq)
"""
Explanation: Define the decision variables
Create array of variables for sub-squares
End of explanation
"""
for i in range(len(SIZE_SUBSQUARE)):
for j in range(i):
mdl.add((end_of(x[i]) <= start_of(x[j]))
| (end_of(x[j]) <= start_of(x[i]))
| (end_of(y[i]) <= start_of(y[j]))
| (end_of(y[j]) <= start_of(y[i])))
"""
Explanation: Express the business constraints
Create the pairwise non-overlap constraints: for each pair of sub-squares, one must end before the other starts along the x-axis or along the y-axis.
End of explanation
"""
mdl.add(always_in(rx, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
mdl.add(always_in(ry, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
"""
Explanation: Set the cumulative constraints: at every point along each axis, the stacked sub-squares must exactly fill the side length of the enclosing square.
End of explanation
"""
mdl.set_search_phases([search_phase(x), search_phase(y)])
"""
Explanation: Express the search phase
End of explanation
"""
msol = mdl.solve(TimeLimit=20)
"""
Explanation: Solve with Decision Optimization solve service
End of explanation
"""
print("Solution: ")
msol.print_solution()
"""
Explanation: Step 4: Investigate the solution and then run an example analysis
Print Solution
End of explanation
"""
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
"""
Explanation: Import graphical tools
End of explanation
"""
POP_UP_GRAPHIC=False
if msol and visu.is_visu_enabled():
import matplotlib.cm as cm
from matplotlib.patches import Polygon
if not POP_UP_GRAPHIC:
%matplotlib inline
# Plot external square
print("Plotting squares....")
fig, ax = plt.subplots()
plt.plot((0, 0), (0, SIZE_SQUARE), (SIZE_SQUARE, SIZE_SQUARE), (SIZE_SQUARE, 0))
for i in range(len(SIZE_SUBSQUARE)):
# Display square i
(sx, sy) = (msol.get_var_solution(x[i]), msol.get_var_solution(y[i]))
(sx1, sx2, sy1, sy2) = (sx.get_start(), sx.get_end(), sy.get_start(), sy.get_end())
poly = Polygon([(sx1, sy1), (sx1, sy2), (sx2, sy2), (sx2, sy1)], fc=cm.Set2(float(i) / len(SIZE_SUBSQUARE)))
ax.add_patch(poly)
# Display identifier of square i at its center
ax.text(float(sx1 + sx2) / 2, float(sy1 + sy2) / 2, str(SIZE_SUBSQUARE[i]), ha='center', va='center')
plt.margins(0)
plt.show()
"""
Explanation: You can set POP_UP_GRAPHIC=True if you prefer a pop-up graphics window instead of an inline one.
End of explanation
"""
|
pligor/predicting-future-product-prices
|
04_time_series_prediction/.ipynb_checkpoints/13_price_history_seq2seq-raw-checkpoint.ipynb
|
agpl-3.0
|
from __future__ import division
import tensorflow as tf
from os import path
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from mylibs.tf_helper import getDefaultGPUconfig
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from common import get_or_run_nn
from data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider
from models.price_history_seq2seq import PriceHistorySeq2Seq
from cost_functions.huber_loss import huber_loss
from os.path import isdir
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
%matplotlib inline
data_folder = '../../../../Dropbox/data'
assert isdir(data_folder)
"""
Explanation: https://www.youtube.com/watch?v=ElmBrKyMXxs
https://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb
https://github.com/ematvey/tensorflow-seq2seq-tutorials
End of explanation
"""
num_epochs = 10
num_features = 1
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 47
#trunc_backprop_len = ??
"""
Explanation: Step 0 - hyperparams
vocab_size, i.e. all the potential words you could have (the classification targets in the translation case), and the max sequence length are treated as the SAME thing here.
In translation, the decoder RNN hidden units are usually the same size as the encoder RNN hidden units; for our case there does not really seem to be such a relationship, but we can experiment and find out later, as it is not a priority right now.
End of explanation
"""
npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'
dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_path, batch_size=batch_size)
dp.inputs.shape, dp.targets.shape
aa, bb = dp.next()
aa.shape, bb.shape
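# Expected shapes (an assumption based on the hyperparameters above; the notebook does not
# print them): inputs ~ (batch_size, input_len, num_features) = (47, 60, 1) and
# targets ~ (batch_size, target_len) = (47, 30).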
"""
Explanation: Step 1 - collect data (and/or generate them)
End of explanation
"""
model = PriceHistorySeq2Seq(rng=random_state, dtype=dtype, config=config)
# graph = model.getGraph(batch_size=batch_size,
# num_units=num_units,
# input_len=input_len,
# target_len=target_len)
"""
Explanation: Step 2 - Build model
```
PriceHistoryDummySeq2Seq(rng=random_state, dtype=dtype, config=config).getGraph(batch_size=batch_size,
num_units=num_units,
input_len=input_len,
target_len=target_len)
learning rate: 0.001000
60
Tensor("inputs/unstack:0", shape=(47, 1), dtype=float32)
Tensor("encoder_rnn_layer/rnn/basic_rnn_cell_59/Tanh:0", shape=(47, 400), dtype=float32)
decoder inputs series
<type 'list'>
30
Tensor("decoder_rnn_layer/unstack:0", shape=(47, 1), dtype=float32)
30
Tensor("decoder_rnn_layer/rnn/basic_rnn_cell/Tanh:0", shape=(47, 400), dtype=float32)
30
Tensor("readout_layer/add:0", shape=(47, 1), dtype=float32)
Tensor("predictions/Reshape:0", shape=(47, 30), dtype=float32)
Tensor("error/Select:0", shape=(47, 30), dtype=float32)
Tensor("error/Mean:0", shape=(), dtype=float32)
```
End of explanation
"""
#show_graph(graph)
"""
Explanation: learning rate: 0.001000
60
Tensor("inputs/unstack:0", shape=(47, 1), dtype=float32)
Tensor("encoder_rnn_layer/rnn/basic_rnn_cell_59/Tanh:0", shape=(47, 400), dtype=float32)
time: Tensor("decoder_rnn_layer/rnn/while/add:0", shape=(), dtype=int32)
<tensorflow.python.ops.tensor_array_ops.TensorArray object at 0x7fc5424ef090>
Tensor("decoder_rnn_layer/decoder_out_tensor/decoder_out_tensor:0", shape=(?, 47, 400), dtype=float32)
Tensor("readout_layer/readouts:0", shape=(?, 1), dtype=float32)
Tensor("predictions/Reshape:0", shape=(31, 47), dtype=float32)
Tensor("predictions/transpose:0", shape=(47, 31), dtype=float32)
Tensor("error/Select:0", shape=(47, 31), dtype=float32)
Tensor("error/Mean:0", shape=(), dtype=float32)
End of explanation
"""
rnn_cell = PriceHistorySeq2Seq.RNN_CELLS.GRU
num_epochs = 200
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='006_gru_seq2seq_EOS1000_200epochs_60to30',
nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
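# Note (added for clarity): coint() is the Engle-Granger cointegration test from statsmodels;
# it returns (t-statistic, p-value, critical values). A small p-value suggests the predicted
# and real series share a common stochastic trend.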
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
average_huber_loss = np.mean([np.mean(huber_loss(dp.targets[ind], preds_dict[ind]))
for ind in range(len(dp.targets))])
average_huber_loss
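# Hedged sketch (not in the original notebook): one plausible naive baseline for the ~4 Huber
# loss mentioned below is to repeat each series' last observed input value over the whole
# 30-step target window. The exact definition of the baseline is an assumption here.
naive_preds = {ind: np.repeat(dp.inputs[ind].flatten()[-1], target_len)
               for ind in range(len(dp.targets))}
naive_baseline_huber = np.mean([np.mean(huber_loss(dp.targets[ind], naive_preds[ind]))
                                for ind in range(len(dp.targets))])
naive_baseline_huber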
"""
Explanation: Conclusion
This graph is unlikely to make much sense, but let's give it a try and see how bad it really is.
Step 3 training the network
RECALL: the baseline Huber loss for the current problem is around 4; anything above 4 should be considered a major error.
GRU cell - with EOS = 1000 - 200 epochs
End of explanation
"""
rnn_cell = PriceHistorySeq2Seq.RNN_CELLS.BASIC_RNN
num_epochs = 10
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='006_basic_rnn_seq2seq_EOS1000_60to30')
"""
Explanation: Conclusion
???
Basic RNN cell (EOS 1000)
End of explanation
"""
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Conclusion
It takes roughly 40% extra time, so it is slower.
It is not as bad as I had originally imagined, but it is still pretty bad: the loss is above 4, which means it is above the baseline.
End of explanation
"""
rnn_cell = PriceHistorySeq2Seq.RNN_CELLS.GRU
num_epochs = 10
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='006_gru_seq2seq_EOS1000_60to30')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Conclusion
The observation with this architecture is that it is quite stable: the predictions do not show much dynamic behavior over the target month.
GRU cell - with EOS = 1000
End of explanation
"""
rnn_cell = PriceHistorySeq2Seq.RNN_CELLS.BASIC_RNN
num_epochs = 10
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
eos_token = 0
)
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='006_basic_rnn_seq2seq_zeroEOS_60to30')
"""
Explanation: Conclusion
???
Basic RNN cell (EOS 0)
End of explanation
"""
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Conclusion
???
End of explanation
"""
rnn_cell = PriceHistorySeq2Seq.RNN_CELLS.GRU
num_epochs = 10
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
eos_token = 0,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='006_gru_seq2seq_zeroEOS_60to30')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Conclusion
The output is oscillating, which is an indication that setting the EOS token to zero (a value inside the range of the real values) is not helping the model.
Another idea would be to feed the decoder RNN, as its first input, the latest value of the encoded input stream.
GRU cell - with EOS = 0
End of explanation
"""
rnn_cell = PriceHistorySeq2Seq.RNN_CELLS.GRU
num_epochs = 50
num_epochs, num_units, batch_size
def experiment():
return model.run(
npz_path=npz_path,
epochs=num_epochs,
batch_size=batch_size,
num_units=num_units,
input_len = input_len,
target_len = target_len,
rnn_cell=rnn_cell,
)
#dyn_stats = experiment()
dyn_stats, preds_dict = get_or_run_nn(experiment, filename='006_gru_seq2seq_EOS1000_50epochs_60to30')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: GRU cell - with EOS = 1000 - 50 epochs
End of explanation
"""
|
wuafeing/Python3-Tutorial
|
01 data structures and algorithms/01.13 sort list of dicts by key.ipynb
|
gpl-3.0
|
rows = [
{"fname": "Brian", "lname": "Jones", "uid": 1003},
{"fname": "David", "lname": "Beazley", "uid": 1002},
{"fname": "John", "lname": "Cleese", "uid": 1001},
{"fname": "Big", "lname": "Jones", "uid": 1004}
]
"""
Explanation: 1.13 Sorting a List of Dictionaries by a Common Key
Problem
You have a list of dictionaries and you would like to sort the entries according to one or more of the dictionary fields.
Solution
Sorting this kind of structure is easy using the operator module's itemgetter function. Suppose you have queried a database to retrieve a list of your website's members, and you get back the following data structure:
End of explanation
"""
from operator import itemgetter
rows_by_fname = sorted(rows, key = itemgetter("fname"))
print(rows_by_fname)
rows_by_uid = sorted(rows, key = itemgetter("uid"))
print(rows_by_uid)
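# Expected output (dictionary key order may vary):
# [{'fname': 'Big', 'lname': 'Jones', 'uid': 1004}, {'fname': 'Brian', 'lname': 'Jones', 'uid': 1003},
#  {'fname': 'David', 'lname': 'Beazley', 'uid': 1002}, {'fname': 'John', 'lname': 'Cleese', 'uid': 1001}]
# [{'fname': 'John', 'lname': 'Cleese', 'uid': 1001}, {'fname': 'David', 'lname': 'Beazley', 'uid': 1002},
#  {'fname': 'Brian', 'lname': 'Jones', 'uid': 1003}, {'fname': 'Big', 'lname': 'Jones', 'uid': 1004}]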
"""
Explanation: Ordering the returned rows by any field that all the dictionaries share is easy to do. For example:
End of explanation
"""
rows_by_lfname = sorted(rows, key = itemgetter("lname", "fname"))
print(rows_by_lfname)
"""
Explanation: The output of the code is shown above.
The itemgetter() function also supports multiple keys, as in the following code:
End of explanation
"""
rows_by_fname = sorted(rows, key = lambda r: r["fname"])
rows_by_lfname = sorted(rows, key = lambda r: (r["lname"], r["fname"]))
"""
Explanation: Discussion
In the example above, rows is passed to the built-in sorted() function, which accepts the keyword argument key. This argument must be a callable that takes a single element from rows and returns a value to be used as the basis for sorting; the itemgetter() function is what creates such a callable.
The operator.itemgetter() function takes as arguments the lookup indices used to extract values from the records in rows. They can be dictionary key names, integer list positions, or any value that can be passed to an object's __getitem__() method. If you pass multiple indices to itemgetter(), the callable it creates returns a tuple containing all of those values, and sorted() orders the results according to the elements of that tuple. This is useful when you want to sort on several fields at once (for example by last name and then first name, as in the example).
The functionality of itemgetter() can sometimes be replaced by a lambda expression, for example:
End of explanation
"""
min(rows, key = itemgetter("uid"))
max(rows, key = itemgetter("uid"))
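# Two small illustrations of the points made in the discussion (added, not in the original):
# 1) With several keys, itemgetter() builds a callable that returns a tuple, which is what
#    sorted() actually compares.
itemgetter("lname", "fname")(rows[0])  # -> ('Jones', 'Brian')
# 2) A rough timing sketch of itemgetter() versus an equivalent lambda; exact numbers depend
#    on the Python version and data size, but itemgetter() is usually slightly faster.
import timeit
print(timeit.timeit(lambda: sorted(rows, key=itemgetter("lname", "fname")), number=10000))
print(timeit.timeit(lambda: sorted(rows, key=lambda r: (r["lname"], r["fname"])), number=10000))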
"""
Explanation: This solution also works fine. However, the itemgetter() version typically runs a bit faster, so prefer it if performance matters.
Finally, don't forget that the technique shown in this recipe also applies to functions such as min() and max(). For example:
End of explanation
"""
|
zoofIO/flexx-notebooks
|
flexx_tutorial_app.ipynb
|
bsd-3-clause
|
from flexx import flx
"""
Explanation: Tutorial for flexx.app - connecting to the browser
End of explanation
"""
%gui asyncio
flx.init_notebook()
class MyComponent(flx.JsComponent):
foo = flx.StringProp('', settable=True)
@flx.reaction('foo')
def on_foo(self, *events):
if self.foo:
            window.alert('foo is ' + self.foo + ' (' + len(events) + ' events)')
m = MyComponent()
m.set_foo('helo')
"""
Explanation: In normal operation, one uses flx.launch() to fire up a browser (or desktop app) to run the JavaScript in. This is followed by flx.run() (or flx.start() for servers) to enter Flexx' main loop.
In the notebook, however, there already is a browser. To tell Flexx that we're in the notebook, use flx.init_notebook() at the start of your notebook. Since Flexx's event system is based on asyncio, we need to "activate" asyncio as well.
End of explanation
"""
from flexxamples.testers.find_prime import PrimeFinder
p = PrimeFinder()
p.find_prime_py(2000)
p.find_prime_js(2000) # Result is written to JS console, open F12 to see it
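# Hedged sketch of the normal (non-notebook) workflow described above, left commented out so
# it is not executed here; flx.launch() and flx.run() are the calls named in the text.
# if __name__ == '__main__':
#     p = flx.launch(PrimeFinder, 'browser')  # fire up a browser (or desktop) window
#     flx.run()                               # enter Flexx' main loop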
"""
Explanation: Let's use an example model:
End of explanation
"""