repo_name | path | license | content
---|---|---|---|
MTG/essentia
|
src/examples/python/tutorial_io_audio.ipynb
|
agpl-3.0
|
import essentia.standard as es
filename = 'audio/dubstep.flac'
# Load the whole file in mono
audio = es.MonoLoader(filename=filename)()
print(audio.shape)
# Load the whole file in stereo
audio, _, _, _, _, _ = es.AudioLoader(filename=filename)()
print(audio.shape)
# Load and resample to 16000 Hz
audio = es.MonoLoader(filename=filename, sampleRate=16000)()
print(audio.shape)
# Load only a 10-second segment in mono, starting from the 2nd minute
audio = es.EasyLoader(filename='audio/Vivaldi_Sonata_5_II_Allegro.flac',
sampleRate=44100, startTime=60, endTime=70)()
print(audio.shape)
"""
Explanation: Audio loaders and metadata
Loading audio
Essentia relies on the FFmpeg library for audio input/output, and therefore there are many possibilities when it comes to loading audio. See AudioLoader, MonoLoader, and EasyLoader algorithms for more details.
Below are some examples of their usage loading audio files:
End of explanation
"""
# Replace with your own file
es.MetadataReader(filename='audio/Mr. Bungle - Stubb (a Dub).mp3')()
"""
Explanation: Reading file metadata
Essentia also supports loading the metadata embedded in audio files (such as ID3) using MetadataReader.
End of explanation
"""
metadata_pool = es.MetadataReader(filename='audio/Mr. Bungle - Stubb (a Dub).mp3')()[7]
for d in metadata_pool.descriptorNames():
print(d, metadata_pool[d])
"""
Explanation: The output contains standard metadata fields (track name, artist name, album name, track number, etc.) as well as bitrate and sample rate. It also includes an Essentia pool object containing all other fields found:
End of explanation
"""
|
DigNeurosurgeon/seeg
|
notebooks/3 seeg_predict_implantation_accuracy-turicreate.ipynb
|
gpl-3.0
|
# import libraries
import turicreate as tc
import h5py
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
plt.style.use('ggplot')
%matplotlib inline
import warnings; warnings.simplefilter('ignore')
#%xmode plain; # shorter error messages
pd.options.mode.chained_assignment = None
# global setting whether to save figures or not
# will save as 300 dpi PNG - all filenames start with "fig_"
save_figures = False
# load data
electrodes = pd.read_csv('../data/electrodes_public.csv')
# find missing values
nan_rows = sum([True for idx,row in electrodes.iterrows() if any(row.isnull())])
print('Nr of rows with missing values:', nan_rows)
def missing_values_table(df):
mis_val = df.isnull().sum()
mis_val_percent = 100 * df.isnull().sum()/len(df)
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
mis_val_table = mis_val_table.rename(columns = {0 : 'Missing Values', 1 : '% of Total Values'})
return mis_val_table
missing_values_table(electrodes)
#not used features
electrodes.drop(['EntryX', 'EntryY', 'EntryZ','ScrewLength','PlanningRing','PlanningArc'], axis = 1, inplace = True)
# calculate TPLE and remove entry data from dataframe
electrodes['TPLE'] = np.sqrt(np.square(electrodes['TipX'] - electrodes['PlanningX']) +
np.square(electrodes['TipY'] - electrodes['PlanningY']) +
np.square(electrodes['TipZ'] - electrodes['PlanningZ'])
).round(1)
#plt.figure(figsize=[16, 8])
#sns.distplot(electrodes['TPLE'], hist=False, rug=True)
#sns.distplot(electrodes['TPLE'])
plt.figure(figsize=[16, 8])
sns.kdeplot(electrodes[electrodes['PatientPosition']=='Supine']['TPLE'], label='Supine')
sns.kdeplot(electrodes[electrodes['PatientPosition']=='Prone']['TPLE'], label='Prone', color='y')
"""
Explanation: Comparing machine learning approaches to predict SEEG accuracy
Stereoencephalography (SEEG) is a technique used in drug-resistant epilepsy patients who may be candidates for surgical resection of the epileptogenic zone. Multiple electrodes are placed using a so-called "frame-based" stereotactic approach, in our case using the Leksell frame. In our previous paper "Methodology, outcome, safety and in vivo accuracy in traditional frame-based stereoelectroencephalography" by Van der Loo et al (2017) we reported on SEEG electrode implantation accuracy in a cohort of 71 patients who were operated on between September 2008 and April 2016, in whom a total of 902 electrodes were implanted. Data for the in vivo application accuracy analysis were available for 866 electrodes.
The goal of the current project is to use a public version of this dataset (without any personal identifiers) to predict electrode implantation accuracy by using and comparing different machine learning approaches.
Pieter Kubben, MD, PhD<br/>
neurosurgeon @ Maastricht University Medical Center, The Netherlands
For any questions you can reach me by email or on Twitter.
Data description
The public dataset contains these variables:
PatientPosition: patient position during surgery (nominal: supine, prone)
Contacts: nr of contacts of electrode implanted (ordinal: 5, 8, 10, 12, 15, 18)
ElectrodeType: describes trajectory type (nominal: oblique, orthogonal). Oblique refers to implantation using the Leksell arc, and orthogonal using a dedicated L-piece mounted on the frame (mostly implants in temporal lobe) when arc angles become too high (approx > 155°) or too low (approx < 25°)
PlanningX: planned Cartesian X coord of target (numeric, in mm)
PlanningY: planned Cartesian Y coord of target (numeric, in mm)
PlanningZ: planned Cartesian Z coord of target (numeric, in mm)
PlanningRing: planned ring coord, the trajectory direction in sagittal plane (numeric, in degrees); defines entry
PlanningArc: planned arc coord, the trajectory direction in coronal plane (numeric, in degrees); defines entry
DuraTipDistancePlanned: distance from dura mater (outer sheet covering the brain surface) to target (numeric, in mm)
EntryX: real Cartesian X coord of entry point (numeric, in mm)
EntryY: real Cartesian Y coord of entry point (numeric, in mm)
EntryZ: real Cartesian Z coord of entry point (numeric, in mm)
TipX: real Cartesian X coord of target point (numeric, in mm)
TipY: real Cartesian Y coord of target point (numeric, in mm)
TipZ: real Cartesian Z coord of target point (numeric, in mm)
SkinSkullDistance: distance between skin surface and skull surface (numeric, in mm)
SkullThickness: skull thickness (numeric, in mm)
SkullAngle: insertion angle of electrode relative to skull (numeric, in degrees)
ScrewLength: length of bone screw used to guide and fixate electrode (ordinal: 20, 25, 30, 35 mm)
The electrodes are the Microdeep depth electrodes by DIXI Medical.
To the limited extent possible in this case I tried to make these FAIR data and adhere to FAIR guiding principles. In practice this meant I introduced the topic, described my data and created a DOI.
Now let's get started.
End of explanation
"""
print('TPLE Mean Supine: {} (SD {}) \nTPLE Mean Prone: {} (SD {})'.format(
round(electrodes[electrodes['PatientPosition']=='Supine']['TPLE'].mean(),2), round(electrodes[electrodes['PatientPosition']=='Supine']['TPLE'].std(),2),
round(electrodes[electrodes['PatientPosition']=='Prone']['TPLE'].mean(),2), round(electrodes[electrodes['PatientPosition']=='Prone']['TPLE'].std(),2)))
"""
Explanation: It looks like their distributions come from different samples
End of explanation
"""
alpha = 0.05
t_stat, p = stats.mstats.ttest_ind(electrodes[electrodes['PatientPosition']=='Supine']['TPLE'],
electrodes[electrodes['PatientPosition']=='Prone']['TPLE'])
if p < alpha:
print("p= {} The null hypothesis can be rejected".format(p))
else:
print("p= {} The null hypothesis cannot be rejected".format(p))
"""
Explanation: Let's test whether these are indeed independent samples with a t-test. If we observe a large p-value, we cannot reject the null hypothesis of identical means. If the p-value is smaller than alpha, we reject the null hypothesis of equal means.
End of explanation
"""
print('TPLE Mean Oblique: {} (SD {}) \nTPLE Mean Orthogonal: {} (SD {})'.format(
round(electrodes[electrodes['ElectrodeType']=='Oblique']['TPLE'].mean(),2), round(electrodes[electrodes['ElectrodeType']=='Oblique']['TPLE'].std(),2),
round(electrodes[electrodes['ElectrodeType']=='Orthogonal']['TPLE'].mean(),2), round(electrodes[electrodes['ElectrodeType']=='Orthogonal']['TPLE'].std(),2)))
"""
Explanation: This means the two groups come from different populations, so we might treat them differently
We do the same for the electrode type
End of explanation
"""
alpha = 0.05
t_stat, p = stats.mstats.ttest_ind(electrodes[electrodes['ElectrodeType']=='Oblique']['TPLE'],
electrodes[electrodes['ElectrodeType']=='Orthogonal']['TPLE'])
if p < alpha:
print("p= {} The null hypothesis can be rejected".format(p))
else:
print("p= {} The null hypothesis cannot be rejected".format(p))
#missing_values_table(electrodes)
#taking into account that the two groups come from different distributions
df_supine = electrodes[electrodes['PatientPosition']=='Supine']
df_prone = electrodes[electrodes['PatientPosition']=='Prone']
"""
Explanation: Computing t-test.
End of explanation
"""
dura_sup = df_supine['DuraTipDistancePlanned'].dropna()
dura_pro = df_prone['DuraTipDistancePlanned'].dropna()
# normality test
def normal_test(vector, alpha = 0.05):
k2, p = stats.mstats.normaltest(vector)
if p < alpha: # null hypothesis: x comes from a normal distribution
print("p = {} The null hypothesis can be rejected".format(p))
else:
print("p = {} The null hypothesis cannot be rejected".format(p))
#electrodes['DuraTipDistancePlanned'].hist()
plt.figure(figsize=[16, 8])
sns.distplot(dura_sup)
sns.distplot(dura_pro)
normal_test(dura_sup)
normal_test(dura_pro)
"""
Explanation: Impute missing values
We start with the variable to impute, DuraTipDistancePlanned, but first let's take a look at its distribution
End of explanation
"""
def input_mean(df, variable):
df_null = df[df[variable].isnull()]
df_not_null = df[df[variable].notnull()]
df_null[variable] = round(df_not_null[variable].mean(),1)
inputed_df = df_null.append(df_not_null).sort_index()
return inputed_df
df_supine = input_mean(df_supine, 'DuraTipDistancePlanned')
df_prone = input_mean(df_prone, 'DuraTipDistancePlanned')
"""
Explanation: They are normally distributed, so we might impute the missing values with the statistics (here, the mean) of the variable
End of explanation
"""
electrodes = df_supine.append(df_prone).sort_index()
#missing_values_table(electrodes)
"""
Explanation: We merge again
End of explanation
"""
#electrodes.insert(0, 'Index', electrodes.index)
contact_null = electrodes[electrodes['Contacts'].isnull()]
contact_not_null = electrodes[electrodes['Contacts'].notnull()]
contact_not_null['Contacts'] = contact_not_null['Contacts'].astype(int)
"""
Explanation: The next variable to impute is the number of contacts
End of explanation
"""
sf_contact_not_null = tc.SFrame(data=contact_not_null)
# features for contacts classifier
features = ['PatientPosition', 'ElectrodeType', 'PlanningX', 'PlanningY', 'PlanningZ', 'DuraTipDistancePlanned', 'SkinSkullDistance', 'SkullThickness', 'SkullAngle']
# We train a classifier
train_data, test_data = sf_contact_not_null.random_split(0.85)
clf1 = tc.classifier.create(train_data, target='Contacts', features = features)
clf1
#clf1.evaluate(test_data)
#clf1.evaluate(train_data)
#now convert the rest of the data
sf_contact_null = tc.SFrame(data=contact_null)
new_contacts = clf1.predict(sf_contact_null[features])
contact_null['Contacts'] = new_contacts
"""
Explanation: We need to convert the data into an SFrame
End of explanation
"""
electrodes = contact_null.append(contact_not_null).sort_index()
#missing_values_table(electrodes)
# No missing values anymore
"""
Explanation: New electrodes table with imputed values
End of explanation
"""
contacts_supine = df_supine.groupby('Contacts').count()['TPLE']
contacts_prone = df_prone.groupby('Contacts').count()['TPLE']
plt.figure(figsize=[20, 8])
plt.subplot(1,2,1); plt.xticks(contacts_supine.index)
plt.bar(contacts_supine.index, contacts_supine)
plt.subplot(1,2,2), plt.xticks(contacts_prone.index)
plt.bar(contacts_prone.index, contacts_prone, color='y')
"""
Explanation: What do the categories look like in the two groups?
End of explanation
"""
plt.figure(figsize=[16, 8])
coordinates = ['PlanningX','PlanningY','PlanningZ','TipX','TipY','TipZ']
for c in coordinates:
sns.kdeplot(electrodes[str(c)], label=str(c))
"""
Explanation: Target Variables
Now we will remove large outliers (differences between the planned and real coordinates) in the Z-axis, since electrode insertion length (depth) is also influenced by other factors: depth calculations that can lead to a position that is too superficial or too deep, but also possible malfixation of the screw cap, which may cause loosening of the electrode and hence a more superficial position (it won't migrate deeper spontaneously). These cases are very few in number but would influence further analysis too strongly; a sketch of this filtering step is shown below.
End of explanation
"""
for c in coordinates:
normal_test(electrodes[str(c)])
"""
Explanation: Each axis follows its own distribution; let's run a normality test
End of explanation
"""
electrodes.head()
"""
Explanation: Not normally distributed
End of explanation
"""
electrodes.drop('TPLE', axis = 1, inplace = True)
#electrodes.insert(0, 'Index', range(0, len(electrodes), 1))
sf_electrodes = tc.SFrame(data=electrodes)
sf_electrodes.head()
sf_electrodes.num_rows()
features_train = ['PatientPosition',
'Contacts',
'ElectrodeType',
'PlanningX',
'PlanningY',
'PlanningZ',
'DuraTipDistancePlanned','SkinSkullDistance',
'SkullThickness',
'SkullAngle']
targets_train = ['TipX', 'TipY', 'TipZ']
train_data, test_data = sf_electrodes.random_split(0.8)
def multi_boosted_tree(dataset, features, targets):
models = []
for target in targets:
reg = tc.boosted_trees_regression.create(dataset, target=target, features = features, verbose=False)
models.append(reg)
return models
MBT = multi_boosted_tree(train_data, features_train, targets_train)
MBT
MBT[1].get_feature_importance()
# Evaluate the model and save the results into a dictionary
#results =
for i in range(3):
print(MBT[i].evaluate(test_data))
"""
Explanation: We need to structure our data properly for further analysis and convert the categorical variables (nominal, ordinal) to the category type; a sketch of this conversion is shown below.
Regression Model to Predict Coordinates
End of explanation
"""
test_data['PredictX'] = [round(i,3) for i in MBT[0].predict(test_data)]
test_data['PredictY'] = [round(i,3) for i in MBT[1].predict(test_data)]
test_data['PredictZ'] = [round(i,3) for i in MBT[2].predict(test_data)]
test_data['TPLE'] = np.sqrt(np.square(test_data['TipX'] - test_data['PlanningX']) +
np.square(test_data['TipY'] - test_data['PlanningY']) +
np.square(test_data['TipZ'] - test_data['PlanningZ'])
).round(1)
test_data['TPLE_Pred'] = np.sqrt(np.square(test_data['PredictX'] - test_data['PlanningX']) +
np.square(test_data['PredictY'] - test_data['PlanningY']) +
np.square(test_data['PredictZ'] - test_data['PlanningZ'])
).round(1)
test_data
#test_data['TPLE'].show()
#test_data['TPLE_Pred'].show()
rmse_1 = tc.evaluation.rmse(test_data['TPLE'], test_data['TPLE_Pred'])
print('Root Mean Squared Error for Boosted Tree Regression \n{}'.format(rmse_1))
"""
Explanation: TPLE Comparison with MBT
End of explanation
"""
td_supine_tple = test_data[test_data['PatientPosition']=='Supine']['TPLE']
td_prone_tple = test_data[test_data['PatientPosition']=='Prone']['TPLE']
print('TPLE Mean Supine: {} (SD {}) \nTPLE Mean Prone: {} (SD {})'.format(
round(td_supine_tple.mean(),2), round(td_supine_tple.std(),2),
round(td_prone_tple.mean(),2), round(td_prone_tple.std(),2)))
estimators = []
for i in range(len(test_data)):
if test_data['PatientPosition'][i] == 'Supine':
estimators.append(round(td_supine_tple.mean(),2))
else:
estimators.append(round(td_prone_tple.mean(),2))
test_data['TPLE_point'] = estimators
rmse_2 = tc.evaluation.rmse(test_data['TPLE'], test_data['TPLE_point'])
print('Root Mean Squared Error for Point Estimator \n{}'.format(rmse_2))
#test_data.print_rows(172,19)
"""
Explanation: Point Estimator
End of explanation
"""
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
#First some preprocessing
X = electrodes[features_train].values
#column 0 and 2 are categorical
y = electrodes[targets_train].values
labelencoder_X_1 = LabelEncoder()
X[:, 0] = labelencoder_X_1.fit_transform(X[:, 0])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=80)
sc = StandardScaler()
X_trainS = sc.fit_transform(X_train)
X_testS = sc.transform(X_test)
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import SGD
from keras.utils.np_utils import to_categorical
from keras.callbacks import ModelCheckpoint
batch_size = 2
epochs = 100
model = Sequential()
model.add(Dense(units = 60, activation = 'relu', input_dim = X_train.shape[1])) # input shape (10,)
model.add(Dense(units = 60, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(units = 3, activation = 'linear')) # output dim (3,); the last layer uses a linear activation (no non-linearity) for regression
model.summary()
#sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics = ['accuracy']) #mse
#checkpoint filename encodes the epoch number (2 digits) and the validation accuracy (2 decimals)
filepath = "../models/weights-nn-{epoch:02d}-{val_acc:.2f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=0, mode='max')
model.fit(X_trainS, y_train,
batch_size=batch_size, verbose=0,
epochs=epochs,
validation_data=(X_testS, y_test),
callbacks=[checkpoint])
scores = model.evaluate(X_testS, y_test, batch_size=2)
print('\n\nrmse: {:.2f}\n\n{}: {:.2f}'.format(np.sqrt(scores[0]), model.metrics_names[1], scores[1]))
y_pred = model.predict(X_testS)
"""
Explanation: Deep learning
As a quick glance at deep learning we will apply a Multilayer Perceptron (MLP). The code is borrowed from the multi-class softmax classification example (which uses stochastic gradient descent) in the official Keras Sequential model guide and adapted where needed: here the output layer is linear and the Adam optimizer is used, since we are predicting continuous coordinates rather than classes.
End of explanation
"""
#feature_idx, feature_name = [], []
#features = []
#for idx, i in enumerate(electrodes.columns):
# features.append((idx,i))
# feature_idx.append(idx)
# feature_name.append(i)
list_coor = ['PlanningX','PlanningY','PlanningZ','TipX','TipY','TipZ']
X = electrodes[list_coor].values #trying the TPLE calculation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=80)
coordenates = pd.DataFrame()
coordenates['PlanningX'] = [i for i in X_test[:,0]]
coordenates['PlanningY'] = [i for i in X_test[:,1]]
coordenates['PlanningZ'] = [i for i in X_test[:,2]]
coordenates['TipX'] = [i for i in X_test[:,3]]
coordenates['TipY'] = [i for i in X_test[:,4]]
coordenates['TipZ'] = [i for i in X_test[:,5]]
coordenates['PredictX'] = [y_pred[i][0] for i in range(len(y_pred))]
coordenates['PredictY'] = [y_pred[i][1] for i in range(len(y_pred))]
coordenates['PredictZ'] = [y_pred[i][2] for i in range(len(y_pred))]
coordenates.head()
coordenates['TPLE'] = np.sqrt(np.square(coordenates['TipX'] - coordenates['PlanningX']) +
np.square(coordenates['TipY'] - coordenates['PlanningY']) +
np.square(coordenates['TipZ'] - coordenates['PlanningZ'])
).round(1)
coordenates['TPLE_Pred'] = np.sqrt(np.square(coordenates['PredictX'] - coordenates['PlanningX']) +
np.square(coordenates['PredictY'] - coordenates['PlanningY']) +
np.square(coordenates['PredictZ'] - coordenates['PlanningZ'])
).round(1)
coordenates.head(10)
sf_coordenates = tc.SFrame(data=coordenates)
rmse_3 = tc.evaluation.rmse(sf_coordenates['TPLE'], sf_coordenates['TPLE_Pred'])
print('Root Mean Squared Error for Deep Learning \n{}'.format(rmse_3))
"""
Explanation: We are predicting the real Cartesian points, so we want to calculate a "real TPLE" before the operation
TPLE Comparison for the NN
End of explanation
"""
rmseS = [rmse_1, rmse_2, rmse_3]
rmse_accuracy = [round(1 - (i/max(coordenates['TPLE'])),4) for i in rmseS]
rmse_accuracy
#rmse: the smaller, the better
y=rmse_accuracy #[round(rmse_1,1),round(rmse_2,1),round(rmse_3,1)];
x=list(range(len(y)))
top = [i+0.1 for i in y]; bot = [i-0.1 for i in y]; inter = list(zip(bot,top))
plt.figure(figsize=(16,8))
plt.errorbar(x,y,yerr=[(top-bot)/2 for top,bot in inter], fmt='o', label='Accuracy', color='red', linewidth=3)
plt.plot(x,y, color='black'); plt.legend(); plt.title('Accuracy of RMSE for TPLE')
plt.text(x[0]+.05, y[0]-.01, 'Multi-output Boosted Trees')
plt.text(x[1]+.05, y[1]+.01, 'Group Point Estimate')
plt.text(x[2]-.2, y[2]-.01, 'Deep Learning')
"""
Explanation: Preliminary Results
End of explanation
"""
|
root-mirror/training
|
OldSummerStudentsCourse/2017/examples/notebooks/TTreeAccess_Example_py.ipynb
|
gpl-2.0
|
import ROOT
"""
Explanation: Access TTree in Python using PyROOT
<hr style="border-top-width: 4px; border-top-color: #34609b;">
End of explanation
"""
f = ROOT.TFile.Open("https://root.cern.ch/files/summer_student_tutorial_tracks.root")
"""
Explanation: Open a file which is located on the web. No type is to be specified for "f".
End of explanation
"""
maxPt=-1
for event in f.events:
maxPt=-1
for track in event.tracks:
pt = track.Pt()
if pt > maxPt: maxPt = pt
if event.evtNum % 100 == 0:
print "Processing event number %i" %event.evtNum
print "Max pt is %f" %maxPt
"""
Explanation: Loop over the TTree called "events" in the file. It is accessed with the dot operator.
Same holds for the access to the branches: no need to set them up - they are just accessed by name, again with the dot operator.
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
machine-learning/break_up_dates_and_times_into_multiple_features.ipynb
|
mit
|
# Load library
import pandas as pd
"""
Explanation: Title: Break Up Dates And Times Into Multiple Features
Slug: break_up_dates_and_times_into_multiple_features
Summary: How to break up dates and times into multiple features for machine learning in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Dates And Times
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create data frame
df = pd.DataFrame()
# Create 150 weekly dates
df['date'] = pd.date_range('1/1/2001', periods=150, freq='W')
"""
Explanation: Create Date And Time Data
End of explanation
"""
# Create features for year, month, day, hour, and minute
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day'] = df['date'].dt.day
df['hour'] = df['date'].dt.hour
df['minute'] = df['date'].dt.minute
# Show three rows
df.head(3)
"""
Explanation: Break Up Dates And Times Into Individual Features
End of explanation
"""
|
sylvchev/coursIntroPython
|
cours/4-ApprendrePython-Modules.ipynb
|
gpl-3.0
|
# Fibonacci numbers module
def fib(n): # print the Fibonacci series up to n
a, b = 0, 1
while b < n:
print (b, end=' ')
a, b = b, a+b
def fib2(n): # return the Fibonacci series up to n
result = []
a, b = 0, 1
while b < n:
result.append(b)
a, b = b, a+b
return result
"""
Explanation: Modules
Python offers a way to put definitions in a file and use them in a script or in an interactive interpreter session. Such a file is called a module; the definitions from a module can be imported into another module or into the main module (the collection of variables you can access in a script executed at the top level and in calculator mode).
A module is a file containing Python definitions and statements. The file name is the module name with the suffix '.py' appended. Within a module, the module's name (as a string) is available as the value of the global variable __name__. For example, use your favorite text editor to create a file called 'fibo.py' in the current directory with the following content:
End of explanation
"""
cd src/notebook/coursIntroPython/
"""
Explanation: We move into the working directory; adapt the path to your environment.
End of explanation
"""
import fibo
"""
Explanation: Now launch the Python interpreter and import this module with the following command:
End of explanation
"""
fibo.fib(1000)
fibo.fib2(100)
"""
Explanation: Using the module name you can access its functions:
End of explanation
"""
dir(fibo)
"""
Explanation: To find out which names a module defines, you can use the dir() function, which returns a sorted list of strings.
End of explanation
"""
import math
math.cos(2 * math.pi)
"""
Explanation: Standard library modules
Many Python features are available in libraries called modules. To improve code readability and portability, these modules are not preloaded when you launch Python: you have to import them explicitly. For example, there is a math module containing many mathematical functions. If you want to use the cosine function, cos, you can import the math module and then indicate that you are calling the cos function using the prefixed notation. The same goes for the other elements of math, such as the constant $\pi$
End of explanation
"""
%whos
"""
Explanation: After such an import, every object from the module must be prefixed (for example math.cos), which removes certain ambiguities when reading the code.
However, especially when writing throwaway code, this notation is heavy. It is possible to import the entire contents of a module into the current namespace.
The namespace is the set of variables and functions reachable from a given point in the code. To inspect the contents of the current namespace in IPython, you can call the %whos command
End of explanation
"""
from math import *
cos(2 * pi)
"""
Explanation: Back to modules: to import the entire contents of a module into the current namespace, you can use:
End of explanation
"""
%whos
"""
Explanation: If many modules are imported this way, which is quite common, the namespace quickly becomes very crowded. If two functions or variables share the same name, only the one from the last module imported is kept. Let's see, for example, what the current namespace looks like now that we have imported the whole math module:
End of explanation
"""
from math import cos, pi
cos(2 * pi)
"""
Explanation: Good practice combines the best of both approaches: it imports only the useful functions into the namespace.
End of explanation
"""
|
GoogleCloudPlatform/python-docs-samples
|
notebooks/tutorials/bigquery/Visualizing BigQuery public data.ipynb
|
apache-2.0
|
%%bigquery
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
"""
Explanation: Visualizing BigQuery data in a Jupyter notebook
BigQuery is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near realtime.
Data visualization tools can help you make sense of your BigQuery data and help you analyze the data interactively. You can use visualization tools to help you identify trends, respond to them, and make predictions using your data. In this tutorial, you use the BigQuery Python client library and pandas in a Jupyter notebook to visualize data in the BigQuery natality sample table.
Using Jupyter magics to query BigQuery data
The BigQuery Python client library provides a magic command that allows you to run queries with minimal code.
The BigQuery client library provides a cell magic, %%bigquery. The %%bigquery magic runs a SQL query and returns the results as a pandas DataFrame. The following cell executes a query of the BigQuery natality public dataset and returns the total births by year.
End of explanation
"""
%%bigquery total_births
SELECT
source_year AS year,
COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
"""
Explanation: The following command runs the same query, but this time the results are saved to a variable. The variable name, total_births, is given as an argument to the %%bigquery magic. The results can then be used for further analysis and visualization.
End of explanation
"""
total_births.plot(kind='bar', x='year', y='birth_count');
"""
Explanation: The next cell uses the pandas DataFrame.plot method to visualize the query results as a bar chart. See the pandas documentation to learn more about data visualization with pandas.
End of explanation
"""
%%bigquery births_by_weekday
SELECT
wday,
SUM(CASE WHEN is_male THEN 1 ELSE 0 END) AS male_births,
SUM(CASE WHEN is_male THEN 0 ELSE 1 END) AS female_births
FROM `bigquery-public-data.samples.natality`
WHERE wday IS NOT NULL
GROUP BY wday
ORDER BY wday ASC
"""
Explanation: Run the following query to retrieve the number of births by weekday. Because the wday (weekday) field allows null values, the query excludes records where wday is null.
End of explanation
"""
births_by_weekday.plot(x='wday');
"""
Explanation: Visualize the query results using a line chart.
End of explanation
"""
from google.cloud import bigquery
client = bigquery.Client()
"""
Explanation: Using Python to query BigQuery data
Magic commands allow you to use minimal syntax to interact with BigQuery. Behind the scenes, %%bigquery uses the BigQuery Python client library to run the given query, convert the results to a pandas Dataframe, optionally save the results to a variable, and finally display the results. Using the BigQuery Python client library directly instead of through magic commands gives you more control over your queries and allows for more complex configurations. The library's integrations with pandas enable you to combine the power of declarative SQL with imperative code (Python) to perform interesting data analysis, visualization, and transformation tasks.
To use the BigQuery Python client library, start by importing the library and initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API.
End of explanation
"""
sql = """
SELECT
plurality,
COUNT(1) AS count,
year
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(plurality) AND plurality > 1
GROUP BY
plurality, year
ORDER BY
count DESC
"""
df = client.query(sql).to_dataframe()
df.head()
"""
Explanation: Use the Client.query method to run a query. Execute the following cell to run a query to retrieve the annual count of plural births by plurality (2 for twins, 3 for triplets, etc.).
End of explanation
"""
pivot_table = df.pivot(index='year', columns='plurality', values='count')
pivot_table.plot(kind='bar', stacked=True, figsize=(15, 7));
"""
Explanation: To chart the query results in your DataFrame, run the following cell to pivot the data and create a stacked bar chart of the count of plural births over time.
End of explanation
"""
sql = """
SELECT
gestation_weeks,
COUNT(1) AS count
FROM
`bigquery-public-data.samples.natality`
WHERE
NOT IS_NAN(gestation_weeks) AND gestation_weeks <> 99
GROUP BY
gestation_weeks
ORDER BY
gestation_weeks
"""
df = client.query(sql).to_dataframe()
"""
Explanation: Run the following query to retrieve the count of births by the number of gestation weeks.
End of explanation
"""
ax = df.plot(kind='bar', x='gestation_weeks', y='count', figsize=(15,7))
ax.set_title('Count of Births by Gestation Weeks')
ax.set_xlabel('Gestation Weeks')
ax.set_ylabel('Count');
"""
Explanation: Finally, chart the query results in your DataFrame.
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
machine-learning/discretize_features.ipynb
|
mit
|
# Load libraries
from sklearn.preprocessing import Binarizer
import numpy as np
"""
Explanation: Title: Discretize Features
Slug: discretize_features
Summary: How to discretize features for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create feature
age = np.array([[6],
[12],
[20],
[36],
[65]])
"""
Explanation: Create Data
End of explanation
"""
# Create binarizer
binarizer = Binarizer(threshold=18)
# Transform feature
binarizer.fit_transform(age)
"""
Explanation: Option 1: Binarize Feature
End of explanation
"""
# Bin feature
np.digitize(age, bins=[20,30,64])
"""
Explanation: Option 2: Break Up Feature Into Bins
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.18/_downloads/82dd66e6bdf7150b8691eaa46b63bcf9/plot_read_events.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Chris Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
"""
Explanation: Reading an event file
Read events from a file. For a more detailed guide on how to read events
using MNE-Python, see tut_epoching_and_averaging.
End of explanation
"""
events_1 = mne.read_events(fname, include=1)
events_1_2 = mne.read_events(fname, include=[1, 2])
events_not_4_32 = mne.read_events(fname, exclude=[4, 32])
"""
Explanation: Reading events
Below we'll read in an events file. We suggest that this file end in
-eve.fif. Note that we can read in the entire events file, or only
events corresponding to particular event types with the include and
exclude parameters.
End of explanation
"""
print(events_1[:5], '\n\n---\n\n', events_1_2[:5], '\n\n')
for ind, before, after in events_1[:5]:
print("At sample %d stim channel went from %d to %d"
% (ind, before, after))
"""
Explanation: Events objects are essentially numpy arrays with three columns:
event_sample | previous_event_id | event_id
End of explanation
"""
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
mne.viz.plot_events(events_1, axes=axs[0], show=False)
axs[0].set(title="restricted to event 1")
mne.viz.plot_events(events_1_2, axes=axs[1], show=False)
axs[1].set(title="restricted to event 1 or 2")
mne.viz.plot_events(events_not_4_32, axes=axs[2], show=False)
axs[2].set(title="keep all but 4 and 32")
plt.setp([ax.get_xticklabels() for ax in axs], rotation=45)
plt.tight_layout()
plt.show()
"""
Explanation: Plotting events
We can also plot events in order to visualize how events occur over the
course of our recording session. Below we'll plot our three event types
to see which ones were included.
End of explanation
"""
mne.write_events('example-eve.fif', events_1)
"""
Explanation: Writing events
Finally, we can write events to disk. Remember to use the naming convention
-eve.fif for your file.
End of explanation
"""
|
NEONScience/NEON-Data-Skills
|
tutorials/Python/NEON-API-python/neon_api_01_introduction_requests_py/neon_api_01_introduction_requests_py.ipynb
|
agpl-3.0
|
import requests
import json
#Every request begins with the server's URL
SERVER = 'http://data.neonscience.org/api/v0/'
"""
Explanation: syncID: f059914f7cf74327908228e63e204d60
title: "Introduction to NEON API in Python"
description: "Use the NEON API in Python, via requests package and json package."
dateCreated: 2020-04-24
authors: Maxwell J. Burner
contributors: Donal O'Leary
estimatedTime: 1 hour
packagesLibraries: requests, json
topics:
languagesTool: python
dataProduct: DP3.10003.001
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/NEON-API-python/neon_api_01_introduction_requests_py/neon_api_01_introduction_requests_py.py
tutorialSeries: python-neon-api-series
urlTitle: neon-api-01-introduction-requests
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Understand the components of a NEON API call
* Understand the basic process of making and processing an API request in Python
* Query the 'sites/' or 'products/' API endpoints to determine data availability
* Query the 'data/' API endpoint to get information on specific data files
### Install Python Packages
* **requests**
* **json**
</div>
In this tutorial we will learn to make calls to the NEON API using Python. We will make calls to the 'sites/' and 'products/' endpoints of the API to determine availability of data for specific sites and months, and make a call to the 'data/' endpoint to learn the names and URLs of specific data files.
An API is an Application Programming Interface; this is a system that allows programs to send instructions and requests to servers, typically receiving data in exchange. Whereas sending a URL over the web normally would cause a web page to be displayed, sending an API call URL results in the desired data being directly downloaded to your computer. NEON provides an API that allows different programming languages to send requests for NEON data files and products.
In this tutorial we will cover how to use API calls to learn about what types of NEON data products are available for different sites and time periods.
Basic API Call Components
The actual API call takes the form of a web URL, the contents of which determine what data is returned. This URL can be broken down into three parts, which appear in order:
The base url is the location of the server storing the data. This will be the same for all NEON API calls.
The endpoint indicates what type of data or metadata we are looking to download. This tutorial covers three endpoints: sites/, products/, and data/; other endpoints will be covered in later tutorials.
The target is a value or series of values that indicate the specific data product, site, location, or data files we are looking up.
In Python we can easily deal with the complexities of the API call by creating the different parts of the request as strings, then combining them with string concatenation. Concatenating (joining end to end) strings in Python is as easy as using a '+' sign, as the short example below illustrates. This approach makes it easy to modify different parts of our request as needed.
End of explanation
"""
#Site Code for Lower Teakettle
SITECODE = 'TEAK'
"""
Explanation: Site Querying
NEON manages 81 different sites across the United States and Puerto Rico. These sites are separated into two main groups, terrestrial and aquatic, and the aquatic sites are further subdivided into lakes, rivers, and wadable streams. Each of these different site types has a different set of instrumentation and observation strategies, therefore, not every data product is available at every site. Often we begin by asking what kinds of data products are available for a given site. This is done by using the sites/ endpoint in the API; this endpoint is used for getting information about specific NEON field sites. In this example we will query which data products are available at the <a href="https://www.neonscience.org/field-sites/field-sites-map/TEAK" target="_blank">Lower Teakettle (TEAK)</a> site.
End of explanation
"""
#Make request, using the sites/ endpoint
site_request = requests.get(SERVER+'sites/'+SITECODE)
#Convert to Python JSON object
site_json = site_request.json()
"""
Explanation: We first use the requests module to send the API request using the 'get' function; this returns a 'request' object.
To more easily access the data returned by the request, we convert the request object into a Python JSON object.
End of explanation
"""
site_json
"""
Explanation: The JSON object in Python is a complex collection, with nested layers of dictionaries ('dicts') and lists.
Briefly, a list is a collection of data in which each element is identified by index number, while a dictionary is a collection in which each element is identified by a label (called a 'key') that is usually a text string. You can visit the w3schools website for more information on lists, and the realpython website for more information on dictionaries.
Dictionaries are defined using curly brackets ({...}) and lists are defined using square brackets ([...]); a toy example of both follows below. When we look at the request in JSON format, we can see that this is quite a lot of text arranged in nested dicts and lists:
End of explanation
"""
#Use the 'keys' method to view the components of the uppermost json dictionary
site_json.keys()
"""
Explanation: At the uppermost level the JSON object is a dictionary containing a single element with the label 'data'. This 'data' element in turn contains a dictionary with elements containing various pieces of information about the site. When we want to know what elements a dict contains, we can use the .keys() method to list the keys to each element in that dict.
End of explanation
"""
#Access the 'data' component, and use the 'keys' method to view the components of the json data dictionary
site_json['data'].keys()
"""
Explanation: This output shows that the entire API response is contained within a single dict called 'data'. In order to access any of the information contained within this highest-level 'data' dict, we will need to reference that dict directly. Let's view the different keys that are available within 'data':
End of explanation
"""
#View the first data product dictionary
site_json['data']['dataProducts'][0]
"""
Explanation: The returned JSON keys includes information on site location, site type, site name and code, and the availability of different data products for the site. This last piece of information is located in the element with the 'dataProducts' key.
The 'dataProducts' element is a list of dictionaries, one for each type of NEON data product available at the site; each of these dictionaries has the same keys, but different values. Let's look at the JSON for the first entry ("[0]") in the list of data products:
End of explanation
"""
#View product code and name for every available data product
for product in site_json['data']['dataProducts']:
print(product['dataProductCode'],product['dataProductTitle'])
"""
Explanation: Lists are a type of sequential data, so we can use Python's for loop to directly go through every element one by one, in this case to print out the data product code and data product name.
End of explanation
"""
#Look at Breeding Landbird Count data products
PRODUCTCODE = 'DP1.10003.001'
"""
Explanation: Typically, we use site queries to determine for which months a particular data product is available at a particular site. Let's look for the availability of Breeding Landbird Counts (DP1.10003.001)
End of explanation
"""
#Get available months of Breeding Landbird Count data products for TEAK site
#Loop through the 'dataProducts' list items (each one a dict) at the site
for product in site_json['data']['dataProducts']:
if(product['dataProductCode'] == PRODUCTCODE): #If a list item's 'dataProductCode' dict element equals the product code string,
print('Available Months: ',product['availableMonths']) #print the available months and URLs
print('URLs for each Month: ', product['availableDataUrls'])
"""
Explanation: For each data product, there will be a list of the months for which data of that type was collected and is available at the site, and a corresponding list of the URLs that we would put into the API to get that month's data for the product.
End of explanation
"""
#Make request
product_request = requests.get(SERVER+'products/'+PRODUCTCODE)
product_json = product_request.json()
"""
Explanation: Data Product Querying
Alternatively, we may want a specific type of data product, but aren't certain of the sites and months for which that data is available. In this case we can use the product code and the products/ API endpoint to look up availability.
End of explanation
"""
#Print keys for product data dictionary
print(product_json['data'].keys())
"""
Explanation: The product JSON will again store everything first in a 'data' element. Within this container, the product data is a dictionary with information on the data product we are looking up.
End of explanation
"""
#Print code, name, and abstract of data product
print(product_json['data']['productCode'])
print(product_json['data']['productName'])
print()
print(product_json['data']['productAbstract'])
"""
Explanation: This request returned a lot of different types of information. Much of this information is meant to provide explanations and context for the data product. Let's look at the abstract, which provides a relatively brief description of the data product.
End of explanation
"""
#View keys of one site dictionary
print(product_json['data']['siteCodes'][0].keys())
"""
Explanation: For looking up the availability of the data product, we want the 'siteCodes' element. This is a list with an entry for each site at which the data product is available. Each site entry is a dict whose elements include the site code, a list of months for which data is available, and a list of the API request URLs to request data for that site for a given month.
End of explanation
"""
#View available months and corresponding API urls, then save desired URL
for site in product_json['data']['siteCodes']:
if(site['siteCode'] == SITECODE):
for month in zip(site['availableMonths'],site['availableDataUrls']): #Loop through the list of months and URLs
print(month[0],month[1])
if(month[0] == '2018-06'): #If data is available for the desired month, save the URL
data_url = month[1]
print(data_url)
"""
Explanation: We can look up the availability of data at a particular site, and get a URL to request data for a specific month. We know from earlier that Lower Teakettle (TEAK) has the data product we want for June 2018; we can get the URL needed to request that data with nested loops through site and month lists.
End of explanation
"""
#Make Request
data_request = requests.get(SERVER+'data/'+PRODUCTCODE+'/'+SITECODE+'/'+'2018-06')
data_json = data_request.json()
"""
Explanation: Data File Querying
We now know that landbird count data products are available for 2018-06 at the Lower Teakettle site. Using the server url, site code, product code, and a year-month argument, we can make a request to the data/ endpoint of the NEON API. This will allow us to see what specific landbird count data files can be obtained for 2018-06 at the Lower Teakettle site, and to learn the locations of these files as URLs.
End of explanation
"""
#Make request with saved url
data_request = requests.get(data_url)
data_json = data_request.json()
#Print dict key for 'data' element of data JSON
print(data_json['data'].keys())
"""
Explanation: Alternatively we could use one of the "Available Data URLs" from a sites/ or products/ request, like the data_url we saved earlier.
End of explanation
"""
#View keys and values in first file dict
for key in data_json['data']['files'][0].keys(): #Loop through keys of the data file dict
print(key,':\t', data_json['data']['files'][0][key])
for file in data_json['data']['files']:
print(file['name'])
"""
Explanation: As with the sites JSON content, the uppermost level of a data request JSON object is a dictionary whose only member has the 'data' key; this member in turn is a dictionary with the product code, the sitecode, the month, and a list of the available data files.
The 'files' list is a list of python dictionaries, one for each file available based on our query; the dictionary for each file includes an internal reference code, the file name, the size of the file in bytes, and the url at which the file is located.
End of explanation
"""
for file in data_json['data']['files']:
if(('_perpoint' in file['name']) or ('_countdata' in file['name'])): #if file name includes '_perpoint' or '_countdata'
print(file['name'],file['url'])
"""
Explanation: A number of different files are available, but the actual count data are in files which have 'brd_perpoint' or 'brd_countdata' in the file name.
We can use if statements to get info on only these files. The Python in operator, in addition to being part of the construction of for loops, can check if a particular value is present in a sequence, so it can check if a particular series of characters is present in a string.
End of explanation
"""
|
domijin/ml-fit
|
gcpg_test.ipynb
|
mit
|
%pylab inline
import numpy as np
from datetime import datetime
import random
import pandas as pd
import os
"""
Explanation: Outline
GC per Galaxy
Harris Data Inspection
clean data
add iMType
exclude 0: Milky Way Galaxy & 356: A1689-BCG with NaN VMag
remove duplicate NGC4417(228=NaN), select better result for VCC-1386 (273=NaN)
missing data
VMag -> KMag: linear Regression for low Magnitude galaxies
* VMag == null: 1 MWG
* KMag == null: 77
* -Ngc == 0: 6
* sigma == null: 148
* Reff == null: 79
* -logMd == null: 166
* -logMGC == null: 1
* logMB == null: 357
explore data
MType correlation: sigma, KMag, Reff...
Ngc correlation: KMag, VMag, Reff...
pairplot for different iMType: E & S0 have bimode
Harris fig 10 & MType not agree; classification statistics not agree
skymap
GWGC Data Inspection
clean data
duplicate are not affecting
missing VMag
cross check Galaxy with Harris
kde fitting with B-Mag or I-Mag: b_max=-19.6276150479/I_max=-21.0082876063 while v_max=-20.5158289467
Ngc estimation model
Model Selection
Ngc vs. logMd: jointplot with regression, well fit; however, errorbar very wild
Sn vs. VMag: also dirty, but doable
LOWESS model
prediction plot of Sn vs VMag; Sn is cut to 0 for VMag not in the GCpG.VMag range (-11.17, -24.19)
total amount of GC: 649111
skymap of Galaxy with GC
Ngc vs VMag regression plot with distribution
55 Globular Cluster Data
data explore
age vs. Z, HBtype, ...
GC age estimator
random number generator based on kde
kde plot for estimation and data
Build GC library
relational database?
VGG DataFrame
GC@G DataFrame
BHB@GC DataFrame
bhlib: load all BH info
GClib: manipulate from VGG.Ngc & assign random Model
BHBdata: draw BHB from BHlib, combine host GC & Galaxy info. from GClib
| host galaxy info | GC info | BH info |
| --- | --- | --- |
| 1-5 | 6-7 | 8-14|
|GC, RA, DEC, Dist, VMag | Model, T_GC(GC birth time=13Gyr-Age), [Fe/H, f_b, ...]| T_eject, Type1, Type2, M1, M2, Seperation,Ecc, (BHlib.Model=GClib.Model) |
1 2 3 4 5 6 7 8 9 10 11 12 13 14
GALAXY, RA, DEC, DIST, VMag, Model, Age, T_eject, Type1, Type2, Mass1, Mass2, AP, ECC
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.015 14.0 14.0 22.743 7.4624 7.3712 0.1834
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.015 14.0 14.0 23.152 8.137 59.156 0.2299
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 22.379 6.9613 124.3 0.9227
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 10.707 7.7546 10.758 0.0
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 22.894 7.7425 438.42 0.0
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 22.963 7.8389 113.13 0.0
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 23.403 8.6012 48.091 0.6402
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 23.582 8.8515 26.886 0.5168
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 23.77 9.2126 91.431 0.7603
25 0.00792 17.22027 16.222 -16.7982138988 207-5 11.1515151515 0.0301 14.0 14.0 25.773 11.163 141.72 0.2939
GC model setting
Harris MW GC property
To improve
use DM halo mass for correlation: SDSS DR7 for galaxy another catalog SDSS
HDF5 to store simulation data for easier visualization: HDF5 for python HDF5 for fortran
To explore
Tidal effect to binary stellar evolution & cross section change for non rigid body tidal disruption
NSC & MBH on Mass-$\sigma$ plot early paper
Massive BH formation: runaway collision channels Gürkan 04 runaway in YSC
End of explanation
"""
from scipy.interpolate import interp1d
import statsmodels.api as sm
## load Harris catalog
GCpG=pd.read_csv("/Users/domi/Dropbox/Research/Local_universe/data/GCpG.csv") # read csv version data
# GCpG[GCpG.duplicated()]
GCpG.drop_duplicates(['Name'],inplace=True)
## calculate Sn
Sn=GCpG.Ngc*10**(0.4*(GCpG.VMag+15.0))
sn_fit=GCpG[['VMag']]
sn_fit['Sn']=pd.Series(Sn,index=sn_fit.index)
## LOWESS model to predict value
# introduce some floats in our x-values
x = sn_fit.VMag
y = sn_fit.Sn
# lowess will return our "smoothed" data with a y value for at every x-value
lowess = sm.nonparametric.lowess(y, x, frac=0.36)
# unpack the lowess smoothed points to their values
lowess_x = list(zip(*lowess))[0]
lowess_y = list(zip(*lowess))[1]
# run scipy's interpolation. There is also extrapolation I believe
f = interp1d(lowess_x, lowess_y, bounds_error=False)
"""
Explanation: Harris Catalog: Sn vs VMag with LOWESS model
End of explanation
"""
## load Galaxy catalog
Galaxy=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/GWGCCatalog_IV.csv',
delim_whitespace=True)
## Convert columns containing '~' entries to numeric
i=0
for column in Galaxy:
i+=1
if (i>4 and i<23):
Galaxy[column] = pd.to_numeric(Galaxy[column], errors='coerce')
VGG=Galaxy[Galaxy.Dist<30]
## add and convert new VMag for VGG
# VGG['VMag']=pd.Series(VGG.Abs_Mag_B + 4.83 - 5.48,index=VGG.index)
# extra=VGG.Abs_Mag_B.isnull()&VGG.Abs_Mag_I.notnull()
# VGG.VMag[extra]=VGG.Abs_Mag_I[extra] + 4.83 - 4.08
"""
Explanation: GW Galaxy catalog with $d<30Mpc$ -> VGG
End of explanation
"""
import seaborn as sns
# ## use cross checked galaxy for magnitude convertion
# ind_GCpG=GCpG.Name.isin(VGG.Name)
# ind_VGG=VGG.Name.isin(GCpG.Name)
# f_joint, (ax1, ax2, ax3) = subplots(3, sharex=True, sharey=True, figsize=(8,8))
# xlim([-25,-10])
# p_v=sns.kdeplot(GCpG.VMag[ind_GCpG].get_values(),shade=True,ax=ax1,label='VMag in Harris')
# x,y = p_v.get_lines()[0].get_data()
# v_max=x[y.argmax()]
# ax1.vlines(v_max, 0, y.max())
# p_b=sns.kdeplot(VGG.Abs_Mag_B[ind_VGG].get_values(),shade=True,ax=ax2,label='BMag in White')
# x,y = p_b.get_lines()[0].get_data()
# b_max=x[y.argmax()]
# ax2.vlines(b_max, 0, y.max())
# p_I=sns.kdeplot(VGG.Abs_Mag_I[ind_VGG].get_values(),shade=True,ax=ax3,label='IMag in White')
# x,y = p_I.get_lines()[0].get_data()
# I_max=x[y.argmax()]
# ax3.vlines(I_max, 0, y.max())
## add and convert new VMag for VGG: VMag = V_max + B/I - B/I_max
VGG['VMag']=pd.Series(VGG.Abs_Mag_B -20.515828946699045 + 19.627615047946385,index=VGG.index)
extra=VGG.Abs_Mag_B.isnull()&VGG.Abs_Mag_I.notnull()
VGG.VMag[extra]=VGG.Abs_Mag_I[extra] -20.515828946699045 + 21.008287606264027
"""
Explanation: VGG: convert B/I to VMag based on VGG&Harris
End of explanation
"""
## calculate Ngc based on Sn from f(VMag)
VGG['Ngc']=pd.Series(list(map(lambda x: 0 if isnan(f(x)) else int(f(x)/10**(0.4*(x+15))), VGG.VMag)), index=VGG.index)
# sum(VGG.Ngc)
"""
Explanation: Compute Ngc for VGG
End of explanation
"""
GCage=pd.read_csv("/Users/domi/Dropbox/Research/Local_universe/data/55GC_age.csv")
def generate_rand_from_pdf(pdf, x_grid):
cdf = np.cumsum(pdf) # pdf from kde plot x128
cdf = cdf / cdf[-1] # normalization
values = np.random.rand(sum(VGG.Ngc)) # sample size Ngc
value_bins = np.searchsorted(cdf, values) # group Ngc into nearest 128 bins
random_from_cdf = x_grid[value_bins]
return random_from_cdf # return Ngc
age_kde=sns.kdeplot(GCage.Age,kernel='gau',bw='silverman')
#x_grid = np.linspace(min(GCage.Age)-1, max(GCage.Age)+1, len(age_kde.get_lines()[0].get_data()[1]))
age_curve=age_kde.get_lines()[0].get_data()
f2=interp1d(age_curve[0],age_curve[1],kind='cubic')
age_grid=np.linspace(min(age_curve[0]), max(age_curve[0]), 10000)
## original method, ages not well spread.
# x_grid = np.linspace(min(GCage.Age)-1, max(GCage.Age)+1, len(age_kde.get_lines()[0].get_data()[1]))
# # define how many GC need to estimate the age
# GCage_from_kde = generate_rand_from_pdf(age_kde.get_lines()[0].get_data()[1], x_grid)
# sns.distplot(GCage_from_kde)
## Ages better spread
GCage_from_kde = generate_rand_from_pdf(f2(age_grid), age_grid)
# sns.distplot(GCage_from_kde)
# cut off at 13.5 Gyrs
GCage_from_kde[GCage_from_kde>=13.5]=np.random.choice(GCage_from_kde[GCage_from_kde<13.5],sum(GCage_from_kde>=13.5))
# numpy.savetxt("/Users/domi/Dropbox/Research/Local_universe/data/GCage.dat", GCage_from_kde, delimiter=",")
# GCage_from_kde=numpy.loadtxt("/Users/domi/Dropbox/Research/Local_universe/data/GCage.dat",delimiter=",")
"""
Explanation: Estimate GC age based on 55 MWGCs
Kernel Density Estimation: Predict KDE & Generate Data
Simulate from Kernel Density Estimate (empirical PDF)
End of explanation
"""
## Load BHLIB
bhlib=pd.DataFrame()
###################################
for i in range(1,325): # model
for j in range(1,11): # model id
###################################
bhe=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/BHsystem/%d-%d-bhe.dat' %(i,j),
usecols=[0, 2, 3, 4, 6, 8, 10, 20], names=['T_eject','Type1','Type2','M1','M2','Seperation','Ecc','Model'],
header=None, delim_whitespace=True)
bhe.Model='%d-%d' %(i,j)
bhlib=pd.concat([bhlib,bhe],ignore_index=False)
# BHB binary
bhblib=bhlib[(bhlib.Type1==14)&(bhlib.Type2==14)].copy(deep=True)   # both components are BHs (type 14)
bhsys=bhlib[~((bhlib.Type1==14)&(bhlib.Type2==14))].copy(deep=True)  # every other ejected system
bhblib=bhblib.drop(bhblib.columns[[1, 2]], axis=1)
bhblib.to_csv('/Users/domi/Dropbox/Research/Local_universe/data/bhblib.dat',
index=True, sep=' ', header=True, float_format='%2.6f')
# bhblib=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/bhblib.dat',
# sep=' ', index_col=0)
print(shape(bhsys),shape(bhblib),shape(bhlib))
"""
Explanation: Multiple realization
load bhblib
load GClib
build BHBdata
Build BH LIB
Random samples in Pandas
End of explanation
"""
# # index of the Galaxy with GC
# ind_GC=VGG.index[VGG.Ngc>0]
# # initialize the GClib
# GClib=pd.DataFrame()
# for row in ind_GC:
# GClib=GClib.append([VGG.ix[[row],['RA','Dec','Dist','VMag']]]*VGG.Ngc[row]) # creat GCs in each Galaxy
# GClib=GClib.reset_index()
# GClib=GClib.rename(columns = {'index':'Galaxy'})
# # write to file
# GClib.to_csv('/Users/domi/Dropbox/Research/Local_universe/data/GClib.dat',
# index=False,sep=' ', header=False, float_format='%2.6f')
GClib=pd.DataFrame()
GClib=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/GClib.dat',
sep=' ',header=None, names=['Galaxy','RA','Dec','Dist','VMag','Model','Age'])
# assign model to GClib
GClib['Model']=GClib['RA'].apply(lambda x: str(randint(1,325))+'-'+str(randint(1,11)))
# assign Age to GClib
GClib['Age']=np.random.choice(GCage_from_kde,size(GCage_from_kde))
"""
Explanation: bhlib: T_eject Type1 Type2 M1 M2 Seperation Ecc Model
Build GClib, assign GC Model to each GC in VGG Galaxy, add Age to GC Model
Slow, only run it when necessary
End of explanation
"""
# check that pd.merge correctly assigns the right BH info from bhblib to GClib based on Model
display(GClib.ix[[133],GClib.columns],pd.merge(GClib.ix[[133],GClib.columns],bhblib,on='Model'),bhblib[bhblib.Model=='64-9'])
# with pd.option_context('display.max_columns', None):
# display(bhlib.sample(5),VGG.sample(5))
"""
Explanation: GClib: Galaxy RA Dec Dist VMag Model Age
Galaxy is the index of the host galaxy in VGG
End of explanation
"""
# ## Slow
# BHBdata=pd.merge(GClib,bhblib,on='Model')
# BHBdata.to_csv('/Users/domi/Dropbox/Research/Local_universe/data/BHBdata.dat.gz',
# index=False,sep=' ', header=False, float_format='%2.6f',compression='gzip')
from gcpg import build
reload(build)
build.run(1)
from astropy.table import Table
help(Table.read)
from astropy.time import Time
Time('2015-11-2').gps
"""
Explanation: Build BHBdata and write to file
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/td2a_eco/td2a_eco_exercices_de_manipulation_de_donnees_correction_b.ipynb
|
mit
|
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.datasource import download_data
files = download_data("td2a_eco_exercices_de_manipulation_de_donnees.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/")
files
"""
Explanation: 2A.eco - Putting sessions 1 and 2 into practice - Using pandas and visualization - correction
Correction of exercise 2 and classic text-manipulation operations.
End of explanation
"""
import pandas
df_villes = pandas.read_csv("villes.txt", sep="\t", encoding="utf-8")
print(df_villes.columns)
df_villes.head()
"""
Explanation: Exercise 2 - The cities
Duration: 40 minutes
Import the cities database villes.xls
The variable names and the observations contain unnecessary spaces (example: 'MAJ '): start by cleaning all the character strings (both in the column names and in the observations)
Find the number of distinct INSEE codes (watch out for duplicates)
How can you quickly compute the mean, the count and the maximum of each numeric variable? (one line of code)
Count the number of cities in each region and turn the result into a dictionary whose key is the region and whose value is the number of cities
Plot the communes using
matplotlib
a mapping library (e.g. folium)
End of explanation
"""
# Fix the extra spaces
# solution for the spaces
# the column names
df_villes.rename(columns=lambda x: x.strip(),inplace = True)
# the observations
df_villes['Nom Ville'] = df_villes[['Nom Ville']].applymap(lambda x: x.strip())
df_villes['MAJ'] = df_villes[['MAJ']].applymap(lambda x: x.strip())
df_villes.columns
"""
Explanation: Cleaning up the names
End of explanation
"""
len(df_villes['Code INSEE'].unique())
"""
Explanation: Number of commune codes
End of explanation
"""
df_villes.describe()
"""
Explanation: Descriptive statistics
End of explanation
"""
df_villes.groupby(['Code Région']).size().to_dict()
"""
Explanation: Regions and number of communes
End of explanation
"""
import pandas
df_villes.Longitude = df_villes.Longitude.apply(lambda x: "0" if x.strip() == "-" else x)
df_villes.Longitude = pandas.to_numeric(df_villes.Longitude)
"""
Explanation: Plot the communes
The conversion from Excel (a copy-paste) left a few cells containing only a dash "-" instead of a number. We replace them with zeros.
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
plt.scatter(df_villes.Longitude, df_villes.Latitude, s=0.1)
"""
Explanation: matplotlib
End of explanation
"""
import folium
import random
locations = df_villes[['Latitude', 'Longitude']].copy()
print(locations.shape, locations.dropna().shape)
locations.dropna(inplace = True)
locationlist = locations.values.tolist()
len(locationlist)
locationlist[7]
communes_random = random.sample(locationlist, 50)
map = folium.Map(location=[47.088615, 2.637424], zoom_start=6)
for point in range(0,len(communes_random)) :
folium.Marker(communes_random[point]).add_to(map)
map
"""
Explanation: Example with Folium
End of explanation
"""
|
flothesof/SongCreator
|
IPython notebooks/Explore XML file names in wikifonia dump.ipynb
|
mit
|
import glob
fnames = glob.glob("../MusicXML_files/wikifonia20100503/*.xml")
fnames[:10]
"""
Explanation: Let's explore the song names in the files that are in the Wikifonia dump from 2010.
The folder wikifonia20100503 comes from a dump of the wikifonia database found here:
https://github.com/jganseman/musq
First, let's list all those files:
End of explanation
"""
import xml.etree.cElementTree as ET
tree = ET.ElementTree(file=fnames[0])
root = tree.getroot()
root
root.getchildren()
root.find('identification/creator').text
root.find('movement-title').text
"""
Explanation: Now, for every file, let's read it and extract the title.
End of explanation
"""
def title_composer(fname):
"Returns title and composer name from XML filename."
root = ET.ElementTree(file=fname).getroot()
return (root.find('identification/creator').text, root.find('movement-title').text)
title_composer(fnames[0])
"""
Explanation: Let's write a function.
End of explanation
"""
metadata = [title_composer(fname) for fname in fnames]
"""
Explanation: Let's now run a loop over all our data:
End of explanation
"""
import pandas as pd
df = pd.DataFrame(data=metadata, index=fnames, columns=('composer', 'song_title'))
df.head(10)
df.shape
"""
Explanation: Finally, let's build a pandas dataframe using this data:
End of explanation
"""
df[df.composer.str.contains('rolling', case=False)]
df[df.composer.str.contains('stone', case=False)]
df[df.composer.str.contains('keith', case=False)]
df[df.composer.str.contains('lennon', case=False)]
"""
Explanation: We can now easily filter some of the data. For instance all titles from the Rolling Stones.
End of explanation
"""
|
Honestpuck/charming
|
Notebooks/Importing Notebooks.ipynb
|
artistic-2.0
|
import io, os, sys, types
from IPython import get_ipython
from IPython.nbformat import current
from IPython.core.interactiveshell import InteractiveShell
"""
Explanation: Importing IPython Notebooks as Modules
It is a common problem that people want to import code from IPython Notebooks.
This is made difficult by the fact that Notebooks are not plain Python files,
and thus cannot be imported by the regular Python machinery.
Fortunately, Python provides some fairly sophisticated hooks into the import machinery,
so we can actually make IPython notebooks importable without much difficulty,
and only using public APIs.
End of explanation
"""
def find_notebook(fullname, path=None):
"""find a notebook, given its fully qualified name and an optional path
This turns "foo.bar" into "foo/bar.ipynb"
and tries turning "Foo_Bar" into "Foo Bar" if Foo_Bar
does not exist.
"""
name = fullname.rsplit('.', 1)[-1]
if not path:
path = ['']
for d in path:
nb_path = os.path.join(d, name + ".ipynb")
if os.path.isfile(nb_path):
return nb_path
# let import Notebook_Name find "Notebook Name.ipynb"
nb_path = nb_path.replace("_", " ")
if os.path.isfile(nb_path):
return nb_path
"""
Explanation: Import hooks typically take the form of two objects:
a Module Loader, which takes a module name (e.g. 'IPython.display'), and returns a Module
a Module Finder, which figures out whether a module might exist, and tells Python what Loader to use
End of explanation
"""
class NotebookLoader(object):
"""Module Loader for IPython Notebooks"""
def __init__(self, path=None):
self.shell = InteractiveShell.instance()
self.path = path
def load_module(self, fullname):
"""import a notebook as a module"""
path = find_notebook(fullname, self.path)
print ("importing IPython notebook from %s" % path)
# load the notebook object
with io.open(path, 'r', encoding='utf-8') as f:
nb = current.read(f, 'json')
# create the module and add it to sys.modules
# if name in sys.modules:
# return sys.modules[name]
mod = types.ModuleType(fullname)
mod.__file__ = path
mod.__loader__ = self
mod.__dict__['get_ipython'] = get_ipython
sys.modules[fullname] = mod
# extra work to ensure that magics that would affect the user_ns
# actually affect the notebook module's ns
save_user_ns = self.shell.user_ns
self.shell.user_ns = mod.__dict__
try:
for cell in nb.worksheets[0].cells:
if cell.cell_type == 'code' and cell.language == 'python':
# transform the input to executable Python
code = self.shell.input_transformer_manager.transform_cell(cell.input)
                    # run the code in the module
exec(code, mod.__dict__)
finally:
self.shell.user_ns = save_user_ns
return mod
"""
Explanation: Notebook Loader
Here we have our Notebook Loader.
It's actually quite simple - once we figure out the filename of the module,
all it does is:
load the notebook document into memory
create an empty Module
execute every cell in the Module namespace
Since IPython cells can have extended syntax,
the IPython transform is applied to turn each of these cells into their pure-Python counterparts before executing them.
If all of your notebook cells are pure-Python,
this step is unnecessary.
End of explanation
"""
class NotebookFinder(object):
"""Module finder that locates IPython Notebooks"""
def __init__(self):
self.loaders = {}
def find_module(self, fullname, path=None):
nb_path = find_notebook(fullname, path)
if not nb_path:
return
key = path
if path:
# lists aren't hashable
key = os.path.sep.join(path)
if key not in self.loaders:
self.loaders[key] = NotebookLoader(path)
return self.loaders[key]
"""
Explanation: The Module Finder
The finder is a simple object that tells you whether a name can be imported,
and returns the appropriate loader.
All this one does is check, when you do:
python
import mynotebook
it checks whether mynotebook.ipynb exists.
If a notebook is found, then it returns a NotebookLoader.
Any extra logic is just for resolving paths within packages.
End of explanation
"""
sys.meta_path.append(NotebookFinder())
"""
Explanation: Register the hook
Now we register the NotebookFinder with sys.meta_path
End of explanation
"""
ls nbpackage
"""
Explanation: After this point, my notebooks should be importable.
Let's look at what we have in the CWD:
End of explanation
"""
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
from IPython.display import display, HTML
formatter = HtmlFormatter()
lexer = PythonLexer()
# publish the CSS for pygments highlighting
display(HTML("""
<style type='text/css'>
%s
</style>
""" % formatter.get_style_defs()
))
def show_notebook(fname):
"""display a short summary of the cells of a notebook"""
with io.open(fname, 'r', encoding='utf-8') as f:
nb = current.read(f, 'json')
html = []
for cell in nb.worksheets[0].cells:
html.append("<h4>%s cell</h4>" % cell.cell_type)
if cell.cell_type == 'code':
html.append(highlight(cell.input, lexer, formatter))
else:
html.append("<pre>%s</pre>" % cell.source)
display(HTML('\n'.join(html)))
show_notebook(os.path.join("nbpackage", "mynotebook.ipynb"))
"""
Explanation: So I should be able to import nbimp.mynotebook.
Aside: displaying notebooks
Here is some simple code to display the contents of a notebook
with syntax highlighting, etc.
End of explanation
"""
from nbpackage import mynotebook
"""
Explanation: So my notebook has a heading cell and some code cells,
one of which contains some IPython syntax.
Let's see what happens when we import it
End of explanation
"""
mynotebook.foo()
"""
Explanation: Hooray, it imported! Does it work?
End of explanation
"""
mynotebook.has_ip_syntax()
"""
Explanation: Hooray again!
Even the function that contains IPython syntax works:
End of explanation
"""
ls nbpackage/nbs
"""
Explanation: Notebooks in packages
We also have a notebook inside the nb package,
so let's make sure that works as well.
End of explanation
"""
show_notebook(os.path.join("nbpackage", "nbs", "other.ipynb"))
from nbpackage.nbs import other
other.bar(5)
"""
Explanation: Note that the __init__.py is necessary for nb to be considered a package,
just like usual.
End of explanation
"""
import shutil
from IPython.utils.path import get_ipython_package_dir
utils = os.path.join(get_ipython_package_dir(), 'utils')
shutil.copy(os.path.join("nbpackage", "mynotebook.ipynb"),
os.path.join(utils, "inside_ipython.ipynb")
)
"""
Explanation: So now we have importable notebooks, from both the local directory and inside packages.
I can even put a notebook inside IPython, to further demonstrate that this is working properly:
End of explanation
"""
from IPython.utils import inside_ipython
inside_ipython.whatsmyname()
"""
Explanation: and import the notebook from IPython.utils
End of explanation
"""
|
briennakh/BIOF509
|
Wk12/Wk12-machine-learning-workflow.ipynb
|
mit
|
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
"""
Explanation: Week 12 - The Machine Learning Workflow
End of explanation
"""
# http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#example-plot-digits-pipe-py
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
logistic = linear_model.LogisticRegression()
pca = decomposition.PCA()
pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
###############################################################################
# Plot the PCA spectrum
pca.fit(X_digits)
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.axes([.2, .2, .7, .7])
plt.plot(pca.explained_variance_, linewidth=2)
plt.axis('tight')
plt.xlabel('n_components')
plt.ylabel('explained_variance_')
###############################################################################
# Prediction
n_components = [20, 40, 64]
Cs = np.logspace(-4, 4, 3)
#Parameters of pipelines can be set using ‘__’ separated parameter names:
estimator = GridSearchCV(pipe,
dict(pca__n_components=n_components,
logistic__C=Cs))
estimator.fit(X_digits, y_digits)
plt.axvline(estimator.best_estimator_.named_steps['pca'].n_components,
linestyle=':', label='n_components chosen')
plt.legend(prop=dict(size=12))
plt.show()
print(estimator)
"""
Explanation: In previous weeks we have covered preprocessing our data, dimensionality reduction, and last week looked at supervised learning. This week we will be pulling these processes together into a complete project.
Most projects can be thought of as a series of discrete steps:
Data acquisition / loading
Feature creation
Feature normalization
Feature selection
Machine learning model
Combining multiple models
Reporting / Utilization
Data acquisition
If we are fortunate our data may already be in a usable format but more often extensive work is needed to generate something usable.
What type of data do we have?
Do we need to combine data from multiple sources?
Is our data structured in such a way it can be used directly?
Does our data need to be cleaned?
Does our data have issues with confounding?
Feature creation
Can our data be used directly?
What features have been used previously for similar tasks?
Feature normalization
Z-score normalization?
Min-max normalization? (a short sketch of both scalers follows this section)
Feature selection
The number of features we have compared with our sample size will determine whether feature selection is needed. We may choose in the first instance not to use feature selection. If we observe that our performance on the validation dataset is substantially worse than on the training dataset it is likely our model is overfitting and would benefit from limiting the number of features.
Even if the performance is comparable we may still consider using dimensionality reduction or feature selection.
Machine learning model
Which algorithm to use will depend on the type of task and the size of the dataset. As with the preceding steps it can be difficult to predict the optimal approach and different options should be tried.
Combining multiple models
An additional step that can frequently boost performance is combining multiple different models. It is important to consider that combining different models can make the result more difficult to interpret.
The models may be generated by simply using a different algorithm or may additionally include changes to the features used.
Reporting / Utilization
Finally we need to be able to utilize the model we have generated. This typically takes the form of receiving a new sample and then performing all the steps used in training to make a prediction.
If we are generating a model only to understand the structure of the data we already have, the new samples may simply be the test dataset we set aside at the beginning.
Rapid experimentation
At each of the major steps we need to take there are a variety of options. It is often not clear which approach will give us the best performance and so we should try several.
Being able to rapidly try different options helps us get to the best solution faster. It is tempting to make a change to our code, execute it, look at the performance, and then decide between sticking with the change or going back to the original version. It is very easy to:
Lose track of what code generated what solution
Overwrite a working solution and be unable to repeat it
Using version control software is very useful for avoiding these issues.
Optimizing the entire workflow
We have previously looked at approaches for choosing the optimal parameters for an algorithm. We also have choices earlier in the workflow that we should systematically explore - what features should we use, how should they be normalized.
Scikit learn includes functionality for easily exploring the impact of different parameters not only in the machine learning algorithm we choose but at every stage of our solution.
Pipeline
End of explanation
"""
# http://scikit-learn.org/stable/auto_examples/feature_stacker.html#example-feature-stacker-py
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
#
# License: BSD 3 clause
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
iris = load_iris()
X, y = iris.data, iris.target
# This dataset is way to high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features where good, too?
selection = SelectKBest(k=1)
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
svm = SVC(kernel="linear")
# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])
param_grid = dict(features__pca__n_components=[1, 2, 3],
features__univ_select__k=[1, 2],
svm__C=[0.1, 1, 10])
grid_search = GridSearchCV(pipeline, param_grid=param_grid)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
"""
Explanation: FeatureUnion
End of explanation
"""
import numpy as np
from sklearn.base import TransformerMixin
class ModelTransformer(TransformerMixin):
"""Wrap a classifier model so that it can be used in a pipeline"""
def __init__(self, model):
self.model = model
def fit(self, *args, **kwargs):
self.model.fit(*args, **kwargs)
return self
def transform(self, X, **transform_params):
return self.model.predict_proba(X)
def predict_proba(self, X, **transform_params):
return self.transform(X, **transform_params)
class VarTransformer(TransformerMixin):
"""Compute the variance"""
def transform(self, X, **transform_params):
var = X.var(axis=1)
return var.reshape((var.shape[0],1))
def fit(self, X, y=None, **fit_params):
return self
class MedianTransformer(TransformerMixin):
"""Compute the median"""
def transform(self, X, **transform_params):
median = np.median(X, axis=1)
return median.reshape((median.shape[0],1))
def fit(self, X, y=None, **fit_params):
return self
class ChannelExtractor(TransformerMixin):
"""Extract a single channel for downstream processing"""
def __init__(self, channel):
self.channel = channel
def transform(self, X, **transformer_params):
return X[:,:,self.channel]
def fit(self, X, y=None, **fit_params):
return self
class FFTTransformer(TransformerMixin):
"""Convert to the frequency domain and then sum over bins"""
def transform(self, X, **transformer_params):
fft = np.fft.rfft(X, axis=1)
fft = np.abs(fft)
fft = np.cumsum(fft, axis=1)
bin_size = 10
max_freq = 60
return np.column_stack([fft[:,i] - fft[:,i-bin_size]
for i in range(bin_size, max_freq, bin_size)])
def fit(self, X, y=None, **fit_params):
return self
import numpy as np
import os
import pickle
from sklearn.cross_validation import cross_val_score, StratifiedShuffleSplit
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
import get_traces
import transformers as trans
def build_pipeline(X):
"""Helper function to build the pipeline of feature transformations.
We do the same thing to each channel so rather than manually copying changes
for all channels this is automatically generated"""
channels = X.shape[2]
pipeline = Pipeline([
('features', FeatureUnion([
('select_%d_pipeline' % i,
Pipeline([('select_%d' % i, trans.ChannelExtractor(i)),
('channel features', FeatureUnion([
('var', trans.VarTransformer()),
('median', trans.MedianTransformer()),
('fft', trans.FFTTransformer()),
])),
])
) for i in range(channels)])),
('classifier', trans.ModelTransformer(RandomForestClassifier(
n_estimators=500,
max_depth=None,
min_samples_split=1,
random_state=0))),
])
return pipeline
def get_transformed_data(patient, func=get_traces.get_training_traces):
"""Load in all the data"""
X = []
channels = get_traces.get_num_traces(patient)
# Reading in 43 Gb of data . . .
for i in range(channels):
x, y = func(patient, i)
X.append(x)
return (np.dstack(X), y)
all_labels = []
all_predictions = np.array([])
folders = [i for i in os.listdir(get_traces.directory) if i[0] != '.']
folders.sort()
for folder in folders:
print('Starting %s' % folder)
print('getting data')
X, y = get_transformed_data(folder)
print(X.shape)
print('stratifiedshufflesplit')
cv = StratifiedShuffleSplit(y,
n_iter=5,
test_size=0.2,
random_state=0,)
print('cross_val_score')
pipeline = build_pipeline(X)
# Putting this in a list is unnecessary for just one pipeline - use to compare multiple pipelines
scores = [
cross_val_score(pipeline, X, y, cv=cv, scoring='roc_auc')
]
print('displaying results')
for score, label in zip(scores, ['pipeline',]):
print("AUC: {:.2%} (+/- {:.2%}), {:}".format(score.mean(),
score.std(), label))
clf = pipeline
print('Fitting full model')
clf.fit(X, y)
print('Getting test data')
testing_data, files = get_transformed_data(folder,
get_traces.get_testing_traces)
print('Generating predictions')
predictions = clf.predict_proba(testing_data)
print(predictions.shape, len(files))
with open('%s_randomforest_predictions.pkl' % folder, 'wb') as f:
pickle.dump((files, predictions[:,1]), f)
"""
Explanation: Advanced Pipeline: Seizure Detection
In this example we will look at an EEG dataset with the goal of detecting the onset of epileptic seizures.
This machine learning task and associated dataset was put together by UPenn and Mayo Clinic and hosted on Kaggle as a competition with financial backing from NINDS and the American Epilepsy Society.
There are two parts to the data:
16 electrode EEG at 400 Hz from dogs with naturally occurring epilepsy
Patient EEG recordings at 500 Hz and 5000 Hz with a variable number of electrodes
There are a number of challenges in approaching this task.
Patient / animal specific differences
Different number of electrodes
Different sampling rates
Different electrode positions
Lack of existing features
. . . and likely several others
Data acquisition
Thankfully with this being a Kaggle competition the data has already been collected for us. The data has been divided into multiple files each with 1 second of data. There are training examples for each of the subjects and samples have already been chosen for the test set.
We will still want to perform validation but we can use Kaggle to measure our final performance on the test set.
Feature creation
The data available are potentials observed by the different electrodes over the one second window used for each sample. These values are unlikely to be predictive in this format so we must use this data to create the features our model will use.
This is the first real decision we need to make. What might our features be?
Feature normalization
Applying normalization using standard approaches may be necessary depending on the algorithm we plan to use.
Feature selection
The dataset is reasonably large and we are likely to have a limited number of features. Our first solution might not need to select features prior to building the model.
Depending on how creative we get and how large the number of features grows, later solutions may benefit from feature selection, either to prevent overfitting or to speed up evaluation.
Machine learning model
Evaluating different algorithms as we build a solution is probably going to be the best approach. While we are developing features using an ensemble or boosting algorithm will be the simplest approach.
Combining multiple models
There are multiple different levels at which we might develop machine learning models for this task. We could develop models for individual channels or all the channels combined. We could develop a single model for all subjects or separate models for each subject.
All of these different models could then be combined together.
Reporting / Utilization
For this task we know we will have a test dataset and our performance there will be evaluated.
NOTE: The dataset for this example is 43 GB uncompressed so I have not included it in the course material. If you want to run through the example it can be downloaded from the link above.
A github repository with all the code for this example is here.
End of explanation
"""
|
NREL/bifacial_radiance
|
docs/tutorials/6 - Advanced topics - Understanding trackerdict structure.ipynb
|
bsd-3-clause
|
import bifacial_radiance
from pathlib import Path
import os
testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'Tutorial_06')
if not os.path.exists(testfolder):
os.makedirs(testfolder)
simulationName = 'tutorial_6'
moduletype = 'test-module'
albedo = "litesoil" # this is one of the options on ground.rad
lat = 37.5
lon = -77.6
# Scene variables
nMods = 3
nRows = 1
hub_height = 2.3 # meters
pitch = 10 # meters # We will be using pitch instead of GCR for this example.
# Tracking parameters
cumulativesky = False
limit_angle = 45 # tracker rotation limit angle
angledelta = 0.01 # we will be doing hourly simulation, we want the angle to be as close to real tracking as possible.
backtrack = True
#makeModule parameters
# x and y will be defined by the cell-level module parameters
xgap = 0.01
ygap = 0.10
zgap = 0.05
numpanels = 2
torquetube = True
axisofrotationTorqueTube = False
diameter = 0.1
tubetype = 'Oct' # This will make an octagonal torque tube.
material = 'black' # Torque tube will be made of this material (0% reflectivity)
tubeParams = {'diameter':diameter,
'tubetype':tubetype,
'material':material,
'axisofrotation':axisofrotationTorqueTube,
'visible':torquetube}
# Simulation range between two hours
startdate = '11_06_11' # Options: mm_dd, mm_dd_HH, mm_dd_HHMM, YYYY-mm-dd_HHMM
enddate = '11_06_14'
# Cell Parameters
numcellsx = 6
numcellsy = 12
xcell = 0.156
ycell = 0.156
xcellgap = 0.02
ycellgap = 0.02
demo = bifacial_radiance.RadianceObj(simulationName, path=testfolder)
demo.setGround(albedo)
epwfile = demo.getEPW(lat,lon)
metdata = demo.readWeatherFile(epwfile, starttime=startdate, endtime=enddate)
cellLevelModuleParams = {'numcellsx': numcellsx, 'numcellsy':numcellsy,
'xcell': xcell, 'ycell': ycell, 'xcellgap': xcellgap, 'ycellgap': ycellgap}
mymodule = demo.makeModule(name=moduletype, xgap=xgap, ygap=ygap, zgap=zgap,
numpanels=numpanels, cellModule=cellLevelModuleParams, tubeParams=tubeParams)
sceneDict = {'pitch':pitch,'hub_height':hub_height, 'nMods': nMods, 'nRows': nRows}
demo.set1axis(limit_angle=limit_angle, backtrack=backtrack, gcr=mymodule.sceney / pitch, cumulativesky=cumulativesky)
demo.gendaylit1axis()
demo.makeScene1axis(module=mymodule, sceneDict=sceneDict)
demo.makeOct1axis()
demo.analysis1axis()
"""
Explanation: 6 - Advanced topics: Understanding trackerdict structure
Tutorial 6 gives a good, detailed introduction to the trackerdict structure step by step.
Here is a condensed summary of functions you can use to explore the tracker dictionary.
Steps:
<ol>
<li> <a href='#step1'> Create a short Simulation + tracker dictionary beginning to end for 1 day </a></li>
<li> <a href='#step2'> Explore the tracker dictionary </a></li>
<li> <a href='#step3'> Explore Save Options </a></li>
</ol>
<a id='step 1'></a>
1. Create a short Simulation + tracker dictionary beginning to end for 1 day
End of explanation
"""
print(demo) # Shows all keys for top-level RadianceObj
trackerkeys = sorted(demo.trackerdict.keys()) # get the trackerdict keys to see a specific hour.
demo.trackerdict[trackerkeys[0]] # This prints all trackerdict content
demo.trackerdict[trackerkeys[0]]['scene'] # This shows the Scene Object contents
demo.trackerdict[trackerkeys[0]]['scene'].module.scenex # This shows the Module Object in the Scene's contents
demo.trackerdict[trackerkeys[0]]['scene'].sceneDict # Printing the scene dictionary saved in the Scene Object
demo.trackerdict[trackerkeys[0]]['scene'].sceneDict['tilt'] # Addressing one of the variables in the scene dictionary
# Looking at the AnalysisObj results individually
demo.trackerdict[trackerkeys[0]]['AnalysisObj'] # This shows the Analysis Object contents
demo.trackerdict[trackerkeys[0]]['AnalysisObj'].mattype # Addressing one of the variables in the Analysis Object
# Looking at the Analysis results Accumulated for the day:
demo.Wm2Back # this value is the sum of the individual irradiance results over every hour simulated.
# Access module values
demo.trackerdict[trackerkeys[0]]['scene'].module.scenex
"""
Explanation: <a id='step2'></a>
2. Explore the tracker dictionary
You can use any of the options below to explore the tracking dictionary. Copy one into an empty cell to see its contents.
End of explanation
"""
demo.exportTrackerDict(trackerdict = demo.trackerdict, savefile = 'results\\test_reindexTrue.csv', reindex = False)
demo.save(savefile = 'results\\demopickle.pickle')
"""
Explanation: <a id='step3'></a>
3. Explore Save Options
The following lines offer ways to save your trackerdict or your demo object.
End of explanation
"""
|
jepegit/cellpy
|
dev_utils/lookup/cellpy_check_hdf5_queries.ipynb
|
mit
|
my_data.make_step_table()
filename2 = Path("/Users/jepe/Arbeid/Data/celldata/20171120_nb034_11_cc.nh5")
my_data.save(filename2)
print(f"size: {filename2.stat().st_size/1_048_576} MB")
my_data2 = cellreader.CellpyData()
my_data2.load(filename2)
dataset2 = my_data2.dataset
print(dataset2.steps.columns)
del my_data2
del dataset2
# next: dont load the full hdf5-file, only get datapoints for a cycle from step_table
# then: query the hdf5-file for the data (and time it)
# ex: store.select('/CellpyData/dfdata', "data_point>20130104 & data_point<20130104 & columns=['A', 'B']")
infoname = "/CellpyData/info"
dataname = "/CellpyData/dfdata"
summaryname = "/CellpyData/dfsummary"
fidname = "/CellpyData/fidtable"
stepname = "/CellpyData/step_table"
store = pd.HDFStore(filename2)
store.select("/CellpyData/dfdata", where="index>21 and index<32")
store.select(
"/CellpyData/dfdata", "index>21 & index<32 & columns=['Test_Time', 'Step_Index']"
)
"""
Explanation: Some notes
should rename the tables consistently
e.g. dfsummary, dfdata, dfinfo, dfsteps, dffid
we have to take care that it can also read "old" cellpy files
should add (or check whether it already exists) an option for passing a "custom" config file when starting the session
End of explanation
"""
steptable = store.select(stepname)
s = my_data.get_step_numbers(
steptype="charge",
allctypes=True,
pdtype=True,
cycle_number=None,
steptable=steptable,
)
cycle_mask = (
s["cycle"] == 2
) # also possible to give cycle_number in get_step_number instead
s.head()
a = s.loc[cycle_mask, ["point_first", "point_last"]].values[0]
v_hdr = "Voltage"
c_hdr = "Charge_Capacity"
d_hdr = "Discharge_Capacity"
i_hdr = "Current"
q = f"index>={ a[0] } & index<={ a[1] }"
q += f"& columns = ['{c_hdr}', '{v_hdr}']"
mass = dataset.mass
print(f"mass from dataset.mass = {mass:5.4} mg")
%%timeit
my_data.get_ccap(2)
%%timeit
c2 = store.select("/CellpyData/dfdata", q)
c2[c_hdr] = c2[c_hdr] * 1000000 / mass
5.03 / 3.05
"""
Explanation: Querying a cellpy file (hdf5)
load the step table
get the step numbers for a given cycle
create the query and run it
scale the charge (1_000_000/mass, matching the code above)
End of explanation
"""
plt.plot(c2[c_hdr], c2[v_hdr])
store.close()
"""
Explanation: Result
65% penalty for using "hdf5" query lookup
5.03 vs 3.05 ms
End of explanation
"""
|
eford/rebound
|
ipython_examples/Units.ipynb
|
gpl-3.0
|
import rebound
import math
sim = rebound.Simulation()
sim.G = 6.674e-11
"""
Explanation: Unit convenience functions
For convenience, REBOUND offers simple functionality for converting units. One implicitly sets the units for the simulation through the values used for the initial conditions, but one has to set the appropriate value for the gravitational constant G, and sometimes it is convenient to get the output in different units.
The default value for G is 1, so one can:
a) use units for the initial conditions where G=1 (e.g., AU, $M_\odot$, yr/$2\pi$)
b) set G manually to the value appropriate for the adopted initial conditions, e.g., to use SI units,
End of explanation
"""
sim.units = ('yr', 'AU', 'Msun')
print("G = {0}.".format(sim.G))
"""
Explanation: c) set rebound.units:
End of explanation
"""
sim.add('Earth')
ps = sim.particles
import math
print("v = {0}".format(math.sqrt(ps[0].vx**2 + ps[0].vy**2 + ps[0].vz**2)))
"""
Explanation: When you set the units, REBOUND converts G to the appropriate value for the units passed (must pass exactly 3 units for mass length and time, but they can be in any order). Note that if you are interested in high precision, you have to be quite particular about the exact units.
As an aside, the reason why G differs from $4\pi^2 \approx 39.47841760435743$ is mostly that we follow the convention of defining a "year" as 365.25 days (a Julian year), whereas the Earth's sidereal orbital period is closer to 365.256 days (and at even finer level, Venus and Mercury modify the orbital period). G would only equal $4\pi^2$ in units where a "year" was exactly equal to one orbital period at $1 AU$ around a $1 M_\odot$ star.
Adding particles
If you use sim.units at all, you need to set the units before adding any particles. You can then add particles in any of the ways described in WHFast.ipynb. You can also add particles drawing from the horizons database (see Churyumov-Gerasimenko.ipynb). If you don't set the units ahead of time, HORIZONS will return initial conditions in units of AU, $M_\odot$ and yrs/$2\pi$, such that G=1.
Above we switched to units of AU, $M_\odot$ and yrs, so when we add Earth:
End of explanation
"""
sim = rebound.Simulation()
sim.units = ('m', 's', 'kg')
sim.add(m=1.99e30)
sim.add(m=5.97e24,a=1.5e11)
sim.convert_particle_units('AU', 'yr', 'Msun')
sim.status()
"""
Explanation: we see that the velocity is correctly set to approximately $2\pi$ AU/yr.
If you'd like to enter the initial conditions in one set of units, and then use a different set for the simulation, you can use the sim.convert_particle_units function, which converts both the initial conditions and G. Since we added Earth above, we restart with a new Simulation instance; otherwise we'll get an error saying that we can't set the units with particles already loaded:
End of explanation
"""
sim = rebound.Simulation()
print("G = {0}".format(sim.G))
sim.add(m=1.99e30)
sim.add(m=5.97e24,a=1.5e11)
sim.status()
"""
Explanation: We first set the units to SI, added (approximate values for) the Sun and Earth in these units, and switched to AU, yr, $M_\odot$. You can see that the particle states were converted correctly--the Sun has a mass of about 1, and the Earth has a distance of about 1.
Note that when you pass orbital elements to sim.add, you must make sure G is set correctly ahead of time (through any of the 3 methods above), since it will use the value of sim.G to generate the velocities:
End of explanation
"""
|
PyLCARS/PythonUberHDL
|
myHDL_ComputerFundamentals/Counters/CountersInMyHDL.ipynb
|
bsd-3-clause
|
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
"""
Explanation: \title{Counters in myHDL}
\author{Steven K Armour}
\maketitle
Counters play a vital role in digital hardware, ranging from clock dividers (see below) to event triggers that record the number of events that have occurred or still need to occur (all the counters here use a clock as the counting event, but this is easily changed). Presented below are some basic HDL counters (Up, Down, and a hybridized Up/Down) in myHDL.
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Refrances" data-toc-modified-id="Refrances-1"><span class="toc-item-num">1 </span>Refrances</a></span></li><li><span><a href="#Libraries-and-Helper-functions" data-toc-modified-id="Libraries-and-Helper-functions-2"><span class="toc-item-num">2 </span>Libraries and Helper functions</a></span></li><li><span><a href="#Counter-Specs" data-toc-modified-id="Counter-Specs-3"><span class="toc-item-num">3 </span>Counter Specs</a></span></li><li><span><a href="#myHDL-modules-bitvector-type-behavior" data-toc-modified-id="myHDL-modules-bitvector-type-behavior-4"><span class="toc-item-num">4 </span>myHDL modules bitvector type behavior</a></span><ul class="toc-item"><li><span><a href="#up-counting-behavior" data-toc-modified-id="up-counting-behavior-4.1"><span class="toc-item-num">4.1 </span>up counting behavior</a></span></li><li><span><a href="#down-counting-behavior" data-toc-modified-id="down-counting-behavior-4.2"><span class="toc-item-num">4.2 </span>down counting behavior</a></span></li></ul></li><li><span><a href="#Up-Counter" data-toc-modified-id="Up-Counter-5"><span class="toc-item-num">5 </span>Up-Counter</a></span><ul class="toc-item"><li><span><a href="#myHDL-testing" data-toc-modified-id="myHDL-testing-5.1"><span class="toc-item-num">5.1 </span>myHDL testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-5.2"><span class="toc-item-num">5.2 </span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-5.3"><span class="toc-item-num">5.3 </span>Verilog Testbench</a></span></li></ul></li><li><span><a href="#Down-Counter" data-toc-modified-id="Down-Counter-6"><span class="toc-item-num">6 </span>Down Counter</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-6.1"><span class="toc-item-num">6.1 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-6.2"><span class="toc-item-num">6.2 </span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-6.3"><span class="toc-item-num">6.3 </span>Verilog Testbench</a></span></li></ul></li><li><span><a href="#Up/Down-Counter" data-toc-modified-id="Up/Down-Counter-7"><span class="toc-item-num">7 </span>Up/Down Counter</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-7.1"><span class="toc-item-num">7.1 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-7.2"><span class="toc-item-num">7.2 </span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-7.3"><span class="toc-item-num">7.3 </span>Verilog Testbench</a></span></li></ul></li><li><span><a href="#Application:-Clock-Divider" data-toc-modified-id="Application:-Clock-Divider-8"><span class="toc-item-num">8 </span>Application: Clock Divider</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-8.1"><span class="toc-item-num">8.1 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Code" data-toc-modified-id="Verilog-Code-8.2"><span class="toc-item-num">8.2 </span>Verilog Code</a></span></li><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-8.3"><span class="toc-item-num">8.3 </span>Verilog Testbench</a></span></li></ul></li></ul></div>
References
@misc{loi le_2017,
title={Verilog code for counter with testbench},
url={http://www.fpga4student.com/2017/03/verilog-code-for-counter-with-testbench.html},
journal={Fpga4student.com},
author={Loi Le, Van},
year={2017}
}
@misc{digilent_2018,
title={Learn.Digilentinc | Counter and Clock Divider},
url={https://learn.digilentinc.com/Documents/262},
journal={Learn.digilentinc.com},
author={Digilent},
year={2018}
}
Libraries and Helper functions
End of explanation
"""
CountVal=17
BitSize=int(np.log2(CountVal))+1; BitSize
"""
Explanation: Counter Specs
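For CountVal = 17 this works out to int(log2(17)) + 1 = 4 + 1 = 5 bits, which is enough to hold counts up to 2^5 - 1 = 31.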
End of explanation
"""
ModBV=modbv(0)[BitSize:]
IntBV=intbv(0)[BitSize:]
print(f"`ModBV` max is {ModBV.max}; min is {ModBV.min}")
print(f"`IntBV` max is {IntBV.max}; min is {IntBV.min}")
for _ in range(ModBV.max*2):
try:
ModBV+=1; IntBV+=1
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV}")
except ValueError:
ModBV+=1
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV} and INVALID")
"""
Explanation: myHDL modules bitvector type behavior
up counting behavior
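(As the try/except loop above shows, a modbv simply wraps around -- modular arithmetic -- when it passes its maximum, while an intbv raises a ValueError as soon as it is pushed outside its declared range.)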
End of explanation
"""
ModBV=modbv(2**BitSize -1)[BitSize:]
IntBV=intbv(2**BitSize -1)[BitSize:]
print(f"`ModBV` max is {ModBV.max}; min is {ModBV.min}")
print(f"`IntBV` max is {IntBV.max}; min is {IntBV.min}")
for _ in range(ModBV.max*2):
try:
ModBV-=1; IntBV-=1
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV}")
except ValueError:
ModBV-=0
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV} and INVALID")
"""
Explanation: down counting behavior
End of explanation
"""
@block
def Up_Counter(count, Trig, clk, rst, CountVal, BitSize):
"""
UpCounter
Input:
clk(bool): system clock feed
rst(bool): clock reset signal
    Output:
        count (bit vector): current count value
        Trig(bool): goes high once the count has reached CountVal
    Parameter (Python only):
        CountVal(int): value to count to
        BitSize (int): bit-vector size, log2(CountVal)+1
"""
#internals
count_i=Signal(modbv(0)[BitSize:])
Trig_i=Signal(bool(0))
@always(clk.posedge, rst.negedge)
def logic():
if rst:
count_i.next=0
Trig_i.next=0
elif count_i%CountVal==0 and count_i!=0:
Trig_i.next=1
count_i.next=0
else:
count_i.next=count_i+1
@always_comb
def OuputBuffer():
count.next=count_i
Trig.next=Trig_i
return instances()
"""
Explanation: Up-Counter
Up counters count up from a lower starting value to a target value. The following counter is a simple one that uses the clock as the incrementer (think of one clock cycle as one swing of an old grandfather clock's pendulum), but more complicated counters can use any signal as the incrementer. This counter also has a signal that indicates it has reached its target value before the internal counter's modulus resets; this mimics the behavior of timers found in common apps that show how much time has elapsed since the count ran up.
\begin{figure}
\centerline{\includegraphics[width=10cm]{Up_Counter.png}}
\caption{\label{fig:RP} Up_Counter Functional Diagram }
\end{figure}
End of explanation
"""
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
Trig=Signal(bool(0)); Peeker(Trig, 'Trig')
count=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')
DUT=Up_Counter(count, Trig, clk, rst, CountVal, BitSize)
def Up_CounterTB():
"""
myHDL only Testbench for `Up_Counter` module
"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==int(CountVal*1.5):
rst.next=1
elif i==int(CountVal*1.5)+1:
rst.next=0
if i==int(CountVal*2.5):
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, Up_CounterTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Up_CounterData=Peeker.to_dataframe()
Up_CounterData=Up_CounterData[Up_CounterData['clk']==1]
Up_CounterData.drop('clk', axis=1, inplace=True)
Up_CounterData.reset_index(drop=True, inplace=True)
Up_CounterData
"""
Explanation: myHDL testing
End of explanation
"""
DUT.convert()
VerilogTextReader('Up_Counter');
"""
Explanation: Verilog Code
End of explanation
"""
ResetAt=int(CountVal*1.5)+1
StopAt=int(CountVal*2.5)
@block
def Up_CounterTBV():
"""
myHDL -> Verilog Testbench for `Up_Counter` module
"""
clk=Signal(bool(0))
rst=Signal(bool(0))
Trig=Signal(bool(0))
count=Signal(modbv(0)[BitSize:])
@always_comb
def print_data():
print(clk, rst, Trig, count)
DUT=Up_Counter(count, Trig, clk, rst, CountVal, BitSize)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
i=0
while True:
if i==ResetAt:
rst.next=1
elif i==(ResetAt+1):
rst.next=0
else:
pass
if i==StopAt:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
TB=Up_CounterTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('Up_CounterTBV');
"""
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{Up_CounterRTL.png}}
\caption{\label{fig:UCRTL} Up_Counter RTL Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{Up_CounterSYN.png}}
\caption{\label{fig:UCSYN} Up_Counter Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
Verilog Testbench
End of explanation
"""
@block
def Down_Counter(count, Trig, clk, rst, StartVal, BitSize):
"""
DownCounter
Input:
clk(bool): system clock feed
rst(bool): clock reset signal
    Output:
        count (bit vector): current count value
        Trig(bool): goes high once the count has reached zero
    Parameter (Python only):
        StartVal(int): value to count down from
        BitSize (int): bit-vector size, log2(StartVal)+1
"""
#internal counter value
count_i=Signal(modbv(StartVal)[BitSize:])
@always(clk.posedge, rst.negedge)
def logic():
if rst:
count_i.next=StartVal
Trig.next=0
elif count_i==0:
Trig.next=1
count_i.next=StartVal
else:
count_i.next=count_i-1
@always_comb
def OuputBuffer():
count.next=count_i
return instances()
"""
Explanation: Down Counter
Down counters count down from a set upper value to a target lower value. The following down counter is a simple revamp of the previous up counter: it starts from CountVal and counts down to zero, at which point it raises the Trig signal to indicate that one countdown cycle has completed before the internal counter resets and the countdown restarts.
\begin{figure}
\centerline{\includegraphics[width=10cm]{}}
\caption{\label{fig:RP} Down_Counter Functional Diagram (ToDo) }
\end{figure}
End of explanation
"""
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
Trig=Signal(bool(0)); Peeker(Trig, 'Trig')
count=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')
DUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)
def Down_CounterTB():
"""
myHDL only Testbench for `Down_Counter` module
"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==int(CountVal*1.5):
rst.next=1
elif i==int(CountVal*1.5)+1:
rst.next=0
if i==int(CountVal*2.5):
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, Down_CounterTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Down_CounterData=Peeker.to_dataframe()
Down_CounterData=Down_CounterData[Down_CounterData['clk']==1]
Down_CounterData.drop('clk', axis=1, inplace=True)
Down_CounterData.reset_index(drop=True, inplace=True)
Down_CounterData
"""
Explanation: myHDL Testing
End of explanation
"""
DUT.convert()
VerilogTextReader('Down_Counter');
"""
Explanation: Verilog Code
End of explanation
"""
ResetAt=int(CountVal*1.5)
StopAt=int(CountVal*2.5)
@block
def Down_CounterTBV():
"""
myHDL -> Verilog Testbench for `Down_Counter` module
"""
clk=Signal(bool(0))
rst=Signal(bool(0))
Trig=Signal(bool(0))
count=Signal(modbv(0)[BitSize:])
@always_comb
def print_data():
print(clk, rst, Trig, count)
DUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
i=0
while True:
if i==ResetAt:
rst.next=1
elif i==(ResetAt+1):
rst.next=0
else:
pass
if i==StopAt:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
TB=Down_CounterTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('Down_CounterTBV');
"""
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{Down_CounterRTL.png}}
\caption{\label{fig:DCRTL} Down_Counter RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{Down_CounterSYN.png}}
\caption{\label{fig:DCSYN} Down_Counter Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
Verilog Testbench
End of explanation
"""
#Create the Direction States for UpDown Counter
DirStates=enum('Up', 'Down')
print(f"`Up` state repersentation is {bin(DirStates.Up)}")
print(f"`Down` state repersentation is {bin(DirStates.Down)}")
@block
def UpDown_Counter(Dir, count, Trig, clk, rst,
CountVal, StartVal, BitSize):
"""
UpDownCounter, hybrid of a simple Up Counter and
a simple Down Counter using `Dir` to control Up/Down
count Direction
Input:
        Dir(enum): count-direction control (DirStates.Up or DirStates.Down)
        clk(bool): system clock feed
        rst(bool): clock reset signal
    Output:
        count (bit vector): current count value
        Trig(bool): goes high when the count reaches a multiple of CountVal
    Parameter (Python only):
        CountVal(int): highest value for the counter
        StartVal(int): starting value for the internal counter
        BitSize (int): bit-vector size, log2(CountVal)+1
"""
#internal counter value
count_i=Signal(modbv(StartVal)[BitSize:])
@always(clk.posedge, rst.negedge)
def logic():
if rst:
count_i.next=StartVal
Trig.next=0
        #counter containment
elif count_i//CountVal==1 and rst==0:
count_i.next=StartVal
#up behavior
elif Dir==DirStates.Up:
count_i.next=count_i+1
            #simple Trigger at ends
if count_i%CountVal==0:
Trig.next=1
#down behavior
elif Dir==DirStates.Down:
count_i.next=count_i-1
            #simple Trigger at ends
if count_i%CountVal==0:
Trig.next=1
@always_comb
def OuputBuffer():
count.next=count_i
return instances()
"""
Explanation: Up/Down Counter
This counter incorporates both an up counter and a down counter, hybridizing the two via a direction-control state machine.
\begin{figure}
\centerline{\includegraphics[width=10cm]{}}
\caption{\label{fig:RP} UpDown_Counter Functional Diagram (ToDo) }
\end{figure}
End of explanation
"""
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
Trig=Signal(bool(0)); Peeker(Trig, 'Trig')
count=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')
Dir=Signal(DirStates.Up); Peeker(Dir, 'Dir')
DUT=UpDown_Counter(Dir, count, Trig, clk, rst,
CountVal, StartVal=CountVal//2, BitSize=BitSize)
def UpDown_CounterTB():
"""
myHDL only Testbench for `UpDown_Counter` module
"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==int(CountVal*1.5):
Dir.next=DirStates.Down
elif i==int(CountVal*2.5):
rst.next=1
elif i==int(CountVal*2.5)+1:
rst.next=0
if i==int(CountVal*3.5):
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, UpDown_CounterTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
UpDown_CounterData=Peeker.to_dataframe()
UpDown_CounterData=UpDown_CounterData[UpDown_CounterData['clk']==1]
UpDown_CounterData.drop('clk', axis=1, inplace=True)
UpDown_CounterData.reset_index(drop=True, inplace=True)
UpDown_CounterData
"""
Explanation: myHDL Testing
End of explanation
"""
DUT.convert()
VerilogTextReader('UpDown_Counter');
"""
Explanation: Verilog Code
End of explanation
"""
StateChangeAt=int(CountVal*1.5)
ResetAt=int(CountVal*2.5)
StopAt=int(CountVal*3.5)
@block
def UpDown_CounterTBV():
"""
    myHDL -> Verilog Testbench for `UpDown_Counter` module
"""
clk=Signal(bool(0))
rst=Signal(bool(0))
Trig=Signal(bool(0))
count=Signal(modbv(0)[BitSize:])
Dir=Signal(DirStates.Up)
DUT=UpDown_Counter(Dir, count, Trig, clk, rst,
CountVal, StartVal=CountVal//2, BitSize=BitSize)
@always_comb
def print_data():
print(clk, rst, Trig, count)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
i=0
while True:
if i==StateChangeAt:
Dir.next=DirStates.Down
elif i==ResetAt:
rst.next=1
elif i==ResetAt+1:
rst.next=0
else:
pass
if i==StopAt:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
TB=UpDown_CounterTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('UpDown_CounterTBV');
"""
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{UpDown_CounterRTL.png}}
\caption{\label{fig:UDCRTL} UpDown_Counter RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{UpDown_CounterSYN.png}}
\caption{\label{fig:UDCSYN} UpDown_Counter Synthesized schematic; Xilinx Vivado 2017.4}
\end{figure}
Verilog Testbench
End of explanation
"""
@block
def ClockDivider(Divisor, clkOut, count, clk,rst):
"""
    Simple Clock Divider based on the Digilent Clock Divider
    https://learn.digilentinc.com/Documents/262
    Input:
        Divisor(32 bit): the clock-frequency divide-by value
        clk(bool): The input clock
        rst(bool): clockDivider Reset
    Output:
        clkOut(bool): the divided clock output
count(32bit): the value of the internal counter
"""
count_i=Signal(modbv(0)[32:])
@always(clk.posedge, rst.posedge)
def counter():
if rst:
count_i.next=0
elif count_i==(Divisor-1):
count_i.next=0
else:
count_i.next=count_i+1
clkOut_i=Signal(bool(0))
@always(clk.posedge, rst.posedge)
def clockTick():
if rst:
clkOut_i.next=0
elif count_i==(Divisor-1):
clkOut_i.next=not clkOut_i
else:
clkOut_i.next=clkOut_i
@always_comb
def OuputBuffer():
count.next=count_i
clkOut.next=clkOut_i
return instances()
"""
Explanation: Application: Clock Divider
One common application of counters in HDL is building clock dividers. While there are more specialized and advanced means of generating higher or lower frequencies from a reference clock (see, for example, digital phase-locked loops), a simple clock divider is a very useful piece of HDL for driving other HDL IPs that need a slower event rate than the megahertz+ clock speeds of today's FPGAs.
\begin{figure}
\centerline{\includegraphics[width=10cm]{}}
\caption{\label{fig:RP} ClockDivider Functional Diagram (ToDo) }
\end{figure}
End of explanation
"""
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
Divisor=Signal(intbv(0)[32:]); Peeker(Divisor, 'Divisor')
count=Signal(intbv(0)[32:]); Peeker(count, 'count')
clkOut=Signal(bool(0)); Peeker(clkOut, 'clkOut')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=ClockDivider(Divisor, clkOut, count, clk,rst)
def ClockDividerTB():
"""
myHDL only Testbench for `ClockDivider` module
"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(2,6+1):
Divisor.next=i
rst.next=0
#run clock time
for _ in range(4*2**(i-1)):
yield clk.posedge
for j in range(1):
if j==0:
rst.next=1
yield clk.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, ClockDividerTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
ClockDividerData=Peeker.to_dataframe()
ClockDividerData
ClockDividerData_2=ClockDividerData[ClockDividerData['Divisor']==2]
ClockDividerData_2.reset_index(drop=True, inplace=True)
ClockDividerData_2.plot(y=['clk', 'clkOut']);
ClockDividerData_3=ClockDividerData[ClockDividerData['Divisor']==3]
ClockDividerData_3.reset_index(drop=True, inplace=True)
ClockDividerData_3.plot(y=['clk', 'clkOut']);
ClockDividerData_4=ClockDividerData[ClockDividerData['Divisor']==4]
ClockDividerData_4.reset_index(drop=True, inplace=True)
ClockDividerData_4.plot(y=['clk', 'clkOut']);
ClockDividerData_5=ClockDividerData[ClockDividerData['Divisor']==5]
ClockDividerData_5.reset_index(drop=True, inplace=True)
ClockDividerData_5.plot(y=['clk', 'clkOut']);
ClockDividerData_6=ClockDividerData[ClockDividerData['Divisor']==6]
ClockDividerData_6.reset_index(drop=True, inplace=True)
ClockDividerData_6.plot(y=['clk', 'clkOut']);
DUT.convert()
VerilogTextReader('ClockDivider');
"""
Explanation: myHDL Testing
End of explanation
"""
@block
def ClockDividerTBV():
"""
myHDL -> Verilog Testbench for `ClockDivider` module
"""
clk=Signal(bool(0));
Divisor=Signal(intbv(0)[32:])
count=Signal(intbv(0)[32:])
clkOut=Signal(bool(0))
rst=Signal(bool(0))
@always_comb
def print_data():
print(clk, Divisor, count, clkOut, rst)
DUT=ClockDivider(Divisor, clkOut, count, clk,rst)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(2,6+1):
Divisor.next=i
rst.next=0
#run clock time
for _ in range(4*2**(i-1)):
yield clk.posedge
for j in range(1):
if j==0:
rst.next=1
else:
pass
yield clk.posedge
raise StopSimulation()
return instances()
TB=ClockDividerTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('ClockDividerTBV');
"""
Explanation: Verilog Code
\begin{figure}
\centerline{\includegraphics[width=10cm]{ClockDividerRTL.png}}
\caption{\label{fig:clkDivRTL} ClockDivider RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{ClockDividerSYN.png}}
\caption{\label{fig:clkDivSYN} ClockDivider synthesized schematic; Xilinx Vivado 2017.4}
\end{figure}
Verilog Testbench
End of explanation
"""
|
malogrisard/NTDScourse
|
toolkit/02_ex_exploitation.ipynb
|
mit
|
import pandas as pd
import numpy as np
from IPython.display import display
import os.path
folder = os.path.join('..', 'data', 'social_media')
# Your code here.
fb = pd.read_sql('facebook', 'sqlite:///' + os.path.join(folder, 'facebook.sqlite'))
tw = pd.read_sql('twitter', 'sqlite:///' + os.path.join(folder, 'twitter.sqlite'))
n, d = fb.shape
print('The data is a {} with {} samples of dimensionality {}.'.format(type(fb), n, d))
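# Added sketch: the exercise text also asks to print the first 5 rows of both tables;
# `display` was imported from IPython above.
display(fb.head(5))
display(tw.head(5))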
"""
Explanation: A Python Tour of Data Science: Data Acquisition & Exploration
Michaël Defferrard, PhD student, EPFL LTS2
Exercise: problem definition
Theme of the exercise: understand the impact of your communication on social networks. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next AdWords campaign.
This notebook is the second part of the exercise. Given the data we collected from Facebook and Twitter in the last exercise, we will construct an ML model and evaluate how well it predicts the number of likes of a post / tweet given its content.
1 Data importation
Use pandas to import the facebook.sqlite and twitter.sqlite databases.
Print the 5 first rows of both tables.
End of explanation
"""
#Data cleaning
for i in range(len(fb)):
    if fb['text'][i] == 'http':  # or fb['text'][i] == 'the':
        fb.iloc[i]
else:
continue
from sklearn.feature_extraction.text import CountVectorizer
import re
nwords = 100
# Your code here.
vectorizer = CountVectorizer(max_features = nwords)
#----------------------------------------------------------------------------------------------
fb_text_vec = vectorizer.fit_transform(fb['text'])
fb_text_vectorized = fb_text_vec.toarray()
fb_words = vectorizer.get_feature_names()
#data cleaning
fb_words.remove('http')
freqs = [(word, fb_text_vec.getcol(idx).sum()) for word, idx in vectorizer.vocabulary_.items()]
fb_Most_used = sorted(freqs, key = lambda x: -x[1])
#----------------------------------------------------------------------------------------------
tw_text_vec = vectorizer.fit_transform(tw['text'])
tw_text_vectorized = tw_text_vec.toarray()
tw_words = vectorizer.get_feature_names()
#data cleaning
tw_words.remove('rt')
freqs = [(word, tw_text_vec.getcol(idx).sum()) for word, idx in vectorizer.vocabulary_.items()]
tw_Most_used = sorted(freqs, key = lambda x: -x[1])
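# Added sketch (illustration only): the bag-of-words idea applied by hand to the toy
# example given in the text, with out-of-vocabulary words counted in an "unknown"
# bucket at index 0. `re` was already imported above.
toy_vocab = ['unknown', 'dog', 'school', 'cat', 'house', 'work', 'animal']
doc = "I have a cat. Cats are my preferred animals."
counts = [0] * len(toy_vocab)
for token in re.findall(r'[a-z]+', doc.lower()):
    token = token.rstrip('s')  # crude singularisation: cats -> cat, animals -> animal
    counts[toy_vocab.index(token) if token in toy_vocab else 0] += 1
print(counts)  # expected: [6, 0, 0, 2, 0, 0, 1]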
"""
Explanation: 2 Vectorization
First step: transform the data into a format understandable by the machine. What to do with text? A common choice is the so-called bag-of-words model, where we represent each word as an integer and simply count the number of appearances of each word in a document.
Example
Let's say we have a vocabulary represented by the following correspondance table.
| Integer | Word |
|:-------:|---------|
| 0 | unknown |
| 1 | dog |
| 2 | school |
| 3 | cat |
| 4 | house |
| 5 | work |
| 6 | animal |
Then we can represent the following document
I have a cat. Cats are my preferred animals.
by the vector $x = [6, 0, 0, 2, 0, 0, 1]^T$.
Tasks
Construct a vocabulary of the 100 most occurring words in your dataset.
Build a vector $x \in \mathbb{R}^{100}$ for each document (post or tweet).
Tip: the natural language modeling libraries nltk and gensim are useful for advanced operations. You don't need them here.
This raises a first data-cleaning question: some of the text may be in French and some in English. What do we do?
End of explanation
"""
b = vectorizer.vocabulary_.get('2016')
print(fb_Most_used[:5])
print(tw_Most_used[:5])
"""
Explanation: Exploration question: what are the 5 most used words ? Exploring your data while playing with it is a useful sanity check.
End of explanation
"""
# Your code here.
X = tw_text_vectorized
X = X.astype(np.float)
#X -= X.mean(axis=0)
#X /= X.std(axis=0)
y = tw['likes']
y = y.astype(np.float)
# Training and testing sets.
test_size = round(len(X)/2)
print('Split: {} testing and {} training samples'.format(test_size, y.size - test_size))
perm = np.random.permutation(y.size)
X_test = X[perm[:test_size]]
X_train = X[perm[test_size:]]
y_test = y[perm[:test_size]]
y_train = y[perm[test_size:]]
"""
Explanation: 3 Pre-processing
The independant variables $X$ are the bags of words.
The target $y$ is the number of likes.
Split in half for training and testing sets.
End of explanation
"""
import scipy.sparse
class RidgeRegression(object):
"""Our ML model."""
def __init__(self, alpha=0):
"The class' constructor. Initialize the hyper-parameters."
self.a = alpha
def predict(self, X):
"""Return the predicted class given the features."""
return np.sign(X.dot(self.w) + self.b)
def fit(self, X, y):
"""Learn the model's parameters given the training data, the closed-form way."""
n, d = X.shape
self.b = np.mean(y)
Ainv = np.linalg.inv(X.T.dot(X) + self.a * np.identity(d))
self.w = Ainv.dot(X.T).dot(y - self.b)
def loss(self, X, y, w=None, b=None):
"""Return the current loss.
This method is not strictly necessary, but it provides
information on the convergence of the learning process."""
w = self.w if w is None else w # The ternary conditional operator
b = self.b if b is None else b # makes those tests concise.
import autograd.numpy as np # See below for autograd.
return np.linalg.norm(np.dot(X, w) + b - y)**2 + self.a * np.linalg.norm(w, 2)**2
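# Added sketch: the mean-squared-error "accuracy" function requested in the exercise
# text, kept outside the class so it can be reused for any model.
def accuracy(y_pred, y_true):
    """Mean squared error (1/n) * ||y_pred - y_true||_2^2."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)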
"""
Explanation: 4 Linear regression
Using numpy, fit and evaluate the linear model $$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2.$$
Please define a class LinearRegression with two methods:
1. fit learn the parameters $w$ and $b$ of the model given the training examples.
2. predict gives the estimated number of likes of a post / tweet. That will be used to evaluate the model on the testing set.
To evaluate the model, create an accuracy(y_pred, y_true) function which computes the mean squared error $\frac{1}{n} \| \hat{y} - y \|_2^2$.
Hint: you may want to use the function scipy.sparse.linalg.spsolve().
End of explanation
"""
# Your code here.
import sklearn.metrics
neigh = RidgeRegression()
neigh.fit(X_train, y_train)
y_pred_train = neigh.predict(X_train)
train_mse = sklearn.metrics.mean_squared_error(y_train, y_pred_train)
y_pred = neigh.predict(X_test)
test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
print('train MSE', train_mse)
print('test MSE', test_mse)
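# Added sketch: one way to answer the interpretation question -- inspect the words with
# the largest learned weights. Assumes `vectorizer` is still the instance fitted on the
# tweet text above, so its feature names line up with the columns of X.
feature_names = vectorizer.get_feature_names()
top = sorted(zip(neigh.w, feature_names), reverse=True)[:10]
print('Words with the largest positive weights:')
for weight, word in top:
    print('{:>10.3f}  {}'.format(weight, word))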
"""
Explanation: Interpretation: what are the most important words a post / tweet should include ?
End of explanation
"""
import ipywidgets
from IPython.display import clear_output
# Your code here.
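# Added sketch (one possible solution, relying on the variables defined in the previous
# cells): re-vectorize the tweets with a variable vocabulary size, refit the ridge
# regression and print the test mean squared error for each slider position. A small
# ridge penalty keeps the normal equations well-conditioned.
def evaluate_nwords(nwords=100):
    clear_output()
    vec = CountVectorizer(max_features=nwords)
    Xn = vec.fit_transform(tw['text']).toarray().astype(float)
    Xn_test, Xn_train = Xn[perm[:test_size]], Xn[perm[test_size:]]
    model = RidgeRegression(alpha=1)
    model.fit(Xn_train, y_train)
    test_mse = np.mean((model.predict(Xn_test) - y_test) ** 2)
    print('nwords={}: test MSE = {:.2f}'.format(nwords, test_mse))

ipywidgets.interact(evaluate_nwords, nwords=(10, 200, 10));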
"""
Explanation: 5 Interactivity
Create a slider for the number of words, i.e. the dimensionality of the samples $x$.
Print the accuracy for each change on the slider.
End of explanation
"""
from sklearn import linear_model, metrics
# Your code here.
neigh = linear_model.LinearRegression()
neigh.fit(X_train, y_train)
y_pred_train = neigh.predict(X_train)
train_mse = metrics.mean_squared_error(y_train, y_pred_train)
y_pred = neigh.predict(X_test)
test_mse = metrics.mean_squared_error(y_test, y_pred)
print('train MSE', train_mse)
print('test MSE', test_mse)
"""
Explanation: 6 Scikit learn
Fit and evaluate the linear regression model using sklearn.
Evaluate the model with the mean squared error metric provided by sklearn.
Compare with your implementation.
End of explanation
"""
import os
os.environ['KERAS_BACKEND'] = 'theano' # tensorflow
import keras
# Your code here.
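# Added sketch (not the official solution): a minimal fully-connected network that
# regresses the number of likes from the bag-of-words features built above. Layer
# sizes and training settings are arbitrary; argument names follow the Keras 2 API.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=X_train.shape[1]))
model.add(Dense(32, activation='relu'))
model.add(Dense(1))  # single linear output for regression
model.compile(loss='mse', optimizer='adam')
model.fit(X_train, np.asarray(y_train), epochs=10, batch_size=32, verbose=0)
print('Keras test MSE:', model.evaluate(X_test, np.asarray(y_test), verbose=0))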
"""
Explanation: 7 Deep Learning
Try a simple deep learning model !
Another modeling choice would be to use a Recurrent Neural Network (RNN) and feed it the sentence word by word.
End of explanation
"""
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
# Your code here.
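# Added sketch: true vs. predicted number of likes on the test set, using the
# predictions `y_pred` computed with the scikit-learn model above.
plt.figure(figsize=(10, 4))
plt.plot(np.asarray(y_test), 'b.', label='true likes')
plt.plot(np.asarray(y_pred), 'r.', label='predicted likes')
plt.xlabel('test sample index')
plt.ylabel('number of likes')
plt.legend(loc='best')
plt.show()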
"""
Explanation: 8 Evaluation
Use matplotlib to plot a performance visualization. E.g. the predicted number of likes versus the true number of likes for all posts / tweets.
What do you observe ? What are your suggestions to improve the performance ?
End of explanation
"""
|
ML4DS/ML4all
|
R1.Intro_Regression/.ipynb_checkpoints/regression_intro_student-checkpoint.ipynb
|
mit
|
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import numpy as np
import scipy.io # To read matlab files
import pandas as pd # To read data tables from csv files
# For plots and graphical results
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pylab
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
# That's default image size for this interactive session
pylab.rcParams['figure.figsize'] = 9, 6
"""
Explanation: Introduction to Regression.
Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Notebook version: 1.1 (Sep 08, 2017)
Changes: v.1.0 - First version. Extracted from regression_intro_knn v.1.0.
v.1.1 - Compatibility with python 2 and python 3
Pending changes: test_helper does not work in python 3.
End of explanation
"""
from sklearn import datasets
# Load the dataset. Select it by uncommenting the appropriate line
D_all = datasets.load_boston()
#D_all = datasets.load_diabetes()
# Extract data and data parameters.
X = D_all.data          # Input data matrix
S = D_all.target        # Target variables
n_samples = X.shape[0]  # Number of observations
n_vars = X.shape[1]     # Number of input variables
"""
Explanation: 1. The regression problem
The goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_1, X_2, \ldots, X_N$ (that we will collect in a single vector $\bf X$).
Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$.
<img src="figs/block_diagram.png", width=600>
The only information available to estimate the relation between the inputs and the target is a dataset $\mathcal D$ containing several observations of all variables.
$$\mathcal{D} = {{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$$
The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good predition of the true value of the target, $s$.
<img src="figs/predictor.png", width=300>
2. Examples of regression problems.
The <a href=http://scikit-learn.org/>scikit-learn</a> package contains several <a href=http://scikit-learn.org/stable/datasets/> datasets</a> related to regression problems.
<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston > Boston dataset</a>: the target variable contains housing values in different suburbs of Boston. The goal is to predict these values based on several social, economic and demographic variables taken frome theses suburbs (you can get more details in the <a href = https://archive.ics.uci.edu/ml/datasets/Housing > UCI repository </a>).
<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes /> Diabetes dataset</a>.
We can load these datasets as follows:
End of explanation
"""
print(n_samples)
"""
Explanation: This dataset contains
End of explanation
"""
print(n_vars)
"""
Explanation: observations of the target variable and
End of explanation
"""
# Select a dataset
nrows = 4
ncols = 1 + (X.shape[1]-1)//nrows
# Some adjustment for the subplot.
pylab.subplots_adjust(hspace=0.2)
# Plot all variables
for idx in range(X.shape[1]):
ax = plt.subplot(nrows,ncols,idx+1)
ax.scatter(X[:,idx], S) # <-- This is the key command
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
plt.ylabel('Target')
"""
Explanation: input variables.
3. Scatter plots
3.1. 2D scatter plots
When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i>
Python methods plot and scatter from the matplotlib package can be used for these graphical representations.
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: 3.2. 3D Plots
With the addition of a third coordinate, plot and scatter can be used for 3D plotting.
Exercise 1:
Select the diabetes dataset. Visualize the target versus components 2 and 4. (You can get more info about the <a href=http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter>scatter</a> command and an <a href=http://matplotlib.org/examples/mplot3d/scatter3d_demo.html>example of use</a> in the <a href=http://matplotlib.org/index.html> matplotlib</a> documentation)
End of explanation
"""
# In this section we will plot together the square and absolute errors
grid = np.linspace(-3,3,num=100)
plt.plot(grid, grid**2, 'b-', label='Square error')
plt.plot(grid, np.absolute(grid), 'r--', label='Absolute error')
plt.xlabel('Error')
plt.ylabel('Cost')
plt.legend(loc='best')
plt.show()
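# Added sketch: the three costs mentioned in the text evaluated on a single toy
# prediction, to make the formulas concrete (values are arbitrary).
s_true, s_hat = 2.0, 2.5
square_error = (s_true - s_hat) ** 2
absolute_error = np.absolute(s_true - s_hat)
scaled_error = s_true ** 2 * (s_true - s_hat) ** 2  # cost that also grows with the magnitude of s
print('square: {}, absolute: {}, scaled: {}'.format(square_error, absolute_error, scaled_error))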
"""
Explanation: 4. Evaluating a regression task
In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are
Square error: $l(s, \hat{s}) = (s - \hat{s})^2$
Absolute error: $l(s, \hat{s}) = |s - \hat{s}|$
Note that both the square and absolute errors are functions of the estimation error $e = s-{\hat s}$. However, this is not necessarily the case. As an example, imagine a situation in which we would like to introduce a penalty which increases with the magnitude of the estimated variable. For such case, the following cost would better fit our needs: $l(s,{\hat s}) = s^2 \left(s-{\hat s}\right)^2$.
End of explanation
"""
# Load dataset in arrays X and S
df = pd.read_csv('datasets/x01.csv', sep=',', header=None)
X = df.values[:,0]
S = df.values[:,1]
# <SOL>
# </SOL>
if sys.version_info.major==2:
Test.assertTrue(np.isclose(R, 153781.943889), 'Incorrect value for the average square error')
"""
Explanation: The overal prediction performance is computed as the average of the loss computed over a set of samples:
$${\bar R} = \frac{1}{K}\sum_{k=1}^K l\left(s^{(k)}, \hat{s}^{(k)}\right)$$
Exercise 2:
The dataset in file 'datasets/x01.csv', taken from <a href="http://people.sc.fsu.edu/~jburkardt/datasets/regression/x01.txt">here</a> records the average weight of the brain and body for a number of mammal species.
* Represent a scatter plot of the target variable versus the one-dimensional input.
* Plot, on the same figure, the prediction function given by $S = 1.2 X$
* Compute the average square error for the given dataset.
End of explanation
"""
|
wheeler-microfluidics/teensy-minimal-rpc
|
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
|
gpl-3.0
|
import pandas as pd
def get_pdb_divide_params(frequency, F_BUS=int(48e6)):
mult_factor = np.array([1, 10, 20, 40])
prescaler = np.arange(8)
clock_divide = (pd.DataFrame([[i, m, p, m * (1 << p)]
for i, m in enumerate(mult_factor) for p in prescaler],
columns=['mult_', 'mult_factor', 'prescaler', 'combined'])
.drop_duplicates(subset=['combined'])
.sort_values('combined', ascending=True))
clock_divide['clock_mod'] = (F_BUS / frequency / clock_divide.combined).astype(int)
return clock_divide.loc[clock_divide.clock_mod <= 0xffff]
PDB0_IDLY = 0x4003600C # Interrupt Delay Register
PDB0_SC = 0x40036000 # Status and Control Register
PDB0_MOD = 0x40036004 # Modulus Register
PDB_SC_PDBEIE = 0x00020000 # Sequence Error Interrupt Enable
PDB_SC_SWTRIG = 0x00010000 # Software Trigger
PDB_SC_DMAEN = 0x00008000 # DMA Enable
PDB_SC_PDBEN = 0x00000080 # PDB Enable
PDB_SC_PDBIF = 0x00000040 # PDB Interrupt Flag
PDB_SC_PDBIE = 0x00000020 # PDB Interrupt Enable.
PDB_SC_CONT = 0x00000002 # Continuous Mode Enable
PDB_SC_LDOK = 0x00000001 # Load OK
def PDB_SC_TRGSEL(n): return (((n) & 15) << 8) # Trigger Input Source Select
def PDB_SC_PRESCALER(n): return (((n) & 7) << 12) # Prescaler Divider Select
def PDB_SC_MULT(n): return (((n) & 3) << 2) # Multiplication Factor
def PDB_SC_LDMOD(n): return (((n) & 3) << 18) # Load Mode Select
# PDB0_IDLY = 1; // the pdb interrupt happens when IDLY is equal to CNT+1
proxy.mem_cpy_host_to_device(PDB0_IDLY, np.uint32(1).tostring())
# software trigger enable PDB continuous
PDB_CONFIG = (PDB_SC_TRGSEL(15) | PDB_SC_PDBEN | PDB_SC_CONT | PDB_SC_LDMOD(0))
PDB0_SC_ = (PDB_CONFIG | PDB_SC_PRESCALER(clock_divide.prescaler) |
PDB_SC_MULT(clock_divide.mult_) |
PDB_SC_DMAEN | PDB_SC_LDOK) # load all new values
proxy.mem_cpy_host_to_device(PDB0_SC, np.uint32(PDB0_SC_).tostring())
clock_divide = get_pdb_divide_params(25).iloc[0]
# PDB0_MOD = (uint16_t)(mod-1);
proxy.mem_cpy_host_to_device(PDB0_MOD, np.uint32(clock_divide.clock_mod).tostring())
PDB0_SC_ = (PDB_CONFIG | PDB_SC_PRESCALER(clock_divide.prescaler) |
PDB_SC_DMAEN | PDB_SC_MULT(clock_divide.mult_) |
PDB_SC_SWTRIG) # start the counter!
proxy.mem_cpy_host_to_device(PDB0_SC, np.uint32(PDB0_SC_).tostring())
PDB0_SC_ = 0
proxy.mem_cpy_host_to_device(PDB0_SC, np.uint32(PDB0_SC_).tostring())
"""
Explanation: NB Cannot use PIT to trigger periodic DMA due to hardware bug
See here.
Try using PDB instead??
End of explanation
"""
import arduino_helpers.hardware.teensy as teensy
from arduino_rpc.protobuf import resolve_field_values
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as DMA
import teensy_minimal_rpc.ADC as ADC
import teensy_minimal_rpc.SIM as SIM
import teensy_minimal_rpc.PIT as PIT
# Disconnect from existing proxy (if available)
try:
del proxy
except NameError:
pass
proxy = SerialProxy()
proxy.pin_mode(teensy.LED_BUILTIN, 1)
from IPython.display import display
proxy.update_sim_SCGC6(SIM.R_SCGC6(PDB=True))
sim_scgc6 = SIM.R_SCGC6.FromString(proxy.read_sim_SCGC6().tostring())
display(resolve_field_values(sim_scgc6)[['full_name', 'value']].T)
# proxy.update_pit_registers(PIT.Registers(MCR=PIT.R_MCR(MDIS=False)))
# pit_registers = PIT.Registers.FromString(proxy.read_pit_registers().tostring())
# display(resolve_field_values(pit_registers)[['full_name', 'value']].T)
import numpy as np
# CORE_PIN13_PORTSET = CORE_PIN13_BITMASK;
# CORE_PIN13_PORTCLEAR = CORE_PIN13_BITMASK;
#define CORE_PIN13_PORTCLEAR GPIOC_PCOR
#define CORE_PIN13_PORTSET GPIOC_PSOR
#define GPIOC_PCOR (*(volatile uint32_t *)0x400FF088) // Port Clear Output Register
#define GPIOC_PSOR (*(volatile uint32_t *)0x400FF084) // Port Set Output Register
CORE_PIN13_BIT = 5
GPIOC_PCOR = 0x400FF088 # Port Clear Output Register
GPIOC_PSOR = 0x400FF084 # Port Set Output Register
proxy.mem_cpy_host_to_device(GPIOC_PSOR, np.uint32(1 << CORE_PIN13_BIT).tostring())
proxy.update_dma_mux_chcfg(0, DMA.MUX_CHCFG(ENBL=1, TRIG=0, SOURCE=48))
proxy.update_dma_registers(DMA.Registers(SERQ=0))
proxy.update_dma_registers(DMA.Registers(CERQ=0))
resolve_field_values(DMA.MUX_CHCFG.FromString(proxy.read_dma_mux_chcfg(0).tostring()))[['full_name', 'value']]
print proxy.update_pit_timer_config(0, PIT.TimerConfig(LDVAL=int(48e6)))
print proxy.update_pit_timer_config(0, PIT.TimerConfig(TCTRL=PIT.R_TCTRL(TEN=True)))
pit0 = PIT.TimerConfig.FromString(proxy.read_pit_timer_config(0).tostring())
display(resolve_field_values(pit0)[['full_name', 'value']].T)
PIT_LDVAL0 = 0x40037100 # Timer Load Value Register
PIT_CVAL0 = 0x40037104 # Current Timer Value Register
PIT_TCTRL0 = 0x40037108 # Timer Control Register
proxy.mem_cpy_host_to_device(PIT_TCTRL0, np.uint32(1).tostring())
proxy.mem_cpy_device_to_host(PIT_TCTRL0, 4).view('uint32')[0]
proxy.digital_write(teensy.LED_BUILTIN, 0)
proxy.update_dma_registers(DMA.Registers(SSRT=0))
proxy.free_all()
toggle_pin_addr = proxy.mem_alloc(4)
proxy.mem_cpy_host_to_device(toggle_pin_addr, np.uint32(1 << CORE_PIN13_BIT).tostring())
tcds_addr = proxy.mem_aligned_alloc(32, 2 * 32)
hw_tcds_addr = 0x40009000
tcd_addrs = [tcds_addr + 32 * i for i in xrange(2)]
# Create Transfer Control Descriptor configuration for first chunk, encoded
# as a Protocol Buffer message.
tcd0_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
DSIZE=DMA.R_TCD_ATTR._32_BIT),
NBYTES_MLNO=4,
SADDR=int(toggle_pin_addr),
SOFF=0,
SLAST=0,
DADDR=int(GPIOC_PSOR),
DOFF=0,
# DLASTSGA=0,
# CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=False))
# proxy.update_dma_TCD(0, tcd0_msg)
DLASTSGA=int(tcd_addrs[1]),
CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=True))
# # Convert Protocol Buffer encoded TCD to bytes structure.
tcd0 = proxy.tcd_msg_to_struct(tcd0_msg)
# Create binary TCD struct for each TCD protobuf message and copy to device
# memory.
for i in xrange(2):
tcd_i = tcd0.copy()
tcd_i['DADDR'] = [GPIOC_PSOR, GPIOC_PCOR][i]
tcd_i['DLASTSGA'] = tcd_addrs[(i + 1) % len(tcd_addrs)]
tcd_i['CSR'] |= (1 << 4)
proxy.mem_cpy_host_to_device(tcd_addrs[i], tcd_i.tostring())
# Load initial TCD in scatter chain to DMA channel chosen to handle scattering.
proxy.mem_cpy_host_to_device(hw_tcds_addr, tcd0.tostring())
proxy.update_dma_registers(DMA.Registers(SSRT=0))
dma_channel_scatter = 0
dma_channel_i = 1
dma_channel_ii = 2
"""
Explanation: Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
After each scan, use DMA scatter chain to write the converted ADC values to a
separate output array for each ADC channel. The length of the output array to
allocate for each ADC channel is determined by the sample_count in the
example below.
See diagram below.
Channel configuration
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
loading of the next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
N.B., Only the trigger for the first ADC channel is an explicit
software trigger. All remaining triggers occur through minor-loop DMA
channel linking from channel $ii$ to channel $i$.
After each scan through all ADC channels is complete, the ADC readings are
scattered using the selected "scatter" DMA channel through a major-loop link
between DMA channel $ii$ and the "scatter" channel.
<img src="multi-channel_ADC_multi-samples_using_DMA.jpg" style="max-height: 600px" />
Device
Connect to device
End of explanation
"""
# Set ADC parameters
proxy.setAveraging(16, teensy.ADC_0)
proxy.setResolution(16, teensy.ADC_0)
proxy.setConversionSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.setSamplingSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.update_adc_registers(
teensy.ADC_0,
ADC.Registers(CFG2=ADC.R_CFG2(MUXSEL=ADC.R_CFG2.B)))
"""
Explanation: Configure ADC sample rate, etc.
End of explanation
"""
DMAMUX_SOURCE_ADC0 = 40 # from `kinetis.h`
DMAMUX_SOURCE_ADC1 = 41 # from `kinetis.h`
# DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
# DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
# DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
proxy.update_dma_mux_chcfg(dma_channel_ii,
DMA.MUX_CHCFG(SOURCE=DMAMUX_SOURCE_ADC0,
TRIG=False,
ENBL=True))
# DMA request input signals and this enable request flag
# must be asserted before a channel’s hardware service
# request is accepted (21.3.3/394).
# DMA_SERQ = i
proxy.update_dma_registers(DMA.Registers(SERQ=dma_channel_ii))
proxy.enableDMA(teensy.ADC_0)
proxy.DMA_registers().loc['']
dmamux = DMA.MUX_CHCFG.FromString(proxy.read_dma_mux_chcfg(dma_channel_ii).tostring())
resolve_field_values(dmamux)[['full_name', 'value']]
adc0 = ADC.Registers.FromString(proxy.read_adc_registers(teensy.ADC_0).tostring())
resolve_field_values(adc0)[['full_name', 'value']].loc[['CFG2', 'SC1A', 'SC3']]
"""
Explanation: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
End of explanation
"""
import re
import numpy as np
import pandas as pd
import arduino_helpers.hardware.teensy.adc as adc
# The number of samples to record for each ADC channel.
sample_count = 10
teensy_analog_channels = ['A0', 'A1', 'A0', 'A3', 'A0']
sc1a_pins = pd.Series(dict([(v, adc.CHANNEL_TO_SC1A_ADC0[getattr(teensy, v)])
for v in dir(teensy) if re.search(r'^A\d+', v)]))
channel_sc1as = np.array(sc1a_pins[teensy_analog_channels].tolist(), dtype='uint32')
"""
Explanation: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
End of explanation
"""
proxy.free_all()
N = np.dtype('uint16').itemsize * channel_sc1as.size
# Allocate source array
adc_result_addr = proxy.mem_alloc(N)
# Fill result array with zeros
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Copy channel SC1A configurations to device memory
adc_sda1s_addr = proxy.mem_aligned_alloc_and_set(4, channel_sc1as.view('uint8'))
# Allocate source array
samples_addr = proxy.mem_alloc(sample_count * N)
tcds_addr = proxy.mem_aligned_alloc(32, sample_count * 32)
hw_tcds_addr = 0x40009000
tcd_addrs = [tcds_addr + 32 * i for i in xrange(sample_count)]
hw_tcd_addrs = [hw_tcds_addr + 32 * i for i in xrange(sample_count)]
# Fill result array with zeros
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)
# Create Transfer Control Descriptor configuration for first chunk, encoded
# as a Protocol Buffer message.
tcd0_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=channel_sc1as.size * 2,
SADDR=int(adc_result_addr),
SOFF=2,
SLAST=-channel_sc1as.size * 2,
DADDR=int(samples_addr),
DOFF=2 * sample_count,
DLASTSGA=int(tcd_addrs[1]),
CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=True))
# Convert Protocol Buffer encoded TCD to bytes structure.
tcd0 = proxy.tcd_msg_to_struct(tcd0_msg)
# Create binary TCD struct for each TCD protobuf message and copy to device
# memory.
for i in xrange(sample_count):
tcd_i = tcd0.copy()
tcd_i['SADDR'] = adc_result_addr
tcd_i['DADDR'] = samples_addr + 2 * i
tcd_i['DLASTSGA'] = tcd_addrs[(i + 1) % len(tcd_addrs)]
tcd_i['CSR'] |= (1 << 4)
proxy.mem_cpy_host_to_device(tcd_addrs[i], tcd_i.tostring())
# Load initial TCD in scatter chain to DMA channel chosen to handle scattering.
proxy.mem_cpy_host_to_device(hw_tcd_addrs[dma_channel_scatter],
tcd0.tostring())
print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print 'Analog pins:', proxy.mem_cpy_device_to_host(adc_sda1s_addr, len(channel_sc1as) *
channel_sc1as.dtype.itemsize).view('uint32')
"""
Explanation: Allocate and initialize device arrays
SD1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
End of explanation
"""
ADC0_SC1A = 0x4003B000 # ADC status and control registers 1
sda1_tcd_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
DSIZE=DMA.R_TCD_ATTR._32_BIT),
NBYTES_MLNO=4,
SADDR=int(adc_sda1s_addr),
SOFF=4,
SLAST=-channel_sc1as.size * 4,
DADDR=int(ADC0_SC1A),
DOFF=0,
DLASTSGA=0,
CSR=DMA.R_TCD_CSR(START=0, DONE=False))
proxy.update_dma_TCD(dma_channel_i, sda1_tcd_msg)
"""
Explanation: Configure DMA channel $i$
End of explanation
"""
ADC0_RA = 0x4003B010 # ADC data result register
ADC0_RB = 0x4003B014 # ADC data result register
tcd_msg = DMA.TCD(CITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
BITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=2,
SADDR=ADC0_RA,
SOFF=0,
SLAST=0,
DADDR=int(adc_result_addr),
DOFF=2,
DLASTSGA=-channel_sc1as.size * 2,
CSR=DMA.R_TCD_CSR(START=0, DONE=False,
MAJORELINK=True,
MAJORLINKCH=dma_channel_scatter))
proxy.update_dma_TCD(dma_channel_ii, tcd_msg)
"""
Explanation: Configure DMA channel $ii$
End of explanation
"""
# Clear output array to zero.
proxy.mem_fill_uint8(adc_result_addr, 0, N)
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)
# Software trigger channel $i$ to copy *first* SC1A configuration, which
# starts ADC conversion for the first channel.
#
# Conversions for subsequent ADC channels are triggered through minor-loop
# linking from DMA channel $ii$ to DMA channel $i$ (*not* through explicit
# software trigger).
print 'ADC results:'
for i in xrange(sample_count):
proxy.update_dma_registers(DMA.Registers(SSRT=dma_channel_i))
# Display converted ADC values (one value per channel in `channel_sd1as` list).
print ' Iteration %s:' % i, proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print ''
print 'Samples by channel:'
# Trigger once per chunk
# for i in xrange(sample_count):
# proxy.update_dma_registers(DMA.Registers(SSRT=0))
device_dst_data = proxy.mem_cpy_device_to_host(samples_addr, sample_count * N)
pd.DataFrame(device_dst_data.view('uint16').reshape(-1, sample_count).T,
columns=teensy_analog_channels)
"""
Explanation: Trigger sample scan across selected ADC channels
End of explanation
"""
|
bearing/dosenet-analysis
|
Programming Lesson Modules/Module 4- Example Plot of Weather Data.ipynb
|
mit
|
%matplotlib inline
import csv
import io
import urllib.request
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
# another matplotlib convention; this extension facilitates dates as
# axes labels.
from datetime import datetime
# we will use the datetime extension so we can group the timestamp data
# into manageable units of year, month, date, and time.
url = 'https://radwatch.berkeley.edu/sites/default/files/pictures/rooftop_tmp/weather.csv'
response = urllib.request.urlopen(url)
reader = csv.reader(io.TextIOWrapper(response))
timedata = []
Bi214 = []
Cs137 = []
line = 0
for row in reader:
if line != 0:
timedata.append(datetime.strptime(row[0], '%Y-%m-%d %H:%M:%S'))
# datetime.strptime is a class object that facilitates usage
# of date/time data in Python
Bi214.append(float(row[1]))
Cs137.append(float(row[4]))
line += 1
def weather_plot1(timedata, Bi214, Cs137):
fig, ax = plt.subplots()
# matplotlib convention that unpacks figures into variables for ax
# (axis manipulation) and fig (figure manipulation)
# shortcut commands for: fig = plt.figure()
# AND: fig.add_subplot(1,1,1)
ax.plot(timedata, Bi214, 'ro-', label="Bismuth-214")
ax.plot(timedata, Cs137, 'bs-', label="Cesium-137")
plt.title('AirMonitor Data: Bi-214 and Cs-137 CPS from {0}-{1} to {2}-{3}'
.format(timedata[0].month, timedata[0].day,
timedata[-1].month, timedata[-1].day))
# string interpolation (represented by {}): The '{}' are replaced by
# the strings given in .format(-,-,-,-) in the 2nd line
plt.xlabel('Time')
plt.ylabel('counts per second')
plt.legend(loc='best')
# loc=best places the legend where it will obstruct the data the least.
weather_plot1(timedata, Bi214, Cs137)
"""
Explanation: Module 4- Example Plots of Weather Data
author: Radley Rigonan
In this module, we will be using data from RadWatch's AirMonitor to create a plot that compares counts per second (CPS) due to Bismuth-214 against the CPS of the less frequently occurring isotope Cesium-137. I will be using the following link:
https://radwatch.berkeley.edu/sites/default/files/pictures/rooftop_tmp/weather.csv
The first step in creating a plot is being aware of the format of your CSV file. This weather.csv is organized into 9 columns. The 1st column contains the timestamp information, the 2nd column contains Bi-214 CPS, and the 5th column contains Cs-137 CPS. Therefore, we must extract the data from these columns:
End of explanation
"""
def weather_plot2(timedata, Bi214, Cs137):
weather_plot1(timedata, Bi214, Cs137)
# the following commands are simple adjustments to the axes:
plt.xticks(rotation=30)
plt.yscale('log')
plt.title('AirMonitor Data: Bi-214 and Cs-137 CPS from {0}-{1} to {2}-{3}'
.format(timedata[0].month, timedata[0].day,
timedata[-1].month, timedata[-1].day))
plt.show()
weather_plot2(timedata, Bi214, Cs137)
"""
Explanation: There are a few problems with this current plot. Notably, the x-ticks are overlapping and it is difficult to examine the Cesium-137 data because it is so small. Regarding the x-ticks, the labels can be made more visible by rotating the tick-labels. In addition, the Cesium data can be made more visible with a logarithmic plot:
End of explanation
"""
def weather_plot3(timedata, Bi214, Cs137):
import numpy as np
# the next module explains numpy in more detail, but numpy is used
# here to perform a square-root operation for error.
# 1st step: plot the data
fig, ax = plt.subplots()
ax.plot(timedata, Bi214, 'ro-', label='Bismuth-214', linestyle='none')
ax.errorbar(timedata, Bi214, yerr=np.sqrt(Bi214)/60, fmt='ro', ecolor='r')
ax.plot(timedata, Cs137, 'bs-', label='Cesium-137', linestyle='none')
ax.errorbar(timedata, Cs137, yerr=np.sqrt(Cs137)/60, fmt='bs', ecolor='b')
# error is given by np.sqrt(Bi214)/60 and is based on the conversion
# from hourly countrate to counts-per-second
# for ax.errorbar, fmt=format and is identical to corresponding plot
# ecolor=line color and is identical to corresponding plot for same reason.
# 2nd step: legend and axis manipulations:
plt.legend(loc='best')
plt.yscale('log')
# you can decide if this plot is better/worse in log scale
# 3rd step: format ticks along axis; we will use matplotlib's built-in
# datetime commands to format the axis:
ax.xaxis.set_major_locator(mdates.DayLocator())
# ticks on x-axis day-by-day basis
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))
# tick labels only occur on days in the format: Month-Day
# you can customize the format, i.e. '%m-%d-%Y %H:00' would be
# Month-Day-Year Hour:00
ax.xaxis.set_minor_locator(mdates.HourLocator())
# minor ticks on x-axis occur on hour marks
plt.xticks(rotation=30)
# 4th step: titles and labels
plt.title('AirMonitor Data: Bi-214 and Cs-137 CPS from {0}-{1} to {2}-{3}'
.format(timedata[0].month, timedata[0].day, timedata[-1].month, timedata[-1].day))
plt.xlabel('Time')
plt.ylabel('counts per second')
weather_plot3(timedata, Bi214, Cs137)
"""
Explanation: While these plots are fine, many professionally-made graphics are thoroughly controlled and every aspect of the plot is kept in mind. The following example is a more comprehensive approach to plotting. Also, this example will calculate error and include error bars in the final plot. The final plot is similar to the plot in the AirMonitor website: radwatch.berkeley.edu/airsampling
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.2/tutorials/ltte.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Rømer and Light Travel Time Effects (ltte)
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=phoebe.linspace(-0.05, 0.05, 51), dataset='lc01')
"""
Explanation: Now let's add a light curve dataset to see how ltte affects the timings of eclipses.
End of explanation
"""
print(b['ltte@compute'])
"""
Explanation: Relevant Parameters
The 'ltte' parameter in context='compute' defines whether light travel time effects are taken into account or not.
End of explanation
"""
b['sma@binary'] = 100
b['q'] = 0.1
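# Added sketch: a rough estimate of the Rømer delay scale for this configuration --
# essentially the light travel time across the semi-major axis. Assumes astropy
# (a phoebe dependency) is available; `u` above is phoebe's re-export of astropy.units.
from astropy.constants import c
print((100 * u.solRad / c).to(u.s))  # on the order of a few hundred seconds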
"""
Explanation: Comparing with and without ltte
In order to have a binary system with any noticeable ltte effects, we'll set a somewhat extreme mass-ratio and semi-major axis.
End of explanation
"""
b.set_value_all('atm', 'blackbody')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.run_compute(irrad_method='none', ltte=False, model='ltte_off')
b.run_compute(irrad_method='none', ltte=True, model='ltte_on')
afig, mplfig = b.plot(show=True)
"""
Explanation: We'll just ignore the fact that this will be a completely unphysical system since we'll leave the radii and temperatures alone despite somewhat ridiculous masses - but since the masses and radii disagree so much, we'll have to abandon atmospheres and use blackbody.
End of explanation
"""
|
woobe/h2o_tutorials
|
introduction_to_machine_learning/py_03a_regression_basics.ipynb
|
mit
|
# Start and connect to a local H2O cluster
import h2o
h2o.init(nthreads = -1)
"""
Explanation: Machine Learning with H2O - Tutorial 3a: Regression Models (Basics)
<hr>
Objective:
This tutorial explains how to build regression models with four different H2O algorithms.
<hr>
Wine Quality Dataset:
Source: https://archive.ics.uci.edu/ml/datasets/Wine+Quality
CSV (https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv)
<hr>
Algorithms:
GLM
DRF
GBM
DNN
<hr>
Full Technical Reference:
http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/modeling.html
<br>
End of explanation
"""
# Import wine quality data from a local CSV file
wine = h2o.import_file("winequality-white.csv")
wine.head(5)
# Define features (or predictors)
features = list(wine.columns) # we want to use all the information
features.remove('quality') # we need to exclude the target 'quality' (otherwise there is nothing to predict)
features
# Split the H2O data frame into training/test sets
# so we can evaluate out-of-bag performance
wine_split = wine.split_frame(ratios = [0.8], seed = 1234)
wine_train = wine_split[0] # using 80% for training
wine_test = wine_split[1] # using the rest 20% for out-of-bag evaluation
wine_train.shape
wine_test.shape
"""
Explanation: <br>
End of explanation
"""
# Build a Generalized Linear Model (GLM) with default settings
# Import the function for GLM
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
# Set up GLM for regression
glm_default = H2OGeneralizedLinearEstimator(family = 'gaussian', model_id = 'glm_default')
# Use .train() to build the model
glm_default.train(x = features,
y = 'quality',
training_frame = wine_train)
# Check the model performance on training dataset
glm_default
# Check the model performance on test dataset
glm_default.model_performance(wine_test)
"""
Explanation: <br>
Generalized Linear Model
End of explanation
"""
# Build a Distributed Random Forest (DRF) model with default settings
# Import the function for DRF
from h2o.estimators.random_forest import H2ORandomForestEstimator
# Set up DRF for regression
# Add a seed for reproducibility
drf_default = H2ORandomForestEstimator(model_id = 'drf_default', seed = 1234)
# Use .train() to build the model
drf_default.train(x = features,
y = 'quality',
training_frame = wine_train)
# Check the DRF model summary
drf_default
# Check the model performance on test dataset
drf_default.model_performance(wine_test)
"""
Explanation: <br>
Distributed Random Forest
End of explanation
"""
# Build a Gradient Boosting Machines (GBM) model with default settings
# Import the function for GBM
from h2o.estimators.gbm import H2OGradientBoostingEstimator
# Set up GBM for regression
# Add a seed for reproducibility
gbm_default = H2OGradientBoostingEstimator(model_id = 'gbm_default', seed = 1234)
# Use .train() to build the model
gbm_default.train(x = features,
y = 'quality',
training_frame = wine_train)
# Check the GBM model summary
gbm_default
# Check the model performance on test dataset
gbm_default.model_performance(wine_test)
"""
Explanation: <br>
Gradient Boosting Machines
End of explanation
"""
# Build a Deep Learning (Deep Neural Networks, DNN) model with default settings
# Import the function for DNN
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
# Set up DNN for regression
dnn_default = H2ODeepLearningEstimator(model_id = 'dnn_default')
# (not run) Change 'reproducible' to True if you want to reproduce the results
# The model will be built using a single thread (could be very slow)
# dnn_default = H2ODeepLearningEstimator(model_id = 'dnn_default', reproducible = True)
# Use .train() to build the model
dnn_default.train(x = features,
y = 'quality',
training_frame = wine_train)
# Check the DNN model summary
dnn_default
# Check the model performance on test dataset
dnn_default.model_performance(wine_test)
"""
Explanation: <br>
H2O Deep Learning
End of explanation
"""
# Use GLM model to make predictions
yhat_test_glm = glm_default.predict(wine_test)
yhat_test_glm.head(5)
# Use DRF model to make predictions
yhat_test_drf = drf_default.predict(wine_test)
yhat_test_drf.head(5)
# Use GBM model to make predictions
yhat_test_gbm = gbm_default.predict(wine_test)
yhat_test_gbm.head(5)
# Use DNN model to make predictions
yhat_test_dnn = dnn_default.predict(wine_test)
yhat_test_dnn.head(5)
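# Added sketch: collect the out-of-bag RMSE of the four default models side by side for
# a quick comparison (`.model_performance(...).rmse()` from the H2O Python API).
models = {'GLM': glm_default, 'DRF': drf_default, 'GBM': gbm_default, 'DNN': dnn_default}
for name, model in models.items():
    print(name, 'test RMSE:', model.model_performance(wine_test).rmse())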
"""
Explanation: <br>
Making Predictions
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cccr-iitm/cmip6/models/iitm-esm/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'iitm-esm', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: IITM-ESM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
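# --- Illustrative example only (the number shown is a placeholder, not a recommendation) ---
# INTEGER properties take an unquoted number, e.g.:
#     DOC.set_value(6)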
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
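# --- Illustrative example only (the value must reflect the actual scheme) ---
# BOOLEAN properties take an unquoted True or False, e.g.:
#     DOC.set_value(True)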
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation methodUo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
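# --- Illustrative example only (hypothetical value) ---
# FLOAT properties take an unquoted number in the stated units (Hz);
# e.g. a 94 GHz cloud radar would be entered as:
#     DOC.set_value(94.0e9)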
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
widdowquinn/Notebooks-Bioinformatics
|
Biopython_NCBI_Entrez_downloads.ipynb
|
mit
|
# This line imports the Bio.Entrez module, and makes it available
# as 'Entrez'.
from Bio import Entrez
# The line below imports the Bio.SeqIO module, which allows reading
# and writing of common bioinformatics sequence formats.
from Bio import SeqIO
# Create a new directory (if needed) for output/downloads
import os
outdir = "ncbi_downloads"
os.makedirs(outdir, exist_ok=True)
# This line sets the variable 'Entrez.email' to the specified
# email address. You should substitute your own address for the
# example address provided below. Please do not provide a
# fake name.
Entrez.email = "Fakey.McFakename@example.com"
# This line sets the name of the tool that is making the queries
Entrez.tool = "Biopython_NCBI_Entrez_downloads.ipynb"
"""
Explanation: Downloading genome data from NCBI with Biopython and Entrez
Introduction
In this worksheet, you will use Biopython to download pathogen genome data from NCBI programmatically with Python.
It is possible to obtain the same data by point-and-click from a browser, at the terminal using a program like wget, or by other means, but scripting data downloads in this way has advantages, such as:
automation - only one script is required to download many sequences
reproducibility - the same data will be downloaded each time, and copy-paste errors will be avoided
self-documentation - the script itself describes exactly how the data was obtained
future adaptability (and reuse) - only minor changes to the script may be required for the next analysis or project
<div class="alert alert-warning">
<b>Note: large data sets</b>: if you wish to download large datasets, then using <b>wget</b>, <b>ftp</b> or other methods can be better than programmatic access <i>via</i> <b>Entrez</b>. The <b>Entrez</b> interface may give errors partway through large downloads, and is not designed for large data transfers.
</div>
This Jupyter notebook provides some examples of scripting genome downloads from NCBI singly, and in groups. This method of obtaining genome data uses the Entrez interface that NCBI provides for automated querying of its data.
Running cells in this notebook
This is an interactive notebook, which means you are able to run the code that is written in each of the cells.
<div class="alert alert-info" role="alert">
To run the code in a cell, you should:
<ol>
<li>Place your mouse cursor in the cell, and click (this gives the cell <i>focus</i>) to make it active
<li>Hold down the <b>Shift</b> key, and press the <b>Return</b> key.
</ol>
</div>
If this is successful, you should see the input marker to the left of the cell change from
In [ ]:
to (for example)
In [1]:
and you may see output appear below the cell.
Related online documentation
Biopython tutorial for Entrez: http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc109
Biopython technical documentation for Bio.Entrez: http://biopython.org/DIST/docs/api/Bio.Entrez-module.html
Entrez introductory documentation at NCBI: http://www.ncbi.nlm.nih.gov/books/NBK25497/
Entrez help: http://www.ncbi.nlm.nih.gov/books/NBK3837/
Entrez Quick Start Guide: http://www.ncbi.nlm.nih.gov/books/NBK25500/
Requirements
<div class="alert alert-success">
To complete this worksheet, you will need:
<ul>
<li>an active internet connection
<li>the <b>Biopython</b> libraries
</ul>
</div>
Entrez
Entrez is the name NCBI give to the tools they provide as a computational interface to the data they hold across their genomic and other databases (e.g. PubMed). Many scripts and programs that interact with NCBI to download data (e.g. from GenBank or RefSeq) will be using this set of tools.
<div class="alert alert-warning">
<b>Caveats</b>
<br />
There are usage caps for this service, and it is possible to over-use <b>Entrez</b>. If this happens, you or your IP address may be blacklisted. In order to avoid this, you should keep to the following guidelines:
<br />
<ul>
<li> Make no more than three URL requests per second
<li> Make large queries outwith the hours of 0900-1700 EST (1400-2200 GMT)
<li> Provide your email address as an identifier when querying
</ul>
<br />
Programming libraries, such as <b>Biopython</b>'s <b>Bio.Entrez</b> module, will usually help you stay within those guidelines by limiting the frequency of queries, and insisting that you provide an email address.
</div>
Biopython and Bio.Entrez <img src="images/biopython_small.jpg" style="width: 150px; float: right;">
Biopython is a widely-used library, providing bioinformatics tools for the popular Python programming language. Similar libraries exist for other programming languages.
Bio.Entrez is a module of Biopython that provides tools to make queries against the NCBI databases using the Entrez interface.
1. Connecting to NCBI
In order to use the Bio.Entrez module, you need to import it. This is how modules become available for use in Python.
<div class="alert alert-info" role="alert">
It is good practice at this point to specify your email, so that <b>NCBI</b> can contact you in case of problems (or if you are likely to become blacklisted through excessive use).
It is also good practice to specify a '<b>tool</b>' that is the script making the call.
</div>
End of explanation
"""
# The line below uses the Entrez.einfo() function to
# ask NCBI what databases are available. The result is
# 'stored' in a variable called 'handle'
handle = Entrez.einfo()
# In the line below, the response from NCBI is read
# into a record, that organises NCBI's response into
# something you can work with.
record = Entrez.read(handle)
"""
Explanation: 2. Using Bio.Entrez to list available databases
When you send a query or request to NCBI using Bio.Entrez, the remote service will send back data in XML format. This is a file format designed to be easy for computers to read, but is very verbose and difficult to read for humans.
The Bio.Entrez module can read() this data so that you can extract useful information.
In the example below, you will ask NCBI for a list of the databases you can search by using the Entrez.einfo() function. This will return a handle containing the XML response from NCBI. This will be read into a record that you can inspect and manipulate, by the Entrez.read() function.
End of explanation
"""
print(record["DbList"])
"""
Explanation: The variable record contains a list of the available databases at NCBI, which you can see by executing the cell below:
End of explanation
"""
# The line below carries out a search of the `assembly` database at NCBI,
# using the phrase `Ralstonia solanacearum` as the search query,
# and asks NCBI to return up to the first 100 results
handle = Entrez.esearch(db="assembly", term="Ralstonia solanacearum", retmax=100)
# This line converts the returned information from NCBI into a form we
# can use, as before.
record = Entrez.read(handle)
"""
Explanation: You may recognise some of the database names, such as pubmed, nuccore, assembly, sra, and taxonomy.
Entrez allows you to query these databases using Entrez.esearch() in much the same way that you just obtained the list of databases with Entrez.einfo().
3. Using Bio.Entrez to find genome assemblies at NCBI
In the cells below, you will use Bio.Entrez to identify assemblies for the bacterial plant pathogen Ralstonia solanacearum. As our interest is genome data, we will query against the assembly database at NCBI. This database contains entries for all genome assemblies, whether complete or draft.
We are interested in Ralstonia solanacearum, so will search against the assembly database with the text "Ralstonia solanacearum" as a query. The function that allows us to do this is Entrez.esearch(). By default, searches are limited to 20 results (as on the NCBI webpage), but we can change this.
End of explanation
"""
# This line prints the downloaded information from NCBI, so
# we can read it.
print(record)
"""
Explanation: The returned information can be viewed by running the cell below.
The output may look confusing at first, but it simply describes the database identifiers that uniquely identify the assemblies present in the assembly database that correspond to the query we made, and a few other pieces of information (number of returned entries, total number of entries that could have been returned, how the query was processed) that we do not need, right now.
End of explanation
"""
# The line below takes the first value in the list of
# database accessions record["IdList"], and places it in
# the variable 'accession'
accession = record["IdList"][0]
# Show the contents of the variable 'accession'
print(accession)
"""
Explanation: For now, we are interested in the list of database identifiers, in record['IdList']. We will use these to get information from the assembly database.
We will look at a single record first, and then consider how to get all the Ralstonia genomes at the same time.
4. Downloading a single genome from NCBI
In this section, you will use one of the database identifiers returned from your search at NCBI to identify and download the GenBank records corresponding to a single assembly of Ralstonia solanacearum.
To do this, we will select a single accession from the list in record["IdList"], using the code in the cell below.
<div class="alert alert-danger" role="alert">
Although this is a single assembly, with a single accession ID, we shall see that we need to download more than one sequence to cover the complete genome.
</div>
End of explanation
"""
# The line below requests the identifiers (UIDs) for all
# records in the `nucleotide` database that correspond to the
# assembly UID that is stored in the variable 'accession'
handle = Entrez.elink(dbfrom="assembly", db="nucleotide",
from_uid=accession)
# We place the downloaded information in the variable 'links'
links = Entrez.read(handle)
"""
Explanation: Linking across databases
<div class="alert alert-info" role="alert">
There is a complicating factor: assemblies may not be a single complete sequence, and could comprise several contigs, or a chromosome and several extrachromosomal elements, all annotated independently. These are stored independently in a different database, called <b>nucleotide</b>, and each has an individual accession.
<br/><br />
We need to <i>link</i> the <b>assembly</b> accession to each of the <b>nucleotide</b> accessions.
<br/><br />
This is a common requirement when querying <b>NCBI</b> databases, and is achieved using the <b>Entrez.elink()</b> function.
</div>
We need to specify the database for which we have the accession (or UID), and which database we want to query for related records (in this case, nucleotide).
End of explanation
"""
# The code below provides a function that extracts nucleotide
# database accessions for INSDC data from the result of an
# Entrez.elink() query.
def extract_insdc(links):
"""Returns the link UIDs for RefSeq entries, from the
passed Elink search results"""
# Work only with INSDC accession UIDs
linkset = [ls for ls in links[0]['LinkSetDb'] if
ls['LinkName'] == 'assembly_nuccore_insdc']
if 0 == len(linkset): # There are no INSDC UIDs
raise ValueError("Elink() output has no assembly_nuccore_insdc data")
# Make a list of the INSDC UIDs
uids = [i['Id'] for i in linkset[0]['Link']]
return uids
"""
Explanation: The links variable may contain links to more than one version of the genome (NCBI keep third-party managed genome data in GenBank/INSDC records, and NCBI-'owned' data in RefSeq records).
The function below extracts only the INSDC information from the Elink() query. It is not important that you understand the code.
End of explanation
"""
# The line below uses the extract_insdc() function to get INSDC/GenBank
# accession UIDs for the components of the genome/assembly referred to
# in the 'links' variable. These will be stored in the variable
# 'nuc_uids'
nuc_uids = extract_insdc(links)
# Show the contents of 'nuc_uids'
print(nuc_uids)
"""
Explanation: You will use the extract_insdc() function to get the accession IDs for the sequences in this Ralstonia solanacearum genome, in the cell below.
End of explanation
"""
# The lines below retrieve (fetch) the GenBank records for
# each database entry specified in `nuc_uids`, in plain text
# format. These are parsed with Biopython's SeqIO module into
# SeqRecords, which structure the data into a usable format.
# The SeqRecords are placed in the variable 'records'.
records = []
for nuc_uid in nuc_uids:
handle = Entrez.efetch(db="nucleotide", rettype="gbwithparts", retmode="text",
id=nuc_uid)
records.append(SeqIO.read(handle, 'genbank'))
"""
Explanation: Fetching sequence records from NCBI
Now that we have accession UIDs for the nucleotide sequences of the assembly, you will use Entrez.efetch to fetch each sequence record from NCBI.
We need to tell NCBI which database we want to use (in this case, nucleotide) and the identifiers for the records (the values in nuc_uids). Here each record is fetched in its own request; alternatively, the accession UIDs can be joined into a single comma-separated string so that all the data is returned in one call (see the sketch below).
We will also tell NCBI two further pieces of information:
The format we want the data returned in. We will ask for GenBank format (gbwithparts) to obtain the genome sequence and feature annotations.
How we want the data returned. We will ask for plain text (text).
End of explanation
"""
# Show the contents of each downloaded `SeqRecord`.
for record in records:
print(record, "\n")
"""
Explanation: By running the cell below, you can see that each sequence in the Ralstonia solanacearum assembly has been downloaded into a SeqRecord, and that it contains useful metadata, describing the sequence assembly and properties of the annotation.
End of explanation
"""
# The line below writes the sequence data in 'records' to
# the local file "ncbi_downloads/ralstonia.gbk", in GenBank format.
# The function returns the number of sequences that were written to file
SeqIO.write(records, os.path.join(outdir, "ralstonia.gbk"), "genbank")
"""
Explanation: Writing sequence data with Biopython
The SeqIO module can be used to write sequence data out to a file on your local hard drive. You will do this in the cells below, using the SeqIO.write() function.
<div class="alert alert-info" role="alert">
The <b>SeqRecord</b>s you downloaded contain sequence and feature annotation data, and can be written in any of several file formats. Some of these formats preserve annotation information, and some do not.
</div>
Firstly, in the cell below, you will write GenBank format files that preserve both sequence and annotation data. For the SeqIO.write() function, we need to specify the list of SeqRecords (records), the output filename to which they will be written, and the format we wish to write (in this case "genbank").
End of explanation
"""
# The line below writes the sequence data in 'records' to
# the local file "ncbi_downloads/ralstonia.fasta", in FASTA format.
SeqIO.write(records, os.path.join(outdir, "ralstonia.fasta"), "fasta")
"""
Explanation: If you inspect the newly-created ralstonia.gbk file, you should see that it contains complete GenBank records, describing this genome.
GenBank files are detailed and large, and sometimes we only want to consider the genome sequence itself, not its annotation. The FASTA sequence can be written out on its own by specifying the "fasta" format to SeqIO.write() instead. This time, we write the output to ncbi_downloads/ralstonia.fasta.
End of explanation
"""
|
iRipVanWinkle/ml
|
mlcourse_open[solutions]/homeworks/hw3_session2_decision_trees.ipynb
|
mit
|
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_graphviz
"""
Explanation: <center>
<img src="../../img/ods_stickers.jpg">
Open Machine Learning Course. Session No. 2
</center>
Author: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, Higher School of Economics. This material is distributed under the Creative Commons CC BY-NC-SA 4.0 license. You may use it for any purpose (edit, correct, or build upon it) except commercial ones, with mandatory attribution of the author.
<center>Homework assignment No. 3
<center> Decision trees for classification and regression
In this assignment we will work through how a decision tree operates in a regression task, and we will also build (and tune) classification decision trees for predicting cardiovascular disease.
Fill in the code in the cells (where it says "Your code here") and answer the questions in the web form.
End of explanation
"""
X = np.linspace(-2, 2, 7)
y = X ** 3
plt.scatter(X, y)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
"""
Explanation: 1. A simple example of regression with a decision tree
Consider the following one-dimensional regression problem. Informally, we need to build a function $a(x)$ that approximates the target dependency $y = f(x)$ in terms of the mean squared error: $min \sum_i {(a(x_i) - f(x_i))}^2$. We will look at this problem in detail next time (in the 4th article of the course); for now, let's discuss how to solve it with a decision tree. As preparation, read the short section "Decision tree in a regression problem" of the 3rd course article.
End of explanation
"""
plt.scatter(X, y)
X_l = X[X < 0]
y_l = [y[X < 0].mean()] * X_l.shape[0]
plt.plot(X, [y.mean()] * X.shape[0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
"""
Explanation: Let's take a few steps in building the decision tree. For reasons of symmetry, we choose the split thresholds to be 0, 1.5 and -1.5, respectively. Recall that in a regression problem a leaf node returns the mean target value over all training objects that fall into that leaf.
So let's begin. A tree of depth 0 consists of a single root that contains the whole training set. What will the predictions of this tree look like for $x \in [-2, 2]$? Plot the corresponding graph.
End of explanation
"""
plt.scatter(X, y)
X_l = X[X < 0]
y_l = [y[X < 0].mean()] * X_l.shape[0]
X_r = X[X >= 0]
y_r = [y[X >= 0].mean()] * X_r.shape[0]
plt.plot(X_l, y_l)
plt.plot(X_r, y_r)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
"""
Explanation: Let's make the first split of the sample by the predicate $[x < 0]$. We obtain a tree of depth 1 with two leaves. Plot an analogous graph of predictions for this tree.
End of explanation
"""
def dispersion(X, y):
    # D(X): variance of the targets y within a node (mean squared deviation from the node mean)
    y_mean = y.mean()
    return ((y - y_mean) ** 2).mean()
def regression_var_criterion(X, y, t):
X_l = X[X < t]
y_l = y[X < t]
X_r = X[X >= t]
y_r = y[X >= t]
d = dispersion(X, y) - (X_l.shape[0] / X.shape[0] * dispersion(X_l, y_l)) - (X_r.shape[0] / X.shape[0] * dispersion(X_r, y_r))
return d
t = np.arange(-1.9, 1.9, 0.1)
criteria = [regression_var_criterion(X, y, _t) for _t in t]
plt.plot(t, criteria)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
"""
Explanation: In the decision tree construction algorithm, the feature and the threshold value used to split the sample are chosen according to some criterion. For regression, the variance criterion is usually used:
$$Q(X, j, t) = D(X) - \dfrac{|X_l|}{|X|} D(X_l) - \dfrac{|X_r|}{|X|} D(X_r),$$
where $X$ is the sample at the current node, $X_l$ and $X_r$ are the two parts of $X$ obtained by splitting on the predicate $[x_j < t]$ (i.e. on the $j$-th feature and threshold $t$), and $D(X)$ is the variance of the targets in the sample $X$:
$$D(X) = \dfrac{1}{|X|} \sum_{x_i \in X}(y_i - \dfrac{1}{|X|}\sum_{x_j \in X}y_j)^2,$$
where $y_i = y(x_i)$ is the target for object $x_i$. At each node split, the feature $j$ and threshold value $t$ that maximise the value of $Q(X, j, t)$ are chosen.
In our case there is only one feature, so $Q$ depends only on the threshold value $t$ (and on the targets of the sample at the given node).
Plot the function $Q(X, t)$ at the root as a function of the threshold value $t$ on the interval $[-1.9, 1.9]$.
End of explanation
"""
plt.scatter(X, y)
print(type(X))
X_l = X[X < 0]
y_l = [y[X < 0].mean()] * X_l.shape[0]
X_ll = X[X < -1.5]
y_ll = [y[X < -1.5].mean()] * X_ll.shape[0]
X_lr = X[(X >= -1.5) & (X < 0)]
y_lr = [y[(X >= -1.5) & (X < 0)].mean()] * X_lr.shape[0]
X_r = X[X >= 0]
y_r = [y[X >= 0].mean()] * X_r.shape[0]
X_rl = X[(X >= 0) & (X < 1.5)]
y_rl = [y[(X >= 0) & (X < 1.5)].mean()] * X_rl.shape[0]
X_rr = X[(X >= 1.5)]
y_rr = [y[(X >= 1.5)].mean()] * X_rr.shape[0]
X_ = np.r_[X_ll, X_lr, X_rl, X_rr]
y_ = np.r_[y_ll, y_lr, y_rl, y_rr]
plt.plot(X_, y_)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
"""
Explanation: <font color='red'>Question 1.</font> Is the threshold value $t = 0$ that we chose optimal from the point of view of the variance criterion?
- Yes
- No
Now let's split each of the leaf nodes. In the left one (corresponding to the branch $x < 0$) we split by the predicate $[x < -1.5]$, and in the right one (corresponding to the branch $x \geqslant 0$) by the predicate $[x < 1.5]$. This gives a tree of depth 2 with 7 nodes and 4 leaves. Plot the predictions of this tree for $x \in [-2, 2]$.
End of explanation
"""
df = pd.read_csv('../../mlcourse_open/data/mlbootcamp5_train.csv',
index_col='id', sep=';')
df.head()
"""
Explanation: <font color='red'>Question 2.</font> How many line segments make up the graph depicting the predictions of the constructed tree on the interval [-2, 2]?
- 5
- 6
- 7
- 8
2. Building a decision tree to predict cardiovascular disease
Read the familiar cardiovascular disease dataset into a DataFrame.
End of explanation
"""
df['age_of_year'] = (df['age'] // 365.25).astype(int)
df.drop(['age'], axis=1, inplace=True)
df = pd.get_dummies(df, columns=['cholesterol', 'gluc'])
df.head()
"""
Explanation: Make some small feature transformations: build an "age in years" feature, and also build 3 binary features each from cholesterol and gluc, indicating whether they equal 1, 2 or 3, respectively. This technique is called dummy encoding, or One Hot Encoding (OHE); the most convenient way to do it here is pandas.get_dummies.
End of explanation
"""
y = df['cardio'].astype(int)
X = df.drop('cardio', axis=1)
X.shape, y.shape
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.7, random_state=17)
X_train.shape, X_valid.shape, y_train.shape, y_valid.shape
"""
Explanation: Split the sample into training and holdout parts in a 7/3 ratio. To do this, use the sklearn.model_selection.train_test_split method with random_state=17.
End of explanation
"""
DT = DecisionTreeClassifier(max_depth=3, random_state=17)
DT.fit(X_train, y_train)
# use the .dot format to visualise the tree
export_graphviz(DT, feature_names=X.columns,
out_file='../img/3.3.dot', filled=True)
!dot -Tpng ../img/3.3.dot -o ../img/3.3.png
!rm ../img/3.3.dot
"""
Explanation: Train a decision tree on the (X_train, y_train) sample with the maximum depth limited to 3. Fix random_state=17 for the tree. Visualise the tree with sklearn.tree.export_graphviz, dot and pydot. An example is given in the course article under the spoiler "Code for drawing a tree". Note that commands in a Jupyter notebook that start with an exclamation mark are terminal commands (which we would normally run in a terminal/command line).
End of explanation
"""
DT.score(X_valid, y_valid)
"""
Explanation: <img src='../img/3.3.png'>
<font color='red'>Question 3.</font> Which 3 features are used for prediction in the constructed decision tree (i.e. which three features "can be found in the tree")?
- weight, height, gluc=3
- smoke, age, gluc=3
- age, weight, chol=3
- age, ap_hi, chol=3
Make a prediction for the holdout set (X_valid, y_valid) with the trained tree. Compute the proportion of correct answers (accuracy).
End of explanation
"""
tree_params = {'max_depth': list(range(2, 11))}
tree_grid = GridSearchCV(DT, tree_params, cv=5, n_jobs=-1)
%%time
tree_grid.fit(X_train, y_train)
tree_grid.best_params_
"""
Explanation: Now tune the tree depth with cross-validation on the (X_train, y_train) sample to improve model quality. Use GridSearchCV with 5-fold cross-validation. Fix random_state=17 for the tree. Search over the max_depth parameter from 2 to 10.
End of explanation
"""
max_depth_values = [params['max_depth'] for params in tree_grid.cv_results_['params']]
mean_cv_score = tree_grid.cv_results_['mean_test_score']
plt.plot(max_depth_values, mean_cv_score)
plt.xlabel(r'max_depth')
plt.ylabel(r'mean CV accuracy')
"""
Explanation: Plot how the mean cross-validation accuracy changes depending on the value of max_depth.
End of explanation
"""
acc1 = DT.score(X_valid, y_valid)
acc2 = tree_grid.best_estimator_.score(X_valid, y_valid)
(acc2 - acc1) / acc1 * 100
"""
Explanation: Print the best value of max_depth, i.e. the one for which the mean value of the quality metric on cross-validation is maximal. Also compute the accuracy that is now achieved on the holdout set. All of this can be done with the fitted GridSearchCV instance.
End of explanation
"""
data = df.copy()
data["age_45-50"] = data["age_of_year"].apply(lambda x: 1 if x >= 45 and x < 50 else 0)
data["age_50-55"] = data["age_of_year"].apply(lambda x: 1 if x >= 50 and x < 55 else 0)
data["age_55-60"] = data["age_of_year"].apply(lambda x: 1 if x >= 55 and x < 60 else 0)
data["age_60-65"] = data["age_of_year"].apply(lambda x: 1 if x >= 60 and x < 65 else 0)
data["ap_hi_120-140"] = data["ap_hi"].apply(lambda x: 1 if x >= 120 and x < 140 else 0)
data["ap_hi_140-160"] = data["ap_hi"].apply(lambda x: 1 if x >= 140 and x < 160 else 0)
data["ap_hi_160-180"] = data["ap_hi"].apply(lambda x: 1 if x >= 160 and x < 180 else 0)
data["male"] = data["gender"].map({1: 0, 2: 1})
#df = pd.get_dummies(df, columns=['cholesterol'])
data.drop(["height", "weight", "ap_hi", "ap_lo", "alco", "active",
"ap_hi", "gender", "gluc_1", "gluc_2", "gluc_3", "age_of_year"], axis=1, inplace=True)
data.head()
y = data['cardio'].astype(int)
X = data.drop('cardio', axis=1)
X.shape, y.shape
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.7, random_state=17)
X_train.shape, X_valid.shape, y_train.shape, y_valid.shape
tree = DecisionTreeClassifier(max_depth=3, random_state=17)
tree.fit(X_train, y_train)
# use the .dot format to visualise the tree
export_graphviz(tree, feature_names=X.columns,
out_file='../img/3.5.dot', filled=True)
!dot -Tpng ../img/3.5.dot -o ../img/3.5.png
!rm ../img/3.5.dot
"""
Explanation: <font color='red'>Question 4.</font> Is there a clear peak on the validation curve over the maximum tree depth when searching max_depth from 2 to 10? Did tuning the tree depth improve classification quality (accuracy) by more than 1% on the holdout set?
- yes, yes
- yes, no
- no, yes
- no, no
Let's turn again (as in homework 1) to the picture demonstrating the SCORE scale for estimating the risk of death from cardiovascular disease within the next 10 years.
<img src='../../img/SCORE2007.png' width=70%>
Create binary features roughly corresponding to this picture:
- $age \in [45,50), \ldots age \in [60,65) $ (4 features)
- systolic (upper) blood pressure: $ap_hi \in [120,140), ap_hi \in [140,160), ap_hi \in [160,180)$ (3 features)
Next we will build a decision tree with these features, together with the features smoke, cholesterol and gender. From the cholesterol feature, 3 binary features must be made, corresponding to its unique values (cholesterol=1, cholesterol=2 and cholesterol=3); this technique is called dummy encoding, or One Hot Encoding (OHE). The gender feature has to be recoded: map the values 1 and 2 to 0 and 1. It is better to rename this feature to male (0 for female, 1 for male). In general, value encoding is done by sklearn.preprocessing.LabelEncoder, but in this case it is easy to manage without it.
So, the decision tree is built on 12 binary features.
Build a decision tree with maximum depth = 3 and train it on the whole original training set. Use DecisionTreeClassifier, fixing random_state=17 just in case, and leave the remaining arguments (besides max_depth and random_state) at their defaults.
<font color='red'>Question 5.</font> Which of the 12 binary features listed turned out to be the most important for detecting CVD, i.e. ended up at the root of the constructed decision tree?
- Systolic blood pressure from 160 to 180 (mm Hg)
- Gender male / female
- Systolic blood pressure from 140 to 160 (mm Hg)
- Age from 50 to 55 (years)
- Smokes / does not smoke
- Age from 60 to 65 (years)
End of explanation
"""
|
widdowquinn/notebooks
|
sampling_fnr_fpr.ipynb
|
mit
|
%pylab inline
from scipy import stats
from ipywidgets import interact, fixed
def sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
n_neg, n_pos, fnr, fpr,
clip_low, clip_high):
"""Returns subsamples and observations from two normal
distributions.
- mu_neg mean of 'negative' samples
- mu_pos mean of 'positive' samples
- sd_neg standard deviation of 'negative' samples
- sd_pos standard deviation of 'positive' samples
- n_neg number of subsampled data points (negatives)
- n_pos number of subsampled data points (positives)
- fnr false negative rate (positives assigned to negative class)
- fpr false positive rate (negatives assigned to positive class)
- clip_low low value for clipping samples
- clip_high high value for clipping samples
"""
# subsamples
samples = (clip(stats.norm.rvs(mu_neg, sd_neg, size=n_neg), clip_low, clip_high),
clip(stats.norm.rvs(mu_pos, sd_pos, size=n_pos), clip_low, clip_high))
# observed samples, including FPR and FNR
[shuffle(s) for s in samples]
obs_neg = concatenate((samples[0][:int((1-fpr)*n_neg)],
samples[1][int((1-fnr)*n_pos):]))
obs_pos = concatenate((samples[1][:int((1-fnr)*n_pos)],
samples[0][int((1-fpr)*n_neg):]))
# return subsamples and observations
return ((samples[0], samples[1]), (obs_neg, obs_pos))
def draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
n_neg=100, n_pos=100,
fnr=0, fpr=0,
clip_low=0, clip_high=100,
num_bins=50,
xmin=50, xmax=100, points=100,
subsample=True,
negcolor='blue', poscolor='green'):
"""Renders a matplotlib plot of normal distributions and subsamples,
and returns t-test P values that the means of the two subsamples are
equal, with and without FNR/FPR.
- mu_neg mean of 'negative' samples
- mu_pos mean of 'positive' samples
- sd_neg standard deviation of 'negative' samples
- sd_pos standard deviation of 'positive' samples
- n_neg number of subsampled data points (negatives)
- n_pos number of subsampled data points (positives)
- fnr false negative rate (positives assigned to negative class)
- fpr false positive rate (negatives assigned to positive class)
- clip_low low value for clipping samples
- clip_high high value for clipping samples
- bins number of bins for histogram
- xmin x-axis lower limit
- xmax x-axis upper limit
- points number of points for plotting PDF
- subsample Boolean: True plots subsamples
"""
    x = linspace(xmin, xmax, points)
# Normal PDFs
norms = (normpdf(x, mu_neg, sd_neg), normpdf(x, mu_pos, sd_pos))
# Get subsamples and observations
samples, obs = sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
n_neg, n_pos, fnr, fpr,
clip_low, clip_high)
# Plot distribution and samples
plot(x, norms[0], color=negcolor)
plot(x, norms[1], color=poscolor)
if subsample:
h_neg = hist(samples[0], num_bins, normed=1, facecolor=negcolor, alpha=0.5)
h_pos = hist(samples[1], num_bins, normed=1, facecolor=poscolor, alpha=0.5)
ax = gca()
ax.set_xlabel("value")
ax.set_ylabel("frequency")
# Calculate t-tests
t_sam = stats.ttest_ind(samples[0], samples[1], equal_var=False)
t_obs = stats.ttest_ind(obs[0], obs[1], equal_var=False)
ax.set_title("$P_{real}$: %.02e $P_{obs}$: %.02e" % (t_sam[1], t_obs[1]))
"""
Explanation: The way you define groups affects your statistical tests
This notebook is, I hope, a step towards explaining the issue and sharing some intuition about
the difference between experiments where classes to be compared are under the control of the experimenter, and those where they are predicted or otherwise subject to error
the importance to interpreting a statistical test of knowing the certainty with which individual data points are assigned to the classes being compared
the effects of classification false positive and false negative rate, and an imbalance of membership between classes being compared
Overview: Experiment
Let's say you have data from an experiment. The experiment is to test the binding of some human drug candidates to a large number (thousands) of mouse proteins. The experiment was conducted with transformed mouse proteins in yeast, so some of the proteins are in different states to how they would be found in the mouse: truncated to work in yeast, and with different post-translational modifications.
Overview: Analysis
Now let's consider one possible analysis of the data. We will test whether the mouse proteins that bind to the drug are more similar to their human equivalents than those that do not bind to the drug.
We are ignoring the question of what 'equivalent' means here, and assume that a satisfactory equivalency is known and accepted. We will represent the 'strength' of equivalence by percentage sequence identity of each mouse protein to its human counterpart, on a scale of 0-100%.
We will test whether there is a difference between the two groups by a $t$-test of two samples. We assume that each group (binds/does not bind) subsamples a distinct, normally-distributed population of sequence identities. We do not know whether the means or variances of these populations are the same, but we will test the null hypothesis that the means are identical (or the difference between means is zero).
This isn't the way I would want to analyse this kind of data in a real situation - but we're doing it here to show the effect of how we define groups. We'll define positive and negative groups as follows:
positive: mouse protein that binds to drug in the yeast experiment
negative: mouse protein that does not bind to drug in the yeast experiment
Python code:
Let's take a look at what we're actually doing when we perform this analysis. We'll use some Python code to visualise and explore this. Skip over this if you like…
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, subsample=False)
"""
Explanation: A perfect experiment:
First, we're assuming that there are two "real" populations of sequence identity between mouse and human equivalents - one representing proteins that bind to the drug in the experiment; one representing proteins that do not bind to the drug in this experiment. We're also assuming that the distribution of these values is Normal.
Giving ourselves a decent chance for success, we'll assume that the real situation for drug-binding proteins is that they have a mean sequence identity of around 90%, and those proteins that don't bind the drug have a mean identity of 85% to their human counterpart. We'll also assume that the standard deviation of thes identities is 5% in each case.
These are the "real", but idealised, populations from which our experimental results are drawn.
We can see how these distributions look:
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100)
"""
Explanation: A $t$-test between samples of these populations is not consistent with the null hypothesis: that both means are equal. The reported P-value in the plot title tells us that the probability of seeing this data (or a greater difference between the population means) is $P_{real}$ and is pretty small. No-one should have difficulty seeing that these populations have different means.
NOTE: The $t$-test is calculated on the basis of a 100-item subsample of each population and $P$-values will change when you rerun the cell.
When the experiment is performed, the results are not these idealised populations, but observations that are subsampled from the population.
We'll say that we find fewer mouse proteins that bind the drug than do not and, for the sake of round numbers, we'll have:
100 positive results: mouse protein binds drug
2000 negative results: mouse protein does not bind drug
And we can show this outcome:
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.01)
"""
Explanation: Now the frequency of each experimental group is also plotted, as a histogram. The blue group - our negative examples, match the idealised distribution well. The green group - our positives - match the profile less well, but even so the difference between means is visibly apparent.
The title of the plot shows two $P$-values from the $t$-test, which is again very small. The null hypothesis: that the means are equal, is rejected.
Experimental error in classification
For many experiments, the experimenter is in complete control of sample classification, throughout.
For example, the experimenter either does, or does not, inject a mouse with a drug. The 'experimental' sample is easily and absolutely distinguished from the control. In this case, $t$-tests are simple to apply, with relatively few caveats.
This easy distinction between 'experiment' and 'control' samples is such a common circumstance, that it is easy to fall into the trap of thinking that it is always a clear division. But it is not, and it is increasingly less so when the samples being compared derive from high-throughput experiments, as in our fictional example, here.
When the categories being compared are classified as the result of an experiment or prediction that has an inherent error in classification, then we may be comparing hybrid populations of results: mixtures of positive and negative examples - and this can potentially affect the analysis results.
Where does classification error come from?
In our fictional experiment every interaction is taking place in yeast, so the biochemistry is different to the mouse and may affect the outcome of binding. Furthermore, the assay itself will have detection limits (some real - maybe productive - binding will not be detected).
Although we are doing the experiment in yeast, we want to claim that the results tell us something about interaction in the mouse. Why is that? It is in part because the comparison of mouse protein sequence identity to the human counterpart protein implies that we care about whether the interaction is similar in the human and mouse system. It is also because the implication of binding in the yeast experiment is efficacy in the animal system.
The experiment is an imperfect proxy for detecting whether the drug "really binds" to each protein in the mouse. Not every individual binding outcome will be correct. Errors arise in comparison to what happens in the mouse, and also simple experimental error or variation. We can therefore define outcomes for each individual test:
true positive: the drug binds in the yeast experiment, and in the mouse
true negative: the drug does not bind in the yeast experiment, and does not bind in the mouse
false positive: the drug binds in the yeast experiment, but does not in the mouse
false negative: the drug does not bind in the yeast experiment, but does bind in the mouse
Introducing false negatives
We can look at the effect of introducing a false negative rate (FNR: the probability that a protein which binds the drug in the mouse gives a negative result in yeast).
We'll start it low, at $FNR=0.01$:
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.2)
"""
Explanation: Again, the $P$-value reported by the $t$-test allows us to reject the null hypothesis. But there's a difference to the earlier graphs, as the two $P$-values in the title differ. That is because they represent two different tests.
$P_{real}$: the $P$-value obtained with no false positives or false negatives. This is the $P$-value we would get if we could correctly assign every protein to be either 'binding' or 'non-binding' of the drug, in the yeast experiment.
$P_{obs}$: the $P$-value obtained from our observed dataset, which could contain either false positives or false negatives. This is the $P$-value we would get if the yeast experiment had the same false positive or false negative rate.
In this case, with FNR=0.01, and 100 'true' positive interactors, we should expect approximately one 'true' positive to be misclassified as a 'negative', and the resulting effect on our test to be quite small. This is, in fact, what we see - $P_{obs}$ should be very slightly higher than $P_{real}$.
Increasing the rate of false negatives
If we increase the rate of false negatives to first $FNR=0.1$ and then $FNR=0.2$, we see a greater influence on our statistical test:
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01)
"""
Explanation: Now the $t$-test reports several orders of magnitude difference in $P$-values. Both are still very small, and the difference in population means is clear, but the trend is obvious: misclassification moves the reported $P$-value closer to accepting the null hypothesis that both populations have the same mean.
Introducing false positives
Now let's introduce a false positive rate (FPR: the probability that a protein which does not bind the drug in the mouse does so in the yeast experiment).
Again, starting low, at $FPR=0.01$:
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2)
"""
Explanation: The direction of change is again the same: with false positive errors, the test $P$-value moves towards accepting the null hypothesis. Also, the size of this change is (probably - it might change if you rerun the notebook) greater than that which we saw for $FNR=0.01$.
The increase in effect is because the sample sizes for positive and negative groups differ. 1% of a sample of 2000 negatives is 20; 1% of a sample of 100 is 1. Misclassifying 20 negatives in the middle of 100 positives is likely to have more effect than misclassifying a single positive amongst 2000 negatives.
This tendency becomes more pronounced as we increase $FPR$:
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01, fnr=0.01)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2, fnr=0.2)
"""
Explanation: With $FPR=0.2$, the $t$-test now reports a $P$-value that can be 20-30 orders of magnitude different from what would be seen with no misclassification. This is a considerable move towards being less able to reject the null hypothesis in what we might imagine to be a clear-cut case of having two distinct populations.
Combining false negatives and false positives
As might be expected, the effect of combining false positive and false negative misclassifications is greater than either case alone. This is illustrated in the plots below.
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=1000, n_pos=50, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=200, n_pos=10, fpr=0.1, fnr=0.1)
"""
Explanation: The effects of class size
We have seen that the relative sizes of positive and negative classes affect whether $FPR$ or $FNR$ is the more influential error type for this data. The total size of classes also has an influence. In general, reducing the number of class members increases the impact of misclassification, in part because the smaller sample size overall makes it harder to reject the null hypothesis as a 'baseline':
End of explanation
"""
def multiple_samples(n_samp=1000,
mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
n_neg=100, n_pos=100,
fnr=0, fpr=0,
clip_low=0, clip_high=100):
"""Returns the distribution of P-values obtained from subsampled
and observed (with FNR/FPR) normal distributions, over n_samp
repetitions.
- n_samp number of times to (re)sample from the distribution
- mu_neg mean of 'negative' samples
- mu_pos mean of 'positive' samples
- sd_neg standard deviation of 'negative' samples
- sd_pos standard deviation of 'positive' samples
- n_neg number of subsampled data points (negatives)
- n_pos number of subsampled data points (positives)
- fnr false negative rate (positives assigned to negative class)
- fpr false positive rate (negatives assigned to positive class)
- clip_low low value for clipping samples
- clip_high high value for clipping samples
"""
p_sam, p_obs = [], []
for n in range(n_samp):
samples, obs = sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
n_neg, n_pos, fnr, fpr,
clip_low, clip_high)
t_sam = stats.ttest_ind(samples[0], samples[1], equal_var=False)
t_obs = stats.ttest_ind(obs[0], obs[1], equal_var=False)
p_sam.append(t_sam[1])
p_obs.append(t_obs[1])
# return the P-values
return (p_sam, p_obs)
def draw_multiple_samples(n_samp=1000,
mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
n_neg=100, n_pos=100,
fnr=0, fpr=0,
clip_low=0, clip_high=100,
logy=True):
"""Plots the distribution of P-values obtained from subsampled
and observed (with FNR/FPR) normal distributions, over n_samp
repetitions.
- n_samp number of times to (re)sample from the distribution
- mu_neg mean of 'negative' samples
- mu_pos mean of 'positive' samples
- sd_neg standard deviation of 'negative' samples
- sd_pos standard deviation of 'positive' samples
- n_neg number of subsampled data points (negatives)
- n_pos number of subsampled data points (positives)
- fnr false negative rate (positives assigned to negative class)
- fpr false positive rate (negatives assigned to positive class)
- clip_low low value for clipping samples
- clip_high high value for clipping samples
"""
p_sam, p_obs = multiple_samples(n_samp, mu_neg, mu_pos,
sd_neg, sd_pos, n_neg, n_pos,
fnr, fpr, clip_low, clip_high)
# plot P-values against each other
if logy:
p = loglog(p_sam, p_obs, 'o', alpha=0.3)
else:
p = semilogx(p_sam, p_obs, 'o', alpha=0.3)
ax = gca()
ax.set_xlabel("'Real' subsample P-value")
ax.set_ylabel("Observed subsample P-value")
ax.set_title("reps=%d $n_{neg}$=%d $n_{pos}$=%d FNR=%.02f FPR=%.02f" %
(n_samp, n_neg, n_pos, fnr, fpr))
# Add y=x lines, P=0.05
lims = [min([ax.get_xlim(), ax.get_ylim()]),
max([(0.05, 0.05), max([ax.get_xlim(), ax.get_ylim()])])]
if logy:
loglog(lims, lims, 'k', alpha=0.75)
ax.set_aspect('equal')
else:
semilogx(lims, lims, 'k', alpha=0.75)
vlines(0.05, min(ax.get_ylim()), max(max(ax.get_ylim()), 0.05), color='red') # add P=0.05 lines
hlines(0.05, min(ax.get_xlim()), max(max(ax.get_xlim()), 0.05), color='red')
"""
Explanation: Some realisations of the last example (n_neg=200, n_pos=10, fpr=0.1, fnr=0.1) result in a $P$-value that (at the 0.05 level) rejects the null hypothesis when there is no misclassification, but cannot reject the null hypothesis when misclassification is taken into account.
Misclassification of samples into categories can prevent statistical determination of category differences, even for distinct categories
The general case
All our examples so far have been single realisations of a fictional experiment. The results will vary every time you (re-)run a cell, and quite greatly between runs, especially for some parameter values.
Let's look at what happens over several hundred replications of the experiment, to pick up on some trends.
Python code
As before, ignore this if you don't want to look at it.
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100)
"""
Explanation: The perfect experiment
Assuming $FNR=0$ and $FPR=0$, i.e. no misclassification at all, the observed and real subsample $t$-test values are always identical. We plot $t$-test $P$-values results for 1000 replicates of our fictional experiment, below:
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.2)
"""
Explanation: The red lines in the plot indicate a nominal $P=0.05$ threshold.
The vertical line indicates this for the 'real' data - that is to say that points on the left of this line indicate a 'real' difference between the means of the populations (we reject the null hypothesis) at $P=0.05$.
The horizontal line indicates a similar threshold at $P=0.05$ for the 'observed' data - that which has some level of misclassification. Points below this line indicate that the experiment - with misclassification - rejects the null hypothesis at $P=0.05$.
Here, there is no misclassification, and all points lie on the diagonal, accordingly. The two populations we draw from are quite distinct, so all points cluster well to the left of and below the $P=0.05$ thresholds.
The effect of $FNR$
We saw before that, due to the small relative size of the positive set, the effect of $FNR$ was not very pronounced. Running 1000 replicates of the experiment, we can get some intuition about how increasing $FNR$ affects the observed $P$-value, relative to that which we would see without any misclassification.
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2)
"""
Explanation: The effect of increasing $FNR$ is to move the reported $P$-values away from the $y=x$ diagonal, and towards the $P=0.05$ threshold. Even with $FNR=0.1$, almost every run of the experiment misreports the 'real' $P$-value such that we are less likely to reject the null hypothesis.
The effect of $FPR$
We also saw earlier that, again due to the small relative size of the positive set, the effect of $FPR$ was greater than that of $FNR$.
By running 1000 replicates of the experiment as before, we can understand how increasing $FPR$ affects the observed $P$-value, relative to there being no misclassification.
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2, fnr=0.2)
"""
Explanation: We see the same progression of 'observed' $P$-values away from what would be the 'real' $P$-value without misclassification, but this time much more rapidly than with $FNR$ as $FPR$ increases. Even for this very distinct pair of populations, whose 'true' $P$-value should be ≈$10^{-40}$, an $FPR$ of 0.2 runs the risk of occasionally failing to reject the null hypothesis that the population means are the same.
As before, combining misclassification of positive and negative examples results in us being more likely to accept the null hypothesis, even for a very distinct pair of populations.
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50)
"""
Explanation: A more realistic population?
The examples above have all been performed with two populations that have very distinct means, by $t$-test. How powerful is the effect of misclassification when the distinction is not so clear?
Let's consider two populations with less readily-distinguishable means, as might be encountered in real data:
$\mu_{neg}=85$, $\mu_{pos}=90$, $\sigma_{neg}=6$, $\sigma_{pos}=6$
Over 1000 repeated (perfect) experiments, pretty much all experiments reject the null hypothesis that the means are the same, at $P=0.05$, but some (rare) experiments might falsely accept this:
End of explanation
"""
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50)
"""
Explanation: In a single realisation of a perfect experiment with no misclassification, the reported $P$-value is likely to very strongly reject the null hypothesis:
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.01, fpr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.05, fpr=0.05)
"""
Explanation: What is the impact of misclassification?
Now, we increase the level of misclassification modestly:
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.1, fpr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50,
fnr=0.1, fpr=0.1, logy=False)
"""
Explanation: And now, at $FPR=FNR=0.05$ we start to see the population of experiments creeping into the upper left quadrant. In this quadrant we have experiments where data that (if classified correctly) would reject the null hypothesis are observed to accept it instead. This problem gets worse as the rate of misclassification increases.
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.2, fpr=0.2)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50,
fnr=0.2, fpr=0.2, logy=False)
"""
Explanation: At $FPR=FNR=0.2$, the shift of observed $P$-values towards (and past) the $P=0.05$ threshold is more pronounced still:
End of explanation
"""
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=89, sd_neg=7, sd_pos=7, n_neg=150, n_pos=20,
fnr=0.1, fpr=0.15, logy=False)
"""
Explanation: What does this all mean?
If we know that our experiment may involve misclassification into the groups we are going to compare, then we need to consider alternative statistical methods to $t$-tests, as misclassification can introduce quantitative (effect size) and qualitative (presence/absence of an effect) errors to our analysis.
The likelihood of such errors being introduced depends on the nature of the experiment: number of samples in each group, expected false positive and false negative rate, and the expected difference between the two groups. We need to have at least an estimate of each of these quantities to be able to determine whether a simple test (e.g. $t$-test) might be appropriate, whether we need to take misclassification into account explicitly, or whether the data are likely unable to give a decisive answer to our biological question.
A further point to consider is whether the initial assumption of the question/experiment is realistic. For instance, in our fictional example here, is it truly realistic to expect that our drug will only bind those mouse proteins that are most similar to their human counterparts? I would argue that this is unlikely, and that there are almost certainly proportionally as many proteins highly-similar to their human counterparts that do not bind the drug. As misclassification can tend to sway the result towards an overall false negative of "no difference between the groups" where there is one, it may be difficult to distinguish between faulty assumptions, and the effects of misclassification.
Does misclassification always give an overall false negative?
No.
Sometimes, data that should not reject the null hypothesis can be reported as rejecting it: an overall false positive. These are the points in the lower-right quadrant, below:
End of explanation
"""
interact(draw_sample_comparison,
mu_neg=(60, 99, 1), mu_pos=(60, 99, 1),
sd_neg=(0, 15, 1), sd_pos=(0, 15, 1),
n_neg=(0, 150, 1), n_pos=(0, 150, 1),
fnr=(0, 1, 0.01), fpr=(0, 1, 0.01),
clip_low=fixed(0), clip_high=fixed(100),
num_bins=fixed(50), xmin=fixed(50),
xmax=fixed(100), points=fixed(100),
subsample=True, negcolor=fixed('blue'),
poscolor=fixed('green'))
interact(draw_multiple_samples,
mu_neg=(60, 99, 1), mu_pos=(60, 99, 1),
sd_neg=(0, 15, 1), sd_pos=(0, 15, 1),
n_neg=(0, 150, 1), n_pos=(0, 150, 1),
fnr=(0, 1, 0.01), fpr=(0, 1, 0.01),
clip_low=fixed(0), clip_high=fixed(100))
"""
Explanation: Note that, apart from the overall sample size (170 mouse proteins instead of 2000) the parameters for this run are not very different from those we have been using.
Interactive examples
The cells below allow you to explore variation in all the parameters we have modified above, and their effects on reported $P$ values:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/hammoz-consortium/cmip6/models/sandbox-2/landice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
ljwolf/pysal
|
pysal/contrib/viz/mapping_guide.ipynb
|
bsd-3-clause
|
# Imports assumed by these examples (PySAL 1.x with the contrib viz mapping
# module); in the original notebook they may be loaded in an earlier cell.
import random
import numpy as np
import matplotlib.pyplot as plt
import pysal as ps
from pysal.contrib.viz import mapping as maps

shp_link = ps.examples.get_path('columbus.shp')
shp = ps.open(shp_link)
some = [bool(random.getrandbits(1)) for i in ps.open(shp_link)]
fig = plt.figure()
base = maps.map_poly_shp(shp)
base.set_facecolor('none')
base.set_linewidth(0.75)
base.set_edgecolor('0.8')
some = maps.map_poly_shp(shp, which=some)
some.set_alpha(0.5)
some.set_linewidth(0.)
cents = np.array([poly.centroid for poly in ps.open(shp_link)])
pts = plt.scatter(cents[:, 0], cents[:, 1])
pts.set_color('red')
ax = maps.setup_ax([base, some, pts], [shp.bbox, shp.bbox, shp.bbox])
fig.add_axes(ax)
plt.show()
"""
Explanation: Guide for the mapping module in PySAL
Contributors:
Dani Arribas-Bel <daniel.arribas.bel@gmail.com>
Serge Rey <sjsrey@gmail.com>
This document describes the main structure, components and usage of the mapping module in PySAL. It is organized around three main layers:
A lower-level layer that reads polygon, line and point shapefiles and returns a Matplotlib collection.
A medium-level layer that performs some usual transformations on a Matplotlib object (e.g. color code polygons according to a vector of values).
A higher-level layer intended for end-users for particularly useful cases and style preferences pre-defined (e.g. Create a choropleth).
Lower-level component
This includes basic functionality to read spatial data from a file (currently only shapefiles supported) and produce rudimentary Matplotlib objects. The main methods are:
map_poly_shape: to read in polygon shapefiles
map_line_shape: to read in line shapefiles
map_point_shape: to read in point shapefiles
These methods all support an option to subset the observations to be plotted (very useful when missing values are present). They can also be overlaid and combined by using the setup_ax function. The resulting objects are very basic but also very flexible so, for minds used to matplotlib, this should be good news: it allows you to modify pretty much any property and attribute.
Example
End of explanation
"""
net_link = ps.examples.get_path('eberly_net.shp')
net = ps.open(net_link)
values = np.array(ps.open(net_link.replace('.shp', '.dbf')).by_col('TNODE'))
pts_link = ps.examples.get_path('eberly_net_pts_onnetwork.shp')
pts = ps.open(pts_link)
fig = plt.figure()
netm = maps.map_line_shp(net)
netc = maps.base_choropleth_unique(netm, values)
ptsm = maps.map_point_shp(pts)
ptsm = maps.base_choropleth_classif(ptsm, values)
ptsm.set_alpha(0.5)
ptsm.set_linewidth(0.)
ax = maps.setup_ax([netc, ptsm], [net.bbox, net.bbox])
fig.add_axes(ax)
plt.show()
"""
Explanation: Medium-level component
This layer comprises functions that perform usual transformations on matplotlib objects, such as color coding objects (points, polygons, etc.) according to a series of values. This includes the following methods:
base_choropleth_classless
base_choropleth_unique
Example
End of explanation
"""
maps.plot_poly_lines(ps.examples.get_path('columbus.shp'))
"""
Explanation: base_choropleth_classif
Higher-level component
This currently includes the following end-user functions:
plot_poly_lines: very quick shapefile plotting.
End of explanation
"""
shp_link = ps.examples.get_path('columbus.shp')
values = np.array(ps.open(ps.examples.get_path('columbus.dbf')).by_col('HOVAL'))
types = ['classless', 'unique_values', 'quantiles', 'equal_interval', 'fisher_jenks']
for typ in types:
maps.plot_choropleth(shp_link, values, typ, title=typ)
"""
Explanation: plot_choropleth: for quick plotting of several types of choropleths.
End of explanation
"""
|
anandha2017/udacity
|
nd101 Deep Learning Nanodegree Foundation/DockerImages/19_Autoencoders/notebooks/autoencoder/Simple_Autoencoder_Solution.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits (the reconstructed images)
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
GoogleCloudPlatform/ai-platform-samples
|
notebooks/samples/tables/census_income_prediction/getting_started_notebook.ipynb
|
apache-2.0
|
# Use the latest major GA version of the framework.
! pip install --upgrade --quiet --user google-cloud-automl
"""
Explanation: Getting Started with AutoML Tables
<table align="left">
<td>
<a href="https://colab.sandbox.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tables/census_income_prediction/getting_started_notebook.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tables/census_income_prediction/getting_started_notebook.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
Google’s AutoML gives software engineers the ability to build high-quality models without needing to know how to build or train models, or how to deploy/serve them on the cloud. Instead, one only needs to understand dataset curation, result evaluation, and the how-to steps.
AutoML Tables is a supervised learning service. This means that you train a machine learning model with example data. AutoML Tables uses tabular (structured) data to train a machine learning model to make predictions on new data. One column from your dataset, called the target, is what your model will learn to predict. Some number of the other data columns are inputs (called features) that the model will learn patterns from.
In this notebook, we will use the Google Cloud SDK AutoML Python API to create a binary classification model using a real dataset from the Census Income Dataset.
We will provide the training and evaluation dataset. Once the dataset is created, we will use the AutoML API to create the model and then run predictions that estimate whether a given individual has an income above or below 50k, given information like the person's age, education level, marital status, and occupation.
For setting up a Google Cloud Platform (GCP) account for using AutoML, please see the online documentation for Getting Started.
Dataset
This tutorial uses the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository, containing information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. The dataset consists of over 30k rows, where each row corresponds to a different person. For a given row, there are 14 features that the model conditions on to predict the income of the person. A few of the features are named above, and the exhaustive list can be found in the dataset link above.
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
Cloud AI Platform
Cloud Storage
AutoML Tables
Learn about Cloud AI Platform pricing,
Cloud Storage pricing,
AutoML Tables pricing and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or AI Platform Notebooks, your environment already meets
all the requirements to run this notebook. If you are using AI Platform Notebook, make sure the machine configuration type is 1 vCPU, 3.75 GB RAM or above. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install
Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs and Compute Engine APIs.
Enable AutoML API.
PIP Install Packages and dependencies
Install additional dependencies not installed in the Notebook environment
End of explanation
"""
from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")
"""
Explanation: Note: Try installing using sudo if the above command throws any permission errors.
Restart the kernel to allow automl_v1beta1 to be imported for Jupyter Notebooks.
End of explanation
"""
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
COMPUTE_REGION = "us-central1" # Currently only supported region.
"""
Explanation: Set up your GCP Project Id
Enter your Project Id in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
End of explanation
"""
# Upload the downloaded JSON file that contains your key.
import sys
if 'google.colab' in sys.modules:
from google.colab import files
keyfile_upload = files.upload()
keyfile = list(keyfile_upload.keys())[0]
%env GOOGLE_APPLICATION_CREDENTIALS $keyfile
! gcloud auth activate-service-account --key-file $keyfile
"""
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
AutoML > AutoML Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
%env GOOGLE_APPLICATION_CREDENTIALS /path/to/service/account
! gcloud auth activate-service-account --key-file '/path/to/service/account'
"""
Explanation: If you are running the notebook locally, enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
"""
! gsutil mb -p $PROJECT_ID -l $COMPUTE_REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. Make sure Storage > Storage Admin role is enabled
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# AutoML library.
from google.cloud import automl_v1beta1 as automl
import google.cloud.automl_v1beta1.proto.data_types_pb2 as data_types
import matplotlib.pyplot as plt
from ipywidgets import interact
import ipywidgets as widgets
"""
Explanation: Import libraries and define constants
Import relevant packages.
End of explanation
"""
#@title Constants { vertical-output: true }
# A name for the AutoML tables Dataset to create.
DATASET_DISPLAY_NAME = 'census' #@param {type: 'string'}
# The GCS data to import data from (doesn't need to exist).
INPUT_CSV_NAME = 'census_income' #@param {type: 'string'}
# A name for the AutoML tables model to create.
MODEL_DISPLAY_NAME = 'census_income_model' #@param {type: 'string'}
assert all([
PROJECT_ID,
COMPUTE_REGION,
DATASET_DISPLAY_NAME,
INPUT_CSV_NAME,
MODEL_DISPLAY_NAME,
])
"""
Explanation: Populate the following cell with the necessary constants and run it to initialize constants.
End of explanation
"""
# Initialize the clients.
automl_client = automl.AutoMlClient()
tables_client = automl.TablesClient(project=PROJECT_ID, region=COMPUTE_REGION)
"""
Explanation: Initialize client for AutoML and AutoML Tables
End of explanation
"""
# List the datasets.
list_datasets = tables_client.list_datasets()
datasets = { dataset.display_name: dataset.name for dataset in list_datasets }
datasets
"""
Explanation: Test the set up
To test whether your project set up and authentication steps were successful, run the following cell to list your datasets in this project.
If no dataset has previously been imported into AutoML Tables, you should expect an empty result.
End of explanation
"""
# List the models.
list_models = tables_client.list_models()
models = { model.display_name: model.name for model in list_models }
models
"""
Explanation: You can also print the list of your models by running the following cell.
If no model has previously been trained using AutoML Tables, you should expect an empty result.
End of explanation
"""
# Create dataset.
dataset = tables_client.create_dataset(
dataset_display_name=DATASET_DISPLAY_NAME)
dataset_name = dataset.name
dataset
"""
Explanation: Import training data
Create dataset
Now we are ready to create a dataset instance (on GCP) using the client method create_dataset(). This method has one required parameter, the human readable display name DATASET_DISPLAY_NAME.
Select a dataset display name and pass your table source information to create a new dataset.
End of explanation
"""
GCS_DATASET_URI = 'gs://{}/{}.csv'.format(BUCKET_NAME, INPUT_CSV_NAME)
! gsutil ls gs://$BUCKET_NAME || gsutil mb -l $COMPUTE_REGION gs://$BUCKET_NAME
! gsutil cp gs://cloud-ml-data-tables/notebooks/census_income.csv $GCS_DATASET_URI
"""
Explanation: Import data
You can import your data to AutoML Tables from GCS or BigQuery. For this tutorial, you can use the census_income dataset as your training data. We provide code below to copy the data into a bucket you own automatically. You are free to adjust the value of BUCKET_NAME as needed.
End of explanation
"""
# Read the data source from GCS.
import_data_response = tables_client.import_data(
dataset=dataset,
gcs_input_uris=GCS_DATASET_URI
)
print('Dataset import operation: {}'.format(import_data_response.operation))
# Synchronous check of operation status. Wait until import is done.
print('Dataset import response: {}'.format(import_data_response.result()))
# Verify the status by checking the example_count field.
dataset = tables_client.get_dataset(dataset_name=dataset_name)
dataset
"""
Explanation: Import data into the dataset. This process may take a while, depending on your data. Once it completes, you can verify the status by printing the dataset object; this time, pay attention to the example_count field, which should show 32461 records.
End of explanation
"""
# List table specs.
list_table_specs_response = tables_client.list_table_specs(dataset=dataset)
table_specs = [s for s in list_table_specs_response]
# List column specs.
list_column_specs_response = tables_client.list_column_specs(dataset=dataset)
column_specs = {s.display_name: s for s in list_column_specs_response}
# Print Features and data_type.
features = [(key, data_types.TypeCode.Name(value.data_type.type_code))
for key, value in column_specs.items()]
print('Feature list:\n')
for feature in features:
print(feature[0],':', feature[1])
# Table schema pie chart.
type_counts = {}
for column_spec in column_specs.values():
type_name = data_types.TypeCode.Name(column_spec.data_type.type_code)
type_counts[type_name] = type_counts.get(type_name, 0) + 1
plt.pie(x=type_counts.values(), labels=type_counts.keys(), autopct='%1.1f%%')
plt.axis('equal')
plt.show()
"""
Explanation: Review the specs
Run the following command to see table specs such as row count.
End of explanation
"""
column_spec_display_name = 'income' #@param {type:'string'}
type_code='CATEGORY' #@param {type:'string'}
update_column_response = tables_client.update_column_spec(
dataset=dataset,
column_spec_display_name=column_spec_display_name,
type_code=type_code,
nullable=False,
)
update_column_response
"""
Explanation: Update dataset: assign a label column and enable nullable columns
This section is important, as it is where you specify which column (meaning which feature) you will use as your label. This label feature will then be predicted using all other features in the row.
AutoML Tables automatically detects your data column type. For example, for the census_income dataset it detects income to be categorical (as it is just either over or under 50k) and age to be numerical. Depending on the type of your label column, AutoML Tables chooses to run a classification or regression model. If your label column contains only numerical values, but they represent categories, change your label column type to categorical by updating your schema.
Update a column: Set nullable parameter
End of explanation
"""
column_spec_display_name = 'income' #@param {type:'string'}
update_dataset_response = tables_client.set_target_column(
dataset=dataset,
column_spec_display_name=column_spec_display_name,
)
update_dataset_response
"""
Explanation: Tip: You can pass type_code='CATEGORY' in the preceding update_column_spec call to convert the column data type from FLOAT64 to CATEGORY.
Update dataset: Assign a label
End of explanation
"""
# The number of hours to train the model.
model_train_hours = 1 #@param {type:'integer'}
create_model_response = tables_client.create_model(
model_display_name=MODEL_DISPLAY_NAME,
dataset=dataset,
    train_budget_milli_node_hours=model_train_hours*1000,
exclude_column_spec_names=['fnlwgt','income'],
)
operation_id = create_model_response.operation.name
print('Create model operation: {}'.format(create_model_response.operation))
# Wait until model training is done.
model = create_model_response.result()
model_name = model.name
model
"""
Explanation: Creating a model
Train a Model
Once we have defined our datasets and features we will create a model.
Specify the duration of the training. For example, 'train_budget_milli_node_hours': 1000 runs the training for one hour. You can increase that number up to a maximum of 72 hours ('train_budget_milli_node_hours': 72000) for the best model performance.
Even with a budget of 1 node hour (the minimum possible budget), training a model can take more than the specified node hours.
If your Colab times out, use tables_client.list_models() to check whether your model has been created, then use the model display name to continue with the next steps. Run the following command to retrieve your model.
model = tables_client.get_model(model_display_name=MODEL_DISPLAY_NAME)
You can also select the objective used to optimize your model training by setting optimization_objective. This solution uses the default optimization objective; refer to the AutoML Tables documentation for more details.
End of explanation
"""
tables_client.deploy_model(model=model).result()
"""
Explanation: Model deployment
Important : Deploy the model, then wait until the model FINISHES deployment.
The model takes a while to deploy online. When the deployment call tables_client.deploy_model(model=model).result() finishes, you will be able to see this on the UI. Check the UI and navigate to the predict tab of your model, and then to the online prediction portion, to see when it finishes online deployment before running the prediction cell. You should see "online prediction" text near the top; click on it, and it will take you to a view of your online prediction interface. You should see "model deployed" on the far right of the screen if the model is deployed, or a "deploying model" message if it is still deploying.
End of explanation
"""
model = tables_client.get_model(model_name=model_name)
model
"""
Explanation: Verify that the model has been deployed by checking the deployment_state field; it should show: DEPLOYED
End of explanation
"""
workclass_ids = ['Private', 'Self-emp-not-inc', 'Self-emp-inc', 'Federal-gov',
'Local-gov', 'State-gov', 'Without-pay', 'Never-worked']
education_ids = ['Bachelors', 'Some-college', '11th', 'HS-grad', 'Prof-school',
'Assoc-acdm', 'Assoc-voc', '9th', '7th-8th', '12th', 'Masters',
'1st-4th', '10th', 'Doctorate', '5th-6th', 'Preschool']
marital_status_ids = ['Married-civ-spouse', 'Divorced', 'Never-married',
'Separated', 'Widowed', 'Married-spouse-absent',
'Married-AF-spouse']
occupation_ids = ['Tech-support', 'Craft-repair', 'Other-service', 'Sales',
'Exec-managerial', 'Prof-specialty', 'Handlers-cleaners',
'Machine-op-inspct', 'Adm-clerical', 'Farming-fishing',
'Transport-moving', 'Priv-house-serv', 'Protective-serv',
'Armed-Forces']
relationship_ids = ['Wife', 'Own-child', 'Husband', 'Not-in-family',
'Other-relative', 'Unmarried']
race_ids = ['White', 'Asian-Pac-Islander', 'Amer-Indian-Eskimo', 'Other',
'Black']
sex_ids = ['Female', 'Male']
native_country_ids = ['United-States', 'Cambodia', 'England', 'Puerto-Rico',
'Canada', 'Germany', 'Outlying-US(Guam-USVI-etc)',
'India', 'Japan', 'Greece', 'South', 'China', 'Cuba',
'Iran', 'Honduras', 'Philippines', 'Italy', 'Poland',
'Jamaica', 'Vietnam', 'Mexico', 'Portugal', 'Ireland',
'France', 'Dominican-Republic', 'Laos', 'Ecuador',
'Taiwan', 'Haiti', 'Columbia', 'Hungary', 'Guatemala',
'Nicaragua', 'Scotland', 'Thailand', 'Yugoslavia',
'El-Salvador', 'Trinadad&Tobago', 'Peru', 'Hong',
'Holand-Netherlands']
# Create dropdown for workclass.
workclass = widgets.Dropdown(
options=workclass_ids,
value=workclass_ids[0],
description='workclass:'
)
# Create dropdown for education.
education = widgets.Dropdown(
options=education_ids,
value=education_ids[0],
description='education:',
width='500px'
)
# Create dropdown for marital status.
marital_status = widgets.Dropdown(
options=marital_status_ids,
value=marital_status_ids[0],
description='marital status:',
width='500px'
)
# Create dropdown for occupation.
occupation = widgets.Dropdown(
options=occupation_ids,
value=occupation_ids[0],
description='occupation:',
width='500px'
)
# Create dropdown for relationship.
relationship = widgets.Dropdown(
options=relationship_ids,
value=relationship_ids[0],
description='relationship:',
width='500px'
)
# Create dropdown for race.
race = widgets.Dropdown(
options=race_ids,
value=race_ids[0],
description='race:',
width='500px'
)
# Create dropdown for sex.
sex = widgets.Dropdown(
options=sex_ids,
value=sex_ids[0],
description='sex:',
width='500px'
)
# Create dropdown for native country.
native_country = widgets.Dropdown(
options=native_country_ids,
value=native_country_ids[0],
description='native_country:',
width='500px'
)
display(workclass)
display(education)
display(marital_status)
display(occupation)
display(relationship)
display(race)
display(sex)
display(native_country)
"""
Explanation: Run the prediction, only after the model finishes deployment
Make an Online prediction
You can toggle exactly which values you want for all of the numeric features, and choose from the drop down windows which values you want for the categorical features.
Note: If the model has not finished deployment, the prediction will NOT work. The following cells show you how to make an online prediction.
End of explanation
"""
#@title Make an online prediction: set the numeric variables{ vertical-output: true }
age = 36 #@param {type:'slider', min:1, max:100, step:1}
capital_gain = 40000 #@param {type:'slider', min:0, max:100000, step:10000}
capital_loss = 559.5 #@param {type:'slider', min:0, max:4000, step:0.1}
fnlwgt = 150000 #@param {type:'slider', min:0, max:1000000, step:50000}
education_num = 9 #@param {type:'slider', min:1, max:16, step:1}
hours_per_week = 40 #@param {type:'slider', min:1, max:100, step:1}
"""
Explanation: Adjust the sliders on the right to the desired test values for your online prediction.
End of explanation
"""
inputs = {
'age': age,
'workclass': workclass.value,
'fnlwgt': fnlwgt,
'education': education.value,
'education_num': education_num,
'marital_status': marital_status.value,
'occupation': occupation.value,
'relationship': relationship.value,
'race': race.value,
'sex': sex.value,
'capital_gain': capital_gain,
'capital_loss': capital_loss,
'hours_per_week': hours_per_week,
'native_country': native_country.value,
}
prediction_result = tables_client.predict(model=model, inputs=inputs)
prediction_result
"""
Explanation: Run the following cell, and then choose the desired test values for your online prediction.
End of explanation
"""
predictions = [(prediction.tables.score, prediction.tables.value.string_value)
for prediction in prediction_result.payload]
predictions = sorted(
predictions, key=lambda tup: (tup[0],tup[1]), reverse=True)
print('Prediction is: ', predictions[0])
"""
Explanation: Get Prediction
We extract the google.cloud.automl_v1beta1.types.PredictResponse object prediction_result and iterate to create a list of tuples with score and label, then we sort based on highest score and display it.
End of explanation
"""
undeploy_model_response = tables_client.undeploy_model(model=model)
"""
Explanation: Undeploy the model
End of explanation
"""
gcs_output_folder_name = 'census_income_predictions' #@param {type: 'string'}
SAMPLE_INPUT = 'gs://cloud-ml-data/automl-tables/notebooks/census_income_batch_prediction_input.csv'
# Input URI (a copy of the sample CSV in your own bucket) and output prefix for
# the batch prediction results. GCS_BATCH_PREDICT_URI is defined here so the
# batch_predict call below has a valid gcs_input_uris value.
GCS_BATCH_PREDICT_URI = 'gs://{}/{}'.format(BUCKET_NAME,
                                            'census_income_batch_prediction_input.csv')
GCS_BATCH_PREDICT_OUTPUT = 'gs://{}/{}/'.format(BUCKET_NAME,
                                                gcs_output_folder_name)
! gsutil cp $SAMPLE_INPUT $GCS_BATCH_PREDICT_URI
"""
Explanation: Batch prediction
Initialize prediction
Your data source for batch prediction can be GCS or BigQuery.
For this tutorial, you can use:
census_income_batch_prediction_input.csv as input source.
Create a GCS bucket and upload the file into your bucket.
Some of the lines in the batch prediction input file are intentionally left missing some values. The AutoML Tables logs the errors in the errors.csv file. Also, enter the UI and create the bucket into which you will load your predictions.
The bucket's default name here is automl-tables-pred to be replaced with your own.
NOTE: The client library has a bug. If the following cell returns a:
TypeError: Could not convert Any to BatchPredictResult error, ignore it.
The batch prediction output file(s) will be updated to the GCS bucket that you set in the preceding cells.
End of explanation
"""
batch_predict_response = tables_client.batch_predict(
model=model,
gcs_input_uris=GCS_BATCH_PREDICT_URI,
gcs_output_uri_prefix=GCS_BATCH_PREDICT_OUTPUT,
)
print('Batch prediction operation: {}'.format(
batch_predict_response.operation))
# Wait until batch prediction is done.
batch_predict_result = batch_predict_response.result()
batch_predict_response.metadata
"""
Explanation: Launch Batch prediction
End of explanation
"""
# Delete model resource.
tables_client.delete_model(model_name=model_name)
# Delete dataset resource.
tables_client.delete_dataset(dataset_name=dataset_name)
# Delete Cloud Storage objects that were created.
! gsutil -m rm -r gs://$BUCKET_NAME
# If training model is still running, cancel it.
automl_client.transport._operations_client.cancel_operation(operation_id)
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
End of explanation
"""
|
rvuduc/cse6040-ipynbs
|
14--pagerank-partial-solns2.ipynb
|
bsd-3-clause
|
import sqlite3 as db
import pandas as pd
def get_table_names (conn):
assert type (conn) == db.Connection # Only works for sqlite3 DBs
query = "SELECT name FROM sqlite_master WHERE type='table'"
return pd.read_sql_query (query, conn)
def print_schemas (conn, table_names=None, limit=0):
assert type (conn) == db.Connection # Only works for sqlite3 DBs
if table_names is None:
table_names = get_table_names (conn)
c = conn.cursor ()
query = "PRAGMA TABLE_INFO ({table})"
for name in table_names:
c.execute (query.format (table=name))
columns = c.fetchall ()
print ("=== {table} ===".format (table=name))
col_string = "[{id}] {name} : {type}"
for col in columns:
print (col_string.format (id=col[0],
name=col[1],
type=col[2]))
print ("\n")
conn = db.connect ('poliblogs.db')
for name in get_table_names (conn)['name']:
print_schemas (conn, [name])
query = '''SELECT * FROM %s LIMIT 5''' % name
print (pd.read_sql_query (query, conn))
print ("\n")
"""
Explanation: CSE 6040, Fall 2015 [14]: PageRank (still cont'd)
This notebook is identical to Lab 13, but with solutions provided for Part 1 and partial solutions for Part 2.
In this notebook, you'll implement the PageRank algorithm summarized in class. You'll test it on a real dataset (circa 2005) that consists of political blogs and their links among one another.
Note that the presentation in class follows the matrix view of the algorithm. Cleve Moler (inventor of MATLAB) has a nice set of notes here.
For today's notebook, you'll need to download the following additional materials:
* A cse6040utils module, which is a Python module containing some handy routines from previous classes: link (Note: This module is already part of the git repo for our notebooks if you are pulling from there.)
* A SQLite version of the political blogs dataset: http://cse6040.gatech.edu/fa15/poliblogs.db (~ 611 KiB)
Part 1: Explore the Dataset
Let's start by looking at the dataset, to get a feel for what it contains.
Incidentally, one of you asked recently how to get the schema for a SQLite database when using Python. Here is some code adapted from a few ideas floating around on the web. Let's use these to inspect the tables available in the political blogs dataset.
End of explanation
"""
query = '''
SELECT MIN(Id) AS MinId,
MAX(Id) AS MaxId,
COUNT(DISTINCT Id) AS NumDistinctIds
FROM Vertices
'''
df = pd.read_sql_query (query, conn)
print (df)
assert df.MinId[0] == 1
assert df.MaxId[0] == df.NumDistinctIds[0]
print ("\n==> Verified: Vertex ids cover [1, %d] densely." \
% df.NumDistinctIds[0])
"""
Explanation: Exercise. Write a snippet of code to verify that the vertex IDs are dense in some interval $[1, n]$. That is, there is a minimum value of $1$, some maximum value $n$, and no missing values between $1$ and $n$.
End of explanation
"""
query = '''
SELECT {col} FROM Edges
WHERE {col} NOT IN (SELECT Id FROM Vertices)
'''
df_s = pd.read_sql_query (query.format (col='Source'), conn)
print (df_s['Source'])
df_t = pd.read_sql_query (query.format (col='Target'), conn)
print (df_t['Target'])
assert df_s['Source'].empty
assert df_t['Target'].empty
print ("==> Verified: All source and target IDs are vertices.")
"""
Explanation: Exercise. Make sure every edge has its end points in the vertex table.
End of explanation
"""
query = '''
SELECT Id, Url
FROM Vertices
WHERE (Id NOT IN (SELECT DISTINCT Source FROM Edges))
AND (Id NOT IN (SELECT DISTINCT Target FROM Edges))
'''
df_solo_vertices = pd.read_sql_query (query, conn)
print (df_solo_vertices.head ())
num_solo_vertices = len (df_solo_vertices)
# Our testing code follows, assuming your `num_solo_vertices` variable:
print ("\n==> %d vertices have no incident edges." % num_solo_vertices)
assert num_solo_vertices == 266
"""
Explanation: Exercise. Determine which vertices have no incident edges. Store the number of such vertices in a variable, num_solo_vertices.
End of explanation
"""
# Complete this query:
query = '''
CREATE VIEW IF NOT EXISTS Outdegrees AS
SELECT Source AS Id, COUNT(*) AS Degree
FROM Edges
GROUP BY Source
'''
c = conn.cursor ()
c.execute (query)
from IPython.display import display
query = '''
SELECT Outdegrees.Id, Degree, Url
FROM Outdegrees, Vertices
WHERE Outdegrees.Id = Vertices.Id
ORDER BY -Degree
'''
df_outdegrees = pd.read_sql_query (query, conn)
print "==> A few entries with large out-degrees:"
display (df_outdegrees.head ())
print "\n==> A few entries with small out-degrees:"
display (df_outdegrees.tail ())
"""
Explanation: Exercise. Compute a view called Outdegrees, which contains the following columns:
Id: vertex ID
Degree: the out-degree of this vertex.
To help you test your view, the following snippet includes a second query that selects from your view but adds a Url field and orders the results in descending order of degree. It also prints first few and last few rows of this query, so you can inspect the URLs as a sanity check. (Perhaps it also provides a small bit of entertainment!)
End of explanation
"""
query = '''
SELECT S.Url, T.Url, Out.Degree
FROM Edges AS E,
(SELECT Id, Url FROM Vertices) AS S,
(SELECT Id, Url FROM Vertices) AS T,
(SELECT Id, Degree FROM Outdegrees) AS Out
WHERE (E.Source=S.Id) AND (E.Target=T.Id) AND (E.Source=Out.Id)
ORDER BY -Out.Degree
'''
df_G = pd.read_sql_query (query, conn)
from IPython.display import display
display (df_G.head ())
print ("...")
display (df_G.tail ())
"""
Explanation: Exercise. Query the database to extract a report of which URLs point to which URLs. Also include the source vertex out-degree and order the rows in descending order by it.
End of explanation
"""
from cse6040utils import sparse_matrix
# Extract entries from the table
query = '''
SELECT Target AS Row, Source AS Col, 1.0/Degree AS Val
FROM Edges, Outdegrees
WHERE Edges.Source = Outdegrees.Id
'''
df_A = pd.read_sql_query (query, conn)
display (df_A.head (10))
# Copy entries from df_A into A_1
A_1 = sparse_matrix () # Initially all zeros, with no rows or columns
for (i, j, a_ij) in zip (df_A['Row'], df_A['Col'], df_A['Val']):
A_1[i-1][j-1] += a_ij # "-1" switches to 0-based indexing
"""
Explanation: Part 2: Implement PageRank
The following exercises will walk you through a possible implementation of PageRank for this dataset.
Exercise. Build a sparse matrix, A_1, that stores $G^TD^{-1}$, where $G^T$ is the transpose of the connectivity matrix $G$, and $D^{-1}$ is the diagonal matrix of inverse out-degrees.
End of explanation
"""
# Select all vertices with no outgoing edges
query = '''
SELECT Id FROM Vertices
WHERE Id NOT IN (SELECT DISTINCT Source FROM Edges)
'''
df_anti_social = pd.read_sql_query (query, conn)
print ("==> Found %d vertices with no outgoing links." \
% len (df_anti_social))
# Add self-edges for empty rows/columns
for i in df_anti_social['Id']:
A_1[i-1][i-1] = 1.0
"""
Explanation: Errata: Bug in matrix construction. Based on questions from students after class, it seems the construction of $A \equiv G^TD^{-1}$ as Prof. Vuduc described it in class has a subtle bug: it does not treat unlinked pages correctly!
To see why, suppose you are the random surfer visiting page $i$, and, with probability $\alpha$, you decide to follow an outgoing link. But what if the page has no outgoing link?
This scenario corresponds to row $i$ of $G$ being entirely zero. So, the random surfer would just "disappear." The easiest fix to the model to account for this case is to assume that the random surfer stays on the same page, which means we should set $a_{ii}$ to 1. The following code snippet handles this case.
End of explanation
"""
def dense_vector (n, init_val=0.0):
"""
Returns a dense vector of length `n`, with all entries set to
`init_val`.
"""
return [init_val] * n
def spmv (n, A, x):
"""Returns a dense vector y of length n, where y = A*x."""
y = dense_vector (n)
for (i, A_i) in A.items ():
s = 0
for (j, a_ij) in A_i.items ():
s += a_ij * x[j]
y[i] = s
return y
"""
Explanation: Exercise. Implement a function to multiply a sparse matrix by a dense vector, assuming a dense vector defined as follows.
End of explanation
"""
n = df.NumDistinctIds[0] # Number of vertices, from Part 1
u = dense_vector (n, 1.0)
y = spmv (n, A_1, u)
print (sum (y))
"""
Explanation: As a quick test, let's verify that multiplying $A_1$ by the vector of all ones, $u$, counts the number of vertices.
Why should that be the case? Two of you asked about this after class.
End of explanation
"""
# Some helper functions, in case you need them
import math
def vec_scale (x, alpha):
"""Scales the vector x by a constant alpha."""
return [x_i*alpha for x_i in x]
def vec_add_scalar (x, c):
"""Adds the scalar value c to every element of x."""
return [x_i+c for x_i in x]
def vec_sub (x, y):
"""Returns x - y"""
return [x_i - y_i for (x_i, y_i) in zip (x, y)]
def vec_2norm (x):
"""Returns ||x||_2"""
return math.sqrt (sum ([x_i**2 for x_i in x]))
# YOUR CODE GOES BELOW. We've provided some scaffolding code,
# so you just need to complete it.
ALPHA = 0.85 # Probability of following some link
MAX_ITERS = 25
n = df.NumDistinctIds[0] # Number of vertices, from Part 1
# Let X[t] store the dense x(t) vector at time t
X = []
x_0 = dense_vector (n, 1.0/n) # Initial distribution: 1/n at each page
X.append (x_0)
for t in range (1, MAX_ITERS):
# Complete this implementation
X.append (...)
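# One possible completion of the loop above (a sketch, not the official solution):
# apply the PageRank update x(t) = ALPHA*(A_1 * x(t-1)) + (1-ALPHA)/n at every step.
X_sketch = [x_0]
for t in range (1, MAX_ITERS):
    x_next = vec_add_scalar (vec_scale (spmv (n, A_1, X_sketch[t-1]), ALPHA),
                             (1.0 - ALPHA) / n)
    X_sketch.append (x_next)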
"""
Explanation: Exercise. Complete the PageRank implementation for this dataset. To keep it simple, you may take $\alpha=0.85$, $x(0)$ equal to the vector of all $1/n$ values, and 25 iterations.
Additionally, you may find the following functions helpful.
The support code in the next code cell differs slightly from the notebook we posted originally. It renames those functions and provides additional functions (e.g., vec_2norm), in case you want to implement a residual-based termination test.
End of explanation
"""
# Write some code here to create a table in the database
# called PageRank. It should have one column to hold the
# page (vertex) ID, and one for the rank value.
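# One possible approach (a sketch, not the official solution; it uses the X_sketch
# list from the sketch above as the final PageRank vector):
c = conn.cursor ()
c.execute ('DROP TABLE IF EXISTS PageRank')
c.execute ('CREATE TABLE PageRank (Id INTEGER, Rank REAL)')
c.executemany ('INSERT INTO PageRank VALUES (?, ?)',
               [(i+1, r) for (i, r) in enumerate (X_sketch[-1])])
conn.commit ()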
# Some helper code to compute a view containing the indegrees.
query = '''
CREATE VIEW IF NOT EXISTS Indegrees AS
SELECT Target AS Id, COUNT(*) AS Degree
FROM Edges
GROUP BY Target
'''
c = conn.cursor ()
c.execute (query)
# Complete this query:
query = '''
...
'''
df_ranks = pd.read_sql_query (query, conn)
display (df_ranks)
sum (df_ranks['Rank'])
"""
Explanation: Exercise. Check your result by first inserting the final computed PageRank vector back into the database, and then using a SQL query to see the ranked URLs. In your query output, also include both the in-degrees and out-degrees of each vertex.
End of explanation
"""
|
jacobdein/alpine-soundscapes
|
utilities/Set weather data datetime.ipynb
|
mit
|
weather_filepath = ""
"""
Explanation: Set weather data datetime
This notebook formats a date and a time column for weather data measurements with a unix timestamp. Each measurement is then inserted into a pumilio database.
Required packages
<a href="https://github.com/pydata/pandas">pandas</a> <br />
<a href="https://github.com/rasbt/pyprind">pyprind</a> <br />
<a href="https://github.com/jacobdein/pymilio">pymilio</a>
Variable declarations
weather_filepath – path to an Excel file containing weather measurements, each with a unix timestamp
End of explanation
"""
import pandas
import pyprind
from datetime import datetime
from Pymilio import database
"""
Explanation: Import statements
End of explanation
"""
weather_data = pandas.read_excel(weather_filepath)
weather_data['WeatherDate'] = weather_data['WeatherDate'].astype('str')
weather_data['WeatherTime'] = weather_data['WeatherTime'].astype('str')
for index, row in weather_data.iterrows():
timestamp = row['timestamp']
dt = datetime.fromtimestamp(timestamp)
date = datetime.strftime(dt, "%Y-%m-%d")
time = datetime.strftime(dt, "%H:%M:%S")
weather_data.set_value(index, 'WeatherDate', date)
weather_data.set_value(index, 'WeatherTime', time)
weather_data = weather_data.drop('timestamp', axis=1)
weather_data = weather_data.drop('LightIntensity', axis=1)
"""
Explanation: Create and format a 'WeatherDate' and 'WeatherTime' column
End of explanation
"""
pumilio_db = database.Pymilio_db_connection(user='pumilio',
database='pumilio',
read_default_file='~/.my.cnf.pumilio')
"""
Explanation: Connect to database
End of explanation
"""
table_name = 'WeatherData'
column_list = [ n for n in weather_data.columns ]
column_names = ", ".join(column_list)
progress_bar = pyprind.ProgBar(len(weather_data), bar_char='█', title='Progress', monitor=True, stream=1, width=50)
for index, row in weather_data.iterrows():
progress_bar.update(item_id=str(index))
value_list = [ str(v) for v in row.as_matrix() ]
value_strings = "'"
value_strings = value_strings + "', '".join(value_list)
value_strings = value_strings + "'"
#value_strings = value_strings.replace('nan', 'NULL')
statement = """INSERT INTO {0} ({1}) VALUES ({2})""".format(table_name, column_names, value_strings)
db = pumilio_db._connect()
c = db.cursor()
c.execute(statement)
c.close()
db.close()
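# A safer variant (a sketch): let the MySQL driver handle quoting and escaping via a
# parameterized query instead of building the VALUES string by hand, e.g.
# placeholders = ", ".join(["%s"] * len(column_list))
# statement = "INSERT INTO {0} ({1}) VALUES ({2})".format(table_name, column_names, placeholders)
# c.execute(statement, tuple(str(v) for v in row.as_matrix()))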
"""
Explanation: Insert weather measurements into a pumilio database
End of explanation
"""
#weather_data.to_csv("~/Desktop/weather_db.csv", index=False, header=False)
"""
Explanation: Optionally export dataframe to a csv file
End of explanation
"""
|
CQuIC/pysme
|
notebooks/mollow-triplets/mollow-triplets-2.ipynb
|
mit
|
from functools import partial
import pdb
import pickle
import numpy as np
from scipy.optimize import minimize
from scipy.fftpack import fft, fftshift, fftfreq
from scipy.integrate import quad
from scipy.special import factorial, sinc
import matplotlib.pyplot as plt
import pysme.integrate as integ
import pysme.hierarchy as hier
import pysme.sparse_system_builder as ssb
import qinfo as qi
from qinfo import supops
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# define Qubit operators
sx = np.array([[0, 1], [1, 0]], dtype=np.complex)
sy = np.array([[0, -1.j], [1.j, 0]], dtype=np.complex)
sz = np.array([[1, 0], [0, -1]], dtype=np.complex)
Id = np.eye(2, dtype=np.complex)
sp = (sx + 1.j * sy) / 2
sm = (sx - 1.j * sy) / 2
zero = np.zeros((2, 2), dtype=np.complex)
plt.style.use('../paper.mplstyle')
"""
Explanation: Mollow triplets 2
End of explanation
"""
def Heff_fn(beta, L):
return 1.j*np.conj(beta)*L - 1.j*beta*L.conj().T
"""
Explanation: A coherent drive adds a Hamiltonian term.
\begin{align}
d\rho&=dt[\beta^*L-\beta L^\dagger,\rho]+dt\mathcal{D}[L]\rho \
&=-idt[i\beta^*L-i\beta L^\dagger,\rho]+dt\mathcal{D}[L]\rho \
H_{\text{eff}}&=i\beta^*L-i\beta L^\dagger
\end{align}
For $L=\sqrt{\gamma}\sigma_-$, the Rabi frequency is $\Omega=2\sqrt{\gamma}|\beta|$.
(Carmichæl notates the Rabi frequency as $2\Omega$, so in his notation $\Omega=\sqrt{\gamma}|\beta|$.)
End of explanation
"""
def rect(x, a, b):
return np.where(x < a, 0, np.where(x < b, 1, 0))
def xi_rect(t, a, b):
return rect(t, a, b)/np.sqrt(b - a)
gamma = 1
beta = 16.j
r = np.log(2)
mu = 0
T = 4
xi_fn = partial(xi_rect, a=0, b=T)
Omega = 2*np.sqrt(gamma)*np.abs(beta)
Omega
rho0 = (Id - sz) / 2
S = Id
L = np.sqrt(gamma)*sm
H = zero
"""
Explanation: Try to reproduce plots from my thesis.
End of explanation
"""
tau_final = 2**8
tau_samples = 2**12
taus = np.linspace(0, tau_final, tau_samples)
"""
Explanation: Parameters indicating how precisely we compute the correlators.
End of explanation
"""
times_ss = np.linspace(0, 32, 2**14)
"""
Explanation: How long we evolve for to get steady-state.
End of explanation
"""
frequencies = fftshift(fftfreq(taus.shape[0], np.diff(taus)[0]))
omegas = 2*np.pi*frequencies
d_omega = np.diff(omegas)[0]
Rabi_offset = 2*np.abs(beta) // d_omega
"""
Explanation: Derived quantities useful for figuring out the scale of the frequency-domain steps.
End of explanation
"""
def calc_white_auto_corr(L, beta, r, mu, times_ss, taus, solve_ivp_kwargs=None):
N = np.sinh(r)**2
M_sq = -np.exp(2.j * mu) * np.sinh(r) * np.cosh(r)
integrator = integ.UncondGaussIntegrator(L, M_sq, N, Heff_fn(beta, L))
soln_ss = integrator.integrate(Id/2, times_ss)
rho_ss = soln_ss.get_density_matrices(np.s_[-1])
sp_ss = np.trace(sp @ rho_ss)
sm_ss = np.trace(sm @ rho_ss)
L_0_taus = integrator.integrate_non_herm(rho_ss @ sp, taus, solve_ivp_kwargs=solve_ivp_kwargs)
Expt_t_taus = L_0_taus.get_expectations(sm, hermitian=False)
return rho_ss, Expt_t_taus - sp_ss * sm_ss
def calc_hier_auto_corr(xi_fn, L, r, mu, beta, m_max, taus, rho_ss, t, t0=0, timesteps=2**10,
solve_ivp_kwargs=None):
sp_ss = np.trace(sp @ rho_ss)
sm_ss = np.trace(sm @ rho_ss)
Id_field = np.eye(m_max + 1, dtype=np.complex)
factory = hier.HierarchyIntegratorFactory(2, m_max)
integrator = factory.make_uncond_integrator(xi_fn, Id, L, Heff_fn(beta, L), r, mu)
times = np.linspace(t0, t, timesteps)
soln_t = integrator.integrate(rho_ss, times)
sp_ss_t = soln_t.get_expectations(sp, vac_rho(m_max), idx_slice=np.s_[-1], hermitian=False)
rho_ss_t = soln_t.get_hierarchy_density_matrices(np.s_[-1])
L_t_t = rho_ss_t @ np.kron(sp, Id_field)
L_t_taus = integrator.integrate_hier_init_cond(L_t_t, taus + t,
solve_ivp_kwargs=solve_ivp_kwargs)
Expt_t_taus = L_t_taus.get_expectations(sm, vac_rho(m_max), hermitian=False)
soln_t_taus = integrator.integrate_hier_init_cond(rho_ss_t, taus + t,
solve_ivp_kwargs=solve_ivp_kwargs)
sm_ss_t_taus = soln_t_taus.get_expectations(sm, vac_rho(m_max), hermitian=False)
# Subtract off a bunch of stuff that gets rid of the delta
return (Expt_t_taus - sp_ss_t * sm_ss - sp_ss * sm_ss_t_taus + sp_ss * sm_ss,
sp_ss, sm_ss, sp_ss_t, sm_ss_t_taus)
def calc_hier_auto_corr_fock(xi_fn, L, r, mu, beta, m_max, taus, rho_ss, t, t0=0, timesteps=2**10,
solve_ivp_kwargs=None):
sp_ss = np.trace(sp @ rho_ss)
sm_ss = np.trace(sm @ rho_ss)
Id_field = np.eye(m_max + 1, dtype=np.complex)
factory = hier.HierarchyIntegratorFactory(2, m_max)
integrator = factory.make_uncond_integrator(xi_fn, Id, L, Heff_fn(beta, L), 0, 0)
times = np.linspace(t0, t, timesteps)
soln_t = integrator.integrate(rho_ss, times)
sp_ss_t = soln_t.get_expectations(sp, sqz_rho(r, mu, m_max), idx_slice=np.s_[-1], hermitian=False)
rho_ss_t = soln_t.get_hierarchy_density_matrices(np.s_[-1])
L_t_t = rho_ss_t @ np.kron(sp, Id_field)
L_t_taus = integrator.integrate_hier_init_cond(L_t_t, taus + t,
solve_ivp_kwargs=solve_ivp_kwargs)
Expt_t_taus = L_t_taus.get_expectations(sm, sqz_rho(r, mu, m_max), hermitian=False)
soln_t_taus = integrator.integrate_hier_init_cond(rho_ss_t, taus + t,
solve_ivp_kwargs=solve_ivp_kwargs)
sm_ss_t_taus = soln_t_taus.get_expectations(sm, sqz_rho(r, mu, m_max), hermitian=False)
# Subtract off a bunch of stuff that gets rid of the delta
return (Expt_t_taus - sp_ss_t * sm_ss - sp_ss * sm_ss_t_taus + sp_ss * sm_ss,
sp_ss, sm_ss, sp_ss_t, sm_ss_t_taus)
def lam_mu(gamma_c, eps):
lam = gamma_c + eps
mu = gamma_c - eps
return lam, mu
def N_degen_PA(omega, omega_A, lam, mu):
return (lam**2 - mu**2)/4*(1/((omega - omega_A)**2 + mu**2)
- 1/((omega - omega_A)**2 + lam**2))
def mod_M_degen_PA(omega, omega_A, lam, mu):
return (lam**2 - mu**2)/4*(1/((omega - omega_A)**2 + mu**2)
+ 1/((omega - omega_A)**2 + lam**2))
def deltas_degen_PA(gamma_c, Omega, lam, mu):
delta_mu = gamma_c*Omega*(lam**2 - mu**2)/(4*mu*(Omega**2 + mu**2))
delta_lam = gamma_c*Omega*(lam**2 - mu**2)/(4*lam*(Omega**2 + mu**2))
return delta_mu, delta_lam
def F_G(delta_mu, delta_lam, Phi):
F_A = -(1j/4)*(delta_mu*(1 + np.cos(Phi))
- delta_lam*(1 - np.cos(Phi)))
G_A = -(1/4)*(delta_mu + delta_lam)*np.sin(Phi)
return F_A, G_A
def get_degen_PA_params(Omega, omega_A, omega_L, gamma_c, eps, phi_L, phi_s):
lam, mu = lam_mu(gamma_c, eps)
N_A = N_degen_PA(omega_A, omega_A, lam, mu)
N_Om = N_degen_PA(omega_A + Omega, omega_A, lam, mu)
mod_M_A = mod_M_degen_PA(omega_A, omega_A, lam, mu)
mod_M_Om = mod_M_degen_PA(omega_A + Omega, omega_A, lam, mu)
M_A = np.exp(2j*phi_s)*mod_M_A
M_Om = np.exp(2j*phi_s)*mod_M_Om
Delta_AL = omega_A - omega_L
Phi = 2*phi_L - phi_s
delta_mu, delta_lam = deltas_degen_PA(gamma_c, Omega, lam, mu)
F_A, G_A = F_G(delta_mu, delta_lam, Phi)
return N_A, N_Om, M_A, M_Om, Delta_AL, F_A, G_A
def calc_quasi_markoff_degen_PA_auto_corr(
gamma, Omega, omega_A, omega_L, gamma_c, eps, phi_L, phi_s, times_ss, taus, solve_ivp_kwargs=None):
N_A, N_Om, M_A, M_Om, Delta_AL, F_A, G_A = get_degen_PA_params(
Omega, omega_A, omega_L, gamma_c, eps, phi_L, phi_s)
integrator = integ.QuasiMarkoff2LvlIntegrator(
gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A)
soln_ss = integrator.integrate(Id/2, times_ss)
rho_ss = soln_ss.get_density_matrices(np.s_[-1])
sp_ss = np.trace(sp @ rho_ss)
sm_ss = np.trace(sm @ rho_ss)
L_0_taus = integrator.integrate_non_herm(rho_ss @ sp, taus, solve_ivp_kwargs=solve_ivp_kwargs)
Expt_t_taus = L_0_taus.get_expectations(sm, hermitian=False)
return rho_ss, Expt_t_taus - sp_ss * sm_ss
def get_fluorescence_spectrum(auto_corr):
fluorescence = fftshift(fft(auto_corr))
return fluorescence
def gen_save_load_data(data_gen_method, fname, overwrite=False):
'''Get the data returned by the generating method, running the method only if the data isn't already available.
If the given filename exists, load and return the data from that file. Otherwise generate the data using the
supplied method and save and return it.
Useful for notebooks you imagine running multiple times, but where some of the data is expensive to generate
and you want to save it to disk to be reloaded for future sessions.
'''
try:
with open(fname, 'xb' if not overwrite else 'wb') as f:
data = data_gen_method()
pickle.dump(data, f)
except FileExistsError:
print('Data already exist.')
with open(fname, 'rb') as f:
data = pickle.load(f)
return data
def gen_Expt_t_taus_wavepacket(
xi_fn, L, r, mu, beta, m_max, taus, rho_ss_coh, t_final, solve_ivp_kwargs=None):
'''Generate autocorrelation function for squeezed hierarchy
Parameters
----------
xi_fn : callable (float -> complex)
Function returning the wavepacket amplitude at the evaluated time
L : np.array
Lindblad operator
r : float
Squeezing strength
mu : float
Squeezing angle
beta : complex
Drive amplitude
m_max : int
Hierarchy truncation level
taus : np.array
Times at which to evaluate the autocorrelation function
rho_ss_coh : np.array
The steady-state density matrix for a coherently driven atom
t_final : float
The fiducial time t to compute the auto correlation fn wrt: <sp(t)sm(t+tau)>
solve_ivp_kwargs : dict
Keyword arguments for the ``solve_ivp`` call made when integrating the hierarchy
'''
return calc_hier_auto_corr(xi_fn, L, r, mu, beta, m_max,
taus, rho_ss_coh, t=t_final,
solve_ivp_kwargs=solve_ivp_kwargs)
def rho_from_ket(ket):
return np.outer(ket, ket.conj())
def vac_rho(n):
ket = np.zeros(n + 1, dtype=np.complex)
ket[0] = 1
return rho_from_ket(ket)
def make_squeezed_state_vec(r, mu, N, normalized=True):
r'''Make a truncated squeezed-state vector.
The squeezed-state vector is :math:`S(r,\mu)|0\rangle`. The truncated
vector is renormalized by default.
Parameters
----------
N: positive integer
The dimension of the truncated Hilbert space, basis {0, ..., N-1}
r: real number
Squeezing amplitude
mu: real number
Squeezing phase
normalized: boolean
Whether or not the truncated vector is renormalized
Returns
-------
numpy.array
Squeezed-state vector in the truncated Hilbert space, represented in the
number basis
'''
ket = np.zeros(N, dtype=np.complex)
for n in range(N//2):
ket[2*n] = (1 / np.sqrt(np.cosh(r))) * ((-0.5 * np.exp(2.j * mu) * np.tanh(r))**n /
factorial(n)) * np.sqrt(factorial(2 * n))
return ket / np.linalg.norm(ket) if normalized else ket
def sqz_rho(r, mu, n):
return rho_from_ket(make_squeezed_state_vec(r, mu, n + 1))
rho_ss_coh, delta_Expt_t_taus_coh = calc_white_auto_corr(L, beta, 0, 0, times_ss, taus,
solve_ivp_kwargs={'rtol': 1e-6, 'atol': 1e-9})
fluor_spec = get_fluorescence_spectrum(delta_Expt_t_taus_coh)
fig, ax = plt.subplots()
ax.plot(omegas, np.abs(fluor_spec))
ax.set_xlabel(r'$(\omega-\Omega)/\Gamma$')
m_max = 12
t_final = 0.5
Expt_t_taus_wavepacket, _, _, _, _ = gen_save_load_data(partial(gen_Expt_t_taus_wavepacket, xi_fn=xi_fn, L=L, r=r,
mu=mu, beta=beta, m_max=m_max, taus=taus,
rho_ss_coh=rho_ss_coh, t_final=t_final,
solve_ivp_kwargs={'rtol': 1e-6, 'atol': 1e-9}),
'2020-06-25/Expt_t_taus_wavepacket.pickle')
Expt_t_taus_wavepacket
fluor_spec_wavepacket = get_fluorescence_spectrum(Expt_t_taus_wavepacket)
fig, ax = plt.subplots()
ax.plot(omegas, np.abs(fluor_spec))
ax.plot(omegas, np.abs(fluor_spec_wavepacket))
ax.set_xlabel(r'$(\omega-\Omega)/\Gamma$')
"""
Explanation: Functions for calculating the necessary correlation functions.
End of explanation
"""
def rect_correlator(omega, omega_p, s, T):
return (s**2*T*np.exp(1.j*(omega_p - omega)*T/2)
*sinc(omega_p*T/(2*np.pi))*sinc(omega*T/(2*np.pi)))
Omega, Omega_p = np.mgrid[-2*np.pi:2*np.pi:129j,-2*np.pi:2*np.pi:129j]
correlations = rect_correlator(Omega, Omega_p, np.sinh(np.log(2)), 4)
fig, ax = plt.subplots()
ax.pcolormesh(Omega, Omega_p, np.abs(correlations), rasterized=True)
ax.set_aspect('equal')
correlations = rect_correlator(Omega, Omega_p, np.sinh(np.log(2)), 1)
fig, ax = plt.subplots()
ax.pcolormesh(Omega, Omega_p, np.abs(correlations), rasterized=True)
ax.set_aspect('equal')
class QuasiMarkoff2LvlIntegrator(integ.UncondLindbladIntegrator):
def __init__(self, gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A):
dim = 2
basis = ssb.SparseBasis(dim)
self.basis = basis
sx = np.array([[0, 1], [1, 0]], dtype=np.complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=np.complex)
sz = np.array([[1, 0], [0, -1]], dtype=np.complex)
sp = (sx + 1j*sy)/2
sm = (sx - 1j*sy)/2
# Eqn 16 and 17 in [YB96]
Sm = sm*np.exp(-1j*phi_L)
Sp = sp*np.exp(1j*phi_L)
Y = 1j*sz@(Sp + Sm)
sz_vec = self.basis.vectorize(sz)
Sm_vec = self.basis.vectorize(Sm)
Sp_vec = self.basis.vectorize(Sp)
Y_vec = self.basis.vectorize(Y)
H_vec = ((Omega/2 + 1j*F_A)*(Sp_vec + Sm_vec)
+ (Delta_AL/2)*sz_vec + G_A*Y_vec)
self.Q = -gamma/4*(
(1 + N_A)*(basis.make_double_comm_matrix(Sm_vec, 1)
- 2*basis.make_diff_op_matrix(Sm_vec))
+ N_A*(basis.make_double_comm_matrix(Sp_vec, 1)
- 2*basis.make_diff_op_matrix(Sp_vec))
+ (1 + N_Om)*(-basis.make_double_comm_matrix(Sm_vec, 1)
- 2*basis.make_diff_op_matrix(Sm_vec))
+ N_Om*(-basis.make_double_comm_matrix(Sp_vec, 1)
- 2*basis.make_diff_op_matrix(Sp_vec))
- 2*basis.make_double_comm_matrix(Sm_vec, M_A*np.exp(-2j*phi_L))
+ 2*basis.make_real_comm_matrix(M_A*np.exp(-2j*phi_L)*Sp_vec,
Sp_vec)
+ 2*basis.make_real_comm_matrix(np.conjugate(M_A)*np.exp(2j*phi_L)
*Sm_vec, Sm_vec)
- 2*basis.make_double_comm_matrix(Sm_vec,
M_Om*np.exp(-2j*phi_L))
- 2*basis.make_real_comm_matrix(M_Om*np.exp(-2j*phi_L)*Sp_vec,
Sp_vec)
- 2*basis.make_real_comm_matrix(np.conjugate(M_Om)*np.exp(2j*phi_L)
*Sm_vec, Sm_vec))
self.Q += basis.make_hamil_comm_matrix(H_vec)
self.Q += -2*basis.make_real_sand_matrix(F_A*(Sp_vec - Sm_vec), sz_vec)
self.Q += -2*basis.make_real_sand_matrix(G_A*(Sp_vec + Sm_vec), sz_vec)
def get_quasi_markoff_ss_slow(gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A, times_ss):
integrator = QuasiMarkoff2LvlIntegrator(
gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A)
soln_ss = integrator.integrate(Id/2, times_ss)
rho_ss = soln_ss.get_density_matrices(np.s_[-1])
return rho_ss
def get_quasi_markoff_ss(gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A):
phi_s = 0.5*np.angle(M_A) # Assumes the angle is independent of frequency
Phi = 2*phi_L - phi_s
gamma_Omega = gamma*(N_Om + np.abs(M_Om)*np.cos(Phi))
gamma_0 = gamma*(N_A - np.abs(M_A)*np.cos(Phi) + 1/2)
A = np.array([[-gamma_Omega, gamma*np.abs(M_Om)*np.sin(Phi), -2*G_A],
[gamma*np.abs(M_A)*np.sin(Phi), -gamma_0, 0.5*Omega + 2.j*F_A],
[0, -2*Omega, -gamma_0 - gamma_Omega]])
bloch_vec_ss = np.linalg.inv(A)@np.array([0, 0, gamma])
sx = np.array([[0, 1], [1, 0]], dtype=np.complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=np.complex)
sz = np.array([[1, 0], [0, -1]], dtype=np.complex)
rho_ss = 0.5*(np.eye(2) + bloch_vec_ss[0]*sx + bloch_vec_ss[1]*sy + bloch_vec_ss[2]*sz)
return rho_ss
def integrator_to_lind_proc_tensor(integrator, t=0):
if isinstance(integrator.basis, ssb.SparseBasis):
op_basis = qi.OperatorBasis(integrator.basis.basis.todense())
else:
op_basis = qi.OperatorBasis(integrator.basis.basis)
lind_proc_mat = integrator.jac(t, rho_vec=None)
lind_process = lambda rho: supops.act_proc_mat(rho, lind_proc_mat, op_basis)
lind_proc_tensor = supops.process_to_proc_tensor(lind_process, op_basis.vec_dim)
return lind_proc_tensor
def get_lind_degen_choi_mat(lind_proc_tensor):
dim = lind_proc_tensor.shape[0]
Id_proc_tensor = supops.get_identity_proc_tensor(dim)
Id_choi_mat = supops.proc_tensor_to_choi_mat(Id_proc_tensor)
Id_choi_mat_degen_proj = (np.eye(Id_choi_mat.shape[0], dtype=Id_choi_mat.dtype)
- qi.rho_from_ket(np.linalg.eigh(Id_choi_mat)[1][:,-1]))
lind_choi_mat = supops.proc_tensor_to_choi_mat(lind_proc_tensor)
lind_degen_choi_mat = Id_choi_mat_degen_proj @ lind_choi_mat @ Id_choi_mat_degen_proj
return lind_degen_choi_mat
gamma = 1
r_A = np.log(2)
r_Om = np.log(2)/2
phi_s = 0
Nth_A = 0
Nth_Om = 0
N_A = (2*Nth_A + 1)*np.sinh(r_A)**2 + Nth_A
N_Om = (2*Nth_Om + 1)*np.sinh(r_Om)**2 + Nth_Om
M_A = -(2*Nth_A + 1)*np.exp(2j*phi_s)*np.sinh(r_A)*np.cosh(r_A)
M_Om = -(2*Nth_Om + 1)*np.exp(2j*phi_s)*np.sinh(r_Om)*np.cosh(r_Om)
F_A = 0.1j
G_A = .05
Delta_AL = 0
Omega = 8
phi_L = 0
integrator = QuasiMarkoff2LvlIntegrator(
gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A)
lind_proc_tensor = integrator_to_lind_proc_tensor(integrator)
degen_choi_mat = get_lind_degen_choi_mat(lind_proc_tensor)
np.round(degen_choi_mat, 4)
np.linalg.eigvalsh(degen_choi_mat)
"""
Explanation: For a rectangular wavepacket
\begin{align}
\xi_t
&=
\begin{cases}
1/\sqrt{T} & 0\leq t\leq T
\
0 & \text{otherwise}
\end{cases}
\end{align}
we have
\begin{align}
\langle b_\omega^\dagger b_{\omega^\prime}\rangle_{\gamma,\xi}
&=
\frac{4s^2}{T}e^{i(\omega^\prime-\omega)T/2}
\frac{\sin(\omega^\prime T/2)\sin(\omega T/2)}{\omega\omega^\prime}
\
&=
s^2Te^{i(\omega^\prime-\omega)T/2}
\frac{\sin(\omega^\prime T/2)}{\omega^\prime T/2}\frac{\sin(\omega T/2)}{\omega T/2}
\
&=
s^2Te^{i(\omega^\prime-\omega)T/2}\operatorname{sinc}(\omega^\prime T/2\pi)
\operatorname{sinc}(\omega T/2\pi)
\
\operatorname{sinc}(x)
&=
\frac{\sin(\pi x)}{\pi x}
\end{align}
End of explanation
"""
get_quasi_markoff_ss_slow(1., 1., 1., 1, 1, 1., 1., 0., 0., 0., np.linspace(0, 2**6, 2**12))
get_quasi_markoff_ss(1., 1., 1., 1, 1, 1., 1., 0., 0., 0.)
"""
Explanation: Looks like we get negativity fairly easily as a result of arbitrary F_A values.
End of explanation
"""
def calc_quasi_markoff_auto_corr(
gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A, times_ss, taus, solve_ivp_kwargs=None):
integrator = QuasiMarkoff2LvlIntegrator(
gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A)
soln_ss = integrator.integrate(Id/2, times_ss)
rho_ss = soln_ss.get_density_matrices(np.s_[-1])
sp_ss = np.trace(sp @ rho_ss)
sm_ss = np.trace(sm @ rho_ss)
L_0_taus = integrator.integrate_non_herm(rho_ss @ sp, taus, solve_ivp_kwargs=solve_ivp_kwargs)
Expt_t_taus = L_0_taus.get_expectations(sm, hermitian=False)
return rho_ss, Expt_t_taus - sp_ss * sm_ss
def get_quasi_fluors(
T, gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A, times_ss, taus, solve_ivp_kwargs=None):
_, Expt_t_taus = calc_quasi_markoff_auto_corr(gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A, times_ss, taus, solve_ivp_kwargs)
fluors = get_fluorescences([Expt_t_taus], T)[1]
return fluors
def get_quasi_cost_fn(target_fluor, T, gamma, beta, omega_A, omega_L, times_ss, taus, solve_ivp_kwargs=None):
Omega = 2*np.abs(beta)
phi_L = np.angle(beta)
def calculate_cost(x):
gamma_c, eps, phi_s = x
quasi_fluor = get_quasi_fluors(T, gamma, Omega, omega_A, omega_L, gamma_c, eps, phi_L, phi_s, times_ss, taus)[0]
return np.sum((quasi_fluor - target_fluor)**2)
return calculate_cost
"""
Explanation: I seem to have not quite got this part right...
End of explanation
"""
class QuasiMarkoff2LvlIntegratorFactory(integ.UncondLindbladIntegrator):
def __init__(self, gamma, N_A, N_Om, M_A, M_Om, Delta_AL, Omega, phi_L, F_A, G_A):
dim = 2
basis = ssb.SparseBasis(dim)
self.basis = basis
sx = np.array([[0, 1], [1, 0]], dtype=np.complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=np.complex)
sz = np.array([[1, 0], [0, -1]], dtype=np.complex)
sp = (sx + 1j*sy)/2
sm = (sx - 1j*sy)/2
# Eqn 16 and 17 in [YB96]
Sm = sm*np.exp(-1j*phi_L)
Sp = sp*np.exp(1j*phi_L)
Y = 1j*sz@(Sp + Sm)
sz_vec = self.basis.vectorize(sz)
Sm_vec = self.basis.vectorize(Sm)
Sp_vec = self.basis.vectorize(Sp)
Y_vec = self.basis.vectorize(Y)
H_vec = ((Omega/2 + 1j*F_A)*(Sp_vec + Sm_vec)
+ (Delta_AL/2)*sz_vec + G_A*Y_vec)
self.N_A_1_op = (basis.make_double_comm_matrix(Sm_vec, 1)
- 2*basis.make_diff_op_matrix(Sm_vec))
self.N_A_op = (basis.make_double_comm_matrix(Sp_vec, 1)
- 2*basis.make_diff_op_matrix(Sp_vec))
self.N_Om_1_op = (-basis.make_double_comm_matrix(Sm_vec, 1)
- 2*basis.make_diff_op_matrix(Sm_vec))
self.N_Om_op = (-basis.make_double_comm_matrix(Sp_vec, 1)
- 2*basis.make_diff_op_matrix(Sp_vec))
self.Q = -gamma/4*(
(1 + N_A)*self.N_A_1_op
+ N_A*self.N_A_op
+ (1 + N_Om)*self.N_Om_1_op
+ N_Om*self.N_Om_op
- 2*basis.make_double_comm_matrix(Sm_vec, M_A*np.exp(-2j*phi_L))
+ 2*basis.make_real_comm_matrix(M_A*np.exp(-2j*phi_L)*Sp_vec,
Sp_vec)
+ 2*basis.make_real_comm_matrix(M_A.conj()*np.exp(2j*phi_L)
*Sm_vec, Sm_vec)
- 2*basis.make_double_comm_matrix(Sm_vec,
M_Om*np.exp(-2j*phi_L))
- 2*basis.make_real_comm_matrix(M_Om*np.exp(-2j*phi_L)*Sp_vec,
Sp_vec)
- 2*basis.make_real_comm_matrix(M_Om.conj()*np.exp(2j*phi_L)
*Sm_vec, Sm_vec))
self.Q += basis.make_hamil_comm_matrix(H_vec)
self.Q += -2*basis.make_real_sand_matrix(F_A*(Sp_vec - Sm_vec), sz_vec)
self.Q += -2*basis.make_real_sand_matrix(G_A*(Sp_vec + Sm_vec), sz_vec)
"""
Explanation: Could try and make things faster by precomputing some stuff that doesn't change when tweaking parameters.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
development/tutorials/distance.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.4,<2.5"
"""
Explanation: Distance
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
print(b.get_parameter(qualifier='distance', context='system'))
print(b.get_parameter(qualifier='t0', context='system'))
"""
Explanation: Relevant Parameters
The 'distance' parameter lives in the 'system' context and is simply the distance between the center of the coordinate system and the observer (at t0)
End of explanation
"""
b.add_dataset('orb', times=np.linspace(0,3,101), dataset='orb01')
b.set_value('distance', 1.0)
b.run_compute(model='dist1')
b.set_value('distance', 2.0)
b.run_compute(model='dist2')
afig, mplfig = b['orb01'].plot(y='ws', show=True, legend=True)
"""
Explanation: Influence on Orbits (Positions)
The distance has absolutely NO effect on the synthetic orbit as the origin of the orbit's coordinate system is such that the barycenter of the system is at 0,0,0 at t0.
To demonstrate this, let's create an 'orb' dataset and compute models at both 1 m and 2 m and then plot the resulting synthetic models.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: Influence on Light Curves (Fluxes)
Fluxes are, however, affected by distance exactly as you'd expect as inverse of distance squared.
To illustrate this, let's add an 'lc' dataset and compute synthetic fluxes at 1 and 2 m.
End of explanation
"""
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value('distance', 1.0)
b.run_compute(model='dist1', overwrite=True)
b.set_value('distance', 2.0)
b.run_compute(model='dist2', overwrite=True)
"""
Explanation: To make things easier to compare, let's disable limb darkening
End of explanation
"""
afig, mplfig = b['lc01'].plot(show=True, legend=True)
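# Quick numerical check (a sketch; assumes the synthetic fluxes are exposed under the
# 'fluxes' qualifier of each model): the flux ratio between the two models should be ~4.
fluxes_d1 = b.get_value(qualifier='fluxes', dataset='lc01', model='dist1', context='model')
fluxes_d2 = b.get_value(qualifier='fluxes', dataset='lc01', model='dist2', context='model')
print(np.median(fluxes_d1/fluxes_d2))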
"""
Explanation: Since we doubled the distance from 1 to 2 m, we expect the entire light curve at 2 m to be divided by 4 (note the y-scales on the plots below).
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])
b.set_value('distance', 1.0)
b.run_compute(model='dist1', overwrite=True)
b.set_value('distance', 2.0)
b.run_compute(model='dist2', overwrite=True)
print("dist1 abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='dist1')))
print("dist2 abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='dist2')))
print("dist1 intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='dist1')))
print("dist2 intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='dist2')))
"""
Explanation: Note that 'pblum' is defined such that a (spherical, non-eclipsed, non-limb darkened) star with a pblum of 4pi will contribute a flux of 1.0 at 1.0 m (the default distance).
For more information, see the pblum tutorial
Influence on Meshes (Intensities)
Distance does not affect the intensities stored in the mesh (including those in relative units). In other words, like third light, distance only scales the fluxes.
NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details.
To see this we can run both of our distances again and look at the values of the intensities in the mesh.
End of explanation
"""
|
GoogleCloudPlatform/cloudml-samples
|
notebooks/scikit-learn/TrainingWithScikitLearnInCMLE.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 Google LLC
End of explanation
"""
%env PROJECT_ID <PROJECT_ID>
%env BUCKET_NAME <BUCKET_NAME>
%env REGION us-central1
%env TRAINER_PACKAGE_PATH ./census_training
%env MAIN_TRAINER_MODULE census_training.train
%env JOB_DIR gs://<BUCKET_NAME>/scikit_learn_job_dir
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
! mkdir census_training
"""
Explanation: scikit-learn Training on AI Platform
This notebook uses the Census Income Data Set to demonstrate how to train a model on AI Platform.
How to bring your model to AI Platform
Getting your model ready for training can be done in 3 steps:
1. Create your python model file
1. Add code to download your data from Google Cloud Storage so that AI Platform can use it
1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model
1. Prepare a package
1. Submit the training job
Prerequisites
Before you jump in, let’s cover some of the different tools you’ll be using to get your training job up and running on AI Platform.
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.
Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
Part 0: Setup
Create a project on GCP
Create a Google Cloud Storage Bucket
Enable AI Platform Training and Prediction and Compute Engine APIs
Install Cloud SDK
Install scikit-learn [Optional: used if running locally]
Install pandas [Optional: used if running locally]
These variables will be needed for the following steps.
* TRAINER_PACKAGE_PATH <./census_training> - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.
* MAIN_TRAINER_MODULE <census_training.train> - Tells AI Platform which file to execute. This is formatted as follows <folder_name.python_file_name>
* JOB_DIR <gs://$BUCKET_NAME/scikit_learn_job_dir> - The path to a Google Cloud Storage location to use for job output.
* RUNTIME_VERSION <1.9> - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
* PYTHON_VERSION <3.5> - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
Replace:
* PROJECT_ID <YOUR_PROJECT_ID> - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.
* BUCKET_NAME <YOUR_BUCKET_NAME> - with the bucket id you created above.
* JOB_DIR <gs://YOUR_BUCKET_NAME/scikit_learn_job_dir> - with the bucket id you created above.
* REGION <REGION> - select a region from here or use the default 'us-central1'. The region is where the model will be deployed.
End of explanation
"""
%%writefile ./census_training/train.py
# [START setup]
import datetime
import pandas as pd
from google.cloud import storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
# TODO: REPLACE '<BUCKET_NAME>' with your GCS BUCKET_NAME
BUCKET_NAME = '<BUCKET_NAME>'
# [END setup]
# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
# AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
blob = bucket.blob('ml-engine/sklearn/census_data/adult.data')
# Download the data
blob.download_to_filename('adult.data')
# [END download-data]
# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------
# [START define-and-load-data]
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open('./adult.data', 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a lists of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a lists of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# [END define-and-load-data]
# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
if col in CATEGORICAL_COLUMNS:
# Create a scores array to get the individual categorical column.
# Example:
# data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
# 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
# scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#
# Returns: [['State-gov']]
# Build the scores array
scores = [0] * len(COLUMNS[:-1])
# This column is the categorical column we want to extract.
scores[i] = 1
skb = SelectKBest(k=1)
skb.scores_ = scores
# Convert the categorical column to a numerical value
lbn = LabelBinarizer()
r = skb.transform(train_features)
lbn.fit(r)
# Create the pipeline to extract the categorical feature
categorical_pipelines.append(
('categorical-{}'.format(i), Pipeline([
('SKB-{}'.format(i), skb),
('LBN-{}'.format(i), lbn)])))
# [END categorical-feature-conversion]
# [START create-pipeline]
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([
('union', preprocess),
('classifier', classifier)
])
# [END create-pipeline]
# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.joblib'
joblib.dump(pipeline, model)
# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_NAME)
blob = bucket.blob('{}/{}'.format(
datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
model))
blob.upload_from_filename(model)
# [END export-to-gcs]
"""
Explanation: The data
The Census Income Data Set that this sample
uses for training is provided by the UC Irvine Machine Learning
Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/sklearn/census_data/.
Training file is adult.data
Evaluation file is adult.test (not used in this notebook)
Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.
Disclaimer
This dataset is provided by a third party. Google provides no representation,
warranty, or other guarantees about the validity or any other aspects of this dataset.
Part 1: Create your python model file
First, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a scikit-learn model. However, there are two key differences:
1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.
1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.
The code in this file loads the data into a pandas DataFrame that can be used by scikit-learn. Then the model is fit against the training data. Lastly, sklearn's built in version of joblib is used to save the model to a file that can be uploaded to AI Platform's prediction service.
REPLACE Line 18: BUCKET_NAME = '<BUCKET_NAME>' with your GCS BUCKET_NAME
Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.
End of explanation
"""
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.
"""
Explanation: Part 2: Create Trainer Package
Before you can run your trainer application with AI Platform, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. You can find more info here
End of explanation
"""
! gcloud config set project $PROJECT_ID
"""
Explanation: Part 3: Submit Training Job
Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags:
job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +"%Y%m%d_%H%M%S")
job-dir - The path to a Google Cloud Storage location to use for job output.
package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.
module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.
region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.
runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
scale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.
Note: Check to make sure gcloud is set to the current PROJECT_ID
End of explanation
"""
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier BASIC
"""
Explanation: Submit the training job.
End of explanation
"""
! gsutil ls gs://$BUCKET_NAME/census_*
"""
Explanation: [Optional] StackDriver Logging
You can view the logs for your training job:
1. Go to https://console.cloud.google.com/
1. Select "Logging" in left-hand pane
1. Select "Cloud ML Job" resource from the drop-down
1. In filter by prefix, use the value of $JOB_NAME to view the logs
[Optional] Verify Model File in GCS
View the contents of the destination model folder to verify that model file has indeed been uploaded to GCS.
Note: The model can take a few minutes to train and show up in GCS.
End of explanation
"""
|
gaufung/ISL
|
training-materials/Stasmodels-training/OLS.ipynb
|
mit
|
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
%matplotlib inline
"""
Explanation: Ordinary Least Squares
End of explanation
"""
# artificial data
nsample = 100
x = np.linspace(0, 10, nsample)
X = np.column_stack((x,x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
# add a column of 1s as the intercept
X = sm.add_constant(X)
y = np.dot(X, beta) + e
# fit the estimation
model = sm.OLS(y, X).fit()
print(model.summary())
print('Parameters: ', model.params)
print('R2: ', model.rsquared)
print('R2 adjusted: ', model.rsquared_adj)
print('BIC:', model.bic)
"""
Explanation: OLS estimation
$y = \beta_0 +\sum_{i=1}^{j}\beta_ix_i$
End of explanation
"""
nsample = 50
sig = 0.5
x = np.linspace(0, 20 ,nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5 , 0.5, -0.02, 5.0]
y = np.dot(X, beta) + np.random.normal(size=nsample)
model = sm.OLS(y, X).fit()
print(model.summary())
print('parameters:', model.params)
print('standard errors', model.bse)
print('predicted values', model.predict())
# draw a plot to compare the true relationship to OLS predictions
prstd, iv_l, iv_u = wls_prediction_std(model)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x,y,'o')
ax.plot(x, model.fittedvalues, 'r--.')
ax.plot(x, iv_u, 'b--')
ax.plot(x, iv_l, 'g--')
plt.show()
"""
Explanation: OLS non-linear curve but linear in parameters
simulate artificial data
End of explanation
"""
nsample = 50
groups = np.zeros(nsample, np.int)
groups[20:40]=1
groups[40:]=2
dummy = sm.categorical(groups, drop=True)
dummy
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, dummy[:,1:]))
X = sm.add_constant(X, prepend=False)
beta = [1., 3, -3, 10]
y_true = np.dot(X,beta)
e = np.random.normal(size=nsample)
y = y_true + e
model = sm.OLS(y, X).fit()
print(model.summary())
prstd, iv_l, iv_u = wls_prediction_std(model)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, model.fittedvalues, 'r--.', label="Predicted")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
legend = ax.legend(loc="best")
"""
Explanation: OLS with dummy variables
End of explanation
"""
R = [[0,1,0,0],[0,0,1,0]]
R = np.array(R)
print(model.f_test(R))
"""
Explanation: Hypothesis Test
F test
Hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constants in the 3 groups.
End of explanation
"""
print(model.f_test('x2=x3=0'))
print(model.f_test('x1=0'))
"""
Explanation: You can also use formula-like syntax to test hypotheses
End of explanation
"""
beta = [0.001, 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)
model = sm.OLS(y, X).fit()
print(model.f_test(R))
print(model.f_test('x2=x3=0'))
print(model.f_test('x1=0'))
"""
Explanation: Small group effects
End of explanation
"""
from statsmodels.datasets.longley import load_pandas
y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
model = sm.OLS(y,X).fit()
print(model.summary())
norm_x = X.values
for i, name in enumerate(X):
if name == 'const':
continue
norm_x[:,i] = X[name]/np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T, norm_x)
"""
Explanation: Multicollinearity
The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
End of explanation
"""
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
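# For comparison, statsmodels also reports a condition number (computed on the raw,
# un-normalized design matrix) as "Cond. No." in the summary above; the two values
# differ because of the normalization applied here.
print(model.condition_number)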
"""
Explanation: Then we take the square root of the ratio of the biggest to the smallest eigenvalue; this is the condition number of the normalized design matrix.
End of explanation
"""
|
iurilarosa/thesis
|
codici/Archiviati/numpy/.ipynb_checkpoints/Hough Numpy-checkpoint.ipynb
|
gpl-3.0
|
import scipy.io
import pandas
import numpy
import os
from matplotlib import pyplot
from scipy import sparse
import multiprocessing
%matplotlib inline
# load the data file
percorsoFile = "/home/protoss/Documenti/TESI/DATI/peakmap1.mat.mat"
#print(picchi.shape)
#picchi[0]
#nb: peaks columns are 0-times
# 1-frequencies
# 4-weights
# now populate the dataframe
tabella = pandas.DataFrame(scipy.io.loadmat(percorsoFile)['PEAKS'])
tabella.drop(tabella.columns[[2, 3]], axis = 1, inplace=True)
tabella.columns = ["tempi", "frequenze","pesi"]
# safety margin
securbelt = 4000
headerFreq= scipy.io.loadmat(percorsoFile)['hm_job'][0,0]['fr'][0]
headerSpindown = scipy.io.loadmat(percorsoFile)['hm_job'][0,0]['sd'][0]
epoca = scipy.io.loadmat(percorsoFile)['basic_info'][0,0]['epoch'][0,0]
#nb: headerFreq holds 0- minimum frequency,
# 1- frequency step,
# 2- enhancement in frequency resolution,
# 3- maximum frequency,
# headerSpindown holds 0- initial pulsar spindown
# 1- spindown step
# 2- number of spindown steps
# Define the corresponding variables for convenience and code clarity
# frequencies
minFreq = headerFreq[0]
maxFreq = headerFreq[3]
enhancement = headerFreq[2]
stepFrequenza = headerFreq[1]
stepFreqRaffinato = stepFrequenza/enhancement
print(minFreq,maxFreq, enhancement, stepFrequenza, stepFreqRaffinato)
freqIniz = minFreq- stepFrequenza/2 - stepFreqRaffinato
freqFin = maxFreq + stepFrequenza/2 + stepFreqRaffinato
nstepFrequenze = numpy.ceil((freqFin-freqIniz)/stepFreqRaffinato)+securbelt
#spindown
spindownIniz = headerSpindown[0]
stepSpindown = headerSpindown[1]
nstepSpindown = headerSpindown[2].astype(int)
# rearrange the arrays so that the data
# are in the format I want
frequenze = tabella['frequenze'].values
frequenze = ((frequenze-freqIniz)/stepFreqRaffinato)-round(enhancement/2+0.001)
tempi = tabella['tempi'].values
print(numpy.amax(tempi)-numpy.amin(tempi))
tempi = tempi-epoca
tempi = ((tempi)*3600*24/stepFreqRaffinato)
#tempi = tempi - numpy.amin(tempi)+1
#tempi = tempi.astype(int)
pesi = tabella['pesi'].values
#%reset_selective tabella
#nstepSpindown = 200
spindowns = numpy.arange(0, nstepSpindown)
spindowns = numpy.multiply(spindowns,stepSpindown)
spindowns = numpy.add(spindowns, spindownIniz)
# this gives the three arrays for the three quantities
nRows = nstepSpindown
nColumns = nstepFrequenze.astype(int)
fakeRow = numpy.zeros(frequenze.size)
def itermatrix(stepIesimo):
sdPerTempo = spindowns[stepIesimo]*tempi
appoggio = numpy.round(frequenze-sdPerTempo+securbelt/2).astype(int)
valori = numpy.bincount(appoggio,pesi)
missColumns = (nColumns-valori.size)
zeros = numpy.zeros(missColumns)
matrix = numpy.concatenate((valori, zeros))
return matrix
pool = multiprocessing.Pool()
%time imageMapped = list(pool.map(itermatrix, range(nstepSpindown)))
pool.close()
imageMapped = numpy.array(imageMapped)
imageMappedNonsum = imageMapped
semiLarghezza = numpy.round(enhancement/2+0.001).astype(int)
imageMapped[:,semiLarghezza*2:nColumns]=imageMapped[:,semiLarghezza*2:nColumns]-imageMapped[:,0:nColumns - semiLarghezza*2]
imageMapped = numpy.cumsum(imageMapped, axis = 1)
"""
Explanation: Numpy
End of explanation
"""
%matplotlib inline
pyplot.figure(figsize=(30,7))
a = pyplot.imshow(imageMapped[:,3400:nColumns-1500], aspect = 50)
pyplot.colorbar(shrink = 1 ,aspect = 10)
"""
Explanation: $$ H_{i\:bin} = \left[\nu_{bin}-\left(i\Delta \dot{T} + \dot{T}_0 \right)t_{bin} + 2000\right],\; i = 0,...,n;\; bin= 0,..., nbins$$
$$ H_{i\:bin} = \nu_{bin}-\dot{T}'_i t_{bin} + 2000,\; i = 0,...,n;\; bin= 0,..., nbins$$
End of explanation
"""
percorsoFile = "originale/concumsum.mat"
percorsoFile2 = "originale/senzacumsum.mat"
immagineOriginale = scipy.io.loadmat(percorsoFile)['binh_df0']
immagineOriginaleNonsum = scipy.io.loadmat(percorsoFile2)['binh_df0']
#percorsoFile = "debugExamples/concumsumDB.mat"
#imgOrigDB = scipy.io.loadmat(percorsoFile)['binh_df0']
pyplot.figure(figsize=(30,7))
pyplot.imshow(immagineOriginale[:,3200:nstepFrequenze.astype(int)-1500],
#cmap='gray',
aspect=50)
pyplot.colorbar(shrink = 1,aspect = 10)
#pyplot.colorbar(immagine)
pyplot.show()
miaVSoriginale = immagineOriginale - imageMapped
#miaVSoriginale = immagineOriginale - imageParalled
#matlabVSoriginale = immagineOriginale - imgOrigDB
#pyplot.figure(figsize=(100, 30))
#verificadoppia = miaVSoriginale - matlabVSoriginale
pyplot.imshow(miaVSoriginale[:,3200:nstepFrequenze.astype(int)-1500],aspect=50)
pyplot.colorbar(shrink = 1,aspect = 10)
print(numpy.nonzero(miaVSoriginale))
miaVSoriginaleNonsum = immagineOriginaleNonsum - imageMapped
#miaVSoriginale = immagineOriginale - imageParalled
#matlabVSoriginale = immagineOriginale - imgOrigDB
#pyplot.figure(figsize=(100, 30))
#verificadoppia = miaVSoriginale - matlabVSoriginale
pyplot.imshow(miaVSoriginaleNonsum[:,3200:nstepFrequenze.astype(int)-1500],aspect=50)
pyplot.colorbar(shrink = 1,aspect = 10)
print(numpy.nonzero(miaVSoriginaleNonsum))
"""
Explanation: Useful notebook for imshow
Comparisons
Hough from the original Matlab program
End of explanation
"""
percorsoFile = "matlabbo/miaimgconcumsum.mat"
#percorsoFile = "matlabbo/miaimgnoncumsum.mat"
immagineMatlabbo = scipy.io.loadmat(percorsoFile)['hough']
print(numpy.shape(immagineMatlabbo))
pyplot.figure(figsize=(30, 7))
pyplot.imshow(immagineMatlabbo[:,3200:nstepFrequenze.astype(int)-1500],
#cmap='gray',
aspect=50)
pyplot.colorbar(shrink = 1,aspect = 10)
#pyplot.colorbar(immagine)
pyplot.show()
# COMPARISON
miaMatvsorigMat = immagineMatlabbo - immagineOriginale
#miaMatvsorigMat = immagineMatlabbo - immagineOriginaleNonsum
pyplot.figure(figsize=(30, 7))
pyplot.imshow(miaMatvsorigMat[:,3200:nstepFrequenze.astype(int)-1500],aspect=50)
pyplot.colorbar(shrink = 1,aspect = 10)
#print(numpy.nonzero(verifica))
"""
Explanation: Hough from my Matlab program
End of explanation
"""
import numpy
from scipy import sparse
import multiprocessing
from matplotlib import pyplot
#first I build a matrix of some x positions vs time data in a sparse format
matrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10)
x = numpy.nonzero(matrix)[0]
times = numpy.nonzero(matrix)[1]
weights = numpy.random.rand(x.size)
#then I define an array of y positions
nStepsY = 5
y = numpy.arange(1,nStepsY+1)
nRows = nStepsY
nColumns = 80
image = numpy.zeros((nRows, nColumns))
fakeRow = numpy.zeros(x.size)
def itermatrix(ithStep):
yTimed = y[ithStep]*times
positions = (numpy.round(x-yTimed)+50).astype(int)
matrix = sparse.coo_matrix((weights, (fakeRow, positions))).todense()
matrix = numpy.ravel(matrix)
missColumns = (nColumns-matrix.size)
zeros = numpy.zeros(missColumns)
matrix = numpy.concatenate((matrix, zeros))
return matrix
for i in numpy.arange(nStepsY):
image[i] = itermatrix(i)
#or, without initialization of image:
%time imageSparsed = list(map(itermatrix, range(nStepsY)))
imageSparsed = numpy.array(imageSparsed)
pyplot.imshow(imageSparsed, aspect = 10)
pyplot.colorbar(shrink = 0.75,aspect = 10)
#MAP PARALLELIZATION TEST
%time imageSparsed = list(map(itermatrix, range(nStepsY)))
pool = multiprocessing.Pool()
%time imageParSparsed = pool.map(itermatrix, range(nStepsY))
pool.close()
imageParalled = numpy.array(imageParSparsed)
#PROBLEM WITH PARALLELIZATION STILL TO BE UNDERSTOOD!
%matplotlib inline
#pyplot.figure(figsize=(100, 30))
a = pyplot.imshow(imageParSparsed, aspect = 10)
pyplot.colorbar(shrink = 0.5,aspect = 10)
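# A hedged sketch, not part of the original test: a pattern that often avoids
# notebook multiprocessing problems is to guard the Pool behind __main__ and
# close/join it explicitly. itermatrix and nStepsY are assumed to be defined as
# above; on platforms that spawn workers the function may still need to live in
# an importable module rather than in the notebook itself.
if __name__ == '__main__':
    guardedPool = multiprocessing.Pool()
    try:
        guardedRows = guardedPool.map(itermatrix, range(nStepsY))
    finally:
        guardedPool.close()
        guardedPool.join()
    imageGuarded = numpy.array(guardedRows)
    print(imageGuarded.shape)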
# rearrange the arrays so that I have the data
# in the format I want
#nstepSpindown = 200
spindowns = numpy.arange(0, nstepSpindown)
spindowns = numpy.multiply(spindowns,stepSpindown)
spindowns = numpy.add(spindowns, spindownIniz)
# this gives the three arrays of the three quantities
print(spindowns)
#nstepSpindown = 200
spindowns = numpy.arange(0, nstepSpindown)
spindowns = numpy.multiply(spindowns,stepSpindown)
spindowns = numpy.add(spindowns, spindownIniz)
# this gives the three arrays of the three quantities
nRows = nstepSpindown
nColumns = nstepFrequenze.astype(int)
fakeRow = numpy.zeros(frequenze.size)
def itermatrix(stepIesimo):
sdPerTempo = spindowns[stepIesimo]*tempi
appoggio = numpy.round(frequenze-sdPerTempo+securbelt/2).astype(int)
matrix = sparse.coo_matrix((pesi, (fakeRow, appoggio))).todense()
matrix = numpy.ravel(matrix)
missColumns = (nColumns-matrix.size)
zeros = numpy.zeros(missColumns)
matrix = numpy.concatenate((matrix, zeros))
return matrix
#PROBLEM WITH PARALLELIZATION STILL TO BE UNDERSTOOD!
%time imageMapped = list(map(itermatrix, range(nstepSpindown)))
imageMapped = numpy.array(imageMapped)
imageMappedNonsum = imageMapped
semiLarghezza = numpy.round(enhancement/2+0.001).astype(int)
imageMapped[:,semiLarghezza*2:nColumns]=imageMapped[:,semiLarghezza*2:nColumns]-imageMapped[:,0:nColumns - semiLarghezza*2]
imageMapped = numpy.cumsum(imageMapped, axis = 1)
"""
Explanation: Simplified program for questions
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/ec-earth3-gris/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-gris', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-GRIS
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/td1a_home/2020_covid.ipynb
|
mit
|
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: Algo - COVID simulation
Or how to use mathematics to understand how the epidemic spreads.
End of explanation
"""
from pandas import read_csv, to_datetime
url = "https://www.data.gouv.fr/en/datasets/r/d3a98a30-893f-47f7-96c5-2f4bcaaa0d71"
covid = read_csv(url, sep=",")
covid['date'] = to_datetime(covid['date'])
covid.tail()
ax = covid.set_index("date").plot(
title="Evolution des hospitalisations par jour", figsize=(14, 4))
ax.set_yscale("log");
"""
Explanation: Problem statement
We retrieve the COVID data by region and by age, and draw a first plot.
The data are available at this address: Données relatives à l’épidémie de COVID-19 en France : vue d’ensemble.
End of explanation
"""
from pandas import concat, to_datetime
def extract_data(kind='deaths', country='France'):
url = (
"https://raw.githubusercontent.com/CSSEGISandData/COVID-19/"
"master/csse_covid_19_data/"
"csse_covid_19_time_series/time_series_covid19_%s_global.csv" %
kind)
df = read_csv(url)
eur = df[df['Country/Region'].isin([country])
& df['Province/State'].isna()]
tf = eur.T.iloc[4:]
tf.columns = [kind]
return tf
def extract_whole_data(kind=['deaths', 'confirmed', 'recovered'],
country='France'):
population = {
'France': 67e6,
}
total = population[country]
dfs = []
for k in kind:
df = extract_data(k, country)
dfs.append(df)
conc = concat(dfs, axis=1)
conc['infected'] = conc['confirmed'] - (conc['deaths'] + conc['recovered'])
conc['safe'] = total - conc.drop('confirmed', axis=1).sum(axis=1)
conc.index = to_datetime(conc.index)
return conc
covid = extract_whole_data()
covid.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
covid[['confirmed', 'infected']].plot(title="Evolution de l'épidémie par jour", ax=ax[0])
covid[['deaths', 'recovered']].plot(title="Evolution de l'épidémie par jour", ax=ax[1]);
"""
Explanation: There are a few missing values even in the aggregated series... Since I do not have the patience to fix the values one by one, I take another file instead, which still contains a few anomalies such as a death count that decreases, which is strictly impossible.
End of explanation
"""
from tqdm import tqdm  # to get a progress bar
def correct_series(X):
for t in range(1, X.shape[0]):
if X[t-1] > 0 and X[t] == 0:
X[t] = X[t-1]
continue
if X[t] >= X[t-1] and X[t] < X[t-1] + 200000:
continue
ratio = X[t] / X[t-1]
for i in range(0, t):
X[i] *= ratio
covid_modified = covid.copy()
for c in tqdm(covid.columns):
values = covid_modified[c].values
correct_series(values)
covid_modified[c] = values
covid_modified.tail()
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
covid_modified[['confirmed', 'infected']].plot(title="Evolution de l'épidémie par jour", ax=ax[0])
covid_modified[['deaths', 'recovered']].plot(title="Evolution de l'épidémie par jour", ax=ax[1]);
"""
Explanation: Same anomaly, a death count that decreases... We would need to understand why in order to know how to repair the data. Or we improvise. For every observation $X_t < X_{t-1}$, we compute the ratio $\frac{X_{t}}{X_{t-1}}$ and multiply all observations $i<t$ by this ratio.
End of explanation
"""
covid = covid_modified
"""
Explanation: That is better.
End of explanation
"""
lisse = covid.rolling(7).mean()
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
lisse[['confirmed', 'infected']].plot(title="Evolution de l'épidémie par jour", ax=ax[0])
lisse[['deaths', 'recovered']].plot(title="Evolution de l'épidémie par jour", ax=ax[1]);
"""
Explanation: We smooth the series.
End of explanation
"""
from datetime import datetime, timedelta
def plot_simulation(sim, day0=datetime(2020, 1, 1), safe=True,
ax=None, title=None, logy=False, two=False,
true_data=None):
"""
    Assumes that sim is a (days, 4) matrix.
    :param sim: the simulation
    :param day0: first day of the simulation (one observation per day)
    :param safe: also plot the *safe* (non-infected) people; since they are numerous,
        it is better to also set *logy=True* so that the plot stays readable
    :param ax: existing axes (useful to overlay plots), None to create a new one
    :param title: title of the plot
    :param logy: logarithmic scale on the y axis
    :param two: draw two plots instead of a single one for better readability
    :param true_data: true data to plot in addition to the simulated data
    :return: ax
"""
df = DataFrame(sim, columns=['S', 'I', 'R', 'D'])
    # Add dates.
df["date"] = [day0 + timedelta(d) for d in range(0, df.shape[0])]
df = df.set_index("date")
if true_data is None:
tdf = None
else:
tdf = DataFrame(true_data, columns=['Sobs', 'Iobs', 'Robs', 'Dobs'])
tdf["date"] = [day0 + timedelta(d) for d in range(0, tdf.shape[0])]
tdf = tdf.set_index("date")
if two:
if ax is None:
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
if safe:
if tdf is not None:
tdf.drop(['Dobs'], axis=1).plot(ax=ax[0], logy=logy, linewidth=8)
df.drop('D', axis=1).plot(ax=ax[0], title=title, logy=logy, linewidth=4)
else:
if tdf is not None:
tdf.drop(['Sobs', 'Dobs'], axis=1).plot(ax=ax[0], logy=logy, linewidth=8)
df.drop(['S', 'D'], axis=1).plot(ax=ax[0], title=title, logy=logy, linewidth=4)
if tdf is not None:
tdf['Dobs'].plot(ax=ax[1], title=title, logy=logy, linewidth=8)
df[['D']].plot(ax=ax[1], title='Décès', logy=logy, linewidth=4)
ax[0].legend()
ax[1].legend()
else:
if ax is None:
fig, ax = plt.subplots(1, 1, figsize=(14, 4))
if safe:
if tdf is not None:
tdf.plot(ax=ax, title=title, logy=logy, linewidth=8)
df.plot(ax=ax, title=title, logy=logy, linewidth=4)
else:
if tdf is not None:
tdf.drop(['Sobs'], axis=1).plot(ax=ax, title=title, logy=logy, linewidth=8)
df.drop(['S'], axis=1).plot(ax=ax, title=title, logy=logy, linewidth=4)
ax.legend()
return ax
"""
Explanation: The recovered series most likely only counts people who went through hospital. We would need to cross-check with other data to be sure. That will be for another day.
SIRD model
To learn more, see Modèles compartimentaux en épidémiologie (compartmental models in epidemiology). The population is split into four categories:
S: people who have not been infected
I: number of people who are sick or contagious
R: recovered people
D: deceased people
People move from one category to another as the epidemic evolves, according to the following equations:
$\frac{dS}{dt} = - \beta \frac{S I}{N}$
$\frac{dI}{dt} = \frac{\beta S I}{N} - \mu I - \nu I$
$\frac{dD}{dt} = \nu I$
$\frac{dR}{dt} = \mu I$
$\beta$ is related to the transmission rate, $\frac{1}{\mu}$ is the average time until recovery, $\frac{1}{\nu}$ is the average time until death.
Q0: a small plotting function
This function will be used to display the results graphically.
End of explanation
"""
import numpy
beta = 0.5
mu = 1./14
nu = 1./21
S0 = 9990
I0 = 10
R0 = 0
D0 = 0
"""
Explanation: Q1: write a function that computes the propagation
We assume that $\beta, \mu, \nu, S_0, I_0, R_0, D_0$ are known. As a reminder, the model is:
$dS = - \beta \frac{S I}{N}$
$dI = \frac{\beta S I}{N} - \mu I - \nu I$
$dD = \nu I$
$dR = \mu I$
End of explanation
"""
from pandas import DataFrame
def simulation(beta, mu, nu, S0, I0, R0, D0, days=14):
res = numpy.empty((days+1, 4), dtype=numpy.float64)
res[0, :] = [S0, I0, R0, D0]
N = sum(res[0, :])
for t in range(1, res.shape[0]):
dR = res[t-1, 1] * mu
# ....
return res
sim = simulation(beta, mu, nu, S0, I0, R0, D0, 30)
plot_simulation(sim);
"""
Explanation: The following small program has to be completed:
End of explanation
"""
lisse_mars = lisse[30:]
dates = lisse_mars.index
france = numpy.zeros((lisse_mars.shape[0], 4), dtype=numpy.float64)
france[:, 3] = lisse_mars['deaths']
france[:, 2] = lisse_mars['recovered']
france[:, 0] = lisse_mars['safe']
france[:, 1] = lisse_mars['infected']
france[:5]
plot_simulation(france, dates[0], safe=False, logy=True, title="Vraies données");
"""
Explanation: Q2: we want to estimate the model parameters - which error function?
It is tricky because... the parameters change over time, depending on people's behaviour, mask or no mask, lockdown, second lockdown, temperature, and also the lack of tests... First, the real data.
End of explanation
"""
plot_simulation(france[-60:], dates[-60], two=True, safe=False, title="Vraies données, derniers mois");
"""
Explanation: And over the last few days.
End of explanation
"""
def simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, days=14):
# ...
pass
"""
Explanation: In short, we assume that the model is reasonably reliable over a short period of time; we draw many random parameter sets and see which ones work best. To compare two parameter sets, we therefore need an error function, which we take to be the sum of the prediction errors.
We now have to be careful about what we compare. The simulation computes the population categories at time t, but not always the cumulative series. The series of infected people is transient in the simulation and cumulative in the downloaded data. The first step is therefore to transform the simulated data so that it can be compared with the collected data.
End of explanation
"""
def error(data, simulation):
    # ... to be completed
return 0
"""
Explanation: Now the error function:
End of explanation
"""
from tqdm import tqdm  # to get a progress bar
def optimisation(true_data, i_range=(0, 0.2), beta_range=(0, 0.5),
mu_range=(0., 0.2), nu_range=(0., 0.2),
max_iter=1000, error_fct=error):
N = sum(true_data[0, :])
rnd = numpy.random.rand(max_iter, 4)
for i, (a, b) in enumerate([i_range, beta_range, mu_range, nu_range]):
        rnd[:, i] = rnd[:, i]  # to be completed ...
err_min = None
for it in tqdm(range(max_iter)):
i, beta, mu, nu = rnd[it, :]
D0 = true_data[0, 3]
# dI0 =
# S0 =
# I0 =
# R0 =
sim = simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, days=true_data.shape[0] - 1)
err = error_fct(true_data, sim)
if err_min is None or err < err_min:
            # to be completed
pass
return best
"""
Explanation: Q3: optimisation
To optimise, we draw parameters at random within a given interval and keep the ones that minimise the error.
End of explanation
"""
from datetime import datetime, timedelta
from pandas import DataFrame
def simulation(beta, mu, nu, S0, I0, R0, D0, days=14):
res = numpy.empty((days+1, 4), dtype=numpy.float64)
res[0, :] = [S0, I0, R0, D0]
N = sum(res[0, :])
for t in range(1, res.shape[0]):
dR = res[t-1, 1] * mu
dD = res[t-1, 1] * nu
dI = res[t-1, 0] * res[t-1, 1] / N * beta
res[t, 0] = res[t-1, 0] - dI
res[t, 1] = res[t-1, 1] + dI - dR - dD
res[t, 2] = res[t-1, 2] + dR
res[t, 3] = res[t-1, 3] + dD
return res
beta = 0.5
mu = 1./14
nu = 1./21
S0 = 9990
I0 = 10
R0 = 0
D0 = 0
sim = simulation(beta, mu, nu, S0, I0, R0, D0, 60)
plot_simulation(sim, dates[60], safe=False, two=True,
title="Simulation pour essayer");
"""
Explanation: Q4: plot the results
Q5: check that it works on synthetic data
We simulate, then check that the optimisation recovers the parameters of the simulation.
Q6: on real data
Answers
Q1: propagation
We apply it to the real data.
End of explanation
"""
def simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, days=14):
res = numpy.empty((days+1, 4), dtype=numpy.float64)
cum = numpy.empty((days+1, 1), dtype=numpy.float64)
res[0, :] = [S0, I0, R0, D0]
cum[0, 0] = I0
N = sum(res[0, :])
for t in range(1, res.shape[0]):
dR = res[t-1, 1] * mu
dD = res[t-1, 1] * nu
dI = res[t-1, 0] * res[t-1, 1] / N * beta
res[t, 0] = res[t-1, 0] - dI
res[t, 1] = res[t-1, 1] + dI - dR - dD
res[t, 2] = res[t-1, 2] + dR
res[t, 3] = res[t-1, 3] + dD
cum[t, 0] = cum[t-1, 0] + dI
res[:, 1] = cum[:, 0]
return res
beta = 0.5
mu = 1./14
nu = 1./21
S0 = 9990
I0 = 10
R0 = 0
D0 = 0
sim = simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, 60)
plot_simulation(sim, dates[60], safe=False, two=True,
title="Simulation pour essayer");
"""
Explanation: Q2: cumulative series and error function
We first need to compute a simulation that replaces $I_t$ with $J_t$, the cumulative number of people infected so far.
End of explanation
"""
beta = 0.04
mu = 0.03
nu = 0.0001
S0, I0, R0, D0 = france[120, :]
sim = simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, 120)
plot_simulation(sim, dates[120], safe=False, two=True, true_data=france[120:240],
title="Simulation et vraies données (en gros)");
"""
Explanation: We compare with the true data, shown in bold lines.
End of explanation
"""
def error(data, simulation):
err = (data[:, 1] - simulation[:, 1]) ** 2 + (data[:, 3] - simulation[:, 3]) ** 2
total = (numpy.sum(err) / data.shape[0]) ** 0.5
return total
beta = 0.5
mu = 1./14
nu = 1./21
S0 = 9990
I0 = 10
R0 = 0
D0 = 0
sim = simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, 30)
plot_simulation(sim);
"""
Explanation: It is not easy to choose parameters by hand that approximate the curve.
End of explanation
"""
from tqdm import tqdm  # to get a progress bar
def optimisation(true_data, i_range=(0, 0.2), beta_range=(0, 0.5),
mu_range=(0., 0.2), nu_range=(0., 0.2),
max_iter=1000, error_fct=error):
N = sum(true_data[0, :])
rnd = numpy.random.rand(max_iter, 4)
for i, (a, b) in enumerate([i_range, beta_range, mu_range, nu_range]):
rnd[:, i] = rnd[:, i] * (b - a) + a
err_min = None
for it in tqdm(range(max_iter)):
i, beta, mu, nu = rnd[it, :]
dI0 = true_data[0, 0] * i
D0 = true_data[0, 3]
S0 = true_data[0, 0] - dI0
I0 = true_data[0, 1] + dI0
R0 = N - D0 - I0 - S0
sim = simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, days=true_data.shape[0] - 1)
err = error_fct(true_data, sim)
if err_min is None or err < err_min:
err_min = err
best = dict(beta=beta, mu=mu, nu=nu, I0=I0, i=i,
S0=S0, R0=R0, D0=D0, err=err, sim=sim)
return best
beta = 0.04
mu = 0.07
nu = 0.04
S0 = 67e6
I0 = 100000
R0 = 10000
D0 = 10000
sim = simulation_cumulee(beta, mu, nu, S0, I0, R0, D0, 30)
res = optimisation(sim, max_iter=2000, error_fct=error, i_range=(0., 0.001))
sim_opt = res['sim']
del res['sim']
plot_simulation(sim_opt, dates[90], safe=False, two=True, true_data=sim,
title="beta=%1.3f mu=%1.3f nu=%1.3f err=%1.3g i=%1.3g" % (
res['beta'], res['mu'], res['nu'], res['err'], res['i']));
"""
Explanation: We check whether we can recover the parameters of the simulation.
Q3, Q4, Q5: optimisation on synthetic data
End of explanation
"""
res = optimisation(france[180:], error_fct=error, i_range=(0, 0.0001))
sim = res['sim']
del res['sim']
plot_simulation(sim, dates[180], safe=False, two=True, true_data=france[180:],
title="beta=%1.3f mu=%1.3f nu=%1.3f err=%1.3f i=%1.3f" % (
res['beta'], res['mu'], res['nu'], res['err'], res['i']));
"""
Explanation: This works reasonably well.
Q6: on real data
End of explanation
"""
def error_norm(data, simulation):
m1 = numpy.max(simulation[:, 1])
m3 = numpy.max(simulation[:, 3])
err = (data[:, 1] - simulation[:, 1]) ** 2 / m1 ** 2 + (data[:, 3] - simulation[:, 3]) ** 2 / m3 ** 2
total = (numpy.sum(err) / data.shape[0]) ** 0.5
return total
res = optimisation(france[150:], error_fct=error_norm, i_range=(0, 0.001))
sim = res['sim']
del res['sim']
plot_simulation(sim, dates[150], safe=False, two=True, true_data=france[150:],
title="beta=%1.3f mu=%1.3f nu=%1.3f err=%1.3f i=%1.3f" % (
res['beta'], res['mu'], res['nu'], res['err'], res['i']));
"""
Explanation: This does not work that well. We can modify the error to give more weight to the deaths curve.
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization
|
Clustering_&_Retrieval/Week4/Assignment1/.ipynb_checkpoints/3_em-for-gmm_blank-checkpoint.ipynb
|
mit
|
import graphlab as gl
import numpy as np
import matplotlib.pyplot as plt
import copy
from scipy.stats import multivariate_normal
%matplotlib inline
"""
Explanation: Fitting Gaussian Mixture Models with EM
In this assignment you will
* implement the EM algorithm for a Gaussian mixture model
* apply your implementation to cluster images
* explore clustering results and interpret the output of the EM algorithm
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
"""
def generate_MoG_data(num_data, means, covariances, weights):
""" Creates a list of data points """
num_clusters = len(weights)
data = []
for i in range(num_data):
# Use np.random.choice and weights to pick a cluster id greater than or equal to 0 and less than num_clusters.
k = np.random.choice(len(weights), 1, p=weights)[0]
# Use np.random.multivariate_normal to create data from this cluster
x = np.random.multivariate_normal(means[k], covariances[k])
data.append(x)
return data
"""
Explanation: Implementing the EM algorithm for Gaussian mixture models
In this section, you will implement the EM algorithm. We will take the following steps:
Create some synthetic data.
Provide a log likelihood function for this model.
Implement the EM algorithm.
Visualize the progress of the parameters during the course of running EM.
Visualize the convergence of the model.
Dataset
To help us develop and test our implementation, we will generate some observations from a mixture of Gaussians and then run our EM algorithm to discover the mixture components. We'll begin with a function to generate the data, and a quick plot to visualize its output for a 2-dimensional mixture of three Gaussians.
Now we will create a function to generate data from a mixture of Gaussians model.
End of explanation
"""
# Model parameters
init_means = [
[5, 0], # mean of cluster 1
[1, 1], # mean of cluster 2
[0, 5] # mean of cluster 3
]
init_covariances = [
[[.5, 0.], [0, .5]], # covariance of cluster 1
[[1., .7], [0, .7]], # covariance of cluster 2
[[.5, 0.], [0, .5]] # covariance of cluster 3
]
init_weights = [1/4., 1/2., 1/4.] # weights of each cluster
# Generate data
np.random.seed(4)
data = generate_MoG_data(100, init_means, init_covariances, init_weights)
"""
Explanation: After specifying a particular set of clusters (so that the results are reproducible across assignments), we use the above function to generate a dataset.
End of explanation
"""
assert len(data) == 100
assert len(data[0]) == 2
print 'Checkpoint passed!'
"""
Explanation: Checkpoint: To verify your implementation above, make sure the following code does not return an error.
End of explanation
"""
plt.figure()
d = np.vstack(data)
plt.plot(d[:,0], d[:,1],'ko')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: Now plot the data you created above. The plot should be a scatterplot with 100 points that appear to roughly fall into three clusters.
End of explanation
"""
def log_sum_exp(Z):
""" Compute log(\sum_i exp(Z_i)) for some array Z."""
return np.max(Z) + np.log(np.sum(np.exp(Z - np.max(Z))))
def loglikelihood(data, weights, means, covs):
""" Compute the loglikelihood of the data for a Gaussian mixture model with the given parameters. """
num_clusters = len(means)
num_dim = len(data[0])
ll = 0
for d in data:
Z = np.zeros(num_clusters)
for k in range(num_clusters):
# Compute (x-mu)^T * Sigma^{-1} * (x-mu)
delta = np.array(d) - means[k]
exponent_term = np.dot(delta.T, np.dot(np.linalg.inv(covs[k]), delta))
# Compute loglikelihood contribution for this data point and this cluster
Z[k] += np.log(weights[k])
Z[k] -= 1/2. * (num_dim * np.log(2*np.pi) + np.log(np.linalg.det(covs[k])) + exponent_term)
# Increment loglikelihood contribution of this data point across all clusters
ll += log_sum_exp(Z)
return ll
"""
Explanation: Log likelihood
We provide a function to calculate log likelihood for mixture of Gaussians. The log likelihood quantifies the probability of observing a given set of data under a particular setting of the parameters in our model. We will use this to assess convergence of our EM algorithm; specifically, we will keep looping through EM update steps until the log likehood ceases to increase at a certain rate.
End of explanation
"""
def EM(data, init_means, init_covariances, init_weights, maxiter=1000, thresh=1e-4):
# Make copies of initial parameters, which we will update during each iteration
means = init_means[:]
covariances = init_covariances[:]
weights = init_weights[:]
# Infer dimensions of dataset and the number of clusters
num_data = len(data)
num_dim = len(data[0])
num_clusters = len(means)
# Initialize some useful variables
resp = np.zeros((num_data, num_clusters))
ll = loglikelihood(data, weights, means, covariances)
ll_trace = [ll]
for i in range(maxiter):
if i % 5 == 0:
print("Iteration %s" % i)
# E-step: compute responsibilities
# Update resp matrix so that resp[j, k] is the responsibility of cluster k for data point j.
# Hint: To compute likelihood of seeing data point j given cluster k, use multivariate_normal.pdf.
for j in range(num_data):
for k in range(num_clusters):
# YOUR CODE HERE
                resp[j, k] = weights[k] * multivariate_normal.pdf(data[j], means[k], covariances[k])
row_sums = resp.sum(axis=1)[:, np.newaxis]
resp = resp / row_sums # normalize over all possible cluster assignments
# M-step
# Compute the total responsibility assigned to each cluster, which will be useful when
# implementing M-steps below. In the lectures this is called N^{soft}
counts = np.sum(resp, axis=0)
for k in range(num_clusters):
            Nsoft = counts[k]  # total soft count N^{soft}_k for cluster k
# Update the weight for cluster k using the M-step update rule for the cluster weight, \hat{\pi}_k.
# YOUR CODE HERE
weights[k] = Nsoft/num_data
# Update means for cluster k using the M-step update rule for the mean variables.
# This will assign the variable means[k] to be our estimate for \hat{\mu}_k.
weighted_sum = 0
for j in range(num_data):
# YOUR CODE HERE
weighted_sum += resp[j,k]*data[j]
# YOUR CODE HERE
means[k] = weighted_sum/Nsoft
# Update covariances for cluster k using the M-step update rule for covariance variables.
# This will assign the variable covariances[k] to be the estimate for \hat{\Sigma}_k.
weighted_sum = np.zeros((num_dim, num_dim))
for j in range(num_data):
# YOUR CODE HERE (Hint: Use np.outer on the data[j] and this cluster's mean)
                weighted_sum += resp[j, k] * np.outer(data[j] - means[k], data[j] - means[k])
# YOUR CODE HERE
            covariances[k] = weighted_sum / Nsoft
# Compute the loglikelihood at this iteration
# YOUR CODE HERE
ll_latest = loglikelihood(data, weights, means, covariances)
ll_trace.append(ll_latest)
# Check for convergence in log-likelihood and store
if (ll_latest - ll) < thresh and ll_latest > -np.inf:
break
ll = ll_latest
if i % 5 != 0:
print("Iteration %s" % i)
out = {'weights': weights, 'means': means, 'covs': covariances, 'loglik': ll_trace, 'resp': resp}
return out
"""
Explanation: Implementation
You will now complete an implementation that can run EM on the data you just created. It uses the loglikelihood function we provided above.
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
Hint: Some useful functions
multivariate_normal.pdf: lets you compute the likelihood of seeing a data point in a multivariate Gaussian distribution.
np.outer: comes in handy when estimating the covariance matrix from data.
End of explanation
"""
np.random.seed(4)
# Initialization of parameters
chosen = np.random.choice(len(data), 3, replace=False)
initial_means = [data[x] for x in chosen]
initial_covs = [np.cov(data, rowvar=0)] * 3
initial_weights = [1/3.] * 3
# Run EM
results = EM(data, initial_means, initial_covs, initial_weights)
"""
Explanation: Testing the implementation on the simulated data
Now we'll fit a mixture of Gaussians to this data using our implementation of the EM algorithm. As with k-means, it is important to ask how we obtain an initial configuration of mixing weights and component parameters. In this simple case, we'll take three random points to be the initial cluster means, use the empirical covariance of the data to be the initial covariance in each cluster (a clear overestimate), and set the initial mixing weights to be uniform across clusters.
End of explanation
"""
# Your code here
"""
Explanation: Checkpoint. For this particular example, the EM algorithm is expected to terminate in 30 iterations. That is, the last line of the log should say "Iteration 29". If your function stopped too early or too late, you should re-visit your code.
Our algorithm returns a dictionary with five elements:
* 'loglik': a record of the log likelihood at each iteration
* 'resp': the final responsibility matrix
* 'means': a list of K means
* 'covs': a list of K covariance matrices
* 'weights': the weights corresponding to each model component
Quiz Question: What is the weight that EM assigns to the first component after running the above codeblock?
End of explanation
"""
# Your code here
"""
Explanation: Quiz Question: Using the same set of results, obtain the mean that EM assigns the second component. What is the mean in the first dimension?
End of explanation
"""
# Your code here
"""
Explanation: Quiz Question: Using the same set of results, obtain the covariance that EM assigns the third component. What is the variance in the first dimension?
End of explanation
"""
import matplotlib.mlab as mlab
def plot_contours(data, means, covs, title):
plt.figure()
plt.plot([x[0] for x in data], [y[1] for y in data],'ko') # data
delta = 0.025
k = len(means)
x = np.arange(-2.0, 7.0, delta)
y = np.arange(-2.0, 7.0, delta)
X, Y = np.meshgrid(x, y)
col = ['green', 'red', 'indigo']
for i in range(k):
mean = means[i]
cov = covs[i]
sigmax = np.sqrt(cov[0][0])
sigmay = np.sqrt(cov[1][1])
sigmaxy = cov[0][1]/(sigmax*sigmay)
Z = mlab.bivariate_normal(X, Y, sigmax, sigmay, mean[0], mean[1], sigmaxy)
plt.contour(X, Y, Z, colors = col[i])
plt.title(title)
plt.rcParams.update({'font.size':16})
plt.tight_layout()
# Parameters after initialization
plot_contours(data, initial_means, initial_covs, 'Initial clusters')
# Parameters after running EM to convergence
results = EM(data, initial_means, initial_covs, initial_weights)
plot_contours(data, results['means'], results['covs'], 'Final clusters')
"""
Explanation: Plot progress of parameters
One useful feature of testing our implementation on low-dimensional simulated data is that we can easily visualize the results.
We will use the following plot_contours function to visualize the Gaussian components over the data at three different points in the algorithm's execution:
At initialization (using initial_mu, initial_cov, and initial_weights)
After running the algorithm to completion
After just 12 iterations (using parameters estimates returned when setting maxiter=12)
End of explanation
"""
# YOUR CODE HERE
results = ...
plot_contours(data, results['means'], results['covs'], 'Clusters after 12 iterations')
"""
Explanation: Fill in the following code block to visualize the set of parameters we get after running EM for 12 iterations.
End of explanation
"""
results = EM(data, initial_means, initial_covs, initial_weights)
# YOUR CODE HERE
loglikelihoods = ...
plt.plot(range(len(loglikelihoods)), loglikelihoods, linewidth=4)
plt.xlabel('Iteration')
plt.ylabel('Log-likelihood')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: Quiz Question: Plot the loglikelihood that is observed at each iteration. Is the loglikelihood plot monotonically increasing, monotonically decreasing, or neither [multiple choice]?
End of explanation
"""
images = gl.SFrame('images.sf')
gl.canvas.set_target('ipynb')
import array
images['rgb'] = images.pack_columns(['red', 'green', 'blue'])['X4']
images.show()
"""
Explanation: Fitting a Gaussian mixture model for image data
Now that we're confident in our implementation of the EM algorithm, we'll apply it to cluster some more interesting data. In particular, we have a set of images that come from four categories: sunsets, rivers, trees and forests, and cloudy skies. For each image we are given the average intensity of its red, green, and blue pixels, so we have a 3-dimensional representation of our data. Our goal is to find a good clustering of these images using our EM implementation; ideally our algorithm would find clusters that roughly correspond to the four image categories.
To begin with, we'll take a look at the data and get it in a form suitable for input to our algorithm. The data are provided in SFrame format:
End of explanation
"""
np.random.seed(1)
# Initalize parameters
init_means = [images['rgb'][x] for x in np.random.choice(len(images), 4, replace=False)]
cov = np.diag([images['red'].var(), images['green'].var(), images['blue'].var()])
init_covariances = [cov, cov, cov, cov]
init_weights = [1/4., 1/4., 1/4., 1/4.]
# Convert rgb data to numpy arrays
img_data = [np.array(i) for i in images['rgb']]
# Run our EM algorithm on the image data using the above initializations.
# This should converge in about 125 iterations
out = EM(img_data, init_means, init_covariances, init_weights)
"""
Explanation: We need to come up with initial estimates for the mixture weights and component parameters. Let's take three images to be our initial cluster centers, and let's initialize the covariance matrix of each cluster to be diagonal with each element equal to the sample variance from the full data. As in our test on simulated data, we'll start by assuming each mixture component has equal weight.
This may take a few minutes to run.
End of explanation
"""
ll = out['loglik']
plt.plot(range(len(ll)),ll,linewidth=4)
plt.xlabel('Iteration')
plt.ylabel('Log-likelihood')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: The following sections will evaluate the results by asking the following questions:
Convergence: How did the log likelihood change across iterations? Did the algorithm achieve convergence?
Uncertainty: How did cluster assignment and uncertainty evolve?
Interpretability: Can we view some example images from each cluster? Do these clusters correspond to known image categories?
Evaluating convergence
Let's start by plotting the log likelihood at each iteration - we know that the EM algorithm guarantees that the log likelihood can only increase (or stay the same) after each iteration, so if our implementation is correct then we should see an increasing function.
End of explanation
"""
plt.figure()
plt.plot(range(3,len(ll)),ll[3:],linewidth=4)
plt.xlabel('Iteration')
plt.ylabel('Log-likelihood')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: The log likelihood increases so quickly on the first few iterations that we can barely see the plotted line. Let's plot the log likelihood after the first three iterations to get a clearer view of what's going on:
End of explanation
"""
import colorsys
def plot_responsibilities_in_RB(img, resp, title):
N, K = resp.shape
HSV_tuples = [(x*1.0/K, 0.5, 0.9) for x in range(K)]
RGB_tuples = map(lambda x: colorsys.hsv_to_rgb(*x), HSV_tuples)
R = img['red']
B = img['blue']
resp_by_img_int = [[resp[n][k] for k in range(K)] for n in range(N)]
cols = [tuple(np.dot(resp_by_img_int[n], np.array(RGB_tuples))) for n in range(N)]
plt.figure()
for n in range(len(R)):
plt.plot(R[n], B[n], 'o', c=cols[n])
plt.title(title)
plt.xlabel('R value')
plt.ylabel('B value')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: Evaluating uncertainty
Next we'll explore the evolution of cluster assignment and uncertainty. Remember that the EM algorithm represents uncertainty about the cluster assignment of each data point through the responsibility matrix. Rather than making a 'hard' assignment of each data point to a single cluster, the algorithm computes the responsibility of each cluster for each data point, where the responsibility corresponds to our certainty that the observation came from that cluster.
We can track the evolution of the responsibilities across iterations to see how these 'soft' cluster assignments change as the algorithm fits the Gaussian mixture model to the data; one good way to do this is to plot the data and color each point according to its cluster responsibilities. Our data are three-dimensional, which can make visualization difficult, so to make things easier we will plot the data using only two dimensions, taking just the [R G], [G B] or [R B] values instead of the full [R G B] measurement for each observation.
End of explanation
"""
N, K = out['resp'].shape
random_resp = np.random.dirichlet(np.ones(K), N)
plot_responsibilities_in_RB(images, random_resp, 'Random responsibilities')
"""
Explanation: To begin, we will visualize what happens when each data has random responsibilities.
End of explanation
"""
out = EM(img_data, init_means, init_covariances, init_weights, maxiter=1)
plot_responsibilities_in_RB(images, out['resp'], 'After 1 iteration')
"""
Explanation: We now use the above plotting function to visualize the responsibilites after 1 iteration.
End of explanation
"""
out = EM(img_data, init_means, init_covariances, init_weights, maxiter=20)
plot_responsibilities_in_RB(images, out['resp'], 'After 20 iterations')
"""
Explanation: We now use the above plotting function to visualize the responsibilites after 20 iterations. We will see there are fewer unique colors; this indicates that there is more certainty that each point belongs to one of the four components in the model.
End of explanation
"""
means = out['means']
covariances = out['covs']
rgb = images['rgb']
N = len(images)
K = len(means)
assignments = [0]*N
probs = [0]*N
for i in range(N):
# Compute the score of data point i under each Gaussian component:
p = np.zeros(K)
for k in range(K):
# YOUR CODE HERE (Hint: use multivariate_normal.pdf and rgb[i])
p[k] = ...
# Compute assignments of each data point to a given cluster based on the above scores:
# YOUR CODE HERE
assignments[i] = ...
# For data point i, store the corresponding score under this cluster assignment:
# YOUR CODE HERE
probs[i] = ...
assignments = gl.SFrame({'assignments':assignments, 'probs':probs, 'image': images['image']})
"""
Explanation: Plotting the responsibilities over time in [R B] space shows a meaningful change in cluster assignments over the course of the algorithm's execution. While the clusters look significantly better organized at the end of the algorithm than they did at the start, it appears from our plot that they are still not very well separated. We note that this is due in part our decision to plot 3D data in a 2D space; everything that was separated along the G axis is now "squashed" down onto the flat [R B] plane. If we were to plot the data in full [R G B] space, then we would expect to see further separation of the final clusters. We'll explore the cluster interpretability more in the next section.
Interpreting each cluster
Let's dig into the clusters obtained from our EM implementation. Recall that our goal in this section is to cluster images based on their RGB values. We can evaluate the quality of our clustering by taking a look at a few images that 'belong' to each cluster. We hope to find that the clusters discovered by our EM algorithm correspond to different image categories - in this case, we know that our images came from four categories ('cloudy sky', 'rivers', 'sunsets', and 'trees and forests'), so we would expect to find that each component of our fitted mixture model roughly corresponds to one of these categories.
If we want to examine some example images from each cluster, we first need to consider how we can determine cluster assignments of the images from our algorithm output. This was easy with k-means - every data point had a 'hard' assignment to a single cluster, and all we had to do was find the cluster center closest to the data point of interest. Here, our clusters are described by probability distributions (specifically, Gaussians) rather than single points, and our model maintains some uncertainty about the cluster assignment of each observation.
One way to phrase the question of cluster assignment for mixture models is as follows: how do we calculate the distance of a point from a distribution? Note that simple Euclidean distance might not be appropriate since (non-scaled) Euclidean distance doesn't take direction into account. For example, if a Gaussian mixture component is very stretched in one direction but narrow in another, then a data point one unit away along the 'stretched' dimension has much higher probability (and so would be thought of as closer) than a data point one unit away along the 'narrow' dimension.
In fact, the correct distance metric to use in this case is known as Mahalanobis distance. For a Gaussian distribution, this distance is proportional to the square root of the negative log likelihood. This makes sense intuitively - reducing the Mahalanobis distance of an observation from a cluster is equivalent to increasing that observation's probability according to the Gaussian that is used to represent the cluster. This also means that we can find the cluster assignment of an observation by taking the Gaussian component for which that observation scores highest. We'll use this fact to find the top examples that are 'closest' to each cluster.
Quiz Question: Calculate the likelihood (score) of the first image in our data set (images[0]) under each Gaussian component through a call to multivariate_normal.pdf. Given these values, what cluster assignment should we make for this image?
Now we calculate cluster assignments for the entire image dataset using the result of running EM for 20 iterations above:
End of explanation
"""
def get_top_images(assignments, cluster, k=5):
# YOUR CODE HERE
images_in_cluster = ...
top_images = images_in_cluster.topk('probs', k)
return top_images['image']
"""
Explanation: We'll use the 'assignments' SFrame to find the top images from each cluster by sorting the datapoints within each cluster by their score under that cluster (stored in probs). We can plot the corresponding images in the original data using show().
Create a function that returns the top 5 images assigned to a given category in our data (HINT: use the GraphLab Create function topk(column, k) to find the k top values according to specified column in an SFrame).
End of explanation
"""
gl.canvas.set_target('ipynb')
for component_id in range(4):
get_top_images(assignments, component_id).show()
"""
Explanation: Use this function to show the top 5 images in each cluster.
End of explanation
"""
|
guiquanz/msaf
|
examples/Run MSAF.ipynb
|
mit
|
from __future__ import print_function
import msaf
import librosa
import seaborn as sns
# and IPython.display for audio output
import IPython.display
# Setup nice plots
sns.set(style="dark")
%matplotlib inline
"""
Explanation: Running MSAF
The main MSAF functionality is demonstrated here.
End of explanation
"""
# Choose an audio file and listen to it
audio_file = "../datasets/Sargon/audio/01-Sargon-Mindless.mp3"
IPython.display.Audio(filename=audio_file)
# Segment the file using the default MSAF parameters
boundaries, labels = msaf.process(audio_file)
print(boundaries)
# Sonify boundaries
sonified_file = "my_boundaries.wav"
sr = 44100
boundaries, labels = msaf.process(audio_file, sonify_bounds=True,
out_bounds=sonified_file, out_sr=sr)
# Listen to results
audio = librosa.load(sonified_file, sr=sr)[0]
IPython.display.Audio(audio, rate=sr)
"""
Explanation: Single File Mode
This mode analyzes one audio file at a time.
End of explanation
"""
# First, let's list all the available boundary algorithms
print(msaf.get_all_boundary_algorithms())
# Try one of these boundary algorithms and print results
boundaries, labels = msaf.process(audio_file, boundaries_id="foote", plot=True)
# Let's check all the structural grouping (label) algorithms available
print(msaf.get_all_label_algorithms())
# Try one of these label algorithms
boundaries, labels = msaf.process(audio_file, boundaries_id="foote", labels_id="fmc2d")
print(boundaries)
print(labels)
# If available, you can use previously annotated boundaries and a specific labels algorithm
# Set plot = True to plot the results
boundaries, labels = msaf.process(audio_file, boundaries_id="gt",
labels_id="scluster", plot=True)
"""
Explanation: Using different Algorithms
MSAF includes multiple algorithms both for boundary retrieval and structural grouping (or labeling). In this section we demonstrate how to try them out.
Note: more algorithms are available in msaf-gpl.
End of explanation
"""
# Let's check what available features are there in MSAF
print(msaf.AVAILABLE_FEATS)
# Segment the file using the Foote method for boundaries, C-NMF method for labels, and MFCC features
boundaries, labels = msaf.process(audio_file, feature="mfcc", boundaries_id="foote",
labels_id="cnmf", plot=True)
"""
Explanation: Using different Features
Some algorithms allow the input of different type of features (e.g., harmonic, timbral). In this section we show how we can input different features to MSAF.
End of explanation
"""
# Evaluate the results. It returns a pandas data frame.
evaluations = msaf.eval.process(audio_file, boundaries_id="foote", labels_id="fmc2d")
IPython.display.display(evaluations)
"""
Explanation: Evaluate Results
The results can be evaluated as long as there is an existing file containing reference annotations. The results are stored in a pandas DataFrame. MSAF has to run these algorithms (using msaf.process described above) before being able to evaluate its results.
End of explanation
"""
# First, check which are foote's algorithm parameters:
print(msaf.algorithms.foote.config)
# play around with IPython.Widgets
from IPython.html.widgets import interact
# Obtain the default configuration
bid = "foote" # Boundaries ID
lid = None # Labels ID
feature = "hpcp"
config = msaf.io.get_configuration(feature, annot_beats=False, framesync=False,
boundaries_id=bid, labels_id=lid)
# Sweep M_gaussian parameters
@interact(M_gaussian=(50, 500, 25))
def _run_msaf(M_gaussian):
# Set the configuration
config["M_gaussian"] = M_gaussian
# Segment the file using the Foote method, and Pitch Class Profiles for the features
results = msaf.process(audio_file, feature=feature, boundaries_id=bid,
config=config, plot=True)
# Evaluate the results. It returns a pandas data frame.
evaluations = msaf.eval.process(audio_file, feature=feature, boundaries_id=bid,
config=config)
IPython.display.display(evaluations)
"""
Explanation: Explore Algorithm Parameters
Now let's modify the configuration of one of the files, and modify it to see how different the results are.
We will use Widgets, which will become handy here.
End of explanation
"""
dataset = "../datasets/Sargon/"
results = msaf.process(dataset, n_jobs=4, boundaries_id="foote")
# Evaluate in collection mode
evaluations = msaf.eval.process(dataset, n_jobs=4, boundaries_id="foote")
IPython.display.display(evaluations)
"""
Explanation: Collection Mode
MSAF is able to run and evaluate multiple files using multi-threading. In this section we show this functionality.
End of explanation
"""
|
edosedgar/xs-pkg
|
machine_learning/hw3/HW3/ML2019HW03-part1.ipynb
|
gpl-2.0
|
import numpy as np
import pandas as pd
import torch
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Home Assignment No. 3: Part 1
In this part of the homework you are to solve several problems related to machine learning algorithms.
* For every separate problem you can get only 0 points or maximal points for this problem. There are NO INTERMEDIATE scores.
* Your solution must be COMPLETE, i.e. contain all required formulas/proofs/detailed explanations.
* You must write your solution for any problem just right after the words BEGIN SOLUTION. Attaching pictures of your handwriting is allowed, but highly discouraged.
* If you want an easy life, you have to use BUILT-IN METHODS of the sklearn library instead of writing tons of your own code. There exists a class/method for almost everything you can imagine (related to this homework).
* To do some tasks in this part of the homework, you have to write CODE directly inside specified places inside notebook CELLS.
* In some problems you may be asked to provide a short discussion of the results. In these cases you have to create a MARKDOWN cell with your comments right after your code cell.
* Your SOLUTION notebook MUST BE REPRODUCIBLE, i.e. if the reviewer decides to execute Kernel -> Restart Kernel and Run All Cells, after all the computation he will obtain exactly the same solution (with all the corresponding plots) as in your uploaded notebook. For this purpose, we suggest fixing the random seed or (better) defining random_state= inside every algorithm that uses some pseudorandomness.
Your code must be clear to the reviewer. For this purpose, try to include necessary comments inside the code. But remember: GOOD CODE MUST BE SELF-EXPLANATORY without any additional comments.
There are problems marked with *, which are not obligatory. You can get EXTRA POINTS for solving them.
$\LaTeX$ in Jupyter
Jupyter has constantly improving $\LaTeX$ support. Below are the basic methods to
write neat, tidy, and well typeset equations in your notebooks:
* to write an inline equation use
markdown
$ you latex equation here $
* to write an equation, that is displayed on a separate line use
markdown
$$ you latex equation here $$
* to write a block of equations use
markdown
\begin{align}
left-hand-side
&= right-hand-side on line 1
\\
&= right-hand-side on line 2
\\
&= right-hand-side on the last line
\end{align}
The ampersand (&) aligns the equations horizontally and the double backslash
(\\) creates a new line.
Write your theoretical derivations within such blocks:
```markdown
BEGIN Solution
<!-- >>> your derivation here <<< -->
END Solution
```
Please, write your implementation within the designated blocks:
```python
...
BEGIN Solution
>>> your solution here <<<
END Solution
...
```
<br>
End of explanation
"""
import numdifftools as nd
from scipy.optimize import minimize
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
### BEGIN Solution
def p_star_w(w):
x = np.array([2/3, 1/6, 1/6], dtype=np.float64)
E = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]], dtype=np.float64)
left_part = 1/(1+np.exp(- w.T @ x))
right_part = multivariate_normal(mean=[0,0,0], cov=E).pdf(w)
return left_part * right_part
def log_star_w(w):
return -np.log(p_star_w(w))
w_0 = minimize(log_star_w, np.array([1,2,1], dtype=np.float64)).x
Hessian = nd.Hessian(log_star_w)
A = Hessian(w_0)
Z_p = p_star_w(w_0) * np.sqrt((2*np.pi)**3/np.linalg.det(A))
print("The value of intergral:", Z_p)
### END Solution
"""
Explanation: <br>
Bayesian Models. GLM
Task 1 (1 pt.)
Consider a univariate Gaussian distribution $\mathcal{N}(x; \mu, \tau^{-1})$.
Let's define Gaussian-Gamma prior for parameters $(\mu, \tau)$:
\begin{equation}
p(\mu, \tau)
= \mathcal{N}(\mu; \mu_0, (\beta \tau)^{-1})
\otimes \text{Gamma}(\tau; a, b)
\,.
\end{equation}
Find the posterior distribution of $(\mu, \tau)$ after observing $X = (x_1, \dots, x_n)$.
BEGIN Solution
$$
\mathbb{P}(\mu, \tau | X) \propto p(\mu, \tau) \mathbb{P}(X | \mu, \tau)
$$ By condition:
$$
\mathbb{P}(X | \mu, \tau) = \prod_{i=1}^n \mathbb{P}(x_i | \mu, \tau) $$
As we know distribution of each datasample:
$$ \mathbb{P}(X | \mu, \tau) = \prod_{i=1}^n \frac{\tau^{\frac{1}{2}}}{\sqrt{2 \pi}} \exp{ \Big[-\frac{\tau (x_i - \mu)^2 }{2}\Big]} =
\frac{\tau^{\frac{n}{2}}}{(2 \pi)^{\frac{n}{2}}} \exp{ \Big[-\frac{\tau}{2} \sum_{i=1}^n(x_i - \mu)^2 \Big]} $$
$$ p(\mu, \tau) = \mathcal{N}(\mu; \mu_0, (\beta \tau)^{-1}) \otimes \text{Gamma}(\tau; a, b) $$
$$ p(\mu, \tau) = \frac{b^a \beta ^{\frac{1}{2}}}{(2\pi)^{\frac{1}{2}}\Gamma(a)} \tau^{a-\frac{1}{2}} e^{-b\tau} \exp{\Big( - \frac{\beta \tau}{2} (\mu - \mu_0)^2 \Big)} $$
$$ p(\mu, \tau | X) \propto \tau^{\frac{n}{2} + a - \frac{1}{2}} e^{-b\tau} \exp{\Big( - \frac{\tau}{2} \Big[ \beta(\mu - \mu_0)^2 + \sum_{i=1}^{n} (x_i - \mu)^2 \Big] \Big)} $$
$$ \sum_{i=1}^n (x_i - \mu)^2 = ns + n(\overline{x} - \mu)^2, \, \overline{x} = \frac{1}{n} \sum_{i=1}^n x_i, \, s=\frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2 $$
$$ \exp{\Big[ -\frac{\tau}{2} \Big[ \beta (\mu - \mu_0)^2 +ns + n(\overline{x} - \mu)^2 \Big] \Big]} \exp{(-b\tau)}
= \exp{\Big[ -\tau \Big( \frac{1}{2} ns + b \Big) \Big]} \exp{\Big[ - \frac{\tau}{2} \Big( \beta(\mu - \mu_0)^2 + n (\overline{x} - \mu)^2 \Big) \Big]}
$$
After simple regrouping:
$$ \beta (\mu - \mu_0)^2 + n(\overline{x} - \mu)^2 = (\beta + n) \Big( \mu - \frac{\beta \mu_0 + n \overline{x}}{\beta + n} \Big)^2 + \frac{\beta n ( \overline{x} - \mu_0)^2}{\beta + n}
$$
$$ p(\mu, \tau | X) \propto \tau^{\frac{n}{2} + a - \frac{1}{2}} \exp{\Big[ -\tau \Big( \frac{1}{2} ns + b \Big) \Big]} \exp{\Big[ - \frac{\tau}{2} \Big[ (\beta + n) \Big( \mu - \frac{\beta \mu_0 + n \overline{x}}{\beta + n} \Big)^2 + \frac{\beta n ( \overline{x} - \mu_0)^2}{\beta + n} \Big] \Big]}
$$
Again regrouping:
$$ p(\mu, \tau | X) \propto \tau^{\frac{n}{2} + a - \frac{1}{2}} \exp{\Big[ -\tau \Big[ \frac{1}{2} ns + b + \frac{\beta n ( \overline{x} - \mu_0)^2}{2(\beta + n)} \Big] \Big]} \exp{\Big[ - \frac{\tau}{2} (\beta + n) \Big( \mu - \frac{\beta \mu_0 + n \overline{x}}{\beta + n} \Big)^2 \Big]}
$$
Finally:
$$ \boxed{ p(\mu, \tau | X) \propto \Gamma\Big(\tau, \frac{n}{2} + a, \frac{1}{2} ns +b + \frac{\beta n ( \overline{x} - \mu_0)^2}{2(\beta + n)} \Big) \mathcal{N}\Big( \mu, \frac{\beta \mu_0 + n \overline{x}}{\beta + n}, (\tau(\beta + n))^{-1}\Big) }
$$
END Solution
<br>
Task 2 (1 + 1 + 1 = 3 pt.)
Evaluate the following integral using the Laplace approximation:
\begin{equation}
x \mapsto \int \sigma(w^T x) \mathcal{N}(w; 0, \Sigma) dw \,,
\end{equation}
for $x = \bigl(\tfrac23, \tfrac16, \tfrac16\bigr)\in \mathbb{R}^3$ and
\begin{equation}
\Sigma
= \begin{pmatrix}
1 & -0.25 & 0.75 \\
-0.25 & 1 & 0.5 \\
0.75 & 0.5 & 2
\end{pmatrix}
\,.
\end{equation}
Task 2.1 (1 pt.)
Use the Hessian matrix computed numericaly via finite differences. (Check out Numdifftools)
End of explanation
"""
import torch
from torch.autograd import Variable, grad
### BEGIN Solution
def pt_p_star_w(w):
x = np.array([2/3, 1/6, 1/6], dtype=np.float64)
E = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]], dtype=np.float64)
left_part = torch.sigmoid(torch.dot(w, Variable(torch.from_numpy(x).type(torch.FloatTensor))))
right_part = 1 / (( 2 * np.pi )**(3/2) * np.linalg.det(E)**(1/2)) *\
torch.exp(-0.5 * w @ Variable(torch.from_numpy(np.linalg.inv(E)).type(torch.FloatTensor))@w)
return left_part * right_part
def pt_log_star_w(w):
return -torch.log(pt_p_star_w(w))
def hessian_diag(func, w):
w = Variable(torch.FloatTensor(w), requires_grad=True)
grad_params = torch.autograd.grad(func(w), w, create_graph=True)
hessian = [torch.autograd.grad(grad_params[0][i], w, create_graph=True)[0].data.numpy() \
for i in range(3)]
return np.diagonal(hessian)*np.eye(3)
A = hessian_diag(pt_log_star_w, w_0)
pt_Z_p = (np.sqrt((2*np.pi)**3 / np.linalg.det(A)) *\
pt_p_star_w(Variable(torch.from_numpy(w_0).type(torch.FloatTensor)))).data.numpy()
print('Integral value is', pt_Z_p)
### END Solution
"""
Explanation: <br>
Task 2.2 (1 pt.)
Use the diagonal approximation of the Hessian computed by autodifferentiation
in pytorch.
End of explanation
"""
from scipy.integrate import tplquad
### BEGIN Solution
def p_star_w_adapter(x, y, z):
return p_star_w(np.array([x,y,z]))
acc_Z_p = tplquad(p_star_w_adapter, -10, 10, -10, 10, -10, 10)
print("Laplace method: %.05f" % abs(acc_Z_p[0] - Z_p))
print("Diag. Hessian Approx: %.05f" % abs(acc_Z_p[0] - pt_Z_p))
### END Solution
"""
Explanation: <br>
Task 2.3 (1 pt.)
Compare the results by computing the absolute errors of the two estimates (this is also possible with a Monte-Carlo estimate of the integral). Write 1-2 sentences in the results discussion.
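For reference, a Monte-Carlo estimate of the integral is simply the sample mean of $\sigma(w^T x)$ over draws $w \sim \mathcal{N}(0, \Sigma)$; a minimal sketch (the sample size below is an arbitrary choice):
```python
# Monte-Carlo estimate of the integral: E_{w ~ N(0, Sigma)}[sigmoid(w^T x)]
rng = np.random.RandomState(0)
x = np.array([2/3, 1/6, 1/6])
E = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]])
w_samples = rng.multivariate_normal(mean=np.zeros(3), cov=E, size=200000)
mc_Z_p = np.mean(1.0 / (1.0 + np.exp(-w_samples @ x)))
print("Monte-Carlo estimate:", mc_Z_p)
```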
End of explanation
"""
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
### BEGIN Solution
df = pd.read_csv('data/monthly_co2_mlo.csv')
df = df.replace(-99.99, np.nan).dropna()
df.head(10)
y = df['CO2 [ppm]']
X = df.drop(['CO2 [ppm]'], axis=1)
X['year'] -= 1958
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, shuffle=False, test_size=0.25)
X.head(10)
### END Solution
scaler = StandardScaler()
y_test_min = np.min(y_train.values)
y_test_abs = np.max(y_train.values) - np.min(y_train.values)
y_train_scaled = scaler.fit_transform(y_train.values.reshape(-1, 1))
y_test_scaled = scaler.transform(y_test.values.reshape(-1, 1))
plt.figure(figsize=(14, 5))
plt.plot(X_train['year'], y_train_scaled)
plt.plot(X_test['year'], y_test_scaled)
plt.axvline(x=0.75 * np.max([np.max(X_train['year'].values), np.max(X_test['year'].values)]), c='black', ls='-')
plt.grid()
plt.ylabel(r'${CO}_2$', size=18)
plt.xlabel('Train and test split', size=18)
plt.show()
"""
Explanation: BEGIN Solution
The absolute error in the second line is much larger because the diagonal Hessian approximation discards all of the off-diagonal curvature information.
END Solution
<br>
Gaussian Processes
Task 3 (1 + 2 = 3 pt.)
Task 3.1 (1 pt.)
Assuming the matrices $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{d \times d}$
are invertible, use Gaussian elimination to find the inverse of the following
block matrix:
\begin{equation}
\begin{pmatrix} A & B \\ C & D \end{pmatrix} \,,
\end{equation}
where $C \in \mathbb{R}^{d \times n}$ and $B \in \mathbb{R}^{n \times d}$.
BEGIN Solution
$$
\Bigg(
\begin{array}{cc|cc}
A & B & I_n & 0\\
C & D & 0 & I_d\\
\end{array}
\Bigg)
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & A^{-1} B & A^{-1} & 0\\
C & D & 0 & I_d\\
\end{array}
\Bigg)
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & A^{-1} B & A^{-1} & 0\\
0 & D - C A^{-1} B & - C A^{-1} & I_d\\
\end{array}
\Bigg)
\sim
$$
$$
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & A^{-1} B & A^{-1} & 0\\
0 & I_d & - (D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}\\
\end{array}
\Bigg)
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & 0 & A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} & - A^{-1} B (D - C A^{-1} B)^{-1} \\
0 & I_d & - (D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}\\
\end{array}
\Bigg) $$
Finally,
$$
\boxed {\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
=
\Bigg(
\begin{array}{cc}
A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} & - A^{-1} B (D - C A^{-1} B)^{-1} \\
- (D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}\\
\end{array}
\Bigg) }
$$
END Solution
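As a quick numerical sanity check of the boxed formula above (a minimal sketch with arbitrary random blocks; the sizes are chosen freely):
```python
# Verify the block-inverse formula against a direct inverse on random matrices
import numpy as np

rng = np.random.RandomState(0)
n, d = 4, 3
A = rng.randn(n, n) + n * np.eye(n)   # shift the diagonal to keep A well conditioned
B = rng.randn(n, d)
C = rng.randn(d, n)
D = rng.randn(d, d) + d * np.eye(d)

M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)
Si = np.linalg.inv(D - C @ Ai @ B)    # inverse of the Schur complement of A
M_inv = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                  [-Si @ C @ Ai, Si]])
print(np.allclose(M_inv, np.linalg.inv(M)))  # expected: True
```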
<br>
Task 3.2 (2 pt.)
Assume that the function $y(x)$, $x \in \mathbb{R}^d$, is a realization of the Gaussian
Process $GP\bigl(0; K(\cdot, \cdot)\bigr)$ with $K(a, b) = \exp(-\gamma \|a - b\|_2^2)$.
Suppose two datasets were observed: noiseless ${D_0}$ and noisy ${D_1}$
\begin{aligned}
& D_0 = \bigl(x_i, y(x_i) \bigr)_{i=1}^{n} \,, \\
& D_1 = \bigl(x^\prime_i, y(x^\prime_i) + \varepsilon_i \bigr)_{i=1}^{m} \,,
\end{aligned}
where $\varepsilon_i \sim \text{ iid } \mathcal{N}(0, \sigma^2)$, independent of process $y$.
Derive the conditional distribution of $y(x) \big\vert_{D_0, D_1}$ at a new $x$.
BEGIN Solution
END Solution
<br>
Task 4 (1 + 2 = 3 pt.)
Task 4.1 (1 pt.)
In the late 1950’s Charles Keeling invented an accurate way to measure atmospheric $CO_2$ concentration and began taking regular measurements at the Mauna Loa observatory.
Take monthly_co2_mlo.csv file, load it and prepare the data.
Load the CO2 [ppm] time series
Replace $-99.99$ with NaN and drop the missing observations
Split the time series into train and test
Normalize the target value by fitting a transformation on the train
Plot the resulting target against the time index
End of explanation
"""
from GPy.models import GPRegression
from GPy.kern import RBF, Poly, StdPeriodic, White, Linear
from sklearn.metrics import r2_score
### BEGIN Solution
kernels = RBF(input_dim=1, variance=1., lengthscale=10.) + \
Poly(input_dim=1) + \
StdPeriodic(input_dim=1) + \
White(input_dim=1) + \
Linear(input_dim=1)
gpr = GPRegression(X_train['year'].values.reshape(-1, 1), y_train_scaled, kernels)
gpr.plot(figsize=(13,4))
plt.show()
### END Solution
predicted = gpr.predict(X_test['year'].values.reshape(-1, 1))
plt.figure(figsize=(13,4))
plt.plot(scaler.inverse_transform(y_test_scaled), scaler.inverse_transform(y_test_scaled), label='x = y', c='r')
plt.scatter(scaler.inverse_transform(predicted[0]), scaler.inverse_transform(y_test_scaled), label="")
plt.title("QQ - plot", size=16)
plt.xlabel("True value", size=16)
plt.ylabel("Predicted values", size=16)
plt.legend()
plt.show()
r2_score(predicted[0], y_test_scaled)
"""
Explanation: <br>
Task 4.2 (2 pt.)
Use GPy library for training and prediction. Fit a GP and run the predict on the test. Useful kernels to combine: GPy.kern.RBF, GPy.kern.Poly, GPy.kern.StdPeriodic, GPy.kern.White, GPy.kern.Linear.
Plot mean and confidence interval of the prediction.
Inspect them on normality by scatter plot: plot predicted points/time series against true values.
Estimate the prediction error with r2_score. R2-score accepted > 0.83 on test sample.
End of explanation
"""
|
sailuh/perceive
|
Notebooks/Dataset_Comparision/dataset_comparision.ipynb
|
gpl-2.0
|
#import packages
import pandas as pd
import glob
import csv
from xml.etree.ElementTree import ElementTree
import re
"""
Explanation: Dataset Comparison
End of explanation
"""
#function to load a csv file
#accepts folderpath and headerlist as parameter to load the data files
def file_csv(folderpath,addheader,headerlist):
#this reads all files under that folder
filepaths = glob.glob(folderpath+"data/*.csv")
#we prepare a list, that will contain all the tables that exist in these file paths
dataframe = []
for filepath in filepaths:
#if header is required to be added: as in NVD
if addheader is True:
dataframe.append(pd.read_csv(filepath,names=headerlist))
else:
dataframe.append(pd.read_csv(filepath))
return pd.concat(dataframe);
#function to load a xml file
#accepts folderpath as parameter to load the data files
def file_xml(folderpath):
filepaths = glob.glob(folderpath+"data/*.xml")
count = 0;
#uses ElementTree to parse the tree structure
for filepath in filepaths:
CVE_tree = ElementTree()
CVE_tree.parse(filepath)
CVE_root= CVE_tree.getroot()
count = count+ countrow_xml(CVE_root,'{http://www.icasi.org/CVRF/schema/vuln/1.1}')
return count
"""
Explanation: Motivation
The functions below are reused from the nvd_introduction and cve_mitre_introduction notebooks. They define file-loading functionality for csv and xml files; there are separate function definitions for the two formats because they are loaded differently. While running this notebook, please create a separate folder (called data) to hold only the data files. You may add or delete files from this folder depending on what is required for the results at that point. To count the number of vulnerabilities (in this case CVE IDs), the count functions handle the respective file formats.
End of explanation
"""
#counts number of rows under cve_id header
def countrows_csv(dataframe):
count = 0
entries = []
#consider only unique CVE entries
for element in dataframe['cve_id']:
if element not in entries:
entries.append(element)
count = count + 1
return count
#counts number of rows in CVE tag
def countrow_xml(CVE_root,root_string):
cve_id =[] ;
description=[];
cell=0
entries=[]
for entry in CVE_root:
for child in entry:
if (child.tag == root_string+'CVE'):
if child.tag not in entries:
cve_id.append(child.text);
cell+=1
return len(cve_id)
#all entries added to a dictionary
data={}
#calling functions
nvd_dataframe=file_csv('NVD/',True,['cve_id', 'cwe_id','timestamp'])
details_dataframe=file_csv('CVE_Details/',False,None)
data["nvd"] = countrows_csv(nvd_dataframe)
data["cve_details"]= countrows_csv(details_dataframe)
data['cve_mitre']=file_xml('CVE_Mitre/')
#visualization of the count
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
plot_data={}
plot_data['Entries'] = data
#saving in dictionary for sorting and visualising
df_data = pd.DataFrame(plot_data).sort_values(by='Entries', ascending=True)
series = df_data.loc[:,'Entries']
p = figure(width=800, y_range=series.index.tolist(), title="Number of Vulnerabilities in each dataset")
p.xaxis.axis_label = 'Number of vulnerabilities/rows'
p.xaxis.axis_label_text_font_size = '10pt'
p.xaxis.major_label_text_font_size = '10pt'
p.yaxis.axis_label = 'Name of the dataset'
p.yaxis.axis_label_text_font_size = '14pt'
p.yaxis.major_label_text_font_size = '12pt'
j = 1
for k,v in series.iteritems():
#Print fields, values, orders
#print (k,v,j)
p.rect(x=v/2, y=j, width=abs(v), height=0.4,
width_units="data", height_units="data")
j += 1
show(p)
"""
Explanation: The blocks below contain functions to parse and count all entries in each loaded database. Since XML files have a tree structure rather than a flat table, we define a separate counting function for them.
End of explanation
"""
|
roatienza/Deep-Learning-Experiments
|
versions/2022/mlp/python/mlp_pytorch_demo.ipynb
|
mit
|
import torch
import torchvision
import wandb
import math
from torch import nn
from einops import rearrange
from argparse import ArgumentParser
from pytorch_lightning import LightningModule, Trainer, Callback
from pytorch_lightning.loggers import WandbLogger
from torchmetrics.functional import accuracy
from torch.optim import SGD, Adam
from torch.optim.lr_scheduler import CosineAnnealingLR
"""
Explanation: MLP for CIFAR10
Multi-Layer Perceptron (MLP) is a simple neural network model that can be used for classification tasks.
In this demo, we will train a 3-layer MLP on the CIFAR10 dataset. We will illustrate 2 MLP implementations.
Let us first import the required modules.
End of explanation
"""
class SimpleMLP(nn.Module):
def __init__(self, n_features=3*32*32, n_hidden=512, num_classes=10):
super().__init__()
# the 3 Linear layers of the MLP
self.fc1 = nn.Linear(n_features, n_hidden)
self.fc2 = nn.Linear(n_hidden, n_hidden)
self.fc3 = nn.Linear(n_hidden, num_classes)
def forward(self, x):
# flatten x - (batch_size, 3, 32, 32) -> (batch_size, 3*32*32)
# ascii art for the case that x is 1 x 2 x 2 (channel, height, width)
# --------- -----------------
# | 1 | 2 | ---> | 1 | 2 | 3 | 4 |
# | 3 | 4 | -----------------
# ---------
# we can use any of the following methods to flatten the tensor
#y = torch.flatten(x, 1)
#y = x.view(x.size(0), -1)
# but this is the most intuitive since it shows the actual flattening
y = rearrange(x, 'b c h w -> b (c h w)')
y = nn.GELU()(self.fc1(y))
y = nn.GELU()(self.fc2(y))
y = self.fc3(y)
return y
# we dont need to compute softmax since it is already
# built into the CE loss function in PyTorch
#return F.log_softmax(y, dim=1)
"""
Explanation: MLP using PyTorch nn.Linear
The most straightforward way to implement an MLP is to use the nn.Linear module. In the following code, we implement a 3-layer MLP with the GELU activation function. The GELU can be replaced by other activation functions such as ReLU.
Please take note of the layer sizes: fc1's input size is n_features, the size of the flattened input x, and its output size is n_hidden, which then becomes the input size of fc2. In other words, all input/output sizes up to fc3 chain together correctly.
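As a quick sanity check (a minimal sketch with a random, CIFAR10-shaped batch), the output should have shape (batch_size, num_classes):
```python
# Shape check for the SimpleMLP defined above
x = torch.randn(8, 3, 32, 32)   # a fake batch of 8 CIFAR10-sized images
model = SimpleMLP()
print(model(x).shape)           # expected: torch.Size([8, 10])
```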
End of explanation
"""
class TensorMLP(nn.Module):
def __init__(self, n_features=3*32*32, n_hidden=512, num_classes=10):
super().__init__()
# weights and biases for layer 1
self.w1 = nn.Parameter(torch.empty((n_hidden, n_features)))
self.b1 = nn.Parameter(torch.empty((n_hidden,)))
# weights and biases for layer 2
self.w2 = nn.Parameter(torch.empty((n_hidden, n_hidden)))
self.b2 = nn.Parameter(torch.empty((n_hidden,)))
# weights and biases for layer 3
self.w3 = nn.Parameter(torch.empty((num_classes, n_hidden)))
self.b3 = nn.Parameter(torch.empty((num_classes,)))
# initialize parameters manually because we implemented the linear layer manually
self.reset_parameters()
def reset_parameters(self):
# we use Kaiming initializer for weights
nn.init.kaiming_uniform_(self.w1, a=math.sqrt(5))
# zero for biases
nn.init.constant_(self.b1, 0)
nn.init.kaiming_uniform_(self.w2, a=math.sqrt(5))
nn.init.constant_(self.b2, 0)
nn.init.kaiming_uniform_(self.w3, a=math.sqrt(5))
nn.init.constant_(self.b3, 0)
def forward(self, x):
# flatten
y = rearrange(x, 'b c h w -> b (c h w)')
# we manually compute the output of each layer
y = y @ self.w1.T + self.b1
y = nn.GELU()(y)
y = y @ self.w2.T + self.b2
y = nn.GELU()(y)
y = y @ self.w3.T + self.b3
return y
"""
Explanation: MLP implementation using Tensors
In this case, we illustrate how to implement the formula of an MLP layer using weights and biases. Note that if we remove the initialization of the weights and biases, the model will not converge. In the previous example, Linear automatically performs the weights and biases initialization.
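To convince yourself that the manual y @ w.T + b formula matches nn.Linear, you can compare against the weights of a Linear layer (a minimal sketch, not part of the training code):
```python
# Check that a manual affine map reproduces nn.Linear
linear = nn.Linear(4, 2)
y = torch.randn(5, 4)
print(torch.allclose(linear(y), y @ linear.weight.T + linear.bias))  # expected: True
```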
End of explanation
"""
class LitCIFAR10Model(LightningModule):
def __init__(self, num_classes=10, lr=0.001, batch_size=64,
num_workers=4, max_epochs=30,
model=SimpleMLP):
super().__init__()
self.save_hyperparameters()
self.model = model(num_classes=num_classes)
self.loss = nn.CrossEntropyLoss()
def forward(self, x):
return self.model(x)
# this is called during fit()
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
return {"loss": loss}
# calls to self.log() are recorded in wandb
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.log("train_loss", avg_loss, on_epoch=True)
# this is called at the end of an epoch
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
acc = accuracy(y_hat, y) * 100.
# we use y_hat to display predictions during callback
return {"y_hat": y_hat, "test_loss": loss, "test_acc": acc}
# this is called at the end of all epochs
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
avg_acc = torch.stack([x["test_acc"] for x in outputs]).mean()
self.log("test_loss", avg_loss, on_epoch=True, prog_bar=True)
self.log("test_acc", avg_acc, on_epoch=True, prog_bar=True)
# validation is the same as test
def validation_step(self, batch, batch_idx):
return self.test_step(batch, batch_idx)
def validation_epoch_end(self, outputs):
return self.test_epoch_end(outputs)
# we use Adam optimizer
def configure_optimizers(self):
optimizer = Adam(self.parameters(), lr=self.hparams.lr)
# this decays the learning rate to 0 after max_epochs using cosine annealing
scheduler = CosineAnnealingLR(optimizer, T_max=self.hparams.max_epochs)
return [optimizer], [scheduler]
# this is called after model instantiation to initialize the datasets and dataloaders
def setup(self, stage=None):
self.train_dataloader()
self.test_dataloader()
# build train and test dataloaders using the CIFAR10 dataset
# we use simple ToTensor transform
def train_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=True, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=True,
num_workers=self.hparams.num_workers,
pin_memory=True,
)
def test_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=False, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=False,
num_workers=self.hparams.num_workers,
pin_memory=True,
)
def val_dataloader(self):
return self.test_dataloader()
"""
Explanation: PyTorch Lightning Module for MLP
This is the PL module so we can easily change the implementation of the MLP and compare the results. More detailed results can be found on the wandb.ai page.
Using the model parameter, we can easily switch between the two MLP implementations shown above. We also benchmark the results using a ResNet18 model. The rest of the code is similar to our PL module example for MNIST.
End of explanation
"""
def get_args():
parser = ArgumentParser(description="PyTorch Lightning MNIST Example")
parser.add_argument("--max-epochs", type=int, default=30, help="num epochs")
parser.add_argument("--batch-size", type=int, default=64, help="batch size")
parser.add_argument("--lr", type=float, default=0.001, help="learning rate")
parser.add_argument("--num-classes", type=int, default=10, help="num classes")
parser.add_argument("--devices", default=1)
parser.add_argument("--accelerator", default='gpu')
parser.add_argument("--num-workers", type=int, default=4, help="num workers")
#parser.add_argument("--model", default=torchvision.models.resnet18)
#parser.add_argument("--model", default=TensorMLP)
parser.add_argument("--model", default=SimpleMLP)
args = parser.parse_args("")
return args
"""
Explanation: Arguments
Please change the --model argument to switch between the different models to be used as CIFAR10 classifier.
End of explanation
"""
class WandbCallback(Callback):
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
# process first 10 images of the first batch
if batch_idx == 0:
label_human = ["airplane", "automobile", "bird", "cat",
"deer", "dog", "frog", "horse", "ship", "truck"]
n = 10
x, y = batch
outputs = outputs["y_hat"]
outputs = torch.argmax(outputs, dim=1)
# log image, ground truth and prediction on wandb table
columns = ['image', 'ground truth', 'prediction']
data = [[wandb.Image(x_i), label_human[y_i], label_human[y_pred]] for x_i, y_i, y_pred in list(
zip(x[:n], y[:n], outputs[:n]))]
wandb_logger.log_table(
key=pl_module.model.__class__.__name__,
columns=columns,
data=data)
"""
Explanation: Weights and Biases Callback
The callback logs train and validation metrics to wandb. It also logs sample predictions. This is similar to our WandbCallback example for MNIST.
End of explanation
"""
if __name__ == "__main__":
args = get_args()
model = LitCIFAR10Model(num_classes=args.num_classes,
lr=args.lr, batch_size=args.batch_size,
num_workers=args.num_workers,
model=args.model,)
model.setup()
# printing the model is useful for debugging
print(model)
print(model.model.__class__.__name__)
# wandb is a great way to debug and visualize this model
wandb_logger = WandbLogger(project="mlp-cifar")
trainer = Trainer(accelerator=args.accelerator,
devices=args.devices,
max_epochs=args.max_epochs,
logger=wandb_logger,
callbacks=[WandbCallback()])
trainer.fit(model)
trainer.test(model)
wandb.finish()
"""
Explanation: Training and Validation of Different Models
The validation accuracy of both MLP model implmentations are almost the same at ~53%. This shows that the 2 MLP implementations are almost the same.
Meanwhile the ResNet18 model has accuracy of ~78%. The MLP model has still a long way to go.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
stable/_downloads/48e14d460d6470997b890b156746a671/30_strf.ipynb
|
bsd-3-clause
|
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.decoding import ReceptiveField, TimeDelayingRidge
from scipy.stats import multivariate_normal
from scipy.io import loadmat
from sklearn.preprocessing import scale
rng = np.random.RandomState(1337) # To make this example reproducible
"""
Explanation: Spectro-temporal receptive field (STRF) estimation on continuous data
This demonstrates how an encoding model can be fit with multiple continuous
inputs. In this case, we simulate the model behind a spectro-temporal receptive
field (or STRF). First, we create a linear filter that maps patterns in
spectro-temporal space onto an output, representing neural activity. We fit
a receptive field model that attempts to recover the original linear filter
that was used to create this data.
End of explanation
"""
# Read in audio that's been recorded in epochs.
path_audio = mne.datasets.mtrf.data_path()
data = loadmat(str(path_audio / 'speech_data.mat'))
audio = data['spectrogram'].T
sfreq = float(data['Fs'][0, 0])
n_decim = 2
audio = mne.filter.resample(audio, down=n_decim, npad='auto')
sfreq /= n_decim
"""
Explanation: Load audio data
We'll read in the audio data from :footcite:CrosseEtAl2016 in order to
simulate a response.
In addition, we'll downsample the data along the time dimension in order to
speed up computation. Note that depending on the input values, this may
not be desired; for example, if your input stimulus varies more quickly than
half of the sampling rate to which we are downsampling, information will be lost.
End of explanation
"""
n_freqs = 20
tmin, tmax = -0.1, 0.4
# To simulate the data we'll create explicit delays here
delays_samp = np.arange(np.round(tmin * sfreq),
np.round(tmax * sfreq) + 1).astype(int)
delays_sec = delays_samp / sfreq
freqs = np.linspace(50, 5000, n_freqs)
grid = np.array(np.meshgrid(delays_sec, freqs))
# We need data to be shaped as n_epochs, n_features, n_times, so swap axes here
grid = grid.swapaxes(0, -1).swapaxes(0, 1)
# Simulate a temporal receptive field with a Gabor filter
means_high = [.1, 500]
means_low = [.2, 2500]
cov = [[.001, 0], [0, 500000]]
gauss_high = multivariate_normal.pdf(grid, means_high, cov)
gauss_low = -1 * multivariate_normal.pdf(grid, means_low, cov)
weights = gauss_high + gauss_low # Combine to create the "true" STRF
kwargs = dict(vmax=np.abs(weights).max(), vmin=-np.abs(weights).max(),
cmap='RdBu_r', shading='gouraud')
fig, ax = plt.subplots()
ax.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax.set(title='Simulated STRF', xlabel='Time Lags (s)', ylabel='Frequency (Hz)')
plt.setp(ax.get_xticklabels(), rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
"""
Explanation: Create a receptive field
We'll simulate a linear receptive field for a theoretical neural signal. This
defines how the signal will respond to power in this receptive field space.
End of explanation
"""
# Reshape audio to split into epochs, then make epochs the first dimension.
n_epochs, n_seconds = 16, 5
audio = audio[:, :int(n_seconds * sfreq * n_epochs)]
X = audio.reshape([n_freqs, n_epochs, -1]).swapaxes(0, 1)
n_times = X.shape[-1]
# Delay the spectrogram according to delays so it can be combined w/ the STRF
# Lags will now be in axis 1, then we reshape to vectorize
delays = np.arange(np.round(tmin * sfreq),
np.round(tmax * sfreq) + 1).astype(int)
# Iterate through indices and append
X_del = np.zeros((len(delays),) + X.shape)
for ii, ix_delay in enumerate(delays):
# These arrays will take/put particular indices in the data
take = [slice(None)] * X.ndim
put = [slice(None)] * X.ndim
if ix_delay > 0:
take[-1] = slice(None, -ix_delay)
put[-1] = slice(ix_delay, None)
elif ix_delay < 0:
take[-1] = slice(-ix_delay, None)
put[-1] = slice(None, ix_delay)
X_del[ii][tuple(put)] = X[tuple(take)]
# Now set the delayed axis to the 2nd dimension
X_del = np.rollaxis(X_del, 0, 3)
X_del = X_del.reshape([n_epochs, -1, n_times])
n_features = X_del.shape[1]
weights_sim = weights.ravel()
# Simulate a neural response to the sound, given this STRF
y = np.zeros((n_epochs, n_times))
for ii, iep in enumerate(X_del):
# Simulate this epoch and add random noise
noise_amp = .002
y[ii] = np.dot(weights_sim, iep) + noise_amp * rng.randn(n_times)
# Plot the first 2 trials of audio and the simulated electrode activity
X_plt = scale(np.hstack(X[:2]).T).T
y_plt = scale(np.hstack(y[:2]))
time = np.arange(X_plt.shape[-1]) / sfreq
_, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6), sharex=True)
ax1.pcolormesh(time, freqs, X_plt, vmin=0, vmax=4, cmap='Reds',
shading='gouraud')
ax1.set_title('Input auditory features')
ax1.set(ylim=[freqs.min(), freqs.max()], ylabel='Frequency (Hz)')
ax2.plot(time, y_plt)
ax2.set(xlim=[time.min(), time.max()], title='Simulated response',
xlabel='Time (s)', ylabel='Activity (a.u.)')
mne.viz.tight_layout()
"""
Explanation: Simulate a neural response
Using this receptive field, we'll create an artificial neural response to
a stimulus.
To do this, we'll create a time-delayed version of the receptive field, and
then calculate the dot product between this and the stimulus. Note that this
is effectively doing a convolution between the stimulus and the receptive
field. See here for more
information.
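To make the convolution view concrete, here is a minimal 1-D sketch with toy arrays (separate from the actual simulation below):
```python
# Toy check: summing weighted, delayed copies of a stimulus equals a convolution
toy_stim = rng.randn(100)               # toy 1-D stimulus
toy_rf = np.array([0.5, 1.0, -0.5])     # toy receptive field over delays 0, 1, 2 samples
resp_conv = np.convolve(toy_stim, toy_rf)[:100]
resp_manual = sum(w * np.r_[np.zeros(d), toy_stim[:100 - d]]
                  for d, w in enumerate(toy_rf))
print(np.allclose(resp_conv, resp_manual))  # expected: True
```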
End of explanation
"""
# Create training and testing data
train, test = np.arange(n_epochs - 1), n_epochs - 1
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
X_train, X_test, y_train, y_test = [np.rollaxis(ii, -1, 0) for ii in
(X_train, X_test, y_train, y_test)]
# Model the simulated data as a function of the spectrogram input
alphas = np.logspace(-3, 3, 7)
scores = np.zeros_like(alphas)
models = []
for ii, alpha in enumerate(alphas):
rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=alpha)
rf.fit(X_train, y_train)
# Now make predictions about the model output, given input stimuli.
scores[ii] = rf.score(X_test, y_test)
models.append(rf)
times = rf.delays_ / float(rf.sfreq)
# Choose the model that performed best on the held out data
ix_best_alpha = np.argmax(scores)
best_mod = models[ix_best_alpha]
coefs = best_mod.coef_[0]
best_pred = best_mod.predict(X_test)[:, 0]
# Plot the original STRF, and the one that we recovered with modeling.
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(6, 3), sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, coefs, **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Reconstructed STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
# Plot the actual response and the predicted response on a held out stimulus
time_pred = np.arange(best_pred.shape[0]) / sfreq
fig, ax = plt.subplots()
ax.plot(time_pred, y_test, color='k', alpha=.2, lw=4)
ax.plot(time_pred, best_pred, color='r', lw=1)
ax.set(title='Original and predicted activity', xlabel='Time (s)')
ax.legend(['Original', 'Predicted'])
plt.autoscale(tight=True)
mne.viz.tight_layout()
"""
Explanation: Fit a model to recover this receptive field
Finally, we'll use the :class:mne.decoding.ReceptiveField class to recover
the linear receptive field of this signal. Note that properties of the
receptive field (e.g. smoothness) will depend on the autocorrelation in the
inputs and outputs.
End of explanation
"""
# Plot model score for each ridge parameter
fig = plt.figure(figsize=(10, 4))
ax = plt.subplot2grid([2, len(alphas)], [1, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores, marker='o', color='r')
ax.annotate('Best parameter', (ix_best_alpha, scores[ix_best_alpha]),
(ix_best_alpha, scores[ix_best_alpha] - .1),
arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Ridge regularization value", ylabel="Score ($R^2$)",
xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()
# Plot the STRF of each ridge parameter
for ii, (rf, i_alpha) in enumerate(zip(models, alphas)):
ax = plt.subplot2grid([2, len(alphas)], [0, ii], 1, 1)
ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
plt.xticks([], [])
plt.yticks([], [])
plt.autoscale(tight=True)
fig.suptitle('Model coefficients / scores for many ridge parameters', y=1)
mne.viz.tight_layout()
"""
Explanation: Visualize the effects of regularization
Above we fit a :class:mne.decoding.ReceptiveField model for one of many
values for the ridge regularization parameter. Here we will plot the model
score as well as the model coefficients for each value, in order to
visualize how coefficients change with different levels of regularization.
These issues as well as the STRF pipeline are described in detail
in :footcite:TheunissenEtAl2001,WillmoreSmyth2003,HoldgrafEtAl2016.
End of explanation
"""
scores_lap = np.zeros_like(alphas)
models_lap = []
for ii, alpha in enumerate(alphas):
estimator = TimeDelayingRidge(tmin, tmax, sfreq, reg_type='laplacian',
alpha=alpha)
rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=estimator)
rf.fit(X_train, y_train)
# Now make predictions about the model output, given input stimuli.
scores_lap[ii] = rf.score(X_test, y_test)
models_lap.append(rf)
ix_best_alpha_lap = np.argmax(scores_lap)
"""
Explanation: Using different regularization types
In addition to the standard ridge regularization, the
:class:mne.decoding.TimeDelayingRidge class also exposes
Laplacian regularization
term as:
\begin{align}\left[\begin{matrix}
1 & -1 & & & & \\
-1 & 2 & -1 & & & \\
& -1 & 2 & -1 & & \\
& & \ddots & \ddots & \ddots & \\
& & & -1 & 2 & -1 \\
& & & & -1 & 1\end{matrix}\right]\end{align}
This imposes a smoothness constraint on nearby time samples and/or features.
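For illustration only (the estimator builds its penalty internally), the second-difference matrix above can be constructed like this:
```python
# Construct the Laplacian (second-difference) penalty matrix shown above
def laplacian_penalty(n):
    lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    lap[0, 0] = lap[-1, -1] = 1   # boundary entries, as in the matrix above
    return lap

print(laplacian_penalty(5).astype(int))
```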
Quoting :footcite:CrosseEtAl2016 :
Tikhonov [identity] regularization (Equation 5) reduces overfitting by
smoothing the TRF estimate in a way that is insensitive to
the amplitude of the signal of interest. However, the Laplacian
approach (Equation 6) reduces off-sample error whilst preserving
signal amplitude (Lalor et al., 2006). As a result, this approach
usually leads to an improved estimate of the system’s response (as
indexed by MSE) compared to Tikhonov regularization.
End of explanation
"""
fig = plt.figure(figsize=(10, 6))
ax = plt.subplot2grid([3, len(alphas)], [2, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores_lap, marker='o', color='r')
ax.plot(np.arange(len(alphas)), scores, marker='o', color='0.5', ls=':')
ax.annotate('Best Laplacian', (ix_best_alpha_lap,
scores_lap[ix_best_alpha_lap]),
(ix_best_alpha_lap, scores_lap[ix_best_alpha_lap] - .1),
arrowprops={'arrowstyle': '->'})
ax.annotate('Best Ridge', (ix_best_alpha, scores[ix_best_alpha]),
(ix_best_alpha, scores[ix_best_alpha] - .1),
arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Laplacian regularization value", ylabel="Score ($R^2$)",
xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()
# Plot the STRF of each ridge parameter
xlim = times[[0, -1]]
for ii, (rf_lap, rf, i_alpha) in enumerate(zip(models_lap, models, alphas)):
ax = plt.subplot2grid([3, len(alphas)], [0, ii], 1, 1)
ax.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
ax.set(xticks=[], yticks=[], xlim=xlim)
if ii == 0:
ax.set(ylabel='Laplacian')
ax = plt.subplot2grid([3, len(alphas)], [1, ii], 1, 1)
ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
ax.set(xticks=[], yticks=[], xlim=xlim)
if ii == 0:
ax.set(ylabel='Ridge')
fig.suptitle('Model coefficients / scores for laplacian regularization', y=1)
mne.viz.tight_layout()
"""
Explanation: Compare model performance
Below we visualize the model performance of each regularization method
(ridge vs. Laplacian) for different levels of alpha. As you can see, the
Laplacian method performs better in general, because it imposes a smoothness
constraint along the time and feature dimensions of the coefficients.
This matches the "true" receptive field structure and results in a better
model fit.
End of explanation
"""
rf = models[ix_best_alpha]
rf_lap = models_lap[ix_best_alpha_lap]
_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(9, 3),
sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
ax3.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Ridge STRF')
ax3.set_title('Best Laplacian STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2, ax3]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
"""
Explanation: Plot the original STRF, and the one that we recovered with modeling.
End of explanation
"""
|
Capepy/scipy_2015_sklearn_tutorial
|
notebooks/03.6 Case Study - Titanic Survival.ipynb
|
cc0-1.0
|
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data.shape)
"""
Explanation: Feature Extraction
Here we will talk about an important piece of machine learning: the extraction of
quantitative features from data. By the end of this section you will
Know how features are extracted from real-world data.
See an example of extracting numerical features from textual data
In addition, we will go over several basic tools within scikit-learn which can be used to accomplish the above tasks.
What Are Features?
Numerical Features
Recall that data in scikit-learn is expected to be in two-dimensional arrays, of size
n_samples $\times$ n_features.
Previously, we looked at the iris dataset, which has 150 samples and 4 features
End of explanation
"""
measurements = [
{'city': 'Dubai', 'temperature': 33.},
{'city': 'London', 'temperature': 12.},
{'city': 'San Francisco', 'temperature': 18.},
]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
vec
vec.fit_transform(measurements).toarray()
vec.get_feature_names()
"""
Explanation: These features are:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Numerical features such as these are pretty straightforward: each sample contains a list
of floating-point numbers corresponding to the features
Categorical Features
What if you have categorical features? For example, imagine there is data on the color of each
iris:
color in [red, blue, purple]
You might be tempted to assign numbers to these features, i.e. red=1, blue=2, purple=3
but in general this is a bad idea. Estimators tend to operate under the assumption that
numerical features lie on some continuous scale, so, for example, 1 and 2 are more alike
than 1 and 3, and this is often not the case for categorical features.
A better strategy is to give each category its own dimension.
The enriched iris feature set would hence be in this case:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
color=purple (1.0 or 0.0)
color=blue (1.0 or 0.0)
color=red (1.0 or 0.0)
Note that using many of these categorical features may result in data which is better
represented as a sparse matrix, as we'll see with the text classification example
below.
Using the DictVectorizer to encode categorical features
When the source data is encoded as a list of dicts whose values are either string names for categories or numerical values, you can use the DictVectorizer class to compute the boolean expansion of the categorical features while leaving the numerical features untouched:
End of explanation
"""
import os
f = open(os.path.join('datasets', 'titanic', 'titanic3.csv'))
print(f.readline())
lines = []
for i in range(3):
lines.append(f.readline())
print(lines)
"""
Explanation: Derived Features
Another common feature type are derived features, where some pre-processing step is
applied to the data to generate features that are somehow more informative. Derived
features may be based in dimensionality reduction (such as PCA or manifold learning),
may be linear or nonlinear combinations of features (such as in Polynomial regression),
or may be some more sophisticated transform of the features. The latter is often used
in image processing.
For example, scikit-image provides a variety of feature
extractors designed for image data: see the skimage.feature submodule.
We will see some dimensionality-based feature extraction routines later in the tutorial.
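For instance, a minimal sketch of derived polynomial features (the toy array below is not part of the Titanic data):
```python
# Derived features example: expand two numerical features into polynomial terms
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_small = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
poly = PolynomialFeatures(degree=2, include_bias=False)
print(poly.fit_transform(X_small))   # columns: x1, x2, x1^2, x1*x2, x2^2
```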
Combining Numerical and Categorical Features
As an example of how to work with both categorical and numerical data, we will perform survival prediction for the passengers of the RMS Titanic.
We will use a version of the Titanic (titanic3.xls) dataset from Thomas Cason, as retrieved from Frank Harrell's webpage here. We converted the .xls to .csv for easier manipulation without involving external libraries, but the data is otherwise unchanged.
We need to read in all the lines from the (titanic3.csv) file, set aside the keys from the first line, and find our labels (who survived or died) and data (attributes of that person). Let's look at the keys and some corresponding example lines.
End of explanation
"""
from helpers import process_titanic_line
print(process_titanic_line(lines[0]))
"""
Explanation: The site linked here gives a broad description of the keys and what they mean - we show it here for completeness
pclass Passenger Class
(1 = 1st; 2 = 2nd; 3 = 3rd)
survival Survival
(0 = No; 1 = Yes)
name Name
sex Sex
age Age
sibsp Number of Siblings/Spouses Aboard
parch Number of Parents/Children Aboard
ticket Ticket Number
fare Passenger Fare
cabin Cabin
embarked Port of Embarkation
(C = Cherbourg; Q = Queenstown; S = Southampton)
boat Lifeboat
body Body Identification Number
home.dest Home/Destination
In general, it looks like name, sex, cabin, embarked, boat, body, and homedest may be candidates for categorical features, while the rest appear to be numerical features. We can now write a function to extract features from a text line, shown below.
Let's process an example line using the process_titanic_line function from helpers to see the expected output.
End of explanation
"""
from helpers import load_titanic
keys, train_data, test_data, train_labels, test_labels = load_titanic(
test_size=0.2, feature_skip_tuple=(), random_state=1999)
print("Key list: %s" % keys)
"""
Explanation: Now that we see the expected format from the line, we can call a dataset helper which uses this processing to read in the whole dataset. See helpers.py for more details.
End of explanation
"""
from sklearn.metrics import accuracy_score
from sklearn.dummy import DummyClassifier
clf = DummyClassifier('most_frequent')
clf.fit(train_data, train_labels)
pred_labels = clf.predict(test_data)
print("Prediction accuracy: %f" % accuracy_score(pred_labels, test_labels))
"""
Explanation: With all of the hard data-loading work out of the way, evaluating a classifier on this data becomes straightforward. We start with the simplest possible model and use DummyClassifier to see what baseline score we can get.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
|
apache-2.0
|
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
pip freeze | grep kfp || pip install kfp
from os import path
import kfp
import kfp.compiler as compiler
import kfp.components as comp
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.notebook
"""
Explanation: Kubeflow pipelines
Learning Objectives:
1. Learn how to deploy a Kubeflow cluster on GCP
1. Learn how to create a experiment in Kubeflow
1. Learn how to package you code into a Kubeflow pipeline
1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way
Introduction
In this notebook, we will first set up a Kubeflow cluster on GCP.
Then, we will create a Kubeflow experiment and a Kubeflow pipeline from our taxifare machine learning code. Finally, we will run the pipeline on the Kubeflow cluster, providing us with a reproducible and traceable way to execute machine learning code.
End of explanation
"""
HOST = "<KFP HOST>"
BUCKET = "<YOUR PROJECT>"
"""
Explanation: Setup a Kubeflow cluster on GCP
TODO 1
To deploy a Kubeflow cluster
in your GCP project, use the AI Platform pipelines:
Go to AI Platform Pipelines in the GCP Console.
Create a new instance
Hit "Configure"
Check the box "Allow access to the following Cloud APIs"
Hit "Create New Cluster"
Hit "Deploy"
When the cluster is ready, go back to the AI Platform pipelines page and click on "SETTINGS" entry for your cluster.
This will bring up a pop up with code snippets on how to access the cluster
programmatically.
Copy the "host" entry and set the "HOST" variable below with that.
End of explanation
"""
client = kfp.Client(host=HOST)
"""
Explanation: Create an experiment
TODO 2
We will start by creating a Kubeflow client to pilot the Kubeflow cluster:
End of explanation
"""
client.list_experiments()
"""
Explanation: Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single "Default" experiment:
End of explanation
"""
exp = client.create_experiment(name='taxifare')
"""
Explanation: Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline:
End of explanation
"""
client.list_experiments()
"""
Explanation: Let's make sure the experiment has been created correctly:
End of explanation
"""
# Builds the taxifare trainer container in case you skipped the optional part of lab 1
!taxifare/scripts/build.sh
# Pushes the taxifare trainer container to gcr/io
!taxifare/scripts/push.sh
# Builds the KF component containers and push them to gcr/io
!cd pipelines && make components
"""
Explanation: Packaging your code into Kubeflow components
We have packaged our taxifare ml pipeline into three components:
* ./components/bq2gcs that creates the training and evaluation data from BigQuery and exports it to GCS
* ./components/trainjob that launches the training container on AI-platform and exports the model
* ./components/deploymodel that deploys the trained model to AI-platform as a REST API
Each of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab.
If you inspect the code in these folders, you'll notice that the main.py or main.sh files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI-platform, etc.). The last line in the Dockerfile tells you that these files are executed when the container is run.
So we just packaged our ml code into light container images for reproducibility.
We have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project:
End of explanation
"""
%%writefile bq2gcs.yaml
name: bq2gcs
description: |
This component creates the training and
validation datasets as BiqQuery tables and export
them into a Google Cloud Storage bucket at
gs://<BUCKET>/taxifare/data.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-bq2gcs
args: ["--bucket", {inputValue: Input Bucket}]
%%writefile trainjob.yaml
name: trainjob
description: |
This component trains a model to predict that taxi fare in NY.
It takes as argument a GCS bucket and expects its training and
eval data to be at gs://<BUCKET>/taxifare/data/ and will export
the trained model at gs://<BUCKET>/taxifare/model/.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-trainjob
args: [{inputValue: Input Bucket}]
%%writefile deploymodel.yaml
name: deploymodel
description: |
This component deploys a trained taxifare model on GCP as taxifare:dnn.
It takes as argument a GCS bucket and expects the model to deploy
to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-deploymodel
args: [{inputValue: Input Bucket}]
"""
Explanation: Now that the container images are pushed to the registry in your project, we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially to
* describing what arguments Kubeflow needs to pass to the containers when it runs them
* telling Kubeflow where to fetch the corresponding Docker images
In the cells below, we have three of these "Kubeflow component description files", one for each of our components.
TODO 3
IMPORTANT: Modify the image URI in the cell
below to reflect that you pushed the images into the gcr.io associated with your project.
End of explanation
"""
# TODO 3
PIPELINE_TAR = 'taxifare.tar.gz'
BQ2GCS_YAML = './bq2gcs.yaml'
TRAINJOB_YAML = './trainjob.yaml'
DEPLOYMODEL_YAML = './deploymodel.yaml'
@dsl.pipeline(
name='Taxifare',
description='Train a ml model to predict the taxi fare in NY')
def pipeline(gcs_bucket_name='<bucket where data and model will be exported>'):
bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)
bq2gcs = bq2gcs_op(
input_bucket=gcs_bucket_name,
)
trainjob_op = comp.load_component_from_file(TRAINJOB_YAML)
trainjob = trainjob_op(
input_bucket=gcs_bucket_name,
)
deploymodel_op = comp.load_component_from_file(DEPLOYMODEL_YAML)
deploymodel = deploymodel_op(
input_bucket=gcs_bucket_name,
)
trainjob.after(bq2gcs)
deploymodel.after(trainjob)
"""
Explanation: Create a Kubeflow pipeline
The code below creates a kubeflow pipeline by decorating a regular function with the
@dsl.pipeline decorator. Now the arguments of this decorated function will be
the input parameters of the Kubeflow pipeline.
Inside the function, we describe the pipeline by
* loading the yaml component files we created above into a Kubeflow op
* specifying the order into which the Kubeflow ops should be run
End of explanation
"""
compiler.Compiler().compile(pipeline, PIPELINE_TAR)
ls $PIPELINE_TAR
"""
Explanation: The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be uploaded to the Kubeflow cluster either from the UI or programmatically, as we will do below:
End of explanation
"""
# TODO 4
run = client.run_pipeline(
experiment_id=exp.id,
job_name='taxifare',
pipeline_package_path='taxifare.tar.gz',
params={
'gcs_bucket_name': BUCKET,
},
)
"""
Explanation: If you untar and unzip this pipeline artifact, you'll see that the compiler has transformed the
Python description of the pipeline into a yaml description!
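For example (a minimal sketch; the exact name of the yaml file inside the archive may differ):
```python
# Inspect the compiled pipeline artifact
!tar -xzf taxifare.tar.gz
!head -n 20 *.yaml
```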
Now let's feed Kubeflow with our pipeline and run it using our client:
End of explanation
"""
|
garth-wells/IA-maths-Jupyter
|
Lecture02.ipynb
|
mit
|
from sympy import *
# This initialises pretty printing
init_printing()
from IPython.display import display
# This command makes plots appear inside the browser window
%matplotlib inline
"""
Explanation: Lecture 2: second-order ordinary differential equations
We now look at solving second-order ordinary differential equations using a computer algebra system.
To use SymPy, we first need to import it and call init_printing() to get nicely typeset equations:
End of explanation
"""
t, m, lmbda, k = symbols("t m lambda k")
y = Function("y")
"""
Explanation: Mass-spring-damper system
The differential equation that governs an unforced, single degree-of-freedom mass-spring-damper system is
$$
m \frac{d^{2}y}{dt^{2}} + \lambda \frac{dy}{dt} + ky = 0
$$
To solve this problem using SymPy, we first define the symbols $t$ (time), $m$ (mass), $\lambda$ (damper coefficient) and $k$ (spring stiffness), and the function $y$ (displacement):
End of explanation
"""
eqn = Eq(m*Derivative(y(t), t, t) + lmbda*Derivative(y(t), t) + k*y(t), 0)
display(eqn)
"""
Explanation: Note that we mis-spell $\lambda$ as lmbda because lambda is a protected keyword in Python.
Next, we define the differential equation, and print it to the screen:
End of explanation
"""
print("This order of the ODE is: {}".format(ode_order(eqn, y(t))))
"""
Explanation: Checking the order of the ODE:
End of explanation
"""
print("Properties of the ODE are: {}".format(classify_ode(eqn)))
"""
Explanation: and now classifying the ODE:
End of explanation
"""
y = dsolve(eqn, y(t))
display(y)
"""
Explanation: we see as expected that the equation is linear, constant coefficient, homogeneous and second order.
The dsolve function solves the differential equation:
End of explanation
"""
y = Function("y")
x = symbols("x")
eqn = Eq(Derivative(y(x), x, x) + 2*Derivative(y(x), x) - 3*y(x), 0)
display(eqn)
"""
Explanation: The solution looks very complicated because we have not specified values for the constants $m$, $\lambda$ and $k$. The nature of the solution depends heavily on the relative values of the coefficients, as we will see later. We have four constants because in the most general case the solution is complex, with two complex constants having four real coefficients.
Note that the solution is made up of exponential functions and sinusoidal functions. This is typical of second-order ODEs.
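For example (a minimal sketch with arbitrarily chosen coefficients), substituting the underdamped case $m = 1$, $\lambda = 1$, $k = 10$ yields a much simpler decaying oscillation:
```python
# Solve the same ODE with concrete, underdamped coefficients (m=1, lambda=1, k=10)
yf = Function("y")
eqn_num = Eq(Derivative(yf(t), t, t) + Derivative(yf(t), t) + 10*yf(t), 0)
display(dsolve(eqn_num, yf(t)))
```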
Second order, constant coefficient equation
We'll now solve
$$
\frac{d^{2}y}{dx^{2}} + 2 \frac{dy}{dx} - 3 y = 0
$$
The solution for this problem will appear simpler because we have concrete values for the coefficients.
Entering the differential equation:
End of explanation
"""
y1 = dsolve(eqn)
display(y1)
"""
Explanation: Solving this equation,
End of explanation
"""
eqn = Eq(lmbda**2 + 2*lmbda -3, 0)
display(eqn)
"""
Explanation: which is the general solution. As expected for a second-order equation, there are two constants.
Note that the general solution is of the form
$$
y = C_{1} e^{\lambda_{1} x} + C_{2} e^{\lambda_{2} x}
$$
The constants $\lambda_{1}$ and $\lambda_{2}$ are roots of the *characteristic* equation
$$
\lambda^{2} + 2\lambda - 3 = 0
$$
This quadratic equation is trivial to solve, but for completeness we'll look at how to solve it using SymPy. We first define the quadratic equation:
End of explanation
"""
solve(eqn)
"""
Explanation: and then compute the roots:
End of explanation
"""
|
linamnt/studyGroup
|
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
|
apache-2.0
|
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: Simulating Grover's Search Algorithm with 2 Qubits
End of explanation
"""
zero = np.matrix([[1],[0]]);
one = np.matrix([[0],[1]]);
psi = np.kron(zero,zero);
print(psi)
"""
Explanation: Define the zero and one vectors
Define the initial state $\psi$
End of explanation
"""
Id = np.matrix([[1,0],[0,1]]);
X = np.matrix([[0,1],[1,0]]);
Z = np.matrix([[1,0],[0,-1]]);
H = np.sqrt(0.5) * np.matrix([[1,1],[1,-1]]);
CNOT = np.matrix([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]);
CZ = np.kron(Id,H).dot(CNOT).dot(np.kron(Id,H));
print(CZ)
"""
Explanation: Define the gates we will use:
$
\text{Id} = \begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix},
\quad
X = \begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix},
\quad
Z = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix},
\quad
H = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix},
\quad
\text{CNOT} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix},
\quad
CZ = (\text{Id} \otimes H) \text{ CNOT } (\text{Id} \otimes H)
$
End of explanation
"""
oracle = np.kron(Z,Id).dot(CZ);
print(oracle)
"""
Explanation: Define the oracle for Grover's algorithm (take search answer to be "10")
$
\text{oracle} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
= (Z \otimes \text{Id}) CZ
$
Use different combinations of $Z \otimes \text{Id}$ and $\text{Id} \otimes Z$ to change where the search answer is.
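For instance (a minimal sketch reusing the gates defined above), oracles that mark "01" or "11" instead:
```python
# Oracles for other search answers
oracle_01 = np.kron(Id, Z).dot(CZ)   # flips the sign of |01>
oracle_11 = CZ                       # flips the sign of |11>
print(np.round(oracle_01))
print(np.round(oracle_11))
```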
End of explanation
"""
psi0 = np.kron(H,H).dot(psi);
psi1 = oracle.dot(psi0);
print(psi1)
"""
Explanation: Act the H gates on the input vector and apply the oracle
End of explanation
"""
print(np.multiply(psi1,psi1))
"""
Explanation: Remember that when we measure, the result ("00", "01", "10", "11") is chosen randomly with probabilities given by the squared vector elements.
End of explanation
"""
W = np.kron(H,H).dot(np.kron(Z,Z)).dot(CZ).dot(np.kron(H,H));
print(W)
psif = W.dot(psi1);
print(np.multiply(psif,psif))
x = [0,1,2,3];
xb = [0.25,1.25,2.25,3.25];
labels=['00', '01', '10', '11'];
plt.axis([-0.5,3.5,-1.25,1.25]);
plt.xticks(x,labels);
plt.bar(x, np.ravel(psi0), 1/1.5, color="red");
plt.bar(xb, np.ravel(np.multiply(psi0,psi0)), 1/2., color="blue");
labels=['00', '01', '10', '11'];
plt.axis([-0.5,3.5,-1.25,1.25]);
plt.xticks(x,labels);
plt.bar(x, np.ravel(psi1), 1/1.5, color="red");
plt.bar(xb, np.ravel(np.multiply(psi1,psi1)), 1/2., color="blue");
labels=['00', '01', '10', '11'];
plt.axis([-0.5,3.5,-1.25,1.25]);
plt.xticks(x,labels);
plt.bar(x, np.ravel(psif), 1/1.5, color="red");
plt.bar(xb, np.ravel(np.multiply(psif,psif)), 1/2., color="blue");
"""
Explanation: There is no difference between any of the probabilities. It's still just a 25% chance of getting the right answer.
We need some more gates after the oracle, before measuring, to converge on the right answer.
These gates do the operation $W = \frac{1}{2}\begin{pmatrix}
-1 & 1 & 1 & 1 \\
1 & -1 & 1 & 1 \\
1 & 1 & -1 & 1 \\
1 & 1 & 1 & -1
\end{pmatrix}
=
(H \otimes H)(Z \otimes Z) CZ (H \otimes H)
$
Notice that if the matrix W is multiplied by the vector after the oracle, W $\frac{1}{2}\begin{pmatrix}
1 \\
1 \\
-1 \\
1
\end{pmatrix}
= \begin{pmatrix}
0 \\
0 \\
1 \\
0
\end{pmatrix} $,
every vector element shrinks to zero except the correct answer element, whose amplitude grows to one. This would also be true if we had chosen a different location for the search result originally.
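Equivalently, $W$ is the "inversion about the mean" (diffusion) operator $2|s\rangle\langle s| - I$, where $|s\rangle = (H \otimes H)|00\rangle$ is the uniform superposition; a minimal numerical check:
```python
# Check that W equals the inversion-about-the-mean operator 2|s><s| - I
s = np.kron(H, H).dot(np.kron(zero, zero))           # uniform superposition state
W_alt = 2 * s.dot(s.T) - np.identity(4)
W_gates = np.kron(H, H).dot(np.kron(Z, Z)).dot(CZ).dot(np.kron(H, H))
print(np.allclose(W_alt, W_gates))                   # expected: True
```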
End of explanation
"""
|
alshedivat/tensorflow
|
tensorflow/contrib/autograph/examples/notebooks/dev_summit_2018_demo.ipynb
|
apache-2.0
|
# Install TensorFlow; note that Colab notebooks run remotely, on virtual
# instances provided by Google.
!pip install -U -q tf-nightly
import os
import time
import tensorflow as tf
from tensorflow.contrib import autograph
import matplotlib.pyplot as plt
import numpy as np
import six
from google.colab import widgets
"""
Explanation: Experimental: TF AutoGraph
TensorFlow Dev Summit, 2018.
This interactive notebook demonstrates AutoGraph, an experimental source-code transformation library to automatically convert Python, TensorFlow and NumPy code to TensorFlow graphs.
Note: this is pre-alpha software! The notebook works best with Python 2, for now.
Table of Contents
Write Eager code that is fast and scalable.
Case study: complex control flow.
Case study: training MNIST with Keras.
Case study: building an RNN.
End of explanation
"""
def g(x):
if x > 0:
x = x * x
else:
x = 0
return x
"""
Explanation: 1. Write Eager code that is fast and scalable
TF.Eager gives you more flexibility while coding, but at the cost of losing the benefits of TensorFlow graphs. For example, Eager does not currently support distributed training, exporting models, and a variety of memory and computation optimizations.
AutoGraph gives you the best of both worlds: you can write your code in an Eager style, and we will automatically transform it into the equivalent TF graph code. The graph code can be executed eagerly (as a single op), included as part of a larger graph, or exported.
For example, AutoGraph can convert a function like this:
End of explanation
"""
print(autograph.to_code(g))
"""
Explanation: ... into a TF graph-building function:
End of explanation
"""
tf_g = autograph.to_graph(g)
with tf.Graph().as_default():
g_ops = tf_g(tf.constant(9))
with tf.Session() as sess:
tf_g_result = sess.run(g_ops)
print('g(9) = %s' % g(9))
print('tf_g(9) = %s' % tf_g_result)
"""
Explanation: You can then use the converted function as you would any regular TF op -- you can pass Tensor arguments and it will return Tensors:
End of explanation
"""
def sum_even(numbers):
s = 0
for n in numbers:
if n % 2 > 0:
continue
s += n
return s
tf_sum_even = autograph.to_graph(sum_even)
with tf.Graph().as_default():
with tf.Session() as sess:
result = sess.run(tf_sum_even(tf.constant([10, 12, 15, 20])))
print('Sum of even numbers: %s' % result)
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(sum_even))
"""
Explanation: 2. Case study: complex control flow
Autograph can convert a large subset of the Python language into graph-equivalent code, and we're adding new supported language features all the time. In this section, we'll give you a taste of some of the functionality in AutoGraph.
AutoGraph will automatically convert most Python control flow statements into their graph equivalent.
We support common statements like while, for, if, break, return and more. You can even nest them as much as you like. Imagine trying to write the graph version of this code by hand:
End of explanation
"""
def f(x):
assert x != 0, 'Do not pass zero!'
return x * x
tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
with tf.Session() as sess:
try:
print(sess.run(tf_f(tf.constant(0))))
except tf.errors.InvalidArgumentError as e:
print('Got error message: %s' % e.message)
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(f))
"""
Explanation: Try replacing the continue in the above code with break -- Autograph supports that as well!
The Python code above is much more readable than the matching graph code. Autograph takes care of tediously converting every piece of Python code into the matching TensorFlow graph version for you, so that you can quickly write maintainable code, but still benefit from the optimizations and deployment benefits of graphs.
Let's try some other useful Python constructs, like print and assert. We automatically convert Python assert statements into the equivalent tf.Assert code.
End of explanation
"""
def print_sign(n):
if n >= 0:
print(n, 'is positive!')
else:
print(n, 'is negative!')
return n
tf_print_sign = autograph.to_graph(print_sign)
with tf.Graph().as_default():
with tf.Session() as sess:
sess.run(tf_print_sign(tf.constant(1)))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(print_sign))
"""
Explanation: You can also use print functions in-graph:
End of explanation
"""
def f(n):
numbers = []
# We ask you to tell us about the element dtype.
autograph.set_element_type(numbers, tf.int32)
for i in range(n):
numbers.append(i)
return autograph.stack(numbers) # Stack the list so that it can be used as a Tensor
tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
with tf.Session() as sess:
print(sess.run(tf_f(tf.constant(5))))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(f))
"""
Explanation: Appending to lists also works, with a few modifications:
End of explanation
"""
def print_primes(n):
"""Returns all the prime numbers less than n."""
assert n > 0
primes = []
autograph.set_element_type(primes, tf.int32)
for i in range(2, n):
is_prime = True
for k in range(2, i):
if i % k == 0:
is_prime = False
break
if not is_prime:
continue
primes.append(i)
all_primes = autograph.stack(primes)
print('The prime numbers less than', n, 'are:')
print(all_primes)
return tf.no_op()
tf_print_primes = autograph.to_graph(print_primes)
with tf.Graph().as_default():
with tf.Session() as sess:
n = tf.constant(50)
sess.run(tf_print_primes(n))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(print_primes))
"""
Explanation: And all of these functionalities, and more, can be composed into more complicated code:
End of explanation
"""
import gzip
import shutil
from six.moves import urllib
def download(directory, filename):
filepath = os.path.join(directory, filename)
if tf.gfile.Exists(filepath):
return filepath
if not tf.gfile.Exists(directory):
tf.gfile.MakeDirs(directory)
url = 'https://storage.googleapis.com/cvdf-datasets/mnist/' + filename + '.gz'
zipped_filepath = filepath + '.gz'
print('Downloading %s to %s' % (url, zipped_filepath))
urllib.request.urlretrieve(url, zipped_filepath)
with gzip.open(zipped_filepath, 'rb') as f_in, open(filepath, 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
os.remove(zipped_filepath)
return filepath
def dataset(directory, images_file, labels_file):
images_file = download(directory, images_file)
labels_file = download(directory, labels_file)
def decode_image(image):
# Normalize from [0, 255] to [0.0, 1.0]
image = tf.decode_raw(image, tf.uint8)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [784])
return image / 255.0
def decode_label(label):
label = tf.decode_raw(label, tf.uint8)
label = tf.reshape(label, [])
return tf.to_int32(label)
images = tf.data.FixedLengthRecordDataset(
images_file, 28 * 28, header_bytes=16).map(decode_image)
labels = tf.data.FixedLengthRecordDataset(
labels_file, 1, header_bytes=8).map(decode_label)
return tf.data.Dataset.zip((images, labels))
def mnist_train(directory):
return dataset(directory, 'train-images-idx3-ubyte',
'train-labels-idx1-ubyte')
def mnist_test(directory):
return dataset(directory, 't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte')
"""
Explanation: 3. Case study: training MNIST with Keras
As we've seen, writing control flow in AutoGraph is easy. So running a training loop in a graph should be easy as well!
Here, we show an example of such a training loop for a simple Keras model that trains on MNIST.
End of explanation
"""
def mlp_model(input_shape):
model = tf.keras.Sequential((
tf.keras.layers.Dense(100, activation='relu', input_shape=input_shape),
tf.keras.layers.Dense(100, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax'),
))
model.build()
return model
"""
Explanation: First, we'll define a small three-layer neural network using the Keras API
End of explanation
"""
def predict(m, x, y):
y_p = m(x)
losses = tf.keras.losses.categorical_crossentropy(y, y_p)
l = tf.reduce_mean(losses)
accuracies = tf.keras.metrics.categorical_accuracy(y, y_p)
accuracy = tf.reduce_mean(accuracies)
return l, accuracy
"""
Explanation: Let's connect the model definition (here abbreviated as m) to a loss function, so that we can train our model.
End of explanation
"""
def fit(m, x, y, opt):
l, accuracy = predict(m, x, y)
opt.minimize(l)
return l, accuracy
"""
Explanation: Now the final piece of the problem specification (before loading data, and clicking everything together) is backpropagating the loss through the model, and optimizing the weights using the gradient.
End of explanation
"""
def setup_mnist_data(is_training, hp, batch_size):
if is_training:
ds = mnist_train('/tmp/autograph_mnist_data')
ds = ds.shuffle(batch_size * 10)
else:
ds = mnist_test('/tmp/autograph_mnist_data')
ds = ds.repeat()
ds = ds.batch(batch_size)
return ds
def get_next_batch(ds):
itr = ds.make_one_shot_iterator()
image, label = itr.get_next()
x = tf.to_float(tf.reshape(image, (-1, 28 * 28)))
y = tf.one_hot(tf.squeeze(label), 10)
return x, y
"""
Explanation: These are some utility functions to download data and generate batches for training
End of explanation
"""
def train(train_ds, test_ds, hp):
m = mlp_model((28 * 28,))
opt = tf.train.MomentumOptimizer(hp.learning_rate, 0.9)
train_losses = []
autograph.set_element_type(train_losses, tf.float32)
test_losses = []
autograph.set_element_type(test_losses, tf.float32)
train_accuracies = []
autograph.set_element_type(train_accuracies, tf.float32)
test_accuracies = []
autograph.set_element_type(test_accuracies, tf.float32)
i = 0
while i < hp.max_steps:
train_x, train_y = get_next_batch(train_ds)
test_x, test_y = get_next_batch(test_ds)
step_train_loss, step_train_accuracy = fit(m, train_x, train_y, opt)
step_test_loss, step_test_accuracy = predict(m, test_x, test_y)
if i % (hp.max_steps // 10) == 0:
print('Step', i, 'train loss:', step_train_loss, 'test loss:',
step_test_loss, 'train accuracy:', step_train_accuracy,
'test accuracy:', step_test_accuracy)
train_losses.append(step_train_loss)
test_losses.append(step_test_loss)
train_accuracies.append(step_train_accuracy)
test_accuracies.append(step_test_accuracy)
i += 1
return (autograph.stack(train_losses), autograph.stack(test_losses),
autograph.stack(train_accuracies),
autograph.stack(test_accuracies))
"""
Explanation: This function specifies the main training loop. We instantiate the model (using the code above), instantiate an optimizer (here we'll use SGD with momentum, nothing too fancy), and we'll instantiate some lists to keep track of training and test loss and accuracy over time.
In the loop inside this function, we'll grab a batch of data, apply an update to the weights of our model to improve its performance, and then record its current training loss and accuracy. Every so often, we'll log some information about training as well.
End of explanation
"""
def plot(train, test, label):
plt.title('MNIST model %s' % label)
plt.plot(train, label='train %s' % label)
plt.plot(test, label='test %s' % label)
plt.legend()
plt.xlabel('Training step')
plt.ylabel(label.capitalize())
plt.show()
with tf.Graph().as_default():
hp = tf.contrib.training.HParams(
learning_rate=0.05,
max_steps=tf.constant(500),
)
train_ds = setup_mnist_data(True, hp, 50)
test_ds = setup_mnist_data(False, hp, 1000)
tf_train = autograph.to_graph(train)
all_losses = tf_train(train_ds, test_ds, hp)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
(train_losses, test_losses, train_accuracies,
test_accuracies) = sess.run(all_losses)
plot(train_losses, test_losses, 'loss')
plot(train_accuracies, test_accuracies, 'accuracy')
"""
Explanation: Everything is ready to go, let's train the model and plot its performance!
End of explanation
"""
def parse(line):
"""Parses a line from the colors dataset.
Args:
line: A comma-separated string containing four items:
color_name, red, green, and blue, representing the name and
respectively the RGB value of the color, as an integer
between 0 and 255.
Returns:
A tuple of three tensors (rgb, chars, length), of shapes: (batch_size, 3),
(batch_size, max_sequence_length, 256) and respectively (batch_size).
"""
items = tf.string_split(tf.expand_dims(line, 0), ",").values
rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
color_name = items[0]
chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)
length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)
return rgb, chars, length
def maybe_download(filename, work_directory, source_url):
"""Downloads the data from source url."""
if not tf.gfile.Exists(work_directory):
tf.gfile.MakeDirs(work_directory)
filepath = os.path.join(work_directory, filename)
if not tf.gfile.Exists(filepath):
temp_file_name, _ = six.moves.urllib.request.urlretrieve(source_url)
tf.gfile.Copy(temp_file_name, filepath)
with tf.gfile.GFile(filepath) as f:
size = f.size()
print('Successfully downloaded', filename, size, 'bytes.')
return filepath
def load_dataset(data_dir, url, batch_size, training=True):
"""Loads the colors data at path into a tf.PaddedDataset."""
path = maybe_download(os.path.basename(url), data_dir, url)
dataset = tf.data.TextLineDataset(path)
dataset = dataset.skip(1)
dataset = dataset.map(parse)
dataset = dataset.cache()
dataset = dataset.repeat()
if training:
dataset = dataset.shuffle(buffer_size=3000)
dataset = dataset.padded_batch(batch_size, padded_shapes=((None,), (None, None), ()))
return dataset
train_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/train.csv"
test_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/test.csv"
data_dir = "tmp/rnn/data"
"""
Explanation: 4. Case study: building an RNN
In this exercise we build and train a model similar to the RNNColorbot model that was used in the main Eager notebook. The model is adapted for converting and training in graph mode.
To get started, we load the colorbot dataset. The code is identical to that used in the other exercise and its details are unimportant.
End of explanation
"""
def model_components():
lower_cell = tf.contrib.rnn.LSTMBlockCell(256)
lower_cell.build(tf.TensorShape((None, 256)))
upper_cell = tf.contrib.rnn.LSTMBlockCell(128)
upper_cell.build(tf.TensorShape((None, 256)))
relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)
relu_layer.build(tf.TensorShape((None, 128)))
return lower_cell, upper_cell, relu_layer
def rnn_layer(chars, cell, batch_size, training):
"""A simple RNN layer.
Args:
chars: A Tensor of shape (max_sequence_length, batch_size, input_size)
cell: An object of type tf.contrib.rnn.LSTMBlockCell
batch_size: Int, the batch size to use
training: Boolean, whether the layer is used for training
Returns:
A Tensor of shape (max_sequence_length, batch_size, output_size).
"""
hidden_outputs = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
state, output = cell.zero_state(batch_size, tf.float32)
initial_state_shape = state.shape
initial_output_shape = output.shape
n = tf.shape(chars)[0]
i = 0
while i < n:
ch = chars[i]
cell_output, (state, output) = cell.call(ch, (state, output))
hidden_outputs.append(cell_output)
i += 1
hidden_outputs = autograph.stack(hidden_outputs)
if training:
hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)
return hidden_outputs
def model(inputs, lower_cell, upper_cell, relu_layer, batch_size, training):
"""RNNColorbot model.
The model consists of two RNN layers (made by lower_cell and upper_cell),
followed by a fully connected layer with ReLU activation.
Args:
inputs: A tuple (chars, length)
lower_cell: An object of type tf.contrib.rnn.LSTMBlockCell
upper_cell: An object of type tf.contrib.rnn.LSTMBlockCell
relu_layer: An object of type tf.layers.Dense
batch_size: Int, the batch size to use
training: Boolean, whether the layer is used for training
Returns:
A Tensor of shape (batch_size, 3) - the model predictions.
"""
(chars, length) = inputs
chars_time_major = tf.transpose(chars, (1, 0, 2))
chars_time_major.set_shape((None, batch_size, 256))
hidden_outputs = rnn_layer(chars_time_major, lower_cell, batch_size, training)
final_outputs = rnn_layer(hidden_outputs, upper_cell, batch_size, training)
# Grab just the end-of-sequence from each output.
indices = tf.stack((length - 1, range(batch_size)), axis=1)
sequence_ends = tf.gather_nd(final_outputs, indices)
sequence_ends.set_shape((batch_size, 128))
return relu_layer(sequence_ends)
def loss_fn(labels, predictions):
return tf.reduce_mean((predictions - labels) ** 2)
"""
Explanation: Next, we set up the RNNColorbot model, which is very similar to the one we used in the main exercise.
Autograph doesn't fully support classes yet (but it will soon!), so we'll write the model using simple functions.
End of explanation
"""
def train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
iterator = train_data.make_one_shot_iterator()
step = 0
while step < num_steps:
labels, chars, sequence_length = iterator.get_next()
predictions = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, batch_size, training=True)
loss = loss_fn(labels, predictions)
optimizer.minimize(loss)
if step % (num_steps // 10) == 0:
print('Step', step, 'train loss', loss)
step += 1
return step
def test(eval_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
total_loss = 0.0
iterator = eval_data.make_one_shot_iterator()
step = 0
while step < num_steps:
labels, chars, sequence_length = iterator.get_next()
predictions = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, batch_size, training=False)
total_loss += loss_fn(labels, predictions)
step += 1
print('Test loss', total_loss)
return total_loss
def train_model(train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer, train_steps):
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps=tf.constant(train_steps))
test(eval_data, lower_cell, upper_cell, relu_layer, 50, num_steps=tf.constant(2))
print('Colorbot is ready to generate colors!\n\n')
# In graph mode, every op needs to be a dependent of another op.
# Here, we create a no_op that will drive the execution of all other code in
# this function. Autograph will add the necessary control dependencies.
return tf.no_op()
"""
Explanation: The train and test functions are also similar to the ones used in the Eager notebook. Since the network requires a fixed batch size, we'll train in a single shot, rather than by epoch.
End of explanation
"""
@autograph.do_not_convert(run_as=autograph.RunMode.PY_FUNC)
def draw_prediction(color_name, pred):
pred = pred * 255
pred = pred.astype(np.uint8)
plt.axis('off')
plt.imshow(pred)
plt.title(color_name)
plt.show()
def inference(color_name, lower_cell, upper_cell, relu_layer):
_, chars, sequence_length = parse(color_name)
chars = tf.expand_dims(chars, 0)
sequence_length = tf.expand_dims(sequence_length, 0)
pred = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, 1, training=False)
pred = tf.minimum(pred, 1.0)
pred = tf.expand_dims(pred, 0)
draw_prediction(color_name, pred)
# Create an op that will drive the entire function.
return tf.no_op()
"""
Explanation: Finally, we add code to run inference on a single input, which we'll read from user input.
Note the do_not_convert annotation that lets us disable conversion for certain functions and run them as a py_func instead, so you can still call them from compiled code.
End of explanation
"""
def run_input_loop(sess, inference_ops, color_name_placeholder):
"""Helper function that reads from input and calls the inference ops in a loop."""
tb = widgets.TabBar(["RNN Colorbot"])
while True:
with tb.output_to(0):
try:
color_name = six.moves.input("Give me a color name (or press 'enter' to exit): ")
except (EOFError, KeyboardInterrupt):
break
if not color_name:
break
with tb.output_to(0):
tb.clear_tab()
sess.run(inference_ops, {color_name_placeholder: color_name})
plt.show()
with tf.Graph().as_default():
# Read the data.
batch_size = 64
train_data = load_dataset(data_dir, train_url, batch_size)
eval_data = load_dataset(data_dir, test_url, 50, training=False)
# Create the model components.
lower_cell, upper_cell, relu_layer = model_components()
# Create the helper placeholder for inference.
color_name_placeholder = tf.placeholder(tf.string, shape=())
# Compile the train / test code.
tf_train_model = autograph.to_graph(train_model)
train_model_ops = tf_train_model(
train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer, train_steps=100)
# Compile the inference code.
tf_inference = autograph.to_graph(inference)
inference_ops = tf_inference(color_name_placeholder, lower_cell, upper_cell, relu_layer)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Run training and testing.
sess.run(train_model_ops)
# Run the inference loop.
run_input_loop(sess, inference_ops, color_name_placeholder)
"""
Explanation: Finally, we put everything together.
Note that the entire training and testing code is all compiled into a single op (tf_train_model) that you only execute once! We also still use a sess.run loop for the inference part, because that requires keyboard input.
End of explanation
"""
|
benwaugh/NuffieldProject2016
|
notebooks/ROOTDataAccessExample.ipynb
|
mit
|
import pylab
import matplotlib.pyplot as plt
%matplotlib inline
pylab.rcParams['figure.figsize'] = 12,8
"""
Explanation: Simple test of using ROOT in a Python notebook
Trying to read and process some data from a ROOT file over the network. Using material from
* Example of a Z Analysis ROOT C++ kernel
* ROOT reference guide
Import the usual Matplotlib stuff for plotting histograms etc.
End of explanation
"""
from ROOT import TChain, TFile
"""
Explanation: Import whatever classes we need from ROOT:
End of explanation
"""
data = TChain("mini"); # "mini" is the name of the TTree stored in the data files
data.Add("http://atlas-opendata.web.cern.ch/atlas-opendata/release/samples/Data/DataMuons.root")
"""
Explanation: Create a "chain" of files (but just one file for now):
End of explanation
"""
n_events = data.GetEntries()
print(n_events)
"""
Explanation: Count the number of events in the data:
End of explanation
"""
leaves = data.GetListOfLeaves()
for branch in leaves:
print branch.GetName()
"""
Explanation: This is the list of "leaves" in the "tree", corresponding to bits of data stored for each event:
End of explanation
"""
data.GetEntry(0)
"""
Explanation: This is how to read the first event into memory:
End of explanation
"""
num_leptons = data.lep_n # number of identified leptons in the event
pt_lepton = data.lep_pt[0] # transverse momentum of the first lepton
print("Number of leptons = {}".format(num_leptons))
print("Pt of first lepton = {}".format(pt_lepton))
"""
Explanation: Let's look at some of the data from the first event. There is a list of variable names in the ATLAS Open Data documentation on the web. It doesn't state the units used, but it looks like momenta are in MeV. The names on the web also don't always match exactly the names in the data, so a bit of guesswork is required!
End of explanation
"""
met = []
for event_num in xrange(1000):
data.GetEntry(event_num)
met.append(data.met_et)
plt.hist(met)
plt.xlabel('Missing Et [MeV]')
plt.ylabel('Events per bin')
"""
Explanation: Let's construct a histogram of the missing transverse energy in each event, but just the first 1000 events for now so we're not waiting too long:
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/guide/keras/custom_callback.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
from tensorflow import keras
"""
Explanation: Writing your own callbacks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/keras/custom_callback"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で実行</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colabで実行</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/custom_callback.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHubでソースを表示</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
</table>
はじめに
コールバックは、トレーニング、評価、推論の間に Keras モデルの動作をカスタマイズするための強力なツールです。例には、TensorBoard でトレーニングの進捗状況や結果を可視化できる tf.keras.callbacks.TensorBoard や、トレーニング中にモデルを定期的に保存できる tf.keras.callbacks.ModelCheckpoint などを含みます。
このガイドでは、Keras コールバックとは何か、それができること、そして独自のコールバックを構築する方法を学ぶことができます。まずは、簡単なコールバックアプリケーションのデモをいくつか紹介します。
Setup
End of explanation
"""
# Define the Keras model to add callbacks to
def get_model():
model = keras.Sequential()
model.add(keras.layers.Dense(1, input_dim=784))
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
loss="mean_squared_error",
metrics=["mean_absolute_error"],
)
return model
"""
Explanation: Keras callbacks overview
All callbacks subclass the keras.callbacks.Callback class, and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on internal states and statistics of the model during training.
You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods:
keras.Model.fit()
keras.Model.evaluate()
keras.Model.predict()
An overview of callback methods
Global methods
on_(train|test|predict)_begin(self, logs=None)
Called at the beginning of fit/evaluate/predict.
on_(train|test|predict)_end(self, logs=None)
Called at the end of fit/evaluate/predict.
Batch-level methods for training/testing/predicting
on_(train|test|predict)_batch_begin(self, batch, logs=None)
Called right before processing a batch during training/testing/predicting.
on_(train|test|predict)_batch_end(self, batch, logs=None)
Called at the end of training/testing/predicting a batch. Within this method, logs is a dict containing the metrics results.
Epoch-level methods (training only)
on_epoch_begin(self, epoch, logs=None)
Called at the beginning of an epoch during training.
on_epoch_end(self, epoch, logs=None)
Called at the end of an epoch during training.
A basic example
Let's take a look at a concrete example. To get started, let's import TensorFlow and define a simple Sequential Keras model:
End of explanation
"""
# Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
# Limit the data to 1000 samples
x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:1000]
y_test = y_test[:1000]
"""
Explanation: Then, load the MNIST data for training and testing from the Keras datasets API.
End of explanation
"""
class CustomCallback(keras.callbacks.Callback):
def on_train_begin(self, logs=None):
keys = list(logs.keys())
print("Starting training; got log keys: {}".format(keys))
def on_train_end(self, logs=None):
keys = list(logs.keys())
print("Stop training; got log keys: {}".format(keys))
def on_epoch_begin(self, epoch, logs=None):
keys = list(logs.keys())
print("Start epoch {} of training; got log keys: {}".format(epoch, keys))
def on_epoch_end(self, epoch, logs=None):
keys = list(logs.keys())
print("End epoch {} of training; got log keys: {}".format(epoch, keys))
def on_test_begin(self, logs=None):
keys = list(logs.keys())
print("Start testing; got log keys: {}".format(keys))
def on_test_end(self, logs=None):
keys = list(logs.keys())
print("Stop testing; got log keys: {}".format(keys))
def on_predict_begin(self, logs=None):
keys = list(logs.keys())
print("Start predicting; got log keys: {}".format(keys))
def on_predict_end(self, logs=None):
keys = list(logs.keys())
print("Stop predicting; got log keys: {}".format(keys))
def on_train_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: start of batch {}; got log keys: {}".format(batch, keys))
def on_train_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: end of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: start of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: end of batch {}; got log keys: {}".format(batch, keys))
"""
Explanation: Now, define a simple custom callback that logs:
When fit/evaluate/predict starts & ends
When each epoch starts & ends
When each training batch starts & ends
When each evaluation (test) batch starts & ends
When each inference (prediction) batch starts & ends
End of explanation
"""
model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=1,
verbose=0,
validation_split=0.5,
callbacks=[CustomCallback()],
)
res = model.evaluate(
x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]
)
res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()])
"""
Explanation: Let's try it out:
End of explanation
"""
class LossAndErrorPrintingCallback(keras.callbacks.Callback):
def on_train_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_test_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_epoch_end(self, epoch, logs=None):
print(
"The average loss for epoch {} is {:7.2f} "
"and mean absolute error is {:7.2f}.".format(
epoch, logs["loss"], logs["mean_absolute_error"]
)
)
model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=2,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
)
res = model.evaluate(
x_test,
y_test,
batch_size=128,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
)
"""
Explanation: Usage of the logs dict
The logs dict contains the loss value, and all the metrics at the end of a batch or epoch. This example includes the loss and mean absolute error.
End of explanation
"""
import numpy as np
class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
"""Stop training when the loss is at its min, i.e. the loss stops decreasing.
Arguments:
patience: Number of epochs to wait after min has been hit. After this
number of no improvement, training stops.
"""
def __init__(self, patience=0):
super(EarlyStoppingAtMinLoss, self).__init__()
self.patience = patience
# best_weights to store the weights at which the minimum loss occurs.
self.best_weights = None
def on_train_begin(self, logs=None):
# The number of epoch it has waited when loss is no longer minimum.
self.wait = 0
# The epoch the training stops at.
self.stopped_epoch = 0
# Initialize the best as infinity.
self.best = np.Inf
def on_epoch_end(self, epoch, logs=None):
current = logs.get("loss")
if np.less(current, self.best):
self.best = current
self.wait = 0
# Record the best weights if current results is better (less).
self.best_weights = self.model.get_weights()
else:
self.wait += 1
if self.wait >= self.patience:
self.stopped_epoch = epoch
self.model.stop_training = True
print("Restoring model weights from the end of the best epoch.")
self.model.set_weights(self.best_weights)
def on_train_end(self, logs=None):
if self.stopped_epoch > 0:
print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=30,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],
)
"""
Explanation: Usage of the self.model attribute
In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference via self.model.
Here are a few of the things you can do with self.model in a callback:
Set self.model.stop_training = True to immediately interrupt training.
Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate.
Save the model at regular intervals (a minimal sketch of this follows below).
Record the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training.
Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time.
etc.
Let's see this in action in a couple of examples.
Examples of Keras callback applications
Early stopping at minimum loss
This first example shows the creation of a Callback that stops training when the minimum of the loss has been reached, by setting the attribute self.model.stop_training (boolean). Optionally, the argument patience lets you specify how many epochs to wait after the local minimum is reached before actually stopping.
tf.keras.callbacks.EarlyStopping provides a more complete and general implementation.
End of explanation
"""
class CustomLearningRateScheduler(keras.callbacks.Callback):
"""Learning rate scheduler which sets the learning rate according to schedule.
Arguments:
schedule: a function that takes an epoch index
(integer, indexed from 0) and current learning rate
as inputs and returns a new learning rate as output (float).
"""
def __init__(self, schedule):
super(CustomLearningRateScheduler, self).__init__()
self.schedule = schedule
def on_epoch_begin(self, epoch, logs=None):
if not hasattr(self.model.optimizer, "lr"):
raise ValueError('Optimizer must have a "lr" attribute.')
# Get the current learning rate from model's optimizer.
lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
# Call schedule function to get the scheduled learning rate.
scheduled_lr = self.schedule(epoch, lr)
# Set the value back to the optimizer before this epoch starts
tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr))
LR_SCHEDULE = [
# (epoch to start, learning rate) tuples
(3, 0.05),
(6, 0.01),
(9, 0.005),
(12, 0.001),
]
def lr_schedule(epoch, lr):
"""Helper function to retrieve the scheduled learning rate based on epoch."""
if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
return lr
for i in range(len(LR_SCHEDULE)):
if epoch == LR_SCHEDULE[i][0]:
return LR_SCHEDULE[i][1]
return lr
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=15,
verbose=0,
callbacks=[
LossAndErrorPrintingCallback(),
CustomLearningRateScheduler(lr_schedule),
],
)
"""
Explanation: Learning rate scheduling
In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during the course of training.
See callbacks.LearningRateScheduler for a more general implementation.
End of explanation
"""
|
jjehl/poppy_education
|
poppy-4dof-arm-mini/poppy_4dof_arm_mini_test.ipynb
|
gpl-2.0
|
import pypot.dynamixel
import time
"""
Explanation: Some tests to check if your setup is running correctly - Using dynamixel XL320 motor
End of explanation
"""
print(pypot.dynamixel.get_available_ports())
"""
Explanation: Low level test
Find the available usb port. The port where USB2AX or USBDynamixel is plug.
End of explanation
"""
dxl_io = pypot.dynamixel.Dxl320IO('COM3', use_sync_read=False)
"""
Explanation: Open a low-level connection to the motors; don't forget to replace 'COM3' if your port is different.
End of explanation
"""
print(dxl_io.scan(range(30)))
"""
Explanation: Find the different motors, which must have different ids. The ids have been set up beforehand with the herborist tool.
End of explanation
"""
%timeit dxl_io.get_present_position((1, 2, 3, 4))
"""
Explanation: A test to check the speed of the communication with your motors. On Windows you may have to change the latency timer of the USBDynamixel driver (see the forum: https://forum.poppy-project.org/t/birth-of-poppy-ergo-jr-and-support-for-low-cost-xl-320-motors/1052/22)
End of explanation
"""
dxl_io.set_goal_position({1: 0})
dxl_io.set_goal_position({2: 0})
dxl_io.set_goal_position({3: 0})
dxl_io.set_goal_position({4: 0})
"""
Explanation: Setting the robot to the 0 position:
End of explanation
"""
dxl_io.close()
"""
Explanation: If you want to close the connection:
End of explanation
"""
from pypot.dynamixel import autodetect_robot
my_robot = autodetect_robot()
for m in my_robot.motors:
m.goal_position = 0.0
my_robot.motors
my_robot.motor_1.goal_position = 0
"""
Explanation: Config and json file
Now, if you want robot-level access and not only motor access, you have to configure your robot.
End of explanation
"""
import json
config = my_robot.to_config()
with open('test.json', 'wb') as f:
json.dump(config, f)
"""
Explanation: You can save the configuration in a file:
End of explanation
"""
my_robot.close()
"""
Explanation: And close the robot:
End of explanation
"""
from pypot.robot import from_json
mini_4dof = from_json('test.json')
mini_4dof.motors
"""
Explanation: You can use your previous json file to instantiate your robot:
End of explanation
"""
for m in mini_4dof.motors:
m.compliant=False
m.goto_position(0,1)
mini_4dof.m4.goto_position(90,0.5)
mini_4dof.m4.goto_position(-90,0.5)
mini_4dof.m3.goto_position(-10,0.5)
mini_4dof.m3.goto_position(90,0.5)
mini_4dof.m4.goto_position(90,0.5)
mini_4dof.m4.goto_position(-90,0.5)
import time
%pylab inline
from pypot.primitive import Primitive
class graph_primitive(Primitive):
def setup(self):
self.m4 = []
self.m3 = []
self.load = []
self.m2 = []
self.t = []
self.temoin=[]
self.a=1
def run(self):
while not self.should_stop():
self.m4.append(mini_4dof.m4.present_position)
self.m3.append(mini_4dof.m3.present_position)
self.m2.append(mini_4dof.m2.present_position)
self.load.append(mini_4dof.m2.present_load)
self.t.append(time.time())
if self.a==1 :
self.a=-1
else :
self.a=1
self.temoin.append(self.a)
time.sleep(0.02)
graph = graph_primitive(mini_4dof)
graph.start()
mini_4dof.m2.goto_position(40,2)
mini_4dof.m4.goto_position(90,2)
mini_4dof.m3.goto_position(130,3,wait=True)
mini_4dof.m2.goto_position(-40,2)
mini_4dof.m4.goto_position(-90,2)
mini_4dof.m3.goto_position(-130,3,wait=True)
mini_4dof.m2.goto_position(40,2)
mini_4dof.m4.goto_position(90,2)
mini_4dof.m3.goto_position(130,3,wait=True)
mini_4dof.m2.goto_position(-40,2)
mini_4dof.m4.goto_position(-90,2)
mini_4dof.m3.goto_position(-130,3,wait=True)
mini_4dof.m2.goto_position(40,2)
mini_4dof.m4.goto_position(90,2)
mini_4dof.m3.goto_position(130,3,wait=True)
mini_4dof.m2.goto_position(-40,2)
mini_4dof.m4.goto_position(-90,2)
mini_4dof.m3.goto_position(-130,3,wait=True)
time.sleep(3)
graph.stop()
figure(1)
plot(graph.t,graph.m3)
xlabel('time seconds')
ylabel('m3 position')
title ('Position of motor')
figure(2)
plot(graph.t,graph.m2)
xlabel('time seconds')
ylabel('m2 position')
title ('Position of motor')
figure(3)
plot(graph.t,graph.load)
xlabel('time seconds')
ylabel('m3 load')
title ('Position of motor')
mini_4dof.m3.goal_position = 20
mini_4dof.close()
"""
Explanation: And make your robot move:
End of explanation
"""
from poppy.creatures import Poppy4dofArmMini
"""
Explanation: Robot class
If you have correctly set up your robot as a Poppy creature, you just have to import the class and instantiate your robot:
End of explanation
"""
import pip
pip.get_installed_distributions()
installed_poppy_creatures_packages()
poppy = Poppy4dofArmMini()
poppy.motors
from pypot.primitive import Primitive
class graph_primitive(Primitive):
def setup(self):
self.m4 = []
self.m3 = []
self.load = []
self.m2 = []
self.t = []
self.temoin=[]
self.a=1
def run(self):
while not self.should_stop():
self.m4.append(poppy.m4.present_position)
self.m3.append(poppy.m3.present_position)
self.m2.append(poppy.m2.present_position)
self.load.append(poppy.m2.present_load)
self.t.append(time.time())
if self.a==1 :
self.a=-1
else :
self.a=1
self.temoin.append(self.a)
time.sleep(0.02)
graph = graph_primitive(poppy)
graph.start()
poppy.m4.goto_position(130,0.5,wait=True)
poppy.m4.goto_position(0,0.5,wait=True)
graph.stop()
figure(1)
plot(graph.t,graph.m4)
xlabel('time seconds')
ylabel('m3 position')
title ('Position of motor')
figure(2)
plot(graph.t,graph.temoin)
xlabel('time seconds')
ylabel('m3 position')
title ('Position of motor')
poppy.close()
"""
Explanation: To know which Poppy creatures are installed on your computer:
To check what is installed on your computer, you have to find the name of your creature.
End of explanation
"""
|
CristinaFoltea/pythonD3
|
IPythonD3.ipynb
|
bsd-2-clause
|
# import requirments
from IPython.display import Image
from IPython.display import display
from IPython.display import HTML
from datetime import *
import json
from copy import *
from pprint import *
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
from ggplot import *
import networkx as nx
from networkx.readwrite import json_graph
#from __future__ import http_server
from BaseHTTPServer import BaseHTTPRequestHandler
from IPython.display import IFrame
import rpy2
%load_ext rpy2.ipython
%R require("ggplot2")
% matplotlib inline
randn = np.random.randn
"""
Explanation: IPython & D3
Let's start with a few techniques for working with data in ipython and then build a d3 network graph.
End of explanation
"""
%%javascript
require.config({
paths: {
//d3: "http://d3js.org/d3.v3.min" //<-- url
d3: 'd3/d3.min.js' //<-- local path
}
});
"""
Explanation: JS with IPython?
The nice thing about IPython is that we can write in almost any language. For example, we can use javascript below and pull in the D3 library.
End of explanation
"""
import json
import networkx as nx
from networkx.readwrite import json_graph
from IPython.display import IFrame
G = nx.barbell_graph(6,3)
# this d3 example uses the name attribute for the mouse-hover value,
# so add a name to each node
for n in G:
G.node[n]['name'] = n
# write json formatted data
d = json_graph.node_link_data(G) # node-link format to serialize
# write json
json.dump(d, open('force/force.json','w'))
# render html inline
IFrame('force/force.html', width=700, height=350)
#print('Or copy all files in force/ to webserver and load force/force.html')
"""
Explanation: Python data | D3 Viz
A basic method is to serialize your results and then render html that pulls in the data. In this example, we save a json file and then load the html doc in an IFrame. We're now using D3 in ipython!
The example below is adapted from:
* Hagberg, A & Schult, D. & Swart, P. Networkx (2011). Github repository, https://github.com/networkx/networkx/tree/master/examples/javascript/force
End of explanation
"""
from IPython.display import Javascript
import numpy as np
mu, sig = 0.05, 0.2
rnd = np.random.normal(loc=mu, scale=sig, size=4)
## Use the variable rnd above in Javascript:
javascript = 'element.append("{}");'.format(str(rnd))
Javascript(javascript)
"""
Explanation: Passing data from IPython to JS
Let's create some random numbers and render them in js (see the stackoverflow explanation and discussion).
End of explanation
"""
from IPython.display import HTML
input_form = """
<div style="background-color:gainsboro; border:solid black; width:300px; padding:20px;">
Name: <input type="text" id="var_name" value="foo"><br>
Value: <input type="text" id="var_value" value="bar"><br>
<button onclick="set_value()">Set Value</button>
</div>
"""
javascript = """
<script type="text/Javascript">
function set_value(){
var var_name = document.getElementById('var_name').value;
var var_value = document.getElementById('var_value').value;
var command = var_name + " = '" + var_value + "'";
console.log("Executing Command: " + command);
var kernel = IPython.notebook.kernel;
kernel.execute(command);
}
</script>
"""
HTML(input_form + javascript)
"""
Explanation: Passing data from JS to IPython
We can also interact with js to define python variables (see this example).
End of explanation
"""
print foo
"""
Explanation: Click "Set Value" then run the cell below.
End of explanation
"""
from pythonD3 import visualize
data = [{'x': 10, 'y': 20, 'r': 15, 'name': 'circle one'},
{'x': 40, 'y': 40, 'r': 5, 'name': 'circle two'},
{'x': 20, 'y': 30, 'r': 8, 'name': 'circle three'},
{'x': 25, 'y': 10, 'r': 10, 'name': 'circle four'}]
visualize.plot_circle(data, id=2)
visualize.plot_chords(id=5)
"""
Explanation: Custom D3 module.
Now we're having fun. The simplicity of this process wins. We can pass data to javascript via a module called visualize that contains an attribute plot_circle, which uses jinja to render our js template. The advantage of using jinja to read our html is apparent: we can pass variables directly from python!
End of explanation
"""
|
d00d/quantNotebooks
|
Notebooks/quantopian_research_public/notebooks/lectures/Introduction_to_Python/notebook.ipynb
|
unlicense
|
# This is a comment
# These lines of code will not change any values
# Anything following the first # is not run as code
"""
Explanation: Introduction to Python
by Maxwell Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
All of the coding that you will do on the Quantopian platform will be in Python. It is also just a good, jack-of-all-trades language to know! Here we will provide you with the basics so that you can feel confident going through our other lectures and understanding what is happening.
Code Comments
A comment is a note made by a programmer in the source code of a program. Its purpose is to clarify the source code and make it easier for people to follow along with what is happening. Anything in a comment is generally ignored when the code is actually run, making comments useful for including explanations and reasoning as well as removing specific lines of code that you may be unsure about. Comments in Python are created by using the pound symbol (# Insert Text Here). Including a # in a line of code will comment out anything that follows it.
End of explanation
"""
""" This is a special string """
"""
Explanation: You may hear text enclosed in triple quotes (""" Insert Text Here """) referred to as multi-line comments, but this is not entirely accurate. This is a special type of string (a data type we will cover), called a docstring, used to explain the purpose of a function.
End of explanation
"""
my_integer = 50
print my_integer, type(my_integer)
"""
Explanation: Make sure you read the comments within each code cell (if they are there). They will provide more real-time explanations of what is going on as you look at each line of code.
Variables
Variables provide names for values in programming. If you want to save a value for later or repeated use, you give the value a name, storing the contents in a variable. Variables in programming work in a fundamentally similar way to variables in algebra, but in Python they can take on various different data types.
The basic variable types that we will cover in this section are integers, floating point numbers, booleans, and strings.
An integer in programming is the same as in mathematics, a round number with no values after the decimal point. We use the built-in print function here to display the values of our variables as well as their types!
End of explanation
"""
one = 1
print One
"""
Explanation: Variables, regardless of type, are assigned by using a single equals sign (=). Variables are case-sensitive, so any variation in the capitalization of a variable name will reference a different variable entirely.
End of explanation
"""
my_float = 1.0
print my_float, type(my_float)
my_float = float(1)
print my_float, type(my_float)
"""
Explanation: A floating point number, or a float is a fancy name for a real number (again as in mathematics). To define a float, we need to either include a decimal point or specify that the value is a float.
End of explanation
"""
my_int = int(3.14159)
print my_int, type(my_int)
"""
Explanation: A variable of type float will not round the number that you store in it, while a variable of type integer will. This makes floats more suitable for mathematical calculations where you want more than just integers.
Note that just as we used the float() function to force a number to be considered a float, we can use the int() function to force a number to be considered an int.
End of explanation
"""
my_string = 'This is a string with single quotes'
print my_string
my_string = "This is a string with double quotes"
print my_string
"""
Explanation: The int() function will also truncate any digits that a number may have after the decimal point!
Strings allow you to include text as a variable to operate on. They are defined using either single quotes ('') or double quotes ("").
End of explanation
"""
my_string = '"Jabberwocky", by Lewis Carroll'
print my_string
my_string = "'Twas brillig, and the slithy toves / Did gyre and gimble in the wabe;"
print my_string
"""
Explanation: Both are allowed so that we can include apostrophes or quotation marks in a string if we so choose.
End of explanation
"""
my_bool = True
print my_bool, type(my_bool)
"""
Explanation: Booleans, or bools, are binary variable types. A bool can only take on one of two values, these being True or False. There is much more to this idea of truth values when it comes to programming, which we cover later in the Logical Operators section of this notebook.
End of explanation
"""
print 'Addition: ', 2 + 2
print 'Subtraction: ', 7 - 4
print 'Multiplication: ', 2 * 5
print 'Division: ', 10 / 2
print 'Exponentiation: ', 3**2
"""
Explanation: There are many more data types that you can assign as variables in Python, but these are the basic ones! We will cover a few more later as we move through this tutorial.
Basic Math
Python has a number of built-in math functions. These can be extended even further by importing the math package or by including any number of other calculation-based packages.
All of the basic arithmetic operations are supported: +, -, /, and *. You can create exponents by using ** and modular arithmetic is introduced with the mod operator, %.
End of explanation
"""
print 'Modulo: ', 15 % 4
"""
Explanation: If you are not familiar with the mod operator, it operates like a remainder function. If we type $15 \ \% \ 4$, it will return the remainder after dividing $15$ by $4$.
End of explanation
"""
first_integer = 4
second_integer = 5
print first_integer * second_integer
"""
Explanation: Mathematical functions also work on variables!
End of explanation
"""
first_integer = 11
second_integer = 3
print first_integer / second_integer
first_number = 11.0
second_number = 3.0
print first_number / second_number
"""
Explanation: Make sure that your variables are floats if you want to have decimal points in your answer. If you perform math exclusively with integers, you get an integer. Including any float in the calculation will make the result a float.
End of explanation
"""
import math
"""
Explanation: Python has a few built-in math functions. The most notable of these are:
abs()
round()
max()
min()
sum()
These functions all act as you would expect, given their names. Calling abs() on a number will return its absolute value. The round() function will round a number to a specified number of decimal points (the default is $0$). Calling max() or min() on a collection of numbers will return, respectively, the maximum or minimum value in the collection. Calling sum() on a collection of numbers will add them all up. A short demonstration of these functions follows below. If you're not familiar with how collections of values in Python work, don't worry! We will cover collections in-depth in the next section.
Additional math functionality can be added in with the math package.
End of explanation
"""
print 'Pi: ', math.pi
print "Euler's Constant: ", math.e
"""
Explanation: The math library adds a long list of new mathematical functions to Python. Feel free to check out the documentation for the full list and details. It includes some mathematical constants
End of explanation
"""
print 'Cosine of pi: ', math.cos(math.pi)
"""
Explanation: As well as some commonly used math functions
End of explanation
"""
my_list = [1, 2, 3]
print my_list
"""
Explanation: Collections
Lists
A list in Python is an ordered collection of objects that can contain any data type. We define a list using brackets ([]).
End of explanation
"""
print my_list[0]
print my_list[2]
"""
Explanation: We can access and index the list by using brackets as well. In order to select an individual element, simply type the list name followed by the index of the item you are looking for in brackets.
End of explanation
"""
print 'The first, second, and third list elements: ', my_list[0], my_list[1], my_list[2]
print 'Accessing outside the list bounds causes an error: ', my_list[3]
"""
Explanation: Indexing in Python starts from $0$. If you have a list of length $n$, the first element of the list is at index $0$, the second element is at index $1$, and so on and so forth. The final element of the list will be at index $n-1$. Be careful! Trying to access a non-existent index will cause an error.
End of explanation
"""
print len(my_list)
"""
Explanation: We can see the number of elements in a list by calling the len() function.
End of explanation
"""
print my_list
my_list[0] = 42
print my_list
"""
Explanation: We can update and change a list by accessing an index and assigning a new value.
End of explanation
"""
my_string = "Strings never change"
my_string[0] = 'Z'
"""
Explanation: This is fundamentally different from how strings are handled. A list is mutable, meaning that you can change a list's elements without changing the list itself. Some data types, like strings, are immutable, meaning you cannot change them at all. Once a string or other immutable data type has been created, it cannot be directly modified without creating an entirely new object.
End of explanation
"""
my_list_2 = ['one', 'two', 'three']
print my_list_2
"""
Explanation: As we stated before, a list can contain any data type. Thus, lists can also contain strings.
End of explanation
"""
my_list_3 = [True, 'False', 42]
"""
Explanation: Lists can also contain multiple different data types at once!
End of explanation
"""
my_list_4 = my_list + my_list_2 + my_list_3
print my_list_4
"""
Explanation: If you want to put two lists together, they can be combined with a + symbol.
End of explanation
"""
my_list = ['friends', 'romans', 'countrymen', 'lend', 'me', 'your', 'ears']
"""
Explanation: In addition to accessing individual elements of a list, we can access groups of elements through slicing.
End of explanation
"""
print my_list[2:4]
"""
Explanation: Slicing
We use the colon (:) to slice lists.
End of explanation
"""
print my_list[1:]
"""
Explanation: Using : we can select a group of elements in the list starting from the first element indicated and going up to (but not including) the last element indicated.
We can also select everything after a certain point
End of explanation
"""
print my_list[:4]
"""
Explanation: And everything before a certain point
End of explanation
"""
print my_list[-1]
"""
Explanation: Using negative numbers will count from the end of the indices instead of from the beginning. For example, an index of -1 indicates the last element of the list.
End of explanation
"""
print my_list[0:7:2]
"""
Explanation: You can also add a third component to slicing. Instead of simply indicating the first and final parts of your slice, you can specify the step size that you want to take. So instead of taking every single element, you can take every other element.
End of explanation
"""
print my_list[::2]
"""
Explanation: Here we have selected the entire list (because 0:7 will yield elements 0 through 6) and we have selected a step size of 2. So this will spit out element 0, element 2, element 4, and so on through the elements selected. We can skip indicating the beginning and end of our slice, only specifying the step, if we like.
End of explanation
"""
print my_list[:]
"""
Explanation: Lists implicitly select the beginning and end of the list when not otherwise specified.
End of explanation
"""
print my_list[::-1]
"""
Explanation: With a negative step size we can even reverse the list!
End of explanation
"""
b = 10
my_list = range(b)
print my_list
"""
Explanation: Python does not have native matrices, but with lists we can produce a working facsimile. Other packages, such as numpy, add matrices as a separate data type, but in base Python the best way to create a matrix is to use a list of lists (a quick sketch of this follows below).
We can also use built-in functions to generate lists. In particular we will look at range() (because we will be using it later!). Range can take several different inputs and will return a list.
End of explanation
"""
a = 0
b = 10
my_list = range(a, b)
print my_list
"""
Explanation: Similar to our list-slicing methods from before, we can define both a start and an end for our range. This will return a list that includes the start and excludes the end, just like a slice.
End of explanation
"""
a = 0
b = 10
step = 2
my_list = range(a, b, step)
print my_list
"""
Explanation: We can also specify a step size. This again has the same behavior as a slice.
End of explanation
"""
my_tuple = 'I', 'have', 30, 'cats'
print my_tuple
my_tuple = ('I', 'have', 30, 'cats')
print my_tuple
"""
Explanation: Tuples
A tuple is a data type similar to a list in that it can hold different kinds of data types. The key difference here is that a tuple is immutable. We define a tuple by separating the elements we want to include by commas. It is conventional to surround a tuple with parentheses.
End of explanation
"""
my_tuple[3] = 'dogs' # Attempts to change the 'cats' value stored in the tuple to 'dogs'
"""
Explanation: As mentioned before, tuples are immutable. You can't change any part of them without defining a new tuple.
End of explanation
"""
print my_tuple[1:3]
"""
Explanation: You can slice tuples the same way that you slice lists!
End of explanation
"""
my_other_tuple = ('make', 'that', 50)
print my_tuple + my_other_tuple
"""
Explanation: And concatenate them the way that you would with strings!
End of explanation
"""
str_1, str_2, int_1 = my_other_tuple
print str_1, str_2, int_1
"""
Explanation: We can 'pack' values together, creating a tuple (as above), or we can 'unpack' values from a tuple, taking them out.
End of explanation
"""
things_i_like = {'dogs', 7, 'the number 4', 4, 4, 4, 42, 'lizards', 'man I just LOVE the number 4'}
print things_i_like, type(things_i_like)
"""
Explanation: Unpacking assigns each value of the tuple in order to each variable on the left hand side of the equals sign. Some functions, including user-defined functions, may return tuples, so we can use this to directly unpack them and access the values that we want.
Sets
A set is a collection of unordered, unique elements. It works almost exactly as you would expect a normal set of things in mathematics to work and is defined using braces ({}).
End of explanation
"""
animal_list = ['cats', 'dogs', 'dogs', 'dogs', 'lizards', 'sponges', 'cows', 'bats', 'sponges']
animal_set = set(animal_list)
print animal_set # Removes all extra instances from the list
"""
Explanation: Note how any extra instances of the same item are removed in the final set. We can also create a set from a list, using the set() function.
End of explanation
"""
print len(animal_set)
"""
Explanation: Calling len() on a set will tell you how many elements are in it.
End of explanation
"""
'cats' in animal_set # Here we check for membership using the `in` keyword.
"""
Explanation: Because a set is unordered, we can't access individual elements using an index. We can, however, easily check for membership (to see if something is contained in a set) and take the unions and intersections of sets by using the built-in set functions.
End of explanation
"""
print animal_set | things_i_like # You can also write things_i_like | animal_set with no difference
"""
Explanation: Here we checked to see whether the string 'cats' was contained within our animal_set and it returned True, telling us that it is indeed in our set.
We can connect sets by using typical mathematical set operators, namely |, for union, and &, for intersection. Using | or & will return exactly what you would expect if you are familiar with sets in mathematics.
End of explanation
"""
print animal_set & things_i_like # You can also write things_i_like & animal_set with no difference
"""
Explanation: Pairing two sets together with | combines the sets, removing any repetitions to make every set element unique.
End of explanation
"""
my_dict = {"High Fantasy": ["Wheel of Time", "Lord of the Rings"],
"Sci-fi": ["Book of the New Sun", "Neuromancer", "Snow Crash"],
"Weird Fiction": ["At the Mountains of Madness", "The House on the Borderland"]}
"""
Explanation: Pairing two sets together with & will calculate the intersection of both sets, returning a set that only contains what they have in common.
If you are interested in learning more about the built-in functions for sets, feel free to check out the documentation.
Dictionaries
Another essential data structure in Python is the dictionary. Dictionaries are defined with a combination of curly braces ({}) and colons (:). The braces define the beginning and end of a dictionary and the colons indicate key-value pairs. A dictionary is essentially a set of key-value pairs. The key of any entry must be an immutable data type. This makes both strings and tuples candidates. Keys can be both added and deleted.
In the following example, we have a dictionary composed of key-value pairs where the key is a genre of fiction (string) and the value is a list of books (list) within that genre. Since a collection is still considered a single entity, we can use one to collect multiple variables or values into one key-value pair.
End of explanation
"""
print my_dict["Sci-fi"]
"""
Explanation: After defining a dictionary, we can access any individual value by indicating its key in brackets.
End of explanation
"""
my_dict["Sci-fi"] = "I can't read"
print my_dict["Sci-fi"]
"""
Explanation: We can also change the value associated with a given key
End of explanation
"""
my_dict["Historical Fiction"] = ["Pillars of the Earth"]
print my_dict["Historical Fiction"]
print my_dict
"""
Explanation: Adding a new key-value pair is as simple as defining it.
End of explanation
"""
first_string = '"Beware the Jabberwock, my son! /The jaws that bite, the claws that catch! /'
second_string = 'Beware the Jubjub bird, and shun /The frumious Bandersnatch!"/'
third_string = first_string + second_string
print third_string
"""
Explanation: String Shenanigans
We already know that strings are generally used for text. We can use built-in operations to combine, split, and format strings easily, depending on our needs.
The + symbol indicates concatenation in string language. It will combine two strings into a longer string.
End of explanation
"""
my_string = 'Supercalifragilisticexpialidocious'
print 'The first letter is: ', my_string[0] # Uppercase S
print 'The last letter is: ', my_string[-1] # lowercase s
print 'The second to last letter is: ', my_string[-2] # lowercase u
print 'The first five characters are: ', my_string[0:5] # Remember: slicing doesn't include the final element!
print 'Reverse it!: ', my_string[::-1]
"""
Explanation: Strings are also indexed much in the same way that lists are.
End of explanation
"""
print 'Count of the letter i in Supercalifragilisticexpialidocious: ', my_string.count('i')
print 'Count of "li" in the same word: ', my_string.count('li')
"""
Explanation: Built-in objects and classes often have special functions associated with them that are called methods. We access these methods by using a period ('.'). We will cover objects and their associated methods more in another lecture!
Using string methods we can count instances of a character or group of characters.
End of explanation
"""
print 'The first time i appears is at index: ', my_string.find('i')
"""
Explanation: We can also find the first instance of a character or group of characters in a string.
End of explanation
"""
print "All i's are now a's: ", my_string.replace('i', 'a')
print "It's raining cats and dogs".replace('dogs', 'more cats')
"""
Explanation: As well as replace characters in a string.
End of explanation
"""
my_string = "I can't hear you"
print my_string.upper()
my_string = "I said HELLO"
print my_string.lower()
"""
Explanation: There are also some methods that are unique to strings. The function upper() will convert all characters in a string to uppercase, while lower() will convert all characters in a string to lowercase!
End of explanation
"""
my_string = "{0} {1}".format('Marco', 'Polo')
print my_string
my_string = "{1} {0}".format('Marco', 'Polo')
print my_string
"""
Explanation: String Formatting
Using the format() method we can add in variable values and generally format our strings.
End of explanation
"""
print 'insert %s here' % 'value'
"""
Explanation: We use braces ({}) to indicate parts of the string that will be filled in later and we use the arguments of the format() function to provide the values to substitute. The numbers within the braces indicate the index of the value in the format() arguments.
See the format() documentation for additional examples.
If you need some quick and dirty formatting, you can instead use the % symbol, called the string formatting operator.
End of explanation
"""
print 'There are %s cats in my %s' % (13, 'apartment')
"""
Explanation: The % symbol basically cues Python to create a placeholder. Whatever character follows the % (in the string) indicates what sort of type the value put into the placeholder will have. This character is called a conversion type. Once the string has been closed, we need another % that will be followed by the values to insert. In the case of one value, you can just put it there. If you are inserting more than one value, they must be enclosed in a tuple.
End of explanation
"""
print 5 == 5
print 5 > 5
"""
Explanation: In these examples, the %s indicates that Python should convert the values into strings. There are multiple conversion types that you can use to get more specific with the formatting. See the string formatting documentation for additional examples and more complete details on use.
Logical Operators
Basic Logic
Logical operators deal with boolean values, as we briefly covered before. If you recall, a bool takes on one of two values, True or False (or $1$ or $0$). The basic logical statements that we can make are defined using the built-in comparators. These are == (equal), != (not equal), < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to).
End of explanation
"""
m = 2
n = 23
print m < n
"""
Explanation: These comparators also work in conjunction with variables.
End of explanation
"""
statement_1 = 10 > 2
statement_2 = 4 <= 6
print "Statement 1 truth value: {0}".format(statement_1)
print "Statement 2 truth value: {0}".format(statement_2)
print "Statement 1 and Statement 2: {0}".format(statement_1 and statement_2)
"""
Explanation: We can string these comparators together to make more complex logical statements using the logical operators or, and, and not.
End of explanation
"""
print ((2 < 3) and (3 > 0)) or ((5 > 6) and not (4 < 2))
"""
Explanation: The or operator performs a logical or calculation. This is an inclusive or, so if either component paired together by or is True, the whole statement will be True. The and statement only outputs True if all components that are anded together are True. Otherwise it will output False. The not statement simply inverts the truth value of whichever statement follows it. So a True statement will be evaluated as False when a not is placed in front of it. Similarly, a False statement will become True when a not is in front of it.
Say that we have two logical statements, or assertions, $P$ and $Q$. The truth table for the basic logical operators is as follows:
| P | Q | not P| P and Q | P or Q|
|:-----:|:-----:|:---:|:---:|:---:|
| True | True | False | True | True |
| False | True | True | False | True |
| True | False | False | False | True |
| False | False | True | False | False |
We can string multiple logical statements together using the logical operators.
End of explanation
"""
# Similar to how float() and int() work, bool() forces a value to be considered a boolean!
print bool('')
print bool('I have character!')
print bool([])
print bool([1, 2, 3])
"""
Explanation: Logical statements can be as simple or complex as we like, depending on what we need to express. Evaluating the above logical statement step by step we see that we are evaluating (True and True) or (False and not False). This becomes True or (False and True), subsequently becoming True or False, ultimately being evaluated as True.
Truthiness
Data types in Python have a fun characteristic called truthiness. What this means is that most built-in types will evaluate as either True or False when a boolean value is needed (such as with an if-statement). As a general rule, containers like strings, tuples, dictionaries, lists, and sets, will return True if they contain anything at all and False if they contain nothing.
End of explanation
"""
# This is the basic format of an if statement. This is a vacuous example.
# The string "Condition" will always evaluated as True because it is a
# non-empty string. he purpose of this code is to show the formatting of
# an if-statement.
if "Condition":
# This block of code will execute because the string is non-empty
# Everything on these indented lines
print True
else:
# So if the condition that we examined with if is in fact False
# This block of code will execute INSTEAD of the first block of code
# Everything on these indented lines
print False
# The else block here will never execute because "Condition" is a non-empty string.
i = 4
if i == 5:
print 'The variable i has a value of 5'
"""
Explanation: And so on, for the other collections and containers. None also evaluates as False. The number 1 is equivalent to True and the number 0 is equivalent to False as well, in a boolean context.
If-statements
We can create segments of code that only execute if a set of conditions is met. We use if-statements in conjunction with logical statements in order to create branches in our code.
An if block gets entered when the condition is considered to be True. If the condition is evaluated as False, the if block will simply be skipped unless there is an else block to accompany it. Conditions are made using either logical operators or by using the truthiness of values in Python. An if-statement is defined with a colon and a block of indented text.
End of explanation
"""
i = 4
if i == 5:
print "All lines in this indented block are part of this block"
print 'The variable i has a value of 5'
else:
print "All lines in this indented block are part of this block"
print 'The variable i is not equal to 5'
"""
Explanation: Because in this example i = 4 and the if-statement is only looking for whether i is equal to 5, the print statement will never be executed. We can add in an else statement to create a contingency block of code in case the condition in the if-statement is not evaluated as True.
End of explanation
"""
i = 1
if i == 1:
print 'The variable i has a value of 1'
elif i == 2:
print 'The variable i has a value of 2'
elif i == 3:
print 'The variable i has a value of 3'
else:
print "I don't care what i is"
"""
Explanation: We can implement other branches off of the same if-statement by using elif, an abbreviation of "else if". We can include as many elifs as we like until we have exhausted all the logical branches of a condition.
End of explanation
"""
i = 10
if i % 2 == 0:
if i % 3 == 0:
print 'i is divisible by both 2 and 3! Wow!'
elif i % 5 == 0:
print 'i is divisible by both 2 and 5! Wow!'
else:
print 'i is divisible by 2, but not 3 or 5. Meh.'
else:
print 'I guess that i is an odd number. Boring.'
"""
Explanation: You can also nest if-statements within if-statements to check for further conditions.
End of explanation
"""
i = 5
j = 12
if i < 10 and j > 11:
print '{0} is less than 10 and {1} is greater than 11! How novel and interesting!'.format(i, j)
"""
Explanation: Remember that we can group multiple conditions together by using the logical operators!
End of explanation
"""
my_string = "Carthago delenda est"
if my_string == "Carthago delenda est":
print 'And so it was! For the glory of Rome!'
else:
print 'War elephants are TERRIFYING. I am staying home.'
"""
Explanation: You can use the logical comparators to compare strings!
End of explanation
"""
if 'a' in my_string or 'e' in my_string:
print 'Those are my favorite vowels!'
"""
Explanation: As with other data types, == will check whether the two things on either side of it have the same value. In this case, we compare whether the values of the strings are the same. Using > or < or any of the other comparators is not quite so intuitive, however, so we will stay away from using comparators with strings in this lecture. Comparators will examine the lexicographical order of the strings, which might be a bit more in-depth than you would like.
Some built-in functions return a boolean value, so they can be used as conditions in an if-statement. User-defined functions can also be constructed so that they return a boolean value. This will be covered later with function definition!
The in keyword is generally used to check membership of a value within another value. We can check membership in the context of an if-statement and use it to output a truth value.
End of explanation
"""
i = 5
while i > 0: # We can write this as 'while i:' because 0 is False!
i -= 1
print 'I am looping! {0} more to go!'.format(i)
"""
Explanation: Here we use in to check whether the variable my_string contains any particular letters. We will later use in to iterate through lists!
Loop Structures
Loop structures are one of the most important parts of programming. The for loop and the while loop provide a way to run a block of code repeatedly. A while loop will iterate until a certain condition has been met. If at any point after an iteration that condition is no longer satisfied, the loop terminates. A for loop will iterate over a sequence of values and terminate when the sequence has ended. You can instead include conditions within the for loop to decide whether it should terminate early or you could simply let it run its course.
End of explanation
"""
for i in range(5):
print 'I am looping! I have looped {0} times!'.format(i + 1)
"""
Explanation: With while loops we need to make sure that something actually changes from iteration to iteration so that the loop actually terminates. In this case, we use the shorthand i -= 1 (short for i = i - 1) so that the value of i gets smaller with each iteration. Eventually i will be reduced to 0, rendering the condition False and exiting the loop.
A for loop iterates a set number of times, determined when you state the entry into the loop. In this case we are iterating over the list returned from range(). The for loop selects a value from the list, in order, and temporarily assigns the value of i to it so that operations can be performed with the value.
End of explanation
"""
my_list = {'cats', 'dogs', 'lizards', 'cows', 'bats', 'sponges', 'humans'} # Lists all the animals in the world
mammal_list = {'cats', 'dogs', 'cows', 'bats', 'humans'} # Lists all the mammals in the world
my_new_list = set()
for animal in my_list:
if animal in mammal_list:
# This adds any animal that is both in my_list and mammal_list to my_new_list
my_new_list.add(animal)
print my_new_list
"""
Explanation: Note that in this for loop we use the in keyword. Use of the in keyword is not limited to checking for membership as in the if-statement example. You can iterate over any collection with a for loop by using the in keyword.
In this next example, we will iterate over a set because we want to check for containment and add to a new set.
End of explanation
"""
i = 10
while True:
if i == 14:
break
i += 1 # This is shorthand for i = i + 1. It increments i with each iteration.
print i
for i in range(5):
if i == 2:
break
print i
"""
Explanation: There are two statements that are very helpful in dealing with both for and while loops. These are break and continue. If break is encountered at any point while a loop is executing, the loop will immediately end.
End of explanation
"""
i = 0
while i < 5:
i += 1
if i == 3:
continue
print i
"""
Explanation: The continue statement will tell the loop to immediately end this iteration and continue onto the next iteration of the loop.
End of explanation
"""
for i in range(5):
loop_string = 'I transcend the loop!'
print 'I am eternal! I am {0} and I exist everywhere!'.format(i)
print 'I persist! My value is {0}'.format(i)
print loop_string
"""
Explanation: This loop skips printing the number $3$ because of the continue statement that executes when we enter the if-statement. The code never sees the command to print the number $3$ because it has already moved to the next iteration. The break and continue statements are further tools to help you control the flow of your loops and, as a result, your code.
The variable that we use to iterate over a loop will retain its value when the loop exits. Similarly, any variables defined within the context of the loop will continue to exist outside of it.
End of explanation
"""
my_dict = {'firstname' : 'Inigo', 'lastname' : 'Montoya', 'nemesis' : 'Rugen'}
for key in my_dict:
print key
"""
Explanation: We can also iterate over a dictionary!
End of explanation
"""
for key in my_dict:
print my_dict[key]
"""
Explanation: If we just iterate over a dictionary without doing anything else, we will only get the keys. We can either use the keys to get the values, like so:
End of explanation
"""
for key, value in my_dict.iteritems():
print key, ':', value
"""
Explanation: Or we can use the iteritems() function to get both key and value at the same time.
End of explanation
"""
def hello_world():
""" Prints Hello, world! """
print 'Hello, world!'
hello_world()
for i in range(5):
hello_world()
"""
Explanation: The iteritems() function creates a tuple of each key-value pair and the for loop unpacks that tuple into key, value on each separate execution of the loop!
Functions
A function is a reusable block of code that you can call repeatedly to make calculations, output data, or really do anything that you want. This is one of the key aspects of using a programming language. To add to the built-in functions in Python, you can define your own!
End of explanation
"""
def see_the_scope():
in_function_string = "I'm stuck in here!"
see_the_scope()
print in_function_string
"""
Explanation: Functions are defined with def, a function name, a list of parameters, and a colon. Everything indented below the colon will be included in the definition of the function.
We can have our functions do anything that you can do with a normal block of code. For example, our hello_world() function prints a string every time it is called. If we want to keep a value that a function calculates, we can define the function so that it will return the value we want. This is a very important feature of functions, as any variable defined purely within a function will not exist outside of it.
End of explanation
"""
def free_the_scope():
in_function_string = "Anything you can do I can do better!"
return in_function_string
my_string = free_the_scope()
print my_string
"""
Explanation: The scope of a variable is the part of a block of code where that variable is tied to a particular value. Functions in Python have an enclosed scope, making it so that variables defined within them can only be accessed directly within them. If we pass those values to a return statement we can get them out of the function. This makes it so that the function call returns values so that you can store them in variables that have a greater scope.
In this case specifically, including a return statement allows us to keep the string value that we define in the function.
End of explanation
"""
def multiply_by_five(x):
""" Multiplies an input number by 5 """
return x * 5
n = 4
print n
print multiply_by_five(n)
"""
Explanation: Just as we can get values out of a function, we can also put values into a function. We do this by defining our function with parameters.
End of explanation
"""
def calculate_area(length, width):
""" Calculates the area of a rectangle """
return length * width
l = 5
w = 10
print 'Area: ', calculate_area(l, w)
print 'Length: ', l
print 'Width: ', w
def calculate_volume(length, width, depth):
""" Calculates the volume of a rectangular prism """
return length * width * depth
"""
Explanation: In this example we only had one parameter for our function, x. We can easily add more parameters, separating everything with a comma.
End of explanation
"""
def sum_values(*args):
sum_val = 0
for i in args:
sum_val += i
return sum_val
print sum_values(1, 2, 3)
print sum_values(10, 20, 30, 40, 50)
print sum_values(4, 2, 5, 1, 10, 249, 25, 24, 13, 6, 4)
"""
Explanation: If we want to, we can define a function so that it takes an arbitrary number of parameters. We tell Python that we want this by using an asterisk (*).
End of explanation
"""
def test_args(*args):
print type(args)
test_args(1, 2, 3, 4, 5, 6)
"""
Explanation: The time to use *args as a parameter for your function is when you do not know how many values may be passed to it, as in the case of our sum function. The asterisk in this case is the syntax that tells Python that you are going to pass an arbitrary number of parameters into your function. These parameters are stored in the form of a tuple.
End of explanation
"""
def has_a_vowel(word):
"""
Checks to see whether a word contains a vowel
If it doesn't contain a conventional vowel, it
will check for the presence of 'y' or 'w'. Does
not check to see whether those are in the word
in a vowel context.
"""
vowel_list = ['a', 'e', 'i', 'o', 'u']
for vowel in vowel_list:
if vowel in word:
return True
# If there is a vowel in the word, the function returns, preventing anything after this loop from running
return False
my_word = 'catnapping'
if has_a_vowel(my_word):
print 'How surprising, an english word contains a vowel.'
else:
print 'This is actually surprising.'
def point_maker(x, y):
""" Groups x and y values into a point, technically a tuple """
return x, y
"""
Explanation: We can put as many elements into the args tuple as we want to when we call the function. However, because args is a tuple, we cannot modify it after it has been created.
The name args is purely a convention. You could just as easily name your parameter *vars or *things. You can treat the args tuple like you would any other tuple, easily accessing its values and iterating over it, as in the above sum_values(*args) function.
Our functions can return any data type. This makes it easy for us to create functions that check for conditions that we might want to monitor.
Here we define a function that returns a boolean value. We can easily use this in conjunction with if-statements and other situations that require a boolean.
End of explanation
"""
a = point_maker(0, 10)
b = point_maker(5, 3)
def calculate_slope(point_a, point_b):
    """ Calculates the linear slope between two points """
    # Cast to float so that Python 2 integer division does not truncate the result
    return float(point_b[1] - point_a[1]) / (point_b[0] - point_a[0])
print "The slope between a and b is {0}".format(calculate_slope(a, b))
"""
Explanation: The above function returns an ordered pair of the input parameters, stored as a tuple.
End of explanation
"""
print "The slope-intercept form of the line between a and b, using point a, is: y - {0} = {2}(x - {1})".format(a[1], a[0], calculate_slope(a, b))
"""
Explanation: And that one calculates the slope between two points!
End of explanation
"""
|
jrg365/gpytorch
|
examples/04_Variational_and_Approximate_GPs/Modifying_the_variational_strategy_and_distribution.ipynb
|
mit
|
import os
import urllib.request
from math import floor
from scipy.io import loadmat
# torch, tqdm, and gpytorch are used throughout the cells below
import torch
import tqdm.notebook
import gpytorch
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat')
if smoke_test: # this is for running the notebook in our testing framework
X, y = torch.randn(1000, 3), torch.randn(1000)
else:
data = torch.Tensor(loadmat('../elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
train_n = int(floor(0.8 * len(X)))
train_x = X[:train_n, :].contiguous()
train_y = y[:train_n].contiguous()
test_x = X[train_n:, :].contiguous()
test_y = y[train_n:].contiguous()
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=500, shuffle=True)
test_dataset = TensorDataset(test_x, test_y)
test_loader = DataLoader(test_dataset, batch_size=500, shuffle=False)
"""
Explanation: Modifying the Variational Strategy/Variational Distribution
The predictive distribution for approximate GPs is given by
$$
p( \mathbf f(\mathbf x^*) ) = \int_{\mathbf u} p( f(\mathbf x^*) \mid \mathbf u) \: q(\mathbf u) \: d\mathbf u,
\quad
q(\mathbf u) = \mathcal N( \mathbf m, \mathbf S).
$$
$\mathbf u$ represents the function values at the $m$ inducing points.
Here, $\mathbf m \in \mathbb R^m$ and $\mathbf S \in \mathbb R^{m \times m}$ are learnable parameters.
If $m$ (the number of inducing points) is quite large, the number of learnable parameters in $\mathbf S$ can be quite unwieldy.
Furthermore, a large $m$ might make some of the computations rather slow.
Here we show a few ways to use different variational distributions and
variational strategies to reduce the number of parameters and the cost of these computations.
Experimental setup
We're going to train an approximate GP on a medium-sized regression dataset, taken from the UCI repository.
End of explanation
"""
# this is for running the notebook in our testing framework
num_epochs = 1 if smoke_test else 10
# Our testing script takes in a GPyTorch MLL (objective function) class
# and then trains/tests an approximate GP with it on the supplied dataset
def train_and_test_approximate_gp(model_cls):
inducing_points = torch.randn(128, train_x.size(-1), dtype=train_x.dtype, device=train_x.device)
model = model_cls(inducing_points)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.numel())
optimizer = torch.optim.Adam(list(model.parameters()) + list(likelihood.parameters()), lr=0.1)
if torch.cuda.is_available():
model = model.cuda()
likelihood = likelihood.cuda()
# Training
model.train()
likelihood.train()
epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc=f"Training {model_cls.__name__}")
for i in epochs_iter:
# Within each iteration, we will go over each minibatch of data
for x_batch, y_batch in train_loader:
optimizer.zero_grad()
output = model(x_batch)
loss = -mll(output, y_batch)
epochs_iter.set_postfix(loss=loss.item())
loss.backward()
optimizer.step()
# Testing
model.eval()
likelihood.eval()
means = torch.tensor([0.])
with torch.no_grad():
for x_batch, y_batch in test_loader:
preds = model(x_batch)
means = torch.cat([means, preds.mean.cpu()])
means = means[1:]
error = torch.mean(torch.abs(means - test_y.cpu()))
print(f"Test {model_cls.__name__} MAE: {error.item()}")
"""
Explanation: Some quick training/testing code
This will allow us to train/test different model classes.
End of explanation
"""
class StandardApproximateGP(gpytorch.models.ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(-2))
variational_strategy = gpytorch.variational.VariationalStrategy(
self, inducing_points, variational_distribution, learn_inducing_locations=True
)
super().__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
train_and_test_approximate_gp(StandardApproximateGP)
"""
Explanation: The Standard Approach
As a default, we'll use the default VariationalStrategy class with a CholeskyVariationalDistribution.
The CholeskyVariationalDistribution class allows $\mathbf S$ to be any positive semidefinite matrix. This is the most general/expressive option for approximate GPs.
End of explanation
"""
class MeanFieldApproximateGP(gpytorch.models.ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = gpytorch.variational.MeanFieldVariationalDistribution(inducing_points.size(-2))
variational_strategy = gpytorch.variational.VariationalStrategy(
self, inducing_points, variational_distribution, learn_inducing_locations=True
)
super().__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
train_and_test_approximate_gp(MeanFieldApproximateGP)
"""
Explanation: Reducing parameters
MeanFieldVariationalDistribution: a diagonal $\mathbf S$ matrix
One way to reduce the number of parameters is to restrict $\mathbf S$ to be diagonal. This is less expressive, but the number of parameters is now linear in $m$ instead of quadratic.
All we have to do is take the previous example, and change CholeskyVariationalDistribution (full $\mathbf S$ matrix) to MeanFieldVariationalDistribution (diagonal $\mathbf S$ matrix).
End of explanation
"""
class MAPApproximateGP(gpytorch.models.ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = gpytorch.variational.DeltaVariationalDistribution(inducing_points.size(-2))
variational_strategy = gpytorch.variational.VariationalStrategy(
self, inducing_points, variational_distribution, learn_inducing_locations=True
)
super().__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
train_and_test_approximate_gp(MAPApproximateGP)
"""
Explanation: DeltaVariationalDistribution: no $\mathbf S$ matrix
A more extreme method of reducing parameters is to get rid of $\mathbf S$ entirely. This corresponds to learning a delta distribution ($\mathbf u = \mathbf m$) rather than a multivariate Normal distribution for $\mathbf u$. In other words, this corresponds to performing MAP estimation rather than variational inference.
In GPyTorch, getting rid of $\mathbf S$ can be accomplished by using a DeltaVariationalDistribution.
End of explanation
"""
def make_orthogonal_vs(model, train_x):
mean_inducing_points = torch.randn(1000, train_x.size(-1), dtype=train_x.dtype, device=train_x.device)
covar_inducing_points = torch.randn(100, train_x.size(-1), dtype=train_x.dtype, device=train_x.device)
covar_variational_strategy = gpytorch.variational.VariationalStrategy(
model, covar_inducing_points,
gpytorch.variational.CholeskyVariationalDistribution(covar_inducing_points.size(-2)),
learn_inducing_locations=True
)
variational_strategy = gpytorch.variational.OrthogonallyDecoupledVariationalStrategy(
covar_variational_strategy, mean_inducing_points,
gpytorch.variational.DeltaVariationalDistribution(mean_inducing_points.size(-2)),
)
return variational_strategy
"""
Explanation: Reducing computation (through decoupled inducing points)
One way to reduce the computational complexity is to use separate inducing points for the mean and covariance computations. The Orthogonally Decoupled Variational Gaussian Processes method of Salimbeni et al. (2018) uses more inducing points for the (computationally easy) mean computations and fewer inducing points for the (computationally intensive) covariance computations.
In GPyTorch we implement this method in a modular way. The OrthogonallyDecoupledVariationalStrategy defines the variational strategy for the mean inducing points. It wraps an existing variational strategy/distribution that defines the covariance inducing points:
End of explanation
"""
class OrthDecoupledApproximateGP(gpytorch.models.ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = gpytorch.variational.DeltaVariationalDistribution(inducing_points.size(-2))
variational_strategy = make_orthogonal_vs(self, train_x)
super().__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
train_and_test_approximate_gp(OrthDecoupledApproximateGP)
"""
Explanation: Putting it all together we have:
End of explanation
"""
|
jseabold/statsmodels
|
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
|
bsd-3-clause
|
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from IPython.display import display, Latex
"""
Explanation: Detrending, Stylized Facts and the Business Cycle
In an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as "structural time series models") to derive stylized facts of the business cycle.
Their paper begins:
"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step
in macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic
properties of the data and (2) present meaningful information."
In particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.
statsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.
End of explanation
"""
# Datasets
from pandas_datareader.data import DataReader
# Get the raw data
start = '1948-01'
end = '2008-01'
us_gnp = DataReader('GNPC96', 'fred', start=start, end=end)
us_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)
us_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS').mean()
recessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS').last().values[:,0]
# Construct the dataframe
dta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)
dta.columns = ['US GNP','US Prices','US monetary base']
dta.index.freq = dta.index.inferred_freq
dates = dta.index._mpl_repr()
"""
Explanation: Unobserved Components
The unobserved components model available in statsmodels can be written as:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{\gamma_{t}}_{\text{seasonal}} + \underbrace{c_{t}}_{\text{cycle}} + \sum_{j=1}^k \underbrace{\beta_j x_{jt}}_{\text{explanatory}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
see Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation.
Trend
The trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.
$$
\begin{align}
\underbrace{\mu_{t+1}}_{\text{level}} & = \mu_t + \nu_t + \eta_{t+1} \qquad & \eta_{t+1} \sim N(0, \sigma_\eta^2) \\
\underbrace{\nu_{t+1}}_{\text{trend}} & = \nu_t + \zeta_{t+1} & \zeta_{t+1} \sim N(0, \sigma_\zeta^2)
\end{align}
$$
where the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.
For both elements (level and trend), we can consider models in which:
The element is included vs excluded (if the trend is included, there must also be a level included).
The element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)
The only additional parameters to be estimated via MLE are the variances of any included stochastic components.
This leads to the following specifications:
| | Level | Trend | Stochastic Level | Stochastic Trend |
|----------------------------------------------------------------------|-------|-------|------------------|------------------|
| Constant | ✓ | | | |
| Local Level <br /> (random walk) | ✓ | | ✓ | |
| Deterministic trend | ✓ | ✓ | | |
| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |
| Local linear trend | ✓ | ✓ | ✓ | ✓ |
| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |
Seasonal
The seasonal component is written as:
<span>$$
\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \qquad \omega_t \sim N(0, \sigma_\omega^2)
$$</span>
The periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.
The variants of this model are:
The periodicity s
Whether or not to make the seasonal effects stochastic.
If the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).
Cycle
The cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between "1.5 and 12 years" (see Durbin and Koopman).
The cycle is written as:
<span>$$
\begin{align}
c_{t+1} & = c_t \cos \lambda_c + c_t^* \sin \lambda_c + \tilde \omega_t \qquad & \tilde \omega_t \sim N(0, \sigma_{\tilde \omega}^2) \\
c_{t+1}^* & = -c_t \sin \lambda_c + c_t^* \cos \lambda_c + \tilde \omega_t^* & \tilde \omega_t^* \sim N(0, \sigma_{\tilde \omega}^2)
\end{align}
$$</span>
The parameter $\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cyclical effect is stochastic, then there is one additional parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).
Irregular
The irregular component is assumed to be a white noise error term. Its variance is a parameter to be estimated by MLE; i.e.
$$
\varepsilon_t \sim N(0, \sigma_\varepsilon^2)
$$
In some cases, we may want to generalize the irregular component to allow for autoregressive effects:
$$
\varepsilon_t = \rho(L) \varepsilon_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2)
$$
In this case, the autoregressive parameters would also be estimated via MLE.
Regression effects
We may want to allow for explanatory variables by including additional terms
<span>$$
\sum_{j=1}^k \beta_j x_{jt}
$$</span>
or for intervention effects by including
<span>$$
\begin{align}
\delta w_t \qquad \text{where} \qquad w_t & = 0, \qquad t < \tau, \\
& = 1, \qquad t \ge \tau
\end{align}
$$</span>
These additional parameters could be estimated via MLE or by including them as components of the state space formulation.
Data
Following Harvey and Jaeger, we will consider the following time series:
US real GNP, "output", (GNPC96)
US GNP implicit price deflator, "prices", (GNPDEF)
US monetary base, "money", (AMBSL)
The time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.
All data series considered here are taken from Federal Reserve Economic Data (FRED). Conveniently, the Python library Pandas has the ability to download data from FRED directly.
End of explanation
"""
# Plot the data
ax = dta.plot(figsize=(13,3))
ylim = ax.get_ylim()
ax.xaxis.grid()
ax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);
"""
Explanation: To get a sense of these three variables over the timeframe, we can plot them:
End of explanation
"""
# Model specifications
# Unrestricted model, using string specification
unrestricted_model = {
'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Unrestricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# local linear trend model with a stochastic damped cycle:
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
# The restricted model forces a smooth trend
restricted_model = {
'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Restricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# smooth trend model with a stochastic damped cycle. Notice
# that the difference from the local linear trend model is that
# `stochastic_level=False` here.
# restricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
"""
Explanation: Model
Since the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{c_{t}}_{\text{cycle}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
The irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:
Local linear trend (the "unrestricted" model)
Smooth trend (the "restricted" model, since we are forcing $\sigma_\eta = 0$)
Below, we construct kwargs dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify the components directly, as in the table above. The other way is to use string names which map to various specifications.
End of explanation
"""
# Output
output_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)
output_res = output_mod.fit(method='powell', disp=False)
# Prices
prices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)
prices_res = prices_mod.fit(method='powell', disp=False)
prices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)
prices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)
# Money
money_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **unrestricted_model)
money_res = money_mod.fit(method='powell', disp=False)
money_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)
money_restricted_res = money_restricted_mod.fit(method='powell', disp=False)
"""
Explanation: We now fit the following models:
Output, unrestricted model
Prices, unrestricted model
Prices, restricted model
Money, unrestricted model
Money, restricted model
End of explanation
"""
print(output_res.summary())
"""
Explanation: Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.
End of explanation
"""
fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));
"""
Explanation: For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.
The plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.
End of explanation
"""
# Create Table I
table_i = np.zeros((5,6))
start = dta.index[0]
end = dta.index[-1]
time_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)
models = [
('US GNP', time_range, 'None'),
('US Prices', time_range, 'None'),
('US Prices', time_range, r'$\sigma_\eta^2 = 0$'),
('US monetary base', time_range, 'None'),
('US monetary base', time_range, r'$\sigma_\eta^2 = 0$'),
]
index = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])
parameter_symbols = [
r'$\sigma_\zeta^2$', r'$\sigma_\eta^2$', r'$\sigma_\kappa^2$', r'$\rho$',
r'$2 \pi / \lambda_c$', r'$\sigma_\varepsilon^2$',
]
i = 0
for res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):
if res.model.stochastic_level:
(sigma_irregular, sigma_level, sigma_trend,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
else:
(sigma_irregular, sigma_level,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
sigma_trend = np.nan
period_cycle = 2 * np.pi / frequency_cycle
table_i[i, :] = [
sigma_level*1e7, sigma_trend*1e7,
sigma_cycle*1e7, damping_cycle, period_cycle,
sigma_irregular*1e7
]
i += 1
pd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')
table_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)
table_i
"""
Explanation: Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importance of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but different in the particulars from, the values from their table.
End of explanation
"""
|
empet/Math
|
hypocycloid-online.ipynb
|
bsd-3-clause
|
from IPython.display import Image
Image(filename='generate-hypocycloid.png')
"""
Explanation: Hypocycloid definition and animation
Deriving the parametric equations of a hypocycloid
On May 11 @fermatslibrary posted a gif, https://twitter.com/fermatslibrary/status/862659602776805379, illustrating the motion of eight cocircular points. The Fermat's Library followers found it so fascinating that the tweet picked up more than 1000 likes and 800 retweets. Soon after I saw the gif I created a similar Python Plotly animation,
even though the tweet did not mention how the original was generated. @plotlygraphs tweeted a link
to my Jupyter notebook presenting the animation code.
How did I manage to reproduce it so fast? Here is the secret:
At first sight you might think that the gif displays an illusory rectilinear motion of the eight points, but the motion is real. I noticed that the moving points lie on a circle rolling inside another circle, and I knew that a fixed point on a rolling circle describes a curve called a hypocycloid. In the particular case when the ratio of the two radii is 2, the hypocycloid degenerates to a diameter of the base (fixed) circle.
In this Jupyter notebook I derive the parametric equations of a hypocycloid, animate its construction,
and explain why, when $R/r=2$, any point on the rolling circle runs along a diameter of the base circle.
End of explanation
"""
Image(filename='hypocycloid-2r.png')
"""
Explanation: We refer to the figure in the above cell to explain how we get the parameterization of the hypocycloid generated by a fixed point of a circle of center $O'_0$ and radius r, rolling without slipping along the circle
of center O and radius $R>r$.
Suppose that initially the hypocycloid generating point, $P$, is located at $(R,0)$.
After the small circle has rolled along the greater circle through an arc corresponding to an angle of measure $t$, the generating point reaches the position $P'$ on the circle $C(O'_t, r)$.
Rolling without slipping means that the length of the arc $\stackrel{\frown}{PQ}$ of the greater circle equals the length of the arc $\stackrel{\frown}{P'Q}$ on the smaller one, i.e. $Rt=r\omega$, where $\omega$ is the measure of the non-oriented angle $\widehat{P'O'_tQ}$ (i.e. we consider $\omega>0$). Thus $\omega=(R/r)t$.
The center $O'_t$ has the coordinates $x=(R-r)\cos(t), y=(R-r)\sin(t)$. The clockwise parameterization of the circle $C(O'_t,r)$ with respect to the coordinate system $x'O'_ty'$ is as follows:
$$\begin{array}{llr}
x'(\tau)&=&r\cos(\tau)\\
y'(\tau)&=&-r\sin(\tau),
\end{array}$$
$\tau\in[0,2\pi]$.
Hence the point $P'$ on the hypocycloid has the coordinates:
$x'=r\cos(\omega-t), y'=-r\sin(\omega-t)$, and with respect to $xOy$, the coordinates:
$x=(R-r)\cos(t)+r\cos(\omega-t), y=(R-r)\sin(t)-r\sin(\omega-t)$.
Replacing $\omega=(R/r)t$ we get the parameterization of the hypocycloid generated by the initial point $P$:
$$\begin{array}{lll}
x(t)&=&(R-r)\cos(t)+r\cos(t(R-r)/r)\\
y(t)&=&(R-r)\sin(t)-r\sin(t(R-r)/r), \quad t\in[0,2\pi]
\end{array}$$
If $R/r=2$ the parametric equations of the corresponding hypocycloid are:
$$\begin{array}{lll}
x(t)&=&2r\cos(t)\\
y(t)&=&0
\end{array}$$
i.e. the moving point $P$ runs the diameter $y=0$, from the position $(R=2r, 0)$ to $(-R,0)$ when $t\in[0,\pi]$,
and back to $(R,0)$, for $t\in[\pi, 2\pi]$.
What about the trajectory of any other point, $A$, on the rolling circle that at $t=0$ has the angular coordinate $\varphi$ with respect to the center $O'_0$?
We show that it is also a diameter in the base circle, referring to the figure in the next cell that is a particularization of
the above figure to the case $R=2r$.
End of explanation
"""
import numpy as np
from numpy import pi, cos, sin
import copy
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
import time
"""
Explanation: The arbitrary point $A$ on the rolling circle has, for t=0, the coordinates:
$x=r+r\cos(\varphi), y=r\sin(\varphi)$.
The angle $\widehat{QO'_tP'}=\omega$ is in this case $2t$, and $\widehat{B'O'_tP'}=t$. Since $\widehat{A'O'_tP'}=\varphi$, we get that the position of the fixed point on the smaller circle, after rolling along an arc of length $r(2t-\varphi)$,
is $A'(x(t)=r\cos(t)+r\cos(t-\varphi), y(t)=r\sin(t)-r\sin(t-\varphi))$, with $\varphi$ constant, and $t$ variable in the interval $[\varphi, 2\pi+\varphi]$.
Let us show that $y(t)/x(t)=$constant for all $t$, i.e. the generating point of the hypocycloid lies on a segment of line (diameter in the base circle):
$$\displaystyle\frac{y(t)}{x(t)}=\frac{r\sin(t)-r\sin(t-\varphi)}{r\cos(t)+r\cos(t-\varphi)}=\left\{\begin{array}{ll}\tan(\varphi/2)& \mbox{if}\:\: t=\varphi/2\\
\displaystyle\frac{2\cos(t-\varphi/2)\sin(\varphi/2)}{2\cos(t-\varphi/2)\cos(\varphi/2)}=\tan(\varphi/2)& \mbox{if}\:\: t\neq\varphi/2 \end{array}\right.$$
Hence the @fermatslibrary animation, reproduced by the Python Plotly code in my Jupyter notebook, displays the motion of the eight points placed on the rolling
circle of radius $r=R/2$, each of them moving along its corresponding diameter of the base circle.
Animating the hypocycloid generation
End of explanation
"""
axis=dict(showline=False,
zeroline=False,
showgrid=False,
showticklabels=False,
range=[-1.1,1.1],
autorange=False,
title=''
)
layout=dict(title='',
font=dict(family='Balto'),
autosize=False,
width=600,
height=600,
showlegend=False,
xaxis=dict(axis),
yaxis=dict(axis),
hovermode='closest',
shapes=[],
updatemenus=[dict(type='buttons',
showactive=False,
y=1,
x=1.2,
xanchor='right',
yanchor='top',
pad=dict(l=10),
buttons=[dict(label='Play',
method='animate',
args=[None, dict(frame=dict(duration=90, redraw=False),
transition=dict(duration=0),
fromcurrent=True,
mode='immediate'
)]
)]
)]
)
"""
Explanation: Set the layout of the plot:
End of explanation
"""
layout['shapes'].append(dict(type= 'circle',
layer= 'below',
xref= 'x',
yref='y',
fillcolor= 'rgba(245,245,245, 0.95)',
x0= -1.005,
y0= -1.005,
x1= 1.005,
y1= 1.005,
line= dict(color= 'rgb(40,40,40)', width=2
)
)
)
def circle(C, rad):
#C=center, rad=radius
theta=np.linspace(0,1,100)
return C[0]+rad*cos(2*pi*theta), C[1]-rad*sin(2*pi*theta)
"""
Explanation: Define the base circle:
End of explanation
"""
def set_my_columns(R=1.0, ratio=3):
#R=the radius of base circle
#ratio=R/r, where r=is the radius of the rolling circle
r=R/float(ratio)
xrol, yrol=circle([R-r, 0], 0)
my_columns=[Column(xrol, 'xrol'), Column(yrol, 'yrol')]
my_columns.append(Column([R-r, R], 'xrad'))
my_columns.append(Column([0,0], 'yrad'))
my_columns.append(Column([R], 'xstart'))
my_columns.append(Column([0], 'ystart'))
a=R-r
b=(R-r)/float(r)
frames=[]
t=np.linspace(0,1,50)
xpts=[]
ypts=[]
for k in range(t.shape[0]):
X,Y=circle([a*cos(2*pi*t[k]), a*sin(2*pi*t[k])], r)
my_columns.append(Column(X, 'xrcirc{}'.format(k+1)))
my_columns.append(Column(Y, 'yrcirc{}'.format(k+1)))
#The generator point has the coordinates(xp,yp)
xp=a*cos(2*pi*t[k])+r*cos(2*pi*b*t[k])
yp=a*sin(2*pi*t[k])-r*sin(2*pi*b*t[k])
xpts.append(xp)
ypts.append(yp)
my_columns.append(Column([a*cos(2*pi*t[k]), xp], 'xrad{}'.format(k+1)))
my_columns.append(Column([a*sin(2*pi*t[k]), yp], 'yrad{}'.format(k+1)))
my_columns.append(Column(copy.deepcopy(xpts), 'xpt{}'.format(k+1)))
my_columns.append(Column(copy.deepcopy(ypts), 'ypt{}'.format(k+1)))
return t, Grid(my_columns)
def set_data(grid):
return [dict(xsrc=grid.get_column_reference('xrol'),#rolling circle
ysrc= grid.get_column_reference('yrol'),
mode='lines',
line=dict(width=2, color='blue'),
name='',
),
dict(xsrc=grid.get_column_reference('xrad'),#radius in the rolling circle
ysrc= grid.get_column_reference('yrad'),
mode='markers+lines',
line=dict(width=1.5, color='blue'),
marker=dict(size=4, color='blue'),
name=''),
dict(xsrc=grid.get_column_reference('xstart'),#starting point on the hypocycloid
ysrc= grid.get_column_reference('ystart'),
            mode='markers+lines',
line=dict(width=2, color='red', shape='spline'),
name='')
]
"""
Explanation: Prepare data for animation to be uploaded to Plotly cloud:
End of explanation
"""
def set_frames(t, grid):
return [dict(data=[dict(xsrc=grid.get_column_reference('xrcirc{}'.format(k+1)),#update rolling circ position
ysrc=grid.get_column_reference('yrcirc{}'.format(k+1))
),
dict(xsrc=grid.get_column_reference('xrad{}'.format(k+1)),#update the radius
ysrc=grid.get_column_reference('yrad{}'.format(k+1))#of generating point
),
dict(xsrc=grid.get_column_reference('xpt{}'.format(k+1)),#update hypocycloid arc
ysrc=grid.get_column_reference('ypt{}'.format(k+1))
)
],
traces=[0,1,2]) for k in range(t.shape[0])
]
"""
Explanation: Set data for each animation frame:
End of explanation
"""
py.sign_in('empet', 'my_api_key')#access my Plotly account
t, grid=set_my_columns(R=1, ratio=3)
py.grid_ops.upload(grid, 'animdata-hypo3'+str(time.time()), auto_open=False)#upload data to Plotly cloud
data1=set_data(grid)
frames1=set_frames(t, grid)
title='Hypocycloid with '+str(3)+' cusps, '+'<br>generated by a fixed point of a circle rolling inside another circle; R/r=3'
layout.update(title=title)
fig1=dict(data=data1, layout=layout, frames=frames1)
py.icreate_animations(fig1, filename='anim-hypocycl3'+str(time.time()))
"""
Explanation: Animate the generation of a hypocycloid with 3 cusps (i.e. $R/r=3$):
End of explanation
"""
t, grid=set_my_columns(R=1, ratio=4)
py.grid_ops.upload(grid, 'animdata-hypo4'+str(time.time()), auto_open=False)#upload data to Plotly cloud
data2=set_data(grid)
frames2=set_frames(t, grid)
title2='Hypocycloid with '+str(4)+' cusps, '+'<br>generated by a fixed point of a circle rolling inside another circle; R/r=4'
layout.update(title=title2)
fig2=dict(data=data2, layout=layout, frames=frames2)
py.icreate_animations(fig2, filename='anim-hypocycl4'+str(time.time()))
"""
Explanation: Hypocycloid with four cusps (astroid):
End of explanation
"""
t, grid=set_my_columns(R=1, ratio=2)
py.grid_ops.upload(grid, 'animdata-hypo2'+str(time.time()), auto_open=False)#upload data to Plotly cloud
data3=set_data(grid)
frames3=set_frames(t, grid)
title3='Degenerate Hypocycloid; R/r=2'
layout.update(title=title3)
fig3=dict(data=data3, layout=layout, frames=frames3)
py.icreate_animations(fig3, filename='anim-hypocycl2'+str(time.time()))
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: Degenerate hypocycloid (R/r=2):
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session14/Day2/BuildingPerceptronsForClassification.ipynb
|
mit
|
def walk_dog( # complete
'''Perceptron to calculate whether we should walk the dog
Parameters
----------
questions : array-like, size = 3
weights : array-like, optional (default = np.array([-2, -1, 5]))
threshold : float, optional (default = 2.5)
decision threshold for whether to walk the dog or not
Returns
-------
walk : bool
Boolean variable to indicate whether to walk the dog
'''
# complete
# complete
# complete
# complete
# complete
"""
Explanation: Classification with a Perceptron
Version 0.1
By AA Miller (Northwestern/CIERA)
21 February 2022
Perceptrons are a type of artificial neuron. We will construct a basic neuron in pure Python today and use it to classify data in a "simple" two-class problem.
Consider the following image - what do you see?
<img style="display: block; margin-left: auto; margin-right: auto" src="images/number8.png" width="450" align="middle">
<div align="right"> <font size="-3">(data credit: MNIST) </font></div>
Without hesitation, I am certain that you were able to identify the above image as the number 8.
Breakout Problem 1
Using everything we have learned this week about machine learning – devise how to use the Random Forest algorithm to build a binary classifier to separate the number 8 from other handwritten digits.
Take a few minutes to discuss with your partner
There are many possible approaches -
build an algorithm that identifies "circles"
$~~~~$(number 3 does not have fully closed circles)
build an algorithm that identifies lines
$~~~~$(number 4 is only straight lines)
examine only the bottom half of the image
$~~~~$(tops of 8 and 9 are similar, but bottoms different)
At this point we only have a few features, they are all very complex to derive, and it isn't at all clear that we would successfully separate 8 and 0.
In sum, we are not obviously on the road towards success.
And yet,
when you looked at the image of the 8, you immediately recognized the contents of the image at the start of this lecture.
We each have a supercomputer in our heads, one trained over many, many generations of evolution to immediately recognize what is right in front of our eyes. Constructing "rules" (especially very precise rules) to teach a computer to make the same recognition is extremely challenging.
For complex tasks, like computer vision/classifying handwritten digits, we need a new type of machine learning relative to what we discussed earlier this week.
(We will need deep learning to accomplish this task. We will develop these ideas over the next three days.)
To start that process, I would like to introduce the perceptron, an artificial neuron.
I'm no biologist, here's my (over-simplified) model of a biological neuron: a neuron receives multiple inputs (from other neurons or other signals present within the body), then weighs the relative information before becoming "activated" and further sending signals, or remaining dormant.
(It is often said in popular literature that deep learning is designed to work like the human brain.
This is very inaccurate, instead it is far more appropriate to say that the principles behind deep learning are inspired by our biological understanding of how the brain works. There is a lot that we do not understand about the brain, and it is worthwhile to remember this distinction.)
Problem 1) The Perceptron
A perceptron takes several binary inputs, $x_1, x_2, \ldots, x_n$, and, ultimately, produces a single binary output.
Each input has a relevant weight, $w_1, w_2, \ldots, w_n$, which is multiplied by its corresponding input, before taking the sum of all the weighted inputs and comparing that to some threshold.
If the weighted sum is greater than the threshold, then the output $= 1$ and the perceptron is "activated."
Otherwise the output $= 0$.
Here is a graphical representation of the perceptron:
<img style="display: block; margin-left: auto; margin-right: auto" src="images/perceptron.png" width="650" align="middle">
<div align="right"> <font size="-3">(credit: https://towardsdatascience.com/the-perceptron-3af34c84838c) </font></div>
Here is a mathematical representation of the perceptron:
$$\mathrm{output} = \left\{ \begin{array}{lcr}
0 & \mathrm{if} \; \sum_n x_n w_n & \le \mathrm{threshold} \\
1 & \mathrm{if} \; \sum_n x_n w_n & > \mathrm{threshold}
\end{array}\right.
$$
A single perceptron can be used to answer questions that rely on many varied inputs.
Consider the question: "Should I walk the dog?"
The answer to that question depends on other questions:
- Is it raining?
- Is the dog asleep?
- Has it been more than 6 hours since the dog last went outside?
Ultimately we need a binary output (to walk the dog, or not walk the dog) based on several binary inputs. In other words, we can use a perceptron to determine whether to walk the dog.
Problem 1a
Write a function walk_dog that acts as a perceptron to decide whether or not you should walk the dog.
The function should take three arguments: a tuple with the binary answers to your three questions, an optional tuple with the relative weights for each input (default = (-2, -1, 5)), and an optional threshold (default = 2.5).
End of explanation
"""
walk_dog(# complete
"""
Explanation: Problem 1b
Use the newly created perceptron to determine whether to walk the dog if:
1. it is raining outside
2. the dog is asleep
3. the dog last went out 2.5 hours ago
Hint - recall that the inputs should be binary.
End of explanation
"""
walk_dog(# complete
"""
Explanation: Problem 1c
After sleeping for four hours, it is still raining, but the dog wakes up (i.e., it has been 6.5 hr since the dog last went out). Should you walk the dog?
Hint - recall that the inputs should be binary.
End of explanation
"""
def perceptron(# complete
'''Generic perceptron function
Parameters
----------
signals : array-like
the input signals for the perceptron
weights : array-like
the weight applied to each input
bias : float
the value required for activation
Returns
-------
activated : bool
whether or not the perceptron is activated
'''
# complete
# complete
# complete
# complete
"""
Explanation: Problem 2) Generic Perceptron
We can simplify our representation of the perceptron by replacing the sum with a dot product, $w_n \cdot x_n = \sum_n w_n x_n$, and we can move the threshold to the other side of the inequality, which we will now refer to as the bias $b$.
$$\mathrm{output} = \left\{ \begin{array}{lcr}
0 & \mathrm{if} \; w_n \cdot x_n + b & \le 0 \\
1 & \mathrm{if} \; w_n \cdot x_n + b & > 0
\end{array}\right.
$$
(in this notation, the bias can be thought of as how easy it is for the neuron to be activated)
Now, build a generic perceptron that can take any collection of input signals and weights, as well as a bias, to determine the binary output from the artificial neuron.
(this will prove useful for more than just walking the dog)
Problem 2a
Write a generic function perceptron that takes as input arrays called signals and weights as well as a float called bias. The function should return a boolean indicating whether or not the perceptron is "activated".
End of explanation
"""
perceptron(# complete
"""
Explanation: Problem 2b
Is the perceptron activated if the signal = [2.3, 5.3, 1.2, 3.4], the weights = [-3, 2, 0.5, -1], and no bias (i.e., bias = 0)?
End of explanation
"""
perceptron(# complete
"""
Explanation: Problem 2c
What if the signal and weights do not change but the bias = -2?
End of explanation
"""
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import numpy as np

X, y = make_blobs(n_samples=30, centers=2, n_features=2,
                  center_box=(0, 4), random_state=1885)
fig, ax = plt.subplots()
activated = y == 1
ax.plot(X[activated,0], X[activated,1], 'o',
ms = 9, mew=2, mfc='None')
ax.plot(X[~activated,0], X[~activated,1], '+',
ms=15, mew=2)
ax.set_xlabel('X1', fontsize=15)
ax.set_ylabel('X2', fontsize=15)
fig.tight_layout()
"""
Explanation: Problem 3) Perceptrons for classification
Perceptrons can be used for binary classification problems!
(perhaps this is no surprise given that this session is about machine learning...)
To demonstrate how this works we will generate some synthetic two-dimensional data, but the principle can easily be scaled to an arbitrarily large number of dimensions.
We use scikit-learn to simulate two classes in a 2d plane using the make_blobs() function. We will only include 30 samples so the data are easy to visualize.
End of explanation
"""
def train_perceptron(# complete
'''Train a perceptron to classify binary labels via numerical features
Parameters
----------
X : array-like
Feature array for the data, in the style of scikit-learn
y : array-like, type = bool
Label array for the data
weights : array-like
Weights for the input signals to the perceptron
bias : array-like
Bias value for the perceptron
epochs : int
Number of instances for training the perceptron
learning_rate : float
Relative step size for tuning the weights and bias
'''
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
"""
Explanation: How can we use a perceptron to classify this data?
In this case we have two inputs, and thus two weights, plus the bias to determine whether the perceptron is activated (class = 1, the open circles in the previous plot).
We also have 30 observations that we can use to train the algorithm.
For a perceptron, training means updating the weights to better reflect the training data.
Here's the pseudo-code:
1. Apply the perceptron to one of the data points
2. Adjust the weights if the perceptron makes an incorrect classification
3. Repeat this procedure over all N datapoints
4. Repeat this procedure for M iterations
How do we adjust the weights? For every sample we evaluate the model error (is the classification correct or not). We then adjust the weight to reduce the error for the following prediction.
For a perceptron, these updates can be calculated simply as:
$$w_\mathrm{updated} = w_\mathrm{current} + \eta\,\, (y_\mathrm{true} - y_\mathrm{pred})\,\, x,$$
where $w_\mathrm{updated}$ is the new value for the weight, $w_\mathrm{current}$ is the current value for the weight, $\eta$ is called the learning rate, $x$ is the value of the input signal, and $(y_\mathrm{true} - y_\mathrm{pred})$ captures whether or not the classification was correct.
The learning rate is a small number that adjusts the weight in the direction of being more accurate. It is selected by the user, though familiar tricks like cross validation can be used to identify an optimal size.
To train the perceptron, we need to decide the total number of iterations $M$ to pass through the training data. These iterations are called epochs. Within each epoch, we update the weights and bias to improve our predictions on the generated data.
Updating the bias is similar to updating the weights, but we exclude the value of the input $x$ as this does not affect the bias.
Problem 3a
Write a function train_perceptron that accepts as input X, y, weights, bias, epochs, and learning rate. The function should train the perceptron for $M$ epochs. During each epoch, the weights and bias should be updated using the equation given above while looping over every source in the training set.
Hint – it is useful to track the number of misclassifications that occur during each epoch.
Hint 2 – for this problem we only care about training, but if you eventually wanted to classify data with the perceptron then you would need to extract the weights and bias from the function, or, even better, write the perceptron as a class object that can be trained and can also classify data (similar to scikit-learn).
End of explanation
"""
train_perceptron( # complete
"""
Explanation: Problem 3b
Train the perceptron. Use weights of [.1, 1], a bias of 0, train for 20 epochs, with a learning rate $\eta = 0.005$.
Note – as we will see below the perceptron is highly sensitive to the initial guess for weights and biases.
End of explanation
"""
train_perceptron( # complete
"""
Explanation: We see that the accuracy slowly improves over the course of the 20 epochs.
In other words, the machine... IT IS LEARNING.
Problem 3c
Adjust the weights, or bias, or number of epochs, or learning rate, or all of them, to see how the changes affect the output of the perceptron.
What do you notice as these changes are made?
End of explanation
"""
fig, ax = plt.subplots()
ax.plot([-10, 0, 0, 10], [0,0,1,1], lw=3, label='Perceptron')
xgrid = np.linspace(-10, 10, 1000)
ax.plot(xgrid, 1/(1 + np.exp(-xgrid)), lw=2, label='Sigmoid')
ax.set_xlim(-7.5, 7.5)
ax.legend()
ax.set_xlabel('input*weights - bias', fontsize=14)
ax.set_ylabel('output', fontsize=14)
fig.tight_layout()
"""
Explanation: write answer here
It is possible to build a perceptron that gets worse at classification. This can happen if the learning rate is extremely large, essentially flipping on or off the activation for every source during each epoch.
It is also possible to build a perceptron that effectively never learns anything if the initial weights are extremely far from the optimal solution. The weights effectively define a line (or hyperplane in more than two dimensions) to separate the classes. If the line is very distant from the data itself, then the weights cannot be easily updated to improve the classification.
You have now built a perceptron. You have also seen its limitations.
The strength of machine learning solutions lies in their ability to identify and capture non-linear structure within data.
But the perceptron is almost too non-linear. For inputs that are close to the activation boundary, very small changes in the weights can rapidly lead to an extreme difference in outcome.
It would be better if our adjustments led to more gradual changes, so that, if possible, the quality of the model improved during each epoch of the classifier. This can be achieved with a different model for our artificial neuron.
Consider, for instance, a neuron that is activated via the sigmoid function:
$$\sigma(z) \equiv \frac{1}{1 + e^{-z}}$$
where $z$ is the previously defined activation for a neuron: $w \cdot x - b$
We can visually show the difference between the perceptron and sigmoid neurons.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_openml
train_samples = 5000
# Load data from https://www.openml.org/d/554
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
fig, ax = plt.subplots(figsize=(4,4))
ax.imshow(X[41].reshape(28,28)[3:-3,3:-3],
cmap='binary')
ax.set_xticks([])
ax.set_yticks([])
fig.tight_layout()
fig.savefig('./images/number8.png')
"""
Explanation: The above plot shows that the sigmoid function is a "smooth" version of the perceptron. We can exploit this smoothness to achieve better learning outcomes. Small changes in the weights and biases will produce small changes in the sigmoid function, enabling gradual improvement, which cannot be said for the perceptron near the region of activation.
This will prove important in the next lecture.
Challenge Problem
Train a perceptron to classify the number 8 in the handwritten digits data set.
Hint - you will need to convert the 2d data into a vector format.
Appendix
Functions to load plots shown in the notebook.
End of explanation
"""
|
dinrker/PredictiveModeling
|
Session 3 - Classification.ipynb
|
mit
|
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import time
%matplotlib inline
"""
Explanation: Goals of this Lesson
Extend the regression framework to support classification
Logistic Regression
Training with Gradient Descent
Training with Newton's Method
Implement...
The Logistic function
A function to compute the Hessian matrix
An instantiation of SciKit-Learn's Logistic regression class
References
Chapter 4 of Elements of Statistical Learning by Hastie, Tibshirani, Friedman
A Few Useful Things to Know about Machine Learning
SciKit-Learn's Logistic Regression Documentation
0. Python Preliminaries
As usual, first we need to import Numpy, Pandas, MatPlotLib...
End of explanation
"""
from matplotlib.colors import ListedColormap
# A somewhat complicated function to make pretty plots
def plot_classification_data(data1, data2, beta, logistic_flag=False):
plt.figure()
grid_size = .2
features = np.vstack((data1, data2))
# generate a grid over the plot
x_min, x_max = features[:, 0].min() - .5, features[:, 0].max() + .5
y_min, y_max = features[:, 1].min() - .5, features[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, grid_size), np.arange(y_min, y_max, grid_size))
# color the grid based on the predictions
if logistic_flag:
Z = logistic(np.dot(np.c_[xx.ravel(), yy.ravel(), np.ones(xx.ravel().shape[0])], beta))
colorbar_label=r"Value of f($X \beta)$"
else:
Z = np.dot(np.c_[xx.ravel(), yy.ravel(), np.ones(xx.ravel().shape[0])], beta)
colorbar_label=r"Value of $X \beta$"
Z = Z.reshape(xx.shape)
background_img = plt.pcolormesh(xx, yy, Z, cmap=plt.cm.coolwarm)
# Also plot the training points
    plt.scatter(data1[:, 0], data1[:, 1], c='b', edgecolors='k', s=70)
    plt.scatter(data2[:, 0], data2[:, 1], c='r', edgecolors='k', s=70)
plt.title('Data with Class Prediction Intensities')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
color_bar = plt.colorbar(background_img, orientation='horizontal')
color_bar.set_label(colorbar_label)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
# Another messy looking function to make pretty plots of basketball courts
def visualize_court(log_reg_model, court_image = './data/nba_experiment/nba_court.jpg'):
two_class_cmap = ListedColormap(['#FFAAAA', '#AAFFAA']) # light red for miss, light green for make
x_min, x_max = 0, 50 #width (feet) of NBA court
y_min, y_max = 0, 47 #length (feet) of NBA half-court
grid_step_size = 0.2
grid_x, grid_y = np.meshgrid(np.arange(x_min, x_max, grid_step_size), np.arange(y_min, y_max, grid_step_size))
grid_predictions = log_reg_model.predict(np.c_[grid_x.ravel(), grid_y.ravel()])
grid_predictions = grid_predictions.reshape(grid_x.shape)
fig, ax = plt.subplots()
court_image = plt.imread(court_image)
ax.imshow(court_image, interpolation='bilinear', origin='lower',extent=[x_min,x_max,y_min,y_max])
ax.imshow(grid_predictions, cmap=two_class_cmap, interpolation = 'nearest',
alpha = 0.60, origin='lower',extent=[x_min,x_max,y_min,y_max])
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.title( "Make / Miss Prediction Boundaries" )
plt.show()
"""
Explanation: I've created two functions that we'll use later to visualize which datapoints are being assigned to which classes. They are a bit messy and not essential to the material so don't worry about understanding them. I'll be happy to explain them to anyone interested during a break or after the session.
End of explanation
"""
### function for shuffling the data and labels
def shuffle_in_unison(a, b):
rng_state = np.random.get_state()
np.random.shuffle(a)
np.random.set_state(rng_state)
np.random.shuffle(b)
### calculate classification errors
# return a percentage: (number misclassified)/(total number of datapoints)
def calc_classification_error(predictions, class_labels):
n = predictions.size
num_of_errors = 0.
for idx in xrange(n):
if (predictions[idx] >= 0.5 and class_labels[idx]==0) or (predictions[idx] < 0.5 and class_labels[idx]==1):
num_of_errors += 1
return num_of_errors/n
# set the random number generator for reproducibility
np.random.seed(182)
#### create artificial data
N = 400
D = 2
# Sample the features from a Multivariate Normal Dist.
mean1 = [13,5]
mean2 = [5,5]
covariance = [[13,5],[5,13]]
class1_features = np.random.multivariate_normal(mean1,covariance,N/2)
class2_features = np.random.multivariate_normal(mean2,covariance,N/2)
features = np.vstack((class1_features, class2_features))
# add column of ones for bias term
features = np.hstack((features,np.ones((N,1))))
# Set the class labels
class1_labels = [0]*(N/2)
class2_labels = [1]*(N/2)
class_labels = class1_labels+class2_labels
# shuffle the data
shuffle_in_unison(features, class_labels)
class_labels = np.array(class_labels)[np.newaxis].T
### fit the linear model --- OLS Solution
beta = np.dot(np.linalg.inv(np.dot(features.T, features)),np.dot(features.T,class_labels))
### compute error on training data
predictions = np.dot(features, beta)
print "Classification Error on Training Set: %.2f%%" %(calc_classification_error(predictions, class_labels) * 100)
### generate a plot
plot_classification_data(class1_features, class2_features, beta)
"""
Explanation: 1. Classes as Targets
Now that we've seen how to train and evaluate a linear model for real-valued responses, next we turn to classification. At first glance, jumping from regression to classification seems trivial. Say there are two classes, the first denoted by 0 and the second by 1. We could just set each $y_{i}$ to 0 or 1 according to its class membership and fit a linear model just as before.
Here's an example doing just that on some artificial data...
End of explanation
"""
# define the transformation function
def logistic(z):
# TO DO: return the output of the logistic function
return 1.0/(1 + np.exp(-z))
# a few tests to make sure your function is working
print "Should print 0.5:"
print logistic(0)
print
print "Should print 0.81757...:"
print logistic(1.5)
print
# needs to handle arrays too
print "Should print [ 0.450166 0.5124974 0.98201379]:"
print logistic(np.array([-.2,.05,4]))
print
# graph the function
z = np.linspace(-6,6,50)
logistic_out = logistic(z)
plt.figure()
# TO DO: write the line of code to plot the function
plt.plot(z, logistic_out, 'b-o')
plt.title("Logistic Function")
plt.xlabel('Input')
plt.ylabel('Output')
plt.show()
"""
Explanation: That worked okay. 9.75% error is respectable. Yet, let's think a bit harder about what's going on...
* It seems a bit arbitrary to set the class labels to 0 vs. 1. Why couldn't we have set them to -1 vs. +1? Or 500 vs. 230? The responses don't have the same intrinsic meaning they did before. Now the labels represent exclusive class membership whereas before they represented physical quantities (baseball player's salary, for example).
* During training, we're optimizing squared error, but the metric we truly care about is classification percentage. Squared distance seems inappropriate especially when it's not even clear to what value the responses should be set.
Here's an idea: since we care primarily about classification error, let's make that our loss function...
\begin{eqnarray}
\mathcal{L}_{\mathrm{class}} = \begin{cases} 1, & \text{if $y_{i}\ne$ round($\hat y_{i}$)},\\ 0, & \text{otherwise}.\end{cases}
\end{eqnarray}
where $\hat y_{i}$ is our model's prediction of label $y_{i} \in \{0,1\}$ and round() sends $\hat y_{i}$ to 0 or 1, whichever is closer. Great. Now all we have to do is perform gradient descent to train the model...wait a minute...$\mathcal{L}_{\mathrm{class}}$ isn't differentiable.
Let's consider another loss function:
\begin{eqnarray}
\mathcal{L} = \sum_{i=1}^{N} -y_{i} \log \hat y_{i} - (1-y_{i}) \log (1-\hat y_{i})
\end{eqnarray}
where, again, $\hat y_{i}$ is our model's prediction of label $y_{i} \in {0,1}$. Here $\log$ will refer to the natural logarithm, base $e$. This is called the cross-entropy error function. Notice it's well-suited for classification in that it is directly optimizing towards $0$ and $1$. To see this, let $y_{i}=1$. In that case, the second term is zero (due to the $1-y_{i}$ coefficient) and the loss becomes $\mathcal{L}= - \log \hat y_{i}$. Recall that $-\log \hat y_{i} = 0$ when $\hat y_{i}=1$ and that $-\log \hat y_{i} = \infty$ when $\hat y_{i}=0$. Thus, we are encouraging $\hat y_{i}$ to become equal to $1$, its class label, and incurring penalty the more it moves towards $0$.
On an advanced note: Cross-entropy loss still may seem arbitrary to some readers. It is derived by taking the negative logarithm of the Bernoulli distribution's density function, which has support {0,1}. Therefore, we can think of each class label as the result of a Bernoulli trial--a parameterized coin flip, essentially. Many loss functions are merely the negative logarithm of some probability density function. Squared error is derived by taking the $-\log$ of the Normal density function.
2. Modifying the Linear Model
Now that we have our loss function and proper labels, we turn to the model itself, represented by the parameter $\hat y$ above. What if we define $\hat y$ just as we did for linear regression?
\begin{equation}
\hat y_{i} = \beta_0 + \beta_1 x_{i,1} + \dots + \beta_D x_{i,D} = \mathbf{x}_i^T \mathbf{\beta}
\end{equation}
Notice parameterizing $\hat y_{i}$ with $\mathbf{x}_i^T\beta$ doesn't work since the value would be unconstrained and result in the loss being undefined if $\hat y\le 0$. Thus, we need a function $f$ such that $f:\mathbb{R} \mapsto (0,1)$. We can probably think-up many functions that have a range on this interval so we'll limit the functions we can use by specifying two more requirements: the function must (1) be differentiable (in order to perform gradient descent) and (2) have a probabilistic interpretation (to think of the output as the probability the input is in class 1).
Cumulative Distribution Functions (CDFs) have all of these nice properties. They 'squeeze' their input onto $(0,1)$, are differentiable (since that's how a pdf is derived) and have a probabilistic interpretation. In this case, we can use any CDF as long as it has support on $(-\infty, +\infty)$ since this is the range of $X_i^T\beta$.
Choosing which CDF to use can be a hard decision since each choice drags along assumptions we don't have time to go into here. We'll choose the Logistic Distribution's CDF:
\begin{equation}
f(z; 0, 1) = \frac{1}{1+e^{-z}}.
\end{equation}
Tradition partly dictates this choice, but it does provide the nice interpretation that $x_i^T\beta$ is modeling the 'log odds':
\begin{eqnarray}
\log \frac{\hat y}{1-\hat y} &=& \log \frac{f(z; 0, 1)}{1-f(z; 0, 1)} \\
&=& \log f(z; 0, 1) - \log (1-f(z; 0, 1)) \\
&=& -\log (1+e^{-z}) - \log (1-(1+e^{-z})^{-1}) \\
&=& -\log (1+e^{-z}) - \log e^{-z} + \log (1+e^{-z}) \\
&=& -\log e^{-z} \\
&=& z \\
&=& \mathbf{x}_i^T \mathbf{\beta} \end{eqnarray}
This use of the Logistic Distribution is where Logistic Regression gets its name. As a side note before proceeding, using the Normal CDF instead of the Logistic is called 'Probit Regression,' the second most popular regression framework.
<span style="color:red">STUDENT ACTIVITY (5 MINS)</span>
The Logistic transformation function is the key to extending regression to classification. Below you'll see the function def logistic(z). Complete it by filling in the logistic function and then graph the output.
End of explanation
"""
### compute the cross-entropy error
# labels: Numpy array containing the true class labels
# f: column vector of predictions (i.e. output of logistic function)
def cross_entropy(labels, f):
return np.sum(-1*np.multiply(labels,np.log(f)) - np.multiply((np.ones(N)-labels),np.log(np.ones(N)-f)))
"""
Explanation: 3. Logistic Regression: A Summary
Data
We observe pairs $(\mathbf{x}_{i},y_{i})$ where
\begin{eqnarray}
y_{i} \in \{ 0, 1\} &:& \mbox{class label} \\
\mathbf{x}_{i} = (1, x_{i,1}, \dots, x_{i,D}) &:& \mbox{set of $D$ explanatory variables (aka features) and a bias term }
\end{eqnarray}
Parameters
\begin{eqnarray}
\mathbf{\beta}^{T} = (\beta_{0}, \dots, \beta_{D}) : \mbox{values encoding the relationship between the features and label}
\end{eqnarray}
Transformation Function
\begin{equation}
f(z_{i}=\mathbf{x}_{i} \mathbf{\beta} ) = (1+e^{-\mathbf{x}_{i} \mathbf{\beta} })^{-1}
\end{equation}
Error Function
\begin{eqnarray}
\mathcal{L} = \sum_{i=1}^{N} -y_{i} \log f(\mathbf{x}_{i} \mathbf{\beta} ) - (1-y_{i}) \log (1-f(\mathbf{x}_{i} \mathbf{\beta} ))
\end{eqnarray}
End of explanation
"""
### compute the gradient (derivative w.r.t. Beta)
# features: NxD feature matrix
# labels: Numpy array containing the true class labels
# f: column vector of predictions (i.e. output of logistic function)
def compute_Gradient(features, labels, f):
return np.sum(np.multiply(f-labels,features),0)[np.newaxis].T
"""
Explanation: Learning $\beta$
Like Linear Regression, learning a Logistic Regression model will entail minimizing the error function $\mathcal{L}$ above. Can we solve for $\beta$ in closed form? Let's look at the derivative of $\mathcal{L}$ with respect to $\beta$:
\begin{eqnarray}
\frac{\partial \mathcal{L}_{i}}{\partial \mathbf{\beta}} &=& \frac{\partial \mathcal{L}_{i}}{\partial f(z_{i})} \frac{\partial f(z_{i})}{\partial z_{i}} \frac{\partial z_{i}}{\partial \mathbf{\beta}}\\
&=& \left[\frac{-y_{i}}{f(\mathbf{x}_{i} \mathbf{\beta})} - \frac{y_{i}-1}{1-f(\mathbf{x}_{i} \mathbf{\beta})} \right] f(\mathbf{x}_{i} \mathbf{\beta})(1-f(\mathbf{x}_{i} \mathbf{\beta}))\mathbf{x}_{i}\\
&=& [-y_{i}(1-f(\mathbf{x}_{i} \mathbf{\beta} )) - (y_{i}-1)f(\mathbf{x}_{i} \mathbf{\beta} )]\mathbf{x}_{i}\\
&=& [f(\mathbf{x}_{i} \mathbf{\beta} ) - y_{i}]\mathbf{x}_{i}
\end{eqnarray}
End of explanation
"""
# set the random number generator for reproducibility
np.random.seed(49)
# Randomly initialize the Beta vector
beta = np.random.multivariate_normal([0,0,0], [[1,0,0],[0,1,0],[0,0,1]], 1).T
# Initialize the step-size
alpha = 0.00001
# Initialize the gradient
grad = np.infty
# Set the tolerance
tol = 1e-6
# Initialize error
old_error = 0
error = [np.infty]
# Run Gradient Descent
start_time = time.time()
iter_idx = 1
# loop until gradient updates become small
while (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):
f = logistic(np.dot(features,beta))
old_error = error[-1]
# track the error
error.append(cross_entropy(class_labels, f))
grad = compute_Gradient(features, class_labels, f)
# update parameters
beta = beta - alpha*grad
iter_idx += 1
end_time = time.time()
print "Training ended after %i iterations, taking a total of %.2f seconds." %(iter_idx, end_time-start_time)
print "Final Cross-Entropy Error: %.2f" %(error[-1])
# compute error on training data
predictions = logistic(np.dot(features, beta))
print "Classification Error on Training Set: %.2f%%" %(calc_classification_error(predictions, class_labels) * 100)
# generate the plot
plot_classification_data(class1_features, class2_features, beta, logistic_flag=True)
"""
Explanation: We see that the first derivative contains the term $f(X_{i}\beta)$, meaning the gradient depends on $\beta$ in some non-linear way. We have no choice but to use the Gradient Descent algorithm:
- Randomly initialize $\beta$
- Until $\alpha || \nabla \mathcal{L} || < tol $:
- $\mathbf{\beta}_{t+1} = \mathbf{\beta}_{t} - \alpha \nabla_{\mathbf{\beta}} \mathcal{L}$
Putting it all together in a simple example...
End of explanation
"""
def compute_Hessian(features, f):
# X = feature matrix, size NxD),
# f = predictions (logistic outputs), size Nx1
# TO DO: return the Hessian matrix, size DxD
n = len(features)
A = np.multiply(f,np.ones(n)[np.newaxis].T-f)
A = np.diag(A.T[0])
return np.dot(features.T, np.dot(A ,features))
# a few tests to make sure your function is working
X = np.array([[1,2],[3,4],[5,6]])
f = np.array([.1,.3,.5])[np.newaxis].T
print "Should print [[ 8.23 10.2 ];[ 10.2 12.72]]:"
print compute_Hessian(X,f)
print
X = np.array([[1],[4],[6]])
f = np.array([.01,.13,.55])[np.newaxis].T
print "Should print [[ 10.7295]]:"
print compute_Hessian(X,f)
"""
Explanation: 4. Newton's Method
Choosing the step-size, $\alpha$, can be painful since there is no principled way to set it. We have little intuition for what parameter space really looks like and therefore no sense of how to move most efficiently. Knowing the curvature of the space will solve this problem (to some extent). Therefore, we arrive at Newton's Method:
\begin{equation}
\beta_{t+1} = \beta_{t} - (\frac{\partial^{2} \mathcal{L}}{\partial \beta \partial \beta^{T}})^{-1} \nabla_{\beta} \mathcal{L}
\end{equation}
where $(\frac{\partial^{2} \mathcal{L}}{\partial \beta \partial \beta^{T}})^{-1}$ is the inverse of the matrix of second derivatives, also known as the Hessian Matrix. For Logistic regression, the Hessian is
\begin{equation}
\frac{\partial^{2} \mathcal{L}}{\partial \beta \partial \beta^{T}} = \mathbf{X}^{T}\mathbf{A}\mathbf{X}
\end{equation}
where $\mathbf{A}= \mathrm{diag}(f(X_{i}\beta)(1-f(X_{i}\beta)))$, a matrix with the logistic derivative $f' = f(1-f)$ along its diagonal.
Our new parameter update is:
\begin{eqnarray}
\beta_{t+1} &=& \beta_{t} - (\mathbf{X}^{T}\mathbf{A}\mathbf{X})^{-1}\mathbf{X}^{T}[f(\mathbf{X}\beta) - \mathbf{y}]
\end{eqnarray}
As you can see, we no longer need to specify a step-size. We've replaced $\alpha$ with $(\frac{\partial^{2} \mathcal{L}}{\partial \beta \partial \beta^{T}})^{-1}$ and everything else stays the same.
<span style="color:red">STUDENT ACTIVITY (10 MINS)</span>
Write a function that computes the Hessian matrix ($\mathbf{X}^{T}\mathbf{A}\mathbf{X}$).
End of explanation
"""
# set the random number generator for reproducibility
np.random.seed(1801843607)
# Save the errors from run above
no_Newton_errors = error
# Randomly initialize the Beta vector
beta = np.random.multivariate_normal([0,0,0], [[.1,0,0],[0,.1,0],[0,0,.1]], 1).T
# Initialize error
old_error = 0
error = [np.infty]
# Run Newton's Method
start_time = time.time()
iter_idx = 1
# Loop until error doesn't change (as opposed to gradient)
while (abs(error[-1] - old_error) > tol) and (iter_idx < 300):
f = logistic(np.dot(features,beta))
old_error = error[-1]
# track the error
error.append(cross_entropy(class_labels, f))
grad = compute_Gradient(features, class_labels, f)
hessian = compute_Hessian(features,f)
# update parameters via Newton's method
beta = beta - np.dot(np.linalg.inv(hessian),grad)
iter_idx += 1
end_time = time.time()
print "Training ended after %i iterations, taking a total of %.2f seconds." %(iter_idx, end_time-start_time)
print "Final Cross-Entropy Error: %.2f" %(error[-1])
# compute the classification error on training data
predictions = logistic(np.dot(features, beta))
print "Classification Error on Training Set: %.2f%%" %(calc_classification_error(predictions, class_labels) * 100)
# generate the plot
plot_classification_data(class1_features, class2_features, beta, logistic_flag=True)
"""
Explanation: Let's try Newton's Method on our simple example...
End of explanation
"""
# plot difference between with vs without Newton
plt.figure()
# grad descent w/ step size
plt.plot(range(len(no_Newton_errors)), no_Newton_errors, 'k-', linewidth=4, label='Without Newton')
# newton's method
plt.plot(range(len(error)), error, 'g-', linewidth=4, label='With Newton')
plt.ylim([0,300000])
plt.xlim([0,150])
plt.legend()
plt.title("Newton's Method vs. Gradient Descent w/ Step Size")
plt.xlabel("Training Iteration")
plt.ylabel("Cross-Entropy Error")
plt.show()
"""
Explanation: Let's look at the training progress to see how much more efficient Newton's method is.
End of explanation
"""
from sklearn.linear_model import LogisticRegression
# set the random number generator for reproducibility
np.random.seed(75)
#Initialize the model
skl_LogReg = LogisticRegression()
#Train it
start_time = time.time()
skl_LogReg.fit(features, np.ravel(class_labels))
end_time = time.time()
print "Training ended after %.4f seconds." %(end_time-start_time)
# compute the classification error on training data
predictions = skl_LogReg.predict(features)
print "Classification Error on Training Set: %.2f%%" %(calc_classification_error(predictions, class_labels) * 100)
# generate the plot
plot_classification_data(class1_features, class2_features, skl_LogReg.coef_.T, logistic_flag=True)
"""
Explanation: 5. Logistic Regression with SciKit-Learn
Here is the documentation for SciKit-Learn's implementation of Logistic Regression
It's quite easy to use. Let's jump right in and repeat the above experiments.
End of explanation
"""
nba_shot_data = pd.read_csv('./data/nba_experiment/NBA_xy_features.csv')
nba_shot_data.head()
nba_shot_data.describe()
"""
Explanation: Experiments
6. Dataset #1: NBA Shot Outcomes
The first real dataset we'll tackle is one describing the location and outcome of shots taken in professional basketball games. Let's use Pandas to load and examine the data.
End of explanation
"""
# split data into train and test
train_set_size = int(.80*len(nba_shot_data))
train_features = nba_shot_data.ix[:train_set_size,['x_Coordinate','y_Coordinate']]
test_features = nba_shot_data.ix[train_set_size:,['x_Coordinate','y_Coordinate']]
train_class_labels = nba_shot_data.ix[:train_set_size,['shot_outcome']]
test_class_labels = nba_shot_data.ix[train_set_size:,['shot_outcome']]
#Train it
start_time = time.time()
skl_LogReg.fit(train_features, np.ravel(train_class_labels))
end_time = time.time()
print "Training ended after %.2f seconds." %(end_time-start_time)
# compute the classification error on training data
predictions = skl_LogReg.predict(test_features)
print "Classification Error on the Test Set: %.2f%%" %(calc_classification_error(predictions, np.array(test_class_labels)) * 100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %(np.sum(test_class_labels)/len(test_class_labels)*100)
# visualize the boundary on the basketball court
visualize_court(skl_LogReg)
"""
Explanation: Simple enough. Now let's train a Logistic Regression model on it, leaving out a test set.
End of explanation
"""
# first we need to extract the file from the zip
import zipfile
zip = zipfile.ZipFile('./data/nba_experiment/NBA_all_features.csv.zip')
zip.extractall('./data/nba_experiment/')
nba_all_features = pd.read_csv('./data/nba_experiment/NBA_all_features.csv')
nba_all_features.head()
"""
Explanation: Not bad. We're beating the random baseline of 45% error. However, visualizing the decision boundary exposes a systemic problem with using a linear model on this dataset: it is not powerful enough to adapt to the geometry of the court. This is a domain-specific contraint that should be considered when selecting the model and features. For instance, a Gaussian-based classifier works a bit better, achieving 39.02% error. Its decision boundary is visualized below.
<img src="https://raw.githubusercontent.com/enalisnick/NBA_shot_analysis/master/results/spatial_features_results/Gaussian_Mixture_Model.png" alt="" style="width: 250px;"/>
Can we do better by adding more features? For instance, if we knew the position (Guard vs. Forward vs. Center) of the player taking the shot, would that help? Let's try. First, load a new dataset.
End of explanation
"""
# split data into train and test
train_features = nba_all_features.ix[:train_set_size,:'Center']
test_features = nba_all_features.ix[train_set_size:,:'Center']
train_class_labels = nba_all_features.ix[:train_set_size,['shot_outcome']]
test_class_labels = nba_all_features.ix[train_set_size:,['shot_outcome']]
########## TO DO: TRAIN SCIKIT-LEARN'S LOG. REG. MODEL ##########
skl_LogReg.fit(train_features, np.ravel(train_class_labels))
predictions = skl_LogReg.predict(test_features)
print "Classification Error on the Test Set: %.2f%%" %(calc_classification_error(predictions, np.array(test_class_labels)) * 100)
#################################################################
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %(np.sum(test_class_labels)/len(test_class_labels)*100)
# we can't visualize since D>2
"""
Explanation: One thing to notice is that this data is noisy. Look at row 2 above; it says a player made a dunk from 33 feet above the baseline--that's beyond the three point line.
<span style="color:red">STUDENT ACTIVITY (20 MINS)</span>
Your task is to train SciKit-Learn's Logistic Regression model on the new NBA data. The data is split into train and test features already: fit the model on train_features and train_class_labels and then compute the test classification error, which should be around 38%-39%. BONUS: If you successfully train the SciKit-Learn model, implement gradient descent or Newton's method.
End of explanation
"""
for idx, feature in enumerate(nba_all_features):
if idx<11:
print "%s: %.2f" %(feature, skl_LogReg.coef_[0][idx])
"""
Explanation: Great! We've improved by a few percentage points. Let's look at which features the model weighted.
End of explanation
"""
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
# use SciKit Learn's loading methods
categories = ['soc.religion.christian', 'alt.atheism']
train_20ng = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'), categories=categories)
test_20ng = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'), categories=categories)
# transform the text into word counts
vectorizer = CountVectorizer(stop_words='english', max_features=1000)
train_vectors = vectorizer.fit_transform(train_20ng.data)
test_vectors = vectorizer.transform(test_20ng.data) #use the transform fit to the training data
train_targets = train_20ng.target
test_targets = test_20ng.target
print "The training data size is "+str(train_vectors.shape)
print "The test data size is "+str(test_vectors.shape)
# print the first 500 words of an article
print "Example text:"
print train_20ng.data[0][:500]
print
print "Example count vector:"
#print train_vectors[0].todense()
"""
Explanation: Interestingly, the classifier exploited the location features very little. The position of the player was much more important, especially if he was a center.
7. Dataset #2: 20 News Groups
For the second experiment, we'll work with the very popular '20 News Groups' dataset consisting of, well, 20 different categories of articles. SciKit-Learn already has it ready for import.
End of explanation
"""
#Train it
start_time = time.time()
skl_LogReg.fit(train_vectors, train_targets)
end_time = time.time()
print "Training ended after %.2f seconds." %(end_time-start_time)
# compute the classification error on training data
predictions = skl_LogReg.predict(test_vectors)
print "Classification Error on the Test Set: %.2f%%" %(calc_classification_error(predictions, test_targets) * 100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %(100 - sum(test_targets)*100./len(test_targets))
"""
Explanation: As you can see, the vector is super sparse and very high dimensional--much different than the data we've been working with previously. Let's see how SciKit-Learn's Logistic Regression model handles it.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
#### YOUR CODE GOES HERE
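# One possible completion (a sketch): swap the word-count features for TF-IDF features and
# refit the same logistic regression model; the tfidf_* names below are our own.
tfidf_vectorizer = TfidfVectorizer(stop_words='english', max_features=1000)
train_tfidf = tfidf_vectorizer.fit_transform(train_20ng.data)
test_tfidf = tfidf_vectorizer.transform(test_20ng.data)  # reuse the vocabulary fit on the training data
skl_LogReg.fit(train_tfidf, train_targets)
predictions = skl_LogReg.predict(test_tfidf)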
print "Classification Error on the Test Set: %.2f%%" %(calc_classification_error(predictions, test_targets) * 100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %(100 - sum(test_targets)*100./len(test_targets))
"""
Explanation: 24% error is respectable, but there's still room for improvement. In general, working with natural language is one of the hardest application domains in Machine Learning due to the fact that we often have to reduce the abstract, sometimes ambiguous semantic meaning to a superficial token.
<span style="color:red">STUDENT ACTIVITY</span>
In the time remaining in the session, we'd like you to try an open-ended activity to get experience implementing the full prediction pipeline. We've provided some suggestions below, but feel free to improvise.
Suggestion #1: Feature Engineering for 20 News Groups
Can you beat the baseline error rate on the 20 News Groups dataset? One way to do this is to have better features--word counts are rather blunt. Go read about TFIDF and then use SciKit-Learn's TFIDF Vectorizer to compute a new feature matrix for the 20 News Groups dataset. You should be able to get an error rate of about 40% if not better. The code is started for you below.
End of explanation
"""
|
transcranial/keras-js
|
notebooks/layers/pooling/GlobalAveragePooling2D.ipynb
|
mit
|
data_in_shape = (6, 6, 3)
L = GlobalAveragePooling2D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: GlobalAveragePooling2D
[pooling.GlobalAveragePooling2D.0] input 6x6x3, data_format='channels_last'
End of explanation
"""
data_in_shape = (3, 6, 6)
L = GlobalAveragePooling2D(data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.GlobalAveragePooling2D.1] input 3x6x6, data_format='channels_first'
End of explanation
"""
data_in_shape = (5, 3, 2)
L = GlobalAveragePooling2D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.GlobalAveragePooling2D.2] input 5x3x2, data_format='channels_last'
End of explanation
"""
import os
filename = '../../../test/data/layers/pooling/GlobalAveragePooling2D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
"""
Explanation: export for Keras.js tests
End of explanation
"""
|
jmhsi/justin_tinker
|
data_science/courses/temp/courses/dl1/embedding_refactoring_unit_tests.ipynb
|
apache-2.0
|
embed = torch.nn.Embedding(10,3)
words = torch.autograd.Variable(torch.LongTensor([[1,2,4,5] ,[4,3,2,9]]))
"""
Explanation: Test 1
Initialize embedding matrix and input
End of explanation
"""
torch.manual_seed(88123)
dropout_out_old = embedded_dropout(embed, words, dropout=0.40)
dropout_out_old
"""
Explanation: propagate the input via the old method (embedded_dropout)
End of explanation
"""
torch.manual_seed(88123)
embed_dropout_layer = EmbeddingDropout(embed)
dropout_out_new = embed_dropout_layer(words, dropout=0.4)
dropout_out_new
print(np.testing.assert_array_equal(to_np(dropout_out_old),to_np(dropout_out_new)))
"""
Explanation: propagate the input via the forward method in the new layer (EmbeddingDropout)
End of explanation
"""
embed = torch.nn.Embedding(10,7)
words = torch.autograd.Variable(torch.LongTensor([[1,2,4,5,2,8] ,[4,3,2,9,7,6]]))
"""
Explanation: Test 2
Initialize embedding and matrix
End of explanation
"""
torch.manual_seed(7123)
dropout_out_old = embedded_dropout(embed, words, dropout=0.64)
dropout_out_old
"""
Explanation: get the output by propagating the input via the old method (embedded_dropout)
End of explanation
"""
torch.manual_seed(7123)
embed_dropout_layer = EmbeddingDropout(embed)
dropout_out_new = embed_dropout_layer(words, dropout=0.64)
dropout_out_new
print(np.testing.assert_array_equal(to_np(dropout_out_old),to_np(dropout_out_new)))
"""
Explanation: get the output by propagating the input via the new EmbeddingDropout layer
End of explanation
"""
|
martinjrobins/hobo
|
examples/toy/distribution-neals-funnel.ipynb
|
bsd-3-clause
|
import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create log pdf
log_pdf = pints.toy.NealsFunnelLogPDF()
# Plot marginal density
levels = np.linspace(-7, -1, 20)
x = np.linspace(-10, 10, 100)
y = np.linspace(-10, 10, 100)
X, Y = np.meshgrid(x, y)
Z = [[log_pdf.marginal_log_pdf(i, j) for i in x] for j in y]
plt.contour(X, Y, Z, levels = levels)
plt.xlabel('x_i')
plt.ylabel('nu')
plt.show()
"""
Explanation: Neal's funnel
This notebook introduces a toy distribution due to Radford Neal: the $(d+1)$-dimensional distribution
$p(\boldsymbol{x},\nu) = \left[\prod_{i=1}^{d} \mathcal{N}(x_i|0,e^{\nu / 2})\right] \mathcal{N}(\nu|0,3),$
which has been shown to cause problems for samplers owing to its "funnel" shaped geometry in the marginals $(x_i,\nu)$,
$p(x_i,\nu) = \mathcal{N}(x_i|0,e^{\nu / 2})\mathcal{N}(\nu|0,3),$
which we now plot.
End of explanation
"""
direct = log_pdf.sample(1500)
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.scatter(direct[:, 0], direct[:, 9], alpha=0.2)
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
"""
Explanation: We can also sample independently from this toy LogPDF, and add that to the visualisation:
End of explanation
"""
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform(-25, 25, size=(3, 10))
mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HaarioBardenetACMC)
# Stop after 10000 iterations
mcmc.set_max_iterations(3000)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
"""
Explanation: We now try to sample from the distribution with MCMC:
End of explanation
"""
stacked = np.vstack(chains)
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.scatter(stacked[:, 0], stacked[:, 9], alpha=0.2)
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
"""
Explanation: The adaptive covariance fails to get into the funnel region.
End of explanation
"""
print(log_pdf.kl_divergence(stacked))
print(log_pdf.kl_divergence(direct))
"""
Explanation: Now check how close the result is to the expected result, using the Kullback-Leibler divergence, and compare this to the result from sampling directly.
End of explanation
"""
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform(0, 10, size=(3, 10))
sigma0 = np.repeat(0.25, 10)
mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HamiltonianMCMC, sigma0=sigma0)
# Stop after 500 iterations
mcmc.set_max_iterations(500)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
stacked = np.vstack(chains)
"""
Explanation: Hamiltonian Monte Carlo fares much better on this curved density.
End of explanation
"""
print(log_pdf.kl_divergence(stacked))
print(log_pdf.kl_divergence(direct))
"""
Explanation: Hamiltonian Monte Carlo does better than adaptive but still not great.
End of explanation
"""
divergent_transitions = mcmc.samplers()[0].divergent_iterations()
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.plot(chains[2][:, 1], chains[2][:, 9], alpha=0.5)
plt.scatter(chains[0][divergent_transitions, 0], chains[0][divergent_transitions, 1], color='red')
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
"""
Explanation: Visualising the path of one of the chains, we see that the sampler struggles to explore both the neck and the outside region efficiently.
End of explanation
"""
|
ampl/amplpy
|
notebooks/colab_bash.ipynb
|
bsd-3-clause
|
!pip install -q amplpy
"""
Explanation: AMPLPY: Google Colab Template
Documentation: http://amplpy.readthedocs.io
GitHub Repository: https://github.com/ampl/amplpy
PyPI Repository: https://pypi.python.org/pypi/amplpy
Jupyter Notebooks: https://github.com/ampl/amplpy/tree/master/notebooks
Setup
End of explanation
"""
%%bash
test ! -z $COLAB_GPU || exit 0 # Run only if running on Google Colab
test ! -f ampl.installed || exit 0 # Run only once
rm -rf ampl.linux-intel64
# You can install a demo bundle with all solvers:
# curl -O https://portal.ampl.com/dl/amplce/ampl.linux64.tgz && tar xzf ampl.linux64.tgz
# Or pick individual modules (recommended in order to reduce disk usage):
curl -O https://portal.ampl.com/dl/modules/ampl-module.linux64.tgz && tar xzf ampl-module.linux64.tgz
curl -O https://portal.ampl.com/dl/modules/coin-module.linux64.tgz && tar xzf coin-module.linux64.tgz
cp ampl.linux-intel64/ampl.lic ampl.linux-intel64/ampl.lic.demo
touch ampl.installed
%%bash
# If you have an AMPL Cloud License or an AMPL CE license, you can use it on Google Colab
# Note: Your license UUID should never be shared. Please make sure you delete the license UUID
# and rerun this cell before sharing the notebook with anyone
LICENSE_UUID=
test ! -z $COLAB_GPU || exit 0 # Run only if running on Google Colab
cd ampl.linux-intel64 && pwd
test -z $LICENSE_UUID && cp ampl.lic.demo ampl.lic # Restore demo license in case LICENSE_UUID is empty
test ! -z $LICENSE_UUID && curl -O https://portal.ampl.com/download/license/$LICENSE_UUID/ampl.lic
./ampl -vvq
import os
if 'COLAB_GPU' in os.environ:
os.environ['PATH'] += os.pathsep + '/content/ampl.linux-intel64'
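# Optional sanity check (not in the original notebook): confirm that the 'ampl'
# executable is now reachable on the PATH after the update above.
from shutil import which
print('ampl executable found at:', which('ampl'))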
"""
Explanation: Google Colab integration
End of explanation
"""
from amplpy import AMPL, register_magics
ampl = AMPL()
# Store %%ampl cells in the list _ampl_cells
# Evaluate %%ampl_eval cells with ampl.eval()
register_magics(store_name='_ampl_cells', ampl_object=ampl)
"""
Explanation: Import amplpy, instantiate an AMPL object, and register the Jupyter notebook magics
End of explanation
"""
%%ampl_eval
option version;
"""
Explanation: Use %%ampl_eval to evaluate AMPL commands
End of explanation
"""
%%writefile cut2.mod
problem Cutting_Opt;
# ----------------------------------------
param nPAT integer >= 0, default 0;
param roll_width;
set PATTERNS = 1..nPAT;
set WIDTHS;
param orders {WIDTHS} > 0;
param nbr {WIDTHS,PATTERNS} integer >= 0;
check {j in PATTERNS}: sum {i in WIDTHS} i * nbr[i,j] <= roll_width;
var Cut {PATTERNS} integer >= 0;
minimize Number: sum {j in PATTERNS} Cut[j];
subject to Fill {i in WIDTHS}:
sum {j in PATTERNS} nbr[i,j] * Cut[j] >= orders[i];
problem Pattern_Gen;
# ----------------------------------------
param price {WIDTHS} default 0;
var Use {WIDTHS} integer >= 0;
minimize Reduced_Cost:
1 - sum {i in WIDTHS} price[i] * Use[i];
subject to Width_Limit:
sum {i in WIDTHS} i * Use[i] <= roll_width;
%%writefile cut.dat
data;
param roll_width := 110 ;
param: WIDTHS: orders :=
20 48
45 35
50 24
55 10
75 8 ;
%%writefile cut2.run
# ----------------------------------------
# GILMORE-GOMORY METHOD FOR
# CUTTING STOCK PROBLEM
# ----------------------------------------
option solver cbc;
option solution_round 6;
model cut2.mod;
data cut.dat;
problem Cutting_Opt;
option relax_integrality 1;
option presolve 0;
problem Pattern_Gen;
option relax_integrality 0;
option presolve 1;
let nPAT := 0;
for {i in WIDTHS} {
let nPAT := nPAT + 1;
let nbr[i,nPAT] := floor (roll_width/i);
let {i2 in WIDTHS: i2 <> i} nbr[i2,nPAT] := 0;
};
repeat {
solve Cutting_Opt;
let {i in WIDTHS} price[i] := Fill[i].dual;
solve Pattern_Gen;
if Reduced_Cost < -0.00001 then {
let nPAT := nPAT + 1;
let {i in WIDTHS} nbr[i,nPAT] := Use[i];
}
else break;
};
display nbr;
display Cut;
option Cutting_Opt.relax_integrality 0;
option Cutting_Opt.presolve 10;
solve Cutting_Opt;
display Cut;
"""
Explanation: Use %%writefile to create the model, data, and script files
End of explanation
"""
%%ampl_eval
commands cut2.run;
"""
Explanation: Use %%ampl_eval to run the script cut2.run
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/inpe/cmip6/models/sandbox-2/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-2', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: INPE
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
YeEmrick/learning
|
cs231/assignment/assignment1/two_layer_net.ipynb
|
apache-2.0
|
# A bit of setup
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
"""
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
"""
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
"""
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
"""
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small. We get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
"""
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
"""
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
"""
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularization loss.
End of explanation
"""
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.05)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.05)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
"""
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
"""
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=5e-6,
num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
"""
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
"""
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
"""
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
"""
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
"""
Explanation: Train a network
To train our network we will use SGD. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
"""
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.legend()
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
"""
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
"""
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
best_val = -1
best_stats = None
learning_rates = [1e-2, 1e-3]
regularization_strengths = [0.4, 0.5, 0.6]
results = {}
iters = 2000
for lr in learning_rates:
for rs in regularization_strengths:
net = TwoLayerNet(input_size, hidden_size, num_classes)
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=iters, batch_size=200, learning_rate=lr,
learning_rate_decay=0.95, reg=rs)
y_train_pred = net.predict(X_train)
acc_train = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (acc_train, acc_val)
if best_val < acc_val:
best_stats = stats
best_val = acc_val
best_net = net
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print("lr, ", lr, "reg ", reg, "train_accuracy", train_accuracy, "val_accuracy", val_accuracy)
print("best validation accuracy achieved during cross-validation:", best_val)
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
"""
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
"""
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
"""
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
End of explanation
"""
|
bjshaw/phys202-2015-work
|
days/day11/Interpolation.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
"""
Explanation: Interpolation
Learning Objective: Learn to interpolate 1d and 2d datasets of structured and unstructured points using SciPy.
End of explanation
"""
x = np.linspace(0,4*np.pi,10)
x
"""
Explanation: Overview
We have already seen how to evaluate a Python function at a set of numerical points:
$$ f(x) \rightarrow f_i = f(x_i) $$
Here is an array of points:
End of explanation
"""
f = np.sin(x)
f
plt.plot(x, f, marker='o')
plt.xlabel('x')
plt.ylabel('f(x)');
"""
Explanation: This creates a new array of points that are the values of $\sin(x_i)$ at each point $x_i$:
End of explanation
"""
from scipy.interpolate import interp1d
"""
Explanation: This plot shows that the points in this numerical array are an approximation to the actual function as they don't have the function's value at all possible points. In this case we know the actual function ($\sin(x)$). What if we only know the value of the function at a limited set of points, and don't know the analytical form of the function itself? This is common when the data points come from a set of measurements.
Interpolation is a numerical technique that enables you to construct an approximation of the actual function from a set of points:
$$ \{x_i,f_i\} \rightarrow f(x) $$
It is important to note that unlike curve fitting or regression, interpolation does not allow you to incorporate a statistical model into the approximation. Because of this, interpolation has limitations:
It cannot accurately construct the function's approximation outside the limits of the original points.
It cannot tell you the analytical form of the underlying function.
Once you have performed interpolation you can:
Evaluate the function at other points not in the original dataset.
Use the function in other calculations that require an actual function.
Compute numerical derivatives or integrals.
Plot the approximate function on a finer grid than the original dataset.
Warning:
The different functions in SciPy work with a range of different 1d and 2d arrays. To help you keep all of that straight, I will use lowercase variables for 1d arrays (x, y) and uppercase variables (X,Y) for 2d arrays.
1d data
We begin with a 1d interpolation example with regularly spaced data. The function we will use is interp1d:
End of explanation
"""
x = np.linspace(0,4*np.pi,10) # only use 10 points to emphasize this is an approx
f = np.sin(x)
"""
Explanation: Let's create the numerical data we will use to build our interpolation.
End of explanation
"""
sin_approx = interp1d(x, f, kind='cubic')
"""
Explanation: To create our approximate function, we call interp1d as follows, with the numerical data. Options for the kind argument include:
linear: draw a straight line between initial points.
nearest: return the value of the function of the nearest point.
slinear, quadratic, cubic: use a spline (a particular kind of piecewise polynomial of a given order).
The most common case you will want to use is cubic spline (try other options):
End of explanation
"""
newx = np.linspace(0,4*np.pi,100)
newf = sin_approx(newx)
"""
Explanation: The sin_approx variable that interp1d returns is a callable object that can be used to compute the approximate function at other points. Compute the approximate function on a fine grid:
End of explanation
"""
plt.plot(x, f, marker='o', linestyle='', label='original data')
plt.plot(newx, newf, marker='.', label='interpolated');
plt.legend();
plt.xlabel('x')
plt.ylabel('f(x)');
"""
Explanation: Plot the original data points, along with the approximate interpolated values. It is quite amazing to see how the interpolation has done a good job of reconstructing the actual function with relatively few points.
End of explanation
"""
plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))
plt.xlabel('x')
plt.ylabel('Absolute error');
"""
Explanation: Let's look at the absolute error between the actual function and the approximate interpolated function:
End of explanation
"""
x = 4*np.pi*np.random.rand(15)
f = np.sin(x)
sin_approx = interp1d(x, f, kind='cubic')
# We have to be careful about not interpolating outside the range
newx = np.linspace(np.min(x), np.max(x),100)
newf = sin_approx(newx)
plt.plot(x, f, marker='o', linestyle='', label='original data')
plt.plot(newx, newf, marker='.', label='interpolated');
plt.legend();
plt.xlabel('x')
plt.ylabel('f(x)');
plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx)))
plt.xlabel('x')
plt.ylabel('Absolute error');
"""
Explanation: 1d non-regular data
It is also possible to use interp1d when the x data is not regularly spaced. To show this, let's repeat the above analysis with randomly distributed data in the range $[0,4\pi]$. Everything else is the same.
End of explanation
"""
from scipy.interpolate import interp2d
"""
Explanation: Notice how the absolute error is larger in the intervals where there are no points.
2d structured
For the 2d case we want to construct a scalar function of two variables, given
$$ \{x_i, y_i, f_i\} \rightarrow f(x,y) $$
For now, we will assume that the points $\{x_i,y_i\}$ are on a structured grid of points. This case is covered by the interp2d function:
End of explanation
"""
def wave2d(x, y):
return np.sin(2*np.pi*x)*np.sin(3*np.pi*y)
"""
Explanation: Here is the actual function we will use the generate our original dataset:
End of explanation
"""
x = np.linspace(0.0, 1.0, 10)
y = np.linspace(0.0, 1.0, 10)
"""
Explanation: Build 1d arrays to use as the structured grid:
End of explanation
"""
X, Y = np.meshgrid(x, y)
Z = wave2d(X, Y)
X
"""
Explanation: Build 2d arrays to use in computing the function on the grid points:
End of explanation
"""
plt.pcolor(X, Y, Z)
plt.colorbar();
plt.scatter(X, Y);
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: Here is a scatter plot of the points overlayed with the value of the function at those points:
End of explanation
"""
wave2d_approx = interp2d(X, Y, Z, kind='cubic')
"""
Explanation: You can see in this plot that the function is not smooth as we don't have its value on a fine grid.
Now let's compute the interpolated function using interp2d. Notice how we are passing 2d arrays to this function:
End of explanation
"""
xnew = np.linspace(0.0, 1.0, 40)
ynew = np.linspace(0.0, 1.0, 40)
Xnew, Ynew = np.meshgrid(xnew, ynew) # We will use these in the scatter plot below
Fnew = wave2d_approx(xnew, ynew) # The interpolating function automatically creates the meshgrid!
Fnew.shape
"""
Explanation: Compute the interpolated function on a fine grid:
End of explanation
"""
plt.pcolor(xnew, ynew, Fnew);
plt.colorbar();
plt.scatter(X, Y, label='original points')
plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);
"""
Explanation: Plot the original coarse grid of points, along with the interpolated function values on a fine grid:
End of explanation
"""
from scipy.interpolate import griddata
"""
Explanation: Notice how the interpolated values (green points) are now smooth and continuous. The amazing thing is that the interpolation algorithm doesn't know anything about the actual function. It creates this nice approximation using only the original coarse grid (blue points).
2d unstructured
It is also possible to perform interpolation when the original data is not on a regular grid. For this, we will use the griddata function:
End of explanation
"""
x = np.random.rand(100)
y = np.random.rand(100)
"""
Explanation: There is an important difference between griddata and the interp1d/interp2d:
interp1d and interp2d return callable Python objects (functions).
griddata returns the interpolated function evaluated on a finer grid.
This means that you have to pass griddata an array that has the finer grid points to be used. Here is the coarse unstructured grid we will use:
End of explanation
"""
f = wave2d(x, y)
"""
Explanation: Notice how we pass these 1d arrays to our function and don't use meshgrid:
End of explanation
"""
plt.scatter(x, y);
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
"""
Explanation: It is clear that our grid is very unstructured:
End of explanation
"""
xnew = np.linspace(x.min(), x.max(), 40)
ynew = np.linspace(y.min(), y.max(), 40)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Xnew.shape, Ynew.shape
Fnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)
Fnew.shape
plt.pcolor(Xnew, Ynew, Fnew, label="points")
plt.colorbar()
plt.scatter(x, y, label='original points')
plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points')
plt.xlim(0,1)
plt.ylim(0,1)
plt.xlabel('x')
plt.ylabel('y');
plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.);
"""
Explanation: To use griddata we need to compute the final (structured) grid we want to compute the interpolated function on:
End of explanation
"""
|
mattmcd/PyBayes
|
scripts/amm_math_20210308.ipynb
|
apache-2.0
|
from IPython.display import HTML
# Hide code cells https://gist.github.com/uolter/970adfedf44962b47d32347d262fe9be
def hide_code():
return HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$("div.input").hide();
} else {
$("div.input").show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
import sympy as sp
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
%matplotlib inline
hide_code()
"""
Explanation: Characteristics of Autonomous Market Makers
Date: 2021-03-26
(Date started: December 2020 Christmas holidays)
Author: @mattmcd
This notebook describes an approach using method of characteristics solutions of partial differential equations (PDEs) to examine existing AMM invariants such as Uniswap, Balancer and Curve (a.k.a. Stableswap).
End of explanation
"""
x, y, w_x, w_y = sp.symbols('x y w_x w_y', positive=True)
k = sp.symbols('k', real=True)
X, Y = map(sp.Function, 'XY')
V = sp.Function('V')
"""
Explanation: From the Balancer whitepaper:
The bedrock of Balancer’s exchange functions is a surface defined by constraining a value function $V$
— a function of the pool’s weights and balances — to a constant. We will prove that this surface implies a spot price at each point such that, no matter what exchanges are carried out, the share of value of each token in the pool remains constant.
The Balancer whitepaper shows that the value function
$$V = \prod_{i=1}^{n} x_{i}^{w_{i}}$$
is related to the token spot prices by the ratio of its partial derivatives.
Starting from the Constraints
The idea of constant level sets of a value function creating constraints on system state (including prices) is discussed in 'From Curved Bonding to Configuration Spaces' by Zargham, Shorish, and Paruch (2020).
The existing Balancer value function implicit state constraint is 'the share of value of each token in the pool remains constant'.
In this section we look at starting from a set of constraints and see if it is possible to derive the corresponding value function. We can then use this value function to determine allowed state changes e.g. for a swap the number of output tokens for an initial state and given number of input tokens.
This approach feels familiar to the Lagrangian dynamics approach in classical physics (the author's background). In economics the standard approach seems to be start from a value function (a.k.a. utility function) and derive substitution functions that give prices. Here we attempt to solve the inverse problem.
As a starting point, we consider deriving the Balancer value function (a Cobb-Douglas Utility Function) from the set of constraints for swaps 'the share of value of each token in the pool remains constant'.
We consider below two and three asset pools with tokens $X, Y, Z$. The state of the system can be defined by three token balances $x$, $y$, $z$, and three weights $w_x$, $w_y$, $w_z$.
Uniswap
The two asset case is a generalized form of Uniswap having token weights $w_x$ and $w_y$ summing to 1. Uniswap uses $w_x = w_y = \frac{1}{2}$. Here we use $x$ and $y$ to represent the token X and token Y balances.
The total value of the pool in tems of token X is
$$v_{x}(x,y) = x + p_{x}^{y}y = x - \frac{\partial{x}}{\partial{y}}y$$
i.e. number of tokens X plus the spot price of converting Y tokens into X tokens.
The constant share of value constraint is hence represented by the equations:
$$w_{x} = \frac{x}{x - \frac{\partial{x}}{\partial{y}}y} \
w_y = \frac{- \frac{\partial{x}}{\partial{y}}y}{x - \frac{\partial{x}}{\partial{y}}y}$$
where in the two asset case the second equation is redundant since the weights sum to 1.
End of explanation
"""
sp.Eq(x/(x + y*V(x,y).diff(y)/V(x,y).diff(x)), w_x)
"""
Explanation: We can specify that swaps happen on some invariant surface $V(x,y)$, which allows us to replace the spot price $-\frac{\partial{x}}{\partial{y}}$ in the constraint $\frac{x}{x - y\frac{\partial{x}}{\partial{y}}} = w_x$, substituting $-\frac{\partial{x}}{\partial{y}} = \frac{\partial{V}}{\partial{y}}/\frac{\partial{V}}{\partial{x}}$ via the implicit function theorem.
End of explanation
"""
const_share_eq = sp.Eq(x*V(x,y).diff(x), w_x*(x*V(x,y).diff(x) + y*V(x,y).diff(y)))
const_share_eq
"""
Explanation: SymPy's PDE solver balks at this equation as written, so multiply through to make things easier.
End of explanation
"""
V_sol = sp.pdsolve(const_share_eq).subs({(1-w_x): w_y})
V_sol
"""
Explanation: It turns out that SymPy is capable of solving the PDE directly to give a general solution.
End of explanation
"""
sp.Eq(V(x,y), V_sol.rhs.simplify())
"""
Explanation: We can simplify the solution:
End of explanation
"""
sp.Eq((V(x,y).diff(y)/V(x,y).diff(x)), V_sol.rhs.diff(y)/V_sol.rhs.diff(x))
"""
Explanation: We show below that the spot price is as expected regardless of the exact form of $F$ and if a specific form is chosen we achieve the desired Cobb-Douglas form.
End of explanation
"""
sp.Eq(V(x,y), sp.exp(w_y*V_sol.rhs.args[0]).simplify())
"""
Explanation: By taking the exponential of the constant we obtain the general Uniswap invariant.
End of explanation
"""
sp.Eq(V(x,y), (w_y*V_sol.rhs.args[0]).expand())
"""
Explanation: Interestingly, the general solution offers the opportunity to use different functional forms to achieve the same constant share of value constraint.
End of explanation
"""
x, y, z, w_x, w_y, w_z = sp.symbols('x y z w_x w_y w_z', positive=True)
u, v, w = sp.symbols('u v w', positive=True)
xi = sp.Function('xi')
eta = sp.Function('eta')
zeta = sp.Function('zeta')
V = sp.Function('V')
V_d = sp.Function(r'\mathcal{V}')
const_share_eq_3_1 = sp.Eq(x*V(x,y,z).diff(x), w_x*(x*V(x,y,z).diff(x) + y*V(x,y,z).diff(y) + z*V(x,y,z).diff(z)))
const_share_eq_3_1
"""
Explanation: We could derive the swap formulae for each form. Would the formulae from the log form of the invariant be easier to implement on the Ethereum blockchain? This is left as an exercise for the reader.
Balancer
We now consider the general case of a Balancer pool with $n$ assets and weights summing to 1. Here we examine the three asset case and identify the geometric constraints imposed by the share of value conditions
$$
w_x = \frac{x}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}\\
w_y = \frac{-\frac{\partial{x}}{\partial{y}}y}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}\\
w_z = \frac{-\frac{\partial{x}}{\partial{z}}z}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}
$$
for tokens X, Y and Z having total value $v_x(x,y,z) = x + p_{x}^{y}y + p_{x}^{z}z = x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z$.
As before we can replace the spot prices given by the partial derivatives with expressions for the invariant surface V. Below we show the $w_x$ condition, the others are similar. The weight sum to 1 condition again allows us to eliminate one of the constraint equations.
End of explanation
"""
const_share_eq_3_1.subs({
V(x,y,z): V(xi(x),eta(y),zeta(z)),
}).simplify().subs({V(xi(x),eta(y),zeta(z)): V_d })
"""
Explanation: We can simplify this to a constant coefficient first order PDE by the change of variables
$$
u = \xi(x) = \log{x}\\
v = \eta(y) = \log{y}\\
w = \zeta(z) = \log{z}
$$
The general form for the the change of variables is given by
End of explanation
"""
const_share_eq_3_1.subs({
V(x,y,z): V(xi(x),eta(y),zeta(z)),
xi(x): sp.log(x),
eta(y): sp.log(y),
zeta(z): sp.log(z)
}).simplify()
"""
Explanation: where $\mathcal{V} = V(\xi(x), \eta(y), \zeta(z))$
Substituting the logarithmic functional form for the transformed variables gives
End of explanation
"""
balancer_constraints = sp.ImmutableMatrix([
[1-w_x, -w_x, -w_x],
[-w_y, 1-w_y, -w_y],
[-w_z, -w_z, 1-w_z]
])
v_grad = sp.ImmutableMatrix([V(u,v,w).diff(i) for i in [u,v,w]])
sp.Eq(sp.MatMul(balancer_constraints, sp.UnevaluatedExpr(v_grad), evaluate=False).subs({V(u,v,w): V_d }),
(balancer_constraints*v_grad).subs({V(u,v,w): V_d }),
evaluate=False)
"""
Explanation: The share of value weight constraints can written as a matrix equation:
End of explanation
"""
(sp.simplify(sp.ImmutableMatrix([
# [1-w_x, -w_x, -w_x],
[-w_y, 1-w_y, -w_y],
[-w_z, -w_z, 1-w_z]
]).nullspace()[0]).subs({(1-w_y-w_z): w_x})*w_z
).T*sp.ImmutableMatrix([sp.log(x), sp.log(y), sp.log(z)])
"""
Explanation: We can use the weights summing to 1 condition to eliminate one of these equations. The nullspace of the constraint matrix then defines a plane of constant value of the invariant in $u,v,w$ space which we can then map back to the original X, Y, Z token balances.
End of explanation
"""
s = sp.symbols('s', positive=True)
V_ss = s*(x+y) + x**w_x*y**w_y
sp.Eq(V(x,y), V_ss)
"""
Explanation: It can be seen by inspection that exponentiating this invariant results in the original Balancer form. As for the two asset case it is also possible to retain the value function in this form and derive new forms for the trading functions, which is again an exercise left to the reader.
Curve
Curve uses the StableSwap invariant to reduce slippage for stablecoins all having equivalent value. For example a pool could consist of USDC, USDT, BUSD and DAI, all of which are designed to track USD. Pools tracking other assets are possible e.g. a BTC pool backed by sBTC, renBTC, wBTC. The market maker token pool ideally consists of a balanced mix of each token type. We consider a two asset pool below, this analysis extends to an arbitrary number of assets.
The StableSwap invariant is designed to act as a constant sum market maker $x+y=1$ for small imbalances, and a constant product Uniswap market maker $xy=k$ as the pool becomes more imbalanced. These are the price constraints defining the system i.e.
at small imbalance $-\frac{\partial{x}}{\partial{y}}=1$ and tokens are freely interchangeable
at larger imbalance $-\frac{\partial{x}}{\partial{y}}=x/y$ as for Uniswap (or in general an equal weight Balancer pool)
The StableSwap invariant can be written as $V{\left(x,y \right)} = s \left(x + y\right) + x^{w_{x}} y^{w_{y}}$ where $s$ is an amplification parameter that determines the transition between constant sum and constant product behaviour.
End of explanation
"""
V_x = (V_ss).diff(x).subs({w_x: sp.Rational(1,2), w_y: sp.Rational(1,2)}).simplify()
V_y = (V_ss).diff(y).subs({w_x: sp.Rational(1,2), w_y: sp.Rational(1,2)}).simplify()
ss_spot = V_y/V_x.simplify()
ss_spot_eq = sp.Eq(V(x,y).diff(y)/V(x,y).diff(x), ss_spot)
ss_spot_eq
sp.Eq(sp.Limit(ss_spot, s, sp.oo), (ss_spot).limit(s, sp.oo))
sp.Eq(sp.Limit(ss_spot, s, 0), (ss_spot).limit(s, 0))
"""
Explanation: Below we show the spot price of Y tokens in terms of X tokens that results from this invariant function. We can see that the limit $s \rightarrow \infty$ gives the constant sum behaviour while the $s \rightarrow 0$ limit gives constant product behaviour.
End of explanation
"""
ss_spot_denom = sp.denom(ss_spot_eq.rhs)* sp.denom(ss_spot_eq.lhs)
sp.Eq(ss_spot_eq.lhs * ss_spot_denom, ss_spot_eq.rhs * ss_spot_denom)
"""
Explanation: Following the previous procedure we'd hope to be able to solve the PDE for $V(x,y)$ from the spot price constraint:
$$\left(s + \frac{\sqrt{y}}{2 \sqrt{x}}\right) \frac{\partial}{\partial y} V{\left(x,y \right)} = \left(s + \frac{\sqrt{x}}{2 \sqrt{y}}\right) \frac{\partial}{\partial x} V{\left(x,y \right)}$$
Attempting this with the actual StableSwap spot price above doesn't immediately work although it should be possible to solve numerically.
End of explanation
"""
sol2 = sp.pdsolve((s*x*y + y)*V(x,y).diff(y) - (s*x*y + x)*V(x,y).diff(x), V(x,y)).rhs
V_ss_new = sp.log(sol2.args[0]).expand()
sp.Eq(V(x,y),V_ss_new)
"""
Explanation: A new Curve
We can look at the form of the PDE from the StableSwap constraint and explore related functional forms. It looks like the same limiting behaviour could be achieved with any ratio of $x/y$ i.e. without requiring the $\surd$. We hence try $$\left(s x y + y\right) \frac{\partial}{\partial y} V{\left(x,y \right)} = \left(s x y + x\right) \frac{\partial}{\partial x} V{\left(x,y \right)}$$
which is easily solvable by SymPy and results in a new Curve invariant:
End of explanation
"""
ss_spot_new = (V_ss_new.diff(y)/V_ss_new.diff(x)).simplify()
ss_spot_new
sp.Eq(sp.Limit(ss_spot_new, s, sp.oo), (ss_spot_new).limit(s, sp.oo))
sp.Eq(sp.Limit(ss_spot_new, s, 0), (ss_spot_new).limit(s, 0))
"""
Explanation: We see that the spot price has the same desired limiting behaviour:
End of explanation
"""
|
steinam/teacher
|
jup_notebooks/data-science-ipython-notebooks-master/deep-learning/tensor-flow-exercises/2_fullyconnected.ipynb
|
mit
|
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import cPickle as pickle
import numpy as np
import tensorflow as tf
"""
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this exercise is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
"""
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
"""
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print 'Initialized'
for step in xrange(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print 'Loss at step', step, ':', l
print 'Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :])
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print 'Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels)
print 'Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels)
"""
Explanation: Let's run this computation and iterate:
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
"""
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print "Initialized"
for step in xrange(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print "Minibatch loss at step", step, ":", l
print "Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)
print "Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels)
print "Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)
"""
Explanation: Let's run it:
End of explanation
"""
|
gdsfactory/gdsfactory
|
docs/notebooks/04_components_hierarchy.ipynb
|
mit
|
import gdsfactory as gf
# gf.CONF.plotter = 'holoviews'
@gf.cell
def bend_with_straight(
bend=gf.components.bend_euler,
straight=gf.components.straight,
) -> gf.Component:
c = gf.Component()
b = bend()
s = straight()
bref = c << b
sref = c << s
sref.connect("o2", bref.ports["o2"])
c.info["length"] = b.info["length"] + s.info["length"]
return c
c = bend_with_straight()
print(c.metadata.info.length)
c
"""
Explanation: Components with hierarchy
You can define some components (waveguides, bends, couplers) as stand-alone components with basic input parameters (width, length, radius ...).
Then you can re-use those components in more complex hierarchical components.
gdsfactory does this by passing the lower-level component functions as arguments to the higher-level components that build on them.
You can customize any of the functions thanks to functools.partial
End of explanation
"""
c = gf.components.bend_circular()
c
"""
Explanation: Let's customize the functions that we pass.
For example, we want to increase the radius of the bend from the default 10um to 20um.
End of explanation
"""
from functools import partial
bend20 = partial(gf.components.bend_circular, radius=20)
b = bend20()
b
type(bend20)
bend20.func.__name__
bend20.keywords
b = bend_with_straight(bend=bend20)
print(b.metadata.info.length)
b
# You can still modify the bend to have any bend radius
b3 = bend20(radius=10)
b3
"""
Explanation: functools.partial
Partial lets you define different default parameters for a function, so you can modify the settings for the child cells.
End of explanation
"""
import gdsfactory as gf
pad = gf.partial(gf.components.pad, layer=(41, 0))
c = pad()
c
"""
Explanation: PDK custom fab
You can define a new PDK by creating functions that customize partial parameters of the generic functions.
Let's say that this PDK uses layer (41, 0) for the pads (instead of the layer used in the generic pad function).
You can also access functools.partial from gf.partial
End of explanation
"""
c1 = gf.components.straight()
c1
straight_wide = gf.partial(gf.components.straight, width=3)
c3 = straight_wide()
c3
c1 = gf.components.straight(width=3)
c1
c2 = gf.add_tapers(c1)
c2
c2.metadata_child.changed # You can still access the child metadata
c3 = gf.routing.add_fiber_array(c2, with_loopback=False)
c3
c3.metadata_child.changed # You can still access the child metadata
"""
Explanation: Composing functions
You can build more complex functions out of smaller functions by composing them.
Let's say that we want to add tapers and grating couplers to a wide waveguide.
End of explanation
"""
import toolz
add_fiber_array = gf.partial(gf.routing.add_fiber_array, with_loopback=False)
add_tapers = gf.add_tapers
# pipe is more readable than the equivalent add_fiber_array(add_tapers(c1))
c3 = toolz.pipe(c1, add_tapers, add_fiber_array)
c3
"""
Explanation: Let's do it in a single step thanks to toolz.pipe
End of explanation
"""
add_tapers_fiber_array = toolz.compose_left(add_tapers, add_fiber_array)
c4 = add_tapers_fiber_array(c1)
c4
"""
Explanation: We can even combine add_tapers and add_fiber_array thanks to toolz.compose or toolz.compose_left.
For example:
End of explanation
"""
c5 = add_fiber_array(add_tapers(c1))
c5
"""
Explanation: is equivalent to
End of explanation
"""
add_tapers_fiber_array = toolz.compose(add_fiber_array, add_tapers)
c6 = add_tapers_fiber_array(c1)
c6
"""
Explanation: as well as equivalent to
End of explanation
"""
c7 = toolz.pipe(c1, add_tapers, add_fiber_array)
c7
c7.metadata_child.changed # You can still access the child metadata
c7.metadata.child.child.name, c7.metadata.child.child.function_name
c7.metadata.child.name, c7.metadata.child.function_name
c7.metadata.name, c7.metadata.function_name
c7.metadata.changed.keys()
"""
Explanation: or
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/image_classification/labs/1_mnist_linear.ipynb
|
apache-2.0
|
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.layers import Dense, Flatten, Softmax
print(tf.__version__)
"""
Explanation: MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image model on MNIST using the tf.keras API. It builds the foundation for this <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb">companion notebook</a>, which explores tackling the same problem with other types of models such as DNN and CNN.
Learning objectives
Know how to read and display image data.
Know how to find incorrect predictions to analyze the model.
Visually see how computers see images.
End of explanation
"""
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
HEIGHT, WIDTH = x_train[0].shape
NCLASSES = tf.size(tf.unique(y_train).y)
print("Image height x width is", HEIGHT, "x", WIDTH)
tf.print("There are", NCLASSES, "classes")
"""
Explanation: Exploring the data
The MNIST dataset is already included in tensorflow through the keras datasets module. Let's load it and get a sense of the data.
End of explanation
"""
IMGNO = 12
# Uncomment to see raw numerical values.
# print(x_test[IMGNO])
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
print("The label for image number", IMGNO, "is", y_test[IMGNO])
"""
Explanation: Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (white) to 255 (black). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.
End of explanation
"""
def linear_model():
# TODO: Build a sequential model and compile it.
return model
"""
Explanation: Define the model
Let's start with a very simple linear classifier. This was the first method to be tried on MNIST in 1998, and scored an 88% accuracy. Quite groundbreaking at the time!
We can build our linear classifier using the tf.keras API, so we don't have to define or initialize our weights and biases. This happens automatically for us in the background. We can also add a softmax layer to transform the logits into probabilities. Finally, we can compile the model using categorical cross entropy in order to strongly penalize high probability predictions that were incorrect.
When building more complex models such as DNNs and CNNs our code will be more readable by using the tf.keras API. Let's get one working so we can test it and use it as a benchmark.
End of explanation
"""
BUFFER_SIZE = 5000
BATCH_SIZE = 100
def scale(image, label):
# TODO
def load_dataset(training=True):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = mnist
x = x_train if training else x_test
y = y_train if training else y_test
# TODO: a) one-hot encode labels, apply `scale` function, and create dataset.
# One-hot encode the classes
if training:
# TODO
return dataset
def create_shape_test(training):
dataset = load_dataset(training=training)
data_iter = dataset.__iter__()
(images, labels) = data_iter.get_next()
expected_image_shape = (BATCH_SIZE, HEIGHT, WIDTH)
expected_label_ndim = 2
assert(images.shape == expected_image_shape)
assert(labels.numpy().ndim == expected_label_ndim)
test_name = 'training' if training else 'eval'
print("Test for", test_name, "passed!")
create_shape_test(True)
create_shape_test(False)
"""
Explanation: Write Input Functions
As usual, we need to specify input functions for training and evaluating. We'll scale each pixel value so it's a decimal value between 0 and 1 as a way of normalizing the data.
TODO 1: Define the scale function below and build the dataset
End of explanation
"""
NUM_EPOCHS = 10
STEPS_PER_EPOCH = 100
model = linear_model()
train_data = load_dataset()
validation_data = load_dataset(training=False)
OUTDIR = "mnist_linear/"
checkpoint_callback = ModelCheckpoint(
OUTDIR, save_weights_only=True, verbose=1)
tensorboard_callback = TensorBoard(log_dir=OUTDIR)
history = model.fit(
# TODO: specify training/eval data, # epochs, steps per epoch.
verbose=2,
callbacks=[checkpoint_callback, tensorboard_callback]
)
BENCHMARK_ERROR = .12
BENCHMARK_ACCURACY = 1 - BENCHMARK_ERROR
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
assert(accuracy[-1] > BENCHMARK_ACCURACY)
assert(val_accuracy[-1] > BENCHMARK_ACCURACY)
print("Test to beat benchmark accuracy passed!")
assert(accuracy[0] < accuracy[1])
assert(accuracy[1] < accuracy[-1])
assert(val_accuracy[0] < val_accuracy[1])
assert(val_accuracy[1] < val_accuracy[-1])
print("Test model accuracy is improving passed!")
assert(loss[0] > loss[1])
assert(loss[1] > loss[-1])
assert(val_loss[0] > val_loss[1])
assert(val_loss[1] > val_loss[-1])
print("Test loss is decreasing passed!")
"""
Explanation: Time to train the model! The original MNIST linear classifier had an error rate of 12%. Let's use that to sanity check that our model is learning.
End of explanation
"""
image_numbers = range(0, 10, 1) # Change me, please.
def load_prediction_dataset():
dataset = (x_test[image_numbers], y_test[image_numbers])
dataset = tf.data.Dataset.from_tensor_slices(dataset)
dataset = dataset.map(scale).batch(len(image_numbers))
return dataset
predicted_results = model.predict(load_prediction_dataset())
for index, prediction in enumerate(predicted_results):
predicted_value = np.argmax(prediction)
actual_value = y_test[image_numbers[index]]
if actual_value != predicted_value:
print("image number: " + str(image_numbers[index]))
print("the prediction was " + str(predicted_value))
print("the actual label is " + str(actual_value))
print("")
bad_image_number = 8
plt.imshow(x_test[bad_image_number].reshape(HEIGHT, WIDTH));
"""
Explanation: Evaluating Predictions
Were you able to get an accuracy of over 90%? Not bad for a linear estimator! Let's make some predictions and see if we can find where the model has trouble. Change the range of values below to find incorrect predictions, and plot the corresponding images. What would you have guessed for these images?
TODO 2: Change the range below to find an incorrect prediction
End of explanation
"""
DIGIT = 0 # Change me to be an integer from 0 to 9.
LAYER = 1 # Layer 0 flattens image, so no weights
WEIGHT_TYPE = 0 # 0 for variable weights, 1 for biases
dense_layer_weights = model.layers[LAYER].get_weights()
digit_weights = dense_layer_weights[WEIGHT_TYPE][:, DIGIT]
plt.imshow(digit_weights.reshape((HEIGHT, WIDTH)))
"""
Explanation: It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.
Each of the 10 neurons in the dense layer of our model has 785 weights feeding into it. That's 1 weight for every pixel in the image + 1 for a bias term. These weights are flattened when fed into the model, but we can reshape them back into the original image dimensions to see what the computer sees.
TODO 3: Reshape the layer weights to be the shape of an input image and plot.
End of explanation
"""
|
noammor/coursera-machinelearning-python
|
ex4/ml-ex4.ipynb
|
mit
|
import numpy as np
import scipy.io
import scipy.optimize
import matplotlib.pyplot as plt
%matplotlib inline
# uncomment for console - useful for debugging
# %qtconsole
ex3data1 = scipy.io.loadmat("./ex4data1.mat")
X = ex3data1['X']
y = ex3data1['y'][:,0]
m, n = X.shape
m, n
input_layer_size = n # 20x20 Input Images of Digits
hidden_layer_size = 25 # 25 hidden units
num_labels = 10 # 10 labels, from 1 to 10
# (note that we have mapped "0" to label 10)
lambda_ = 1
"""
Explanation: Exercise 4: Neural Network Learning
End of explanation
"""
def display(X, display_rows=5, display_cols=5, figsize=(4,4), random_x=False):
m = X.shape[0]
fig, axes = plt.subplots(display_rows, display_cols, figsize=figsize)
fig.subplots_adjust(wspace=0.1, hspace=0.1)
import random
for i, ax in enumerate(axes.flat):
ax.set_axis_off()
x = None
if random_x:
x = random.randint(0, m-1)
else:
x = i
image = X[x].reshape(20, 20).T
image = image / np.max(image)
ax.imshow(image, cmap=plt.cm.Greys_r)
display(X, random_x=True)
def add_ones_column(array):
return np.insert(array, 0, 1, axis=1)
"""
Explanation: Part 1: Loading and Visualizing Data
We start the exercise by first loading and visualizing the dataset. You will be working with a dataset that contains handwritten digits.
End of explanation
"""
ex4weights = scipy.io.loadmat('./ex4weights.mat')
Theta1 = ex4weights['Theta1']
Theta2 = ex4weights['Theta2']
print(Theta1.shape, Theta2.shape)
"""
Explanation: Part 2: Loading Parameters
In this part of the exercise, we load some pre-initialized
neural network parameters.
End of explanation
"""
nn_params = np.concatenate((Theta1.flat, Theta2.flat))
nn_params.shape
def sigmoid(z):
return 1 / (1+np.exp(-z))
"""
Explanation: Unrolling the parameters into one vector:
End of explanation
"""
def nn_cost_function(nn_params, input_layer_size, hidden_layer_size,
num_labels, X, y, lambda_):
#NNCOSTFUNCTION Implements the neural network cost function for a two layer
#neural network which performs classification
# [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
# X, y, lambda) computes the cost and gradient of the neural network. The
# parameters for the neural network are "unrolled" into the vector
# nn_params and need to be converted back into the weight matrices.
#
# The returned parameter grad should be a "unrolled" vector of the
# partial derivatives of the neural network.
#
# Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
# for our 2 layer neural network
t1_len = (input_layer_size+1)*hidden_layer_size
Theta1 = nn_params[:t1_len].reshape(hidden_layer_size, input_layer_size+1)
Theta2 = nn_params[t1_len:].reshape(num_labels, hidden_layer_size+1)
m = X.shape[0]
# You need to return the following variables correctly
J = 0;
Theta1_grad = np.zeros(Theta1.shape);
Theta2_grad = np.zeros(Theta2.shape);
# ====================== YOUR CODE HERE ======================
# Instructions: You should complete the code by working through the
# following parts.
#
# Part 1: Feedforward the neural network and return the cost in the
# variable J. After implementing Part 1, you can verify that your
# cost function computation is correct by verifying the cost
# computed for lambda == 0.
#
# Part 2: Implement the backpropagation algorithm to compute the gradients
# Theta1_grad and Theta2_grad. You should return the partial derivatives of
# the cost function with respect to Theta1 and Theta2 in Theta1_grad and
# Theta2_grad, respectively. After implementing Part 2, you can check
# that your implementation is correct by running checkNNGradients
#
# Note: The vector y passed into the function is a vector of labels
# containing values from 1..K. You need to map this vector into a
# binary vector of 1's and 0's to be used with the neural network
# cost function.
#
# Hint: We recommend implementing backpropagation using a for-loop
# over the training examples if you are implementing it for the
# first time.
#
# Part 3: Implement regularization with the cost function and gradients.
#
# Hint: You can implement this around the code for
# backpropagation. That is, you can compute the gradients for
# the regularization separately and then add them to Theta1_grad
# and Theta2_grad from Part 2.
#
# =========================================================================
# Unroll gradients
gradient = np.concatenate((Theta1_grad.flat, Theta2_grad.flat))
return J, gradient
"""
Explanation: Part 3: Compute Cost (Feedforward)
To train the neural network, you should first start by implementing the
feedforward part of the neural network that returns the cost only. You
should complete the code in nn_cost_function() to return cost. After
implementing the feedforward to compute the cost, you can verify that
your implementation is correct by verifying that you get the same cost
as us for the fixed debugging parameters.
We suggest implementing the feedforward cost without regularization
first so that it will be easier for you to debug. Later, in part 4, you
will get to implement the regularized cost.
End of explanation
"""
lambda_ = 0 # No regularization
nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
"""
Explanation: The cost at the given parameters should be about 0.287629.
End of explanation
"""
lambda_ = 1
nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
"""
Explanation: The cost at the given parameters and a regularization factor of 1 should be about 0.38377.
Part 4: Implement Regularization
Once your cost function implementation is correct, you should now
continue to implement the regularization with the cost.
End of explanation
"""
def sigmoid_gradient(z):
#SIGMOIDGRADIENT returns the gradient of the sigmoid function
#evaluated at z
# g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
# evaluated at z. This should work regardless if z is a matrix or a
# vector. In particular, if z is a vector or matrix, you should return
# the gradient for each element.
g = np.zeros(z.shape)
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the gradient of the sigmoid function evaluated at
# each value of z (z can be a matrix, vector or scalar).
# =============================================================
return g
sigmoid_gradient(np.array([1, -0.5, 0, 0.5, 1]))
"""
Explanation: Part 5: Sigmoid Gradient
Before you start implementing the neural network, you will first
implement the gradient for the sigmoid function. You should complete the
code in sigmoid_gradient.
End of explanation
"""
def rand_initialize_weight(L_in, L_out):
#RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
#incoming connections and L_out outgoing connections
# W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights
# of a layer with L_in incoming connections and L_out outgoing
# connections.
#
# Note that W should be set to a matrix of size(L_out, 1 + L_in) as
    # the first column of W handles the "bias" terms
#
# You need to return the following variables correctly
W = np.zeros((L_out, L_in))
# ====================== YOUR CODE HERE ======================
# Instructions: Initialize W randomly so that we break the symmetry while
# training the neural network.
#
# Note: The first row of W corresponds to the parameters for the bias units
#
return W
# =========================================================================
"""
Explanation: Part 6: Initializing Parameters
In this part of the exercise, you will be starting to implement a two
layer neural network that classifies digits. You will start by
implementing a function to initialize the weights of the neural network.
End of explanation
"""
def numerical_gradient(f, x, dx=1e-6):
perturb = np.zeros(x.size)
result = np.zeros(x.size)
for i in range(x.size):
perturb[i] = dx
result[i] = (f(x+perturb) - f(x-perturb)) / (2*dx)
perturb[i] = 0
return result
def check_NN_gradients(lambda_=0):
input_layer_size = 3
hidden_layer_size = 5
num_labels = 3
m = 5
def debug_matrix(fan_out, fan_in):
W = np.sin(np.arange(fan_out * (fan_in+1))+1) / 10
return W.reshape(fan_out, fan_in+1)
Theta1 = debug_matrix(hidden_layer_size, input_layer_size)
Theta2 = debug_matrix(num_labels, hidden_layer_size)
X = debug_matrix(m, input_layer_size - 1)
y = 1 + ((1 + np.arange(m)) % num_labels)
nn_params = np.concatenate([Theta1.flat, Theta2.flat])
cost, grad = nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
def just_cost(nn_params):
cost, grad = nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
return cost
return np.sum(np.abs(grad - numerical_gradient(just_cost, nn_params))) / grad.size
"""
Explanation: Part 7: Implement Backpropagation
Once your cost matches up with ours, you should proceed to implement the
backpropagation algorithm for the neural network. You should add to the
code you've written in nn_cost_function to return the partial
derivatives of the parameters.
End of explanation
"""
check_NN_gradients()
initial_Theta1 = rand_initialize_weight(hidden_layer_size, input_layer_size+1)
initial_Theta2 = rand_initialize_weight(num_labels, hidden_layer_size+1)
"""
Explanation: If your backpropagation implementation is correct, then the relative difference will be small (less than 1e-9).
End of explanation
"""
def cost_fun(nn_params):
return nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
lambda_ = 3
nn_params = np.concatenate((initial_Theta1.flat, initial_Theta2.flat))
res = scipy.optimize.minimize(cost_fun, nn_params, jac=True, method='L-BFGS-B',
options=dict(maxiter=200, disp=True))
res
"""
Explanation: Part 8: Implement Regularization
Once your backpropagation implementation is correct, you should now
continue to implement the regularization with the cost and gradient.
End of explanation
"""
res.fun
"""
Explanation: The cost at lambda = 3 should be about 0.57.
End of explanation
"""
lambda_ = 1
nn_params = np.concatenate((initial_Theta1.flat, initial_Theta2.flat))
res = scipy.optimize.minimize(cost_fun, nn_params, jac=True, method='L-BFGS-B',
options=dict(maxiter=200, disp=True))
nn_params = res.x
"""
Explanation: Part 8: Training NN
You have now implemented all the code necessary to train a neural
network. To train your neural network, we will use scipy.optimize.minimize.
Recall that these
advanced optimizers are able to train our cost functions efficiently as
long as we provide them with the gradient computations.
After you have completed the assignment, change the MaxIter to a larger
value to see how more training helps. You should also try different values of lambda.
End of explanation
"""
t1_len = (input_layer_size+1)*hidden_layer_size
Theta1 = nn_params[:t1_len].reshape(hidden_layer_size, input_layer_size+1)
Theta2 = nn_params[t1_len:].reshape(num_labels, hidden_layer_size+1)
"""
Explanation: Obtain Theta1 and Theta2 back from nn_params:
End of explanation
"""
display(Theta1[:,1:], figsize=(6,6))
def predict(Theta1, Theta2, X):
#PREDICT Predict the label of an input given a trained neural network
# p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
# trained weights of a neural network (Theta1, Theta2)
m = X.shape[0]
    num_labels = Theta2.shape[0]  # Theta2 has shape (num_labels, hidden_layer_size + 1)
# You need to return the following variables correctly. Remember that
# the given data labels go from 1..10, with 10 representing the digit 0!
p = np.zeros(X.shape[0])
# ====================== YOUR CODE HERE ======================
# ============================================================
return p
predictions = predict(Theta1, Theta2, X)
np.mean(predictions == y)
"""
Explanation: Part 9: Visualize Weights
You can now "visualize" what the neural network is learning by
displaying the hidden units to see what features they are capturing in
the data.
End of explanation
"""
|
flutter/codelabs
|
tfrs-flutter/step5/backend/ranking/ranking.ipynb
|
bsd-3-clause
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
"""
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
"""
Explanation: Recommending movies: ranking
This tutorial is a slightly adapted version of the basic ranking tutorial from TensorFlow Recommenders documentation.
Imports
Let's first get our imports out of the way.
End of explanation
"""
ratings = tfds.load("movielens/100k-ratings", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"user_rating": x["user_rating"]
})
"""
Explanation: Preparing the dataset
We're continuing to use the MovieLens dataset. This time, we're also going to keep the ratings: these are the objectives we are trying to predict.
End of explanation
"""
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
"""
Explanation: We'll split the data by putting 80% of the ratings in the train set, and 20% in the test set.
End of explanation
"""
movie_titles = ratings.batch(1_000_000).map(lambda x: x["movie_title"])
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
"""
Explanation: Next we figure out unique user ids and movie titles present in the data so that we can create the user and movie embedding tables.
End of explanation
"""
class RankingModel(tf.keras.Model):
def __init__(self):
super().__init__()
embedding_dimension = 32
# Compute embeddings for users.
self.user_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
# Compute embeddings for movies.
self.movie_embeddings = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
# Compute predictions.
self.ratings = tf.keras.Sequential([
# Learn multiple dense layers.
tf.keras.layers.Dense(256, activation="relu"),
tf.keras.layers.Dense(64, activation="relu"),
# Make rating predictions in the final layer.
tf.keras.layers.Dense(1)
])
def call(self, inputs):
user_id, movie_title = inputs
user_embedding = self.user_embeddings(user_id)
movie_embedding = self.movie_embeddings(movie_title)
return self.ratings(tf.concat([user_embedding, movie_embedding], axis=1))
"""
Explanation: Implementing a model
Architecture
Ranking models do not face the same efficiency constraints as retrieval models do, and so we have a little bit more freedom in our choice of architectures. We can implement our ranking model as follows:
End of explanation
"""
task = tfrs.tasks.Ranking(
loss = tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
"""
Explanation: Loss and metrics
We'll make use of the Ranking task object: a convenience wrapper that bundles together the loss function and metric computation.
We'll use it together with the MeanSquaredError Keras loss in order to predict the ratings.
End of explanation
"""
class MovielensModel(tfrs.models.Model):
def __init__(self):
super().__init__()
self.ranking_model: tf.keras.Model = RankingModel()
self.task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
loss = tf.keras.losses.MeanSquaredError(),
metrics=[tf.keras.metrics.RootMeanSquaredError()]
)
def call(self, features: Dict[str, tf.Tensor]) -> tf.Tensor:
return self.ranking_model(
(features["user_id"], features["movie_title"]))
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
labels = features.pop("user_rating")
rating_predictions = self(features)
# The task computes the loss and the metrics.
return self.task(labels=labels, predictions=rating_predictions)
"""
Explanation: The full model
We can now put it all together into a model.
End of explanation
"""
model = MovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
"""
Explanation: Fitting and evaluating
After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.
Let's first instantiate the model.
End of explanation
"""
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
"""
Explanation: Then shuffle, batch, and cache the training and evaluation data.
End of explanation
"""
model.fit(cached_train, epochs=3)
"""
Explanation: Then train the model:
End of explanation
"""
model.evaluate(cached_test, return_dict=True)
"""
Explanation: As the model trains, the loss is falling and the RMSE metric is improving.
Finally, we can evaluate our model on the test set:
End of explanation
"""
tf.saved_model.save(model, "exported-ranking/123")
"""
Explanation: The lower the RMSE metric, the more accurate our model is at predicting ratings.
Exporting for serving
The model can be easily exported for serving:
End of explanation
"""
# Zip the SavedModel folder for easier download
!zip -r exported-ranking.zip exported-ranking/
"""
Explanation: We will deploy the model with TensorFlow Serving soon.
End of explanation
"""
|
quantumlib/OpenFermion-Cirq
|
openfermioncirq/experiments/hfvqe/quickstart.ipynb
|
apache-2.0
|
# Import library functions and define a helper function
import numpy as np
import cirq
from openfermioncirq.experiments.hfvqe.gradient_hf import rhf_func_generator
from openfermioncirq.experiments.hfvqe.opdm_functionals import OpdmFunctional
from openfermioncirq.experiments.hfvqe.analysis import (compute_opdm,
mcweeny_purification,
resample_opdm,
fidelity_witness,
fidelity)
from openfermioncirq.experiments.hfvqe.third_party.higham import fixed_trace_positive_projection
from openfermioncirq.experiments.hfvqe.molecular_example import make_h6_1_3
"""
Explanation: Using the library
This code tutorial shows how to estimate a 1-RDM and perform variational optimization
End of explanation
"""
rhf_objective, molecule, parameters, obi, tbi = make_h6_1_3()
ansatz, energy, gradient = rhf_func_generator(rhf_objective)
# settings for quantum resources
qubits = [cirq.GridQubit(0, x) for x in range(molecule.n_orbitals)]
sampler = cirq.Simulator(dtype=np.complex128) # this can be a QuantumEngine
# OpdmFunctional contains an interface for running experiments
opdm_func = OpdmFunctional(qubits=qubits,
sampler=sampler,
constant=molecule.nuclear_repulsion,
one_body_integrals=obi,
two_body_integrals=tbi,
num_electrons=molecule.n_electrons // 2, # only simulate spin-up electrons
clean_xxyy=True,
purification=True
)
"""
Explanation: Generate the input files, set up quantum resources, and set up the OpdmFunctional to make measurements.
End of explanation
"""
# 1.
# default to 250_000 shots for each circuit.
# 7 circuits total, printed for your viewing pleasure
# return value is a dictionary with circuit results for each permutation
measurement_data = opdm_func.calculate_data(parameters)
# 2.
opdm, var_dict = compute_opdm(measurement_data,
return_variance=True)
opdm_pure = mcweeny_purification(opdm)
# 3.
raw_energies = []
raw_fidelity_witness = []
purified_eneriges = []
purified_fidelity_witness = []
purified_fidelity = []
true_unitary = ansatz(parameters)
nocc = molecule.n_electrons // 2
nvirt = molecule.n_orbitals - nocc
initial_fock_state = [1] * nocc + [0] * nvirt
for _ in range(1000): # 1000 repetitions of the measurement
new_opdm = resample_opdm(opdm, var_dict)
raw_energies.append(opdm_func.energy_from_opdm(new_opdm))
raw_fidelity_witness.append(
fidelity_witness(target_unitary=true_unitary,
omega=initial_fock_state,
measured_opdm=new_opdm)
)
# fix positivity and trace of sampled 1-RDM if strictly outside
# feasible set
w, v = np.linalg.eigh(new_opdm)
if len(np.where(w < 0)[0]) > 0:
new_opdm = fixed_trace_positive_projection(new_opdm, nocc)
new_opdm_pure = mcweeny_purification(new_opdm)
purified_eneriges.append(opdm_func.energy_from_opdm(new_opdm_pure))
purified_fidelity_witness.append(
fidelity_witness(target_unitary=true_unitary,
omega=initial_fock_state,
measured_opdm=new_opdm_pure)
)
purified_fidelity.append(
fidelity(target_unitary=true_unitary,
measured_opdm=new_opdm_pure)
)
print('\n\n\n\n')
print("Canonical Hartree-Fock energy ", molecule.hf_energy)
print("True energy ", energy(parameters))
print("Raw energy ", opdm_func.energy_from_opdm(opdm),
"+- ", np.std(raw_energies))
print("Raw fidelity witness ", np.mean(raw_fidelity_witness).real,
"+- ", np.std(raw_fidelity_witness))
print("purified energy ", opdm_func.energy_from_opdm(opdm_pure),
"+- ", np.std(purified_eneriges))
print("Purified fidelity witness ", np.mean(purified_fidelity_witness).real,
"+- ", np.std(purified_fidelity_witness))
print("Purified fidelity ", np.mean(purified_fidelity).real,
"+- ", np.std(purified_fidelity))
"""
Explanation: The displayed text is the output of the gradient based restricted Hartree-Fock. We define the gradient in rhf_objective and use the conjugate-gradient optimizer to optimize the basis rotation parameters. This is equivalent to doing Hartree-Fock theory from the canonical transformation perspective.
Next, we will do the following:
Do measurements for a given set of parameters
Compute 1-RDM, variances, and purification
Compute energy, fidelities, and errorbars
End of explanation
"""
from openfermioncirq.experiments.hfvqe.mfopt import moving_frame_augmented_hessian_optimizer
from openfermioncirq.experiments.hfvqe.opdm_functionals import RDMGenerator
import matplotlib.pyplot as plt
rdm_generator = RDMGenerator(opdm_func, purification=True)
opdm_generator = rdm_generator.opdm_generator
result = moving_frame_augmented_hessian_optimizer(
rhf_objective=rhf_objective,
initial_parameters=parameters + 1.0E-1,
opdm_aa_measurement_func=opdm_generator,
verbose=True, delta=0.03,
max_iter=20,
hessian_update='diagonal',
rtol=0.50E-2)
"""
Explanation: This should print out the various energies estimated from the 1-RDM along with error bars, generated by resampling the 1-RDM based on the estimated covariance.
Optimization
We use the sampling functionality to variationally relax the parameters of
the ansatz such that the energy is decreased.
For this we will need the augmented Hessian optimizer.
The optimizer code we have takes:
rhf_objective object, initial parameters,
a function that takes an n x n unitary and returns an opdm
maximum iterations,
hessian_update which indicates how much of the hessian to use
rtol which is the gradient stopping condition.
A natural thing that we will want to save is the variance dictionary of
the non-purified 1-RDM. This is accomplished by wrapping the 1-RDM
estimation code in another object that keeps track of the variance
dictionaries.
End of explanation
"""
plt.semilogy(range(len(result.func_vals)),
np.abs(np.array(result.func_vals) - energy(parameters)),
'C0o-')
plt.xlabel("Optimization Iterations", fontsize=18)
plt.ylabel(r"$|E - E^{*}|$", fontsize=18)
plt.tight_layout()
plt.show()
"""
Explanation: Each iteration prints out a variety of information that the user might find useful. Watching energies go down is known to be one of the best forms of entertainment during a shelter-in-place order.
After the optimization we can print the energy as a function of iteration number to see how close the energy gets to the true minimum.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/production_ml/labs/samples/core/ai_platform/ai_platform.ipynb
|
apache-2.0
|
%%capture
# Install the SDK (Uncomment the code if the SDK is not installed before)
!python3 -m pip install 'kfp>=0.1.31' --quiet
!python3 -m pip install pandas --upgrade -q
"""
Explanation: Chicago Crime Prediction Pipeline
An example notebook that demonstrates how to:
* Download data from BigQuery
* Create a Kubeflow pipeline
* Include Google Cloud AI Platform components to train and deploy the model in the pipeline
* Submit a job for execution
* Query the final deployed model
The model forecasts how many crimes are expected to be reported the next day, based on how many were reported over the previous n days.
Imports
End of explanation
"""
import json
import kfp
import kfp.components as comp
import kfp.dsl as dsl
import pandas as pd
import time
"""
Explanation: Restart the kernel for changes to take effect
End of explanation
"""
# Required Parameters
project_id = '<ADD GCP PROJECT HERE>'
output = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash
# Optional Parameters
REGION = 'us-central1'
RUNTIME_VERSION = '1.13'
PACKAGE_URIS=json.dumps(['gs://chicago-crime/chicago_crime_trainer-0.0.tar.gz'])
TRAINER_OUTPUT_GCS_PATH = output + '/train/output/' + str(int(time.time())) + '/'
DATA_GCS_PATH = output + '/reports.csv'
PYTHON_MODULE = 'trainer.task'
PIPELINE_NAME = 'Chicago Crime Prediction'
PIPELINE_FILENAME_PREFIX = 'chicago'
PIPELINE_DESCRIPTION = ''
MODEL_NAME = 'chicago_pipeline_model' + str(int(time.time()))
MODEL_VERSION = 'chicago_pipeline_model_v1' + str(int(time.time()))
"""
Explanation: Pipeline
Constants
End of explanation
"""
bigquery_query_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml')
QUERY = """
SELECT count(*) as count, TIMESTAMP_TRUNC(date, DAY) as day
FROM `bigquery-public-data.chicago_crime.crime`
GROUP BY day
ORDER BY day
"""
def download(project_id, data_gcs_path):
return bigquery_query_op(
query=QUERY,
project_id=project_id,
output_gcs_path=data_gcs_path
)
"""
Explanation: Download data
Define a download function that uses the BigQuery component
End of explanation
"""
mlengine_train_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/train/component.yaml')
def train(project_id,
trainer_args,
package_uris,
trainer_output_gcs_path,
gcs_working_dir,
region,
python_module,
runtime_version):
return mlengine_train_op(
project_id=project_id,
python_module=python_module,
package_uris=package_uris,
region=region,
args=trainer_args,
job_dir=trainer_output_gcs_path,
runtime_version=runtime_version
)
"""
Explanation: Train the model
Run training code that will pre-process the data and then submit a training job to the AI Platform.
End of explanation
"""
mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/deploy/component.yaml')
def deploy(
project_id,
model_uri,
model_id,
model_version,
runtime_version):
return mlengine_deploy_op(
model_uri=model_uri,
project_id=project_id,
model_id=model_id,
version_id=model_version,
runtime_version=runtime_version,
replace_existing_version=True,
set_default=True)
"""
Explanation: Deploy model
Deploy the model with the ID given from the training step
End of explanation
"""
@dsl.pipeline(
name=PIPELINE_NAME,
description=PIPELINE_DESCRIPTION
)
def pipeline(
data_gcs_path=DATA_GCS_PATH,
gcs_working_dir=output,
project_id=project_id,
python_module=PYTHON_MODULE,
region=REGION,
runtime_version=RUNTIME_VERSION,
package_uris=PACKAGE_URIS,
trainer_output_gcs_path=TRAINER_OUTPUT_GCS_PATH,
):
download_task = download(project_id,
data_gcs_path)
train_task = train(project_id,
json.dumps(
['--data-file-url',
'%s' % download_task.outputs['output_gcs_path'],
'--job-dir',
output]
),
package_uris,
trainer_output_gcs_path,
gcs_working_dir,
region,
python_module,
runtime_version)
deploy_task = deploy(project_id,
train_task.outputs['job_dir'],
MODEL_NAME,
MODEL_VERSION,
runtime_version)
return True
# Reference for invocation later
pipeline_func = pipeline
"""
Explanation: Define pipeline
End of explanation
"""
pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)
# pipeline = kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(pipeline, arguments={})
"""
Explanation: Submit the pipeline for execution
End of explanation
"""
run_detail = pipeline.wait_for_run_completion(timeout=1800)
print(run_detail.run.status)
"""
Explanation: Wait for the pipeline to finish
End of explanation
"""
%%bash
gcloud config set ai_platform/region global
import os
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION
"""
Explanation: Use the deployed model to predict (online prediction)
End of explanation
"""
%%writefile test.json
{"lstm_input": [[-1.24344569, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387 , -0.90387016]]}
!gcloud ai-platform predict --model=$MODEL_NAME --version=$MODEL_VERSION --json-instances=test.json
"""
Explanation: Create normalized input representing 14 days prior to prediction day.
End of explanation
"""
!gcloud ai-platform versions delete $MODEL_VERSION --model $MODEL_NAME -q
!gcloud ai-platform models delete $MODEL_NAME -q
"""
Explanation: Examine cloud services invoked by the pipeline
BigQuery query: https://console.cloud.google.com/bigquery?page=queries (click on 'Project History')
AI Platform training job: https://console.cloud.google.com/ai-platform/jobs
AI Platform model serving: https://console.cloud.google.com/ai-platform/models
Clean models
End of explanation
"""
|
ScottFreeLLC/AlphaPy
|
alphapy/examples/Trading Model/A Trading Model.ipynb
|
apache-2.0
|
%matplotlib inline
import numpy as np
import pandas as pd
pwd
cd output
ls
"""
Explanation: This notebook analyzes the predictions of the trading model. At different thresholds, how effective is the model at predicting larger-than-average range days?
End of explanation
"""
ranking_frame = pd.read_csv('rankings_20170425.csv')
ranking_frame.columns
"""
Explanation: This file contains the ranked predictions of the test set.
End of explanation
"""
ranking_frame.rrover.head(20)
ranking_frame.rrover.tail(20)
"""
Explanation: The probabilities are in descending order. Observe the greater number of True values at the top of the rankings versus the bottom.
End of explanation
"""
ranking_frame['bins'] = pd.qcut(ranking_frame.probability, 10, labels=False)
grouped = ranking_frame.groupby('bins')
def get_ratio(series):
ratio = series.value_counts()[1] / series.size
return ratio
grouped['rrover'].apply(get_ratio).plot(kind='bar')
"""
Explanation: Let's plot the True/False ratios for each probability decile. These ratios should roughly reflect the trend in the calibration plot.
End of explanation
"""
|
rochefort-lab/fissa
|
examples/Basic usage.ipynb
|
gpl-3.0
|
# Import the FISSA toolbox
import fissa
"""
Explanation: Object-oriented FISSA interface
This notebook contains a step-by-step example of how to use the object-oriented (class-based) interface to the FISSA toolbox.
The object-oriented interface, which involves creating a fissa.Experiment instance, allows more flexiblity than the fissa.run_fissa function.
For more details about the methodology behind FISSA, please see our paper:
Keemink, S. W., Lowe, S. C., Pakan, J. M. P., Dylda, E., van Rossum, M. C. W., and Rochefort, N. L. FISSA: A neuropil decontamination toolbox for calcium imaging signals, Scientific Reports, 8(1):3493, 2018. doi: 10.1038/s41598-018-21640-2.
See basic_usage.py (or basic_usage_windows.py for Windows users) for a short example script outside of a notebook interface.
Import packages
Before we can begin, we need to import fissa.
End of explanation
"""
# For plotting our results, import numpy and matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Fetch the colormap object for Cynthia Brewer's Paired color scheme
colors = plt.get_cmap("Paired")
"""
Explanation: We also need to import some plotting dependencies which we'll make use in this notebook to display the results.
End of explanation
"""
# Define path to imagery and to the ROI set
images_location = "exampleData/20150529"
rois_location = "exampleData/20150429.zip"
# Create the experiment object
experiment = fissa.Experiment(images_location, rois_location)
"""
Explanation: Defining an experiment
To run a separation step with fissa, you need create a fissa.Experiment object, which will hold your extraction parameters and results.
The mandatory inputs to fissa.Experiment are:
the experiment images
the regions of interest (ROIs) to extract
Images can be given as a path to a folder containing tiff stacks:
python
images = "folder"
Each of these tiff-stacks in the folder (e.g. "folder/trial_001.tif") is a trial with many frames.
Although we refer to one trial as an image, it is actually a video recording.
Alternatively, the image data can be given as a list of paths to tiffs:
python
images = ["folder/trial_001.tif", "folder/trial_002.tif", "folder/trial_003.tif"]
or as a list of arrays which you have already loaded into memory:
python
images = [array1, array2, array3, ...]
For the regions of interest (ROIs) input, you can either provide a single set of ROIs, or a set of ROIs for every image.
If the ROIs were defined using ImageJ, use ImageJ's export function to save them in a zip.
Then, provide the ROI filename.
python
rois = "rois.zip" # for a single set of ROIs used across all images
The same set of ROIs will be used for every image in images.
Sometimes there is motion between trials causing the alignment of the ROIs to drift.
In such a situation, you may need to use a slightly different location of the ROIs for each trial.
This can be handled by providing FISSA with a list of ROI sets — one ROI set (i.e. one ImageJ zip file) per trial.
python
rois = ["rois1.zip", "rois2.zip", ...] # for a unique roiset for each image
Please note that the ROIs defined in each ROI set must correspond to the same physical regions across all trials, and that the order must be consistent.
That is to say, the 1st ROI listed in each ROI set must correspond to the same item appearing in each trial, etc.
In this notebook, we will demonstrate how to use FISSA with ImageJ ROI sets, saved as zip files.
However, you are not restricted to providing your ROIs to FISSA in this format.
FISSA will also accept ROIs which are arbitrarily defined by providing them as arrays (numpy.ndarray objects).
ROIs provided in this way can be defined either as boolean-valued masks indicating the presence of a ROI per-pixel in the image, or defined as a list of coordinates defining the boundary of the ROI.
For examples of such usage, see our Suite2p, CNMF, and SIMA example notebooks.
As an example, we will run FISSA on a small test dataset.
The test dataset can be found and downloaded from the examples folder of the fissa repository, along with the source for this example notebook.
End of explanation
"""
experiment.separate()
"""
Explanation: Extracting traces and separating them
Now we have our experiment object, we need to call the separate() method to run FISSA on the data.
FISSA will extract the traces, and then separate them.
End of explanation
"""
trial = 0
# Plot the mean image for one of the trials
plt.figure(figsize=(7, 7))
plt.imshow(experiment.means[trial], cmap="gray")
plt.title("Mean over Trial {}".format(trial))
plt.show()
"""
Explanation: Accessing results
After running experiment.separate() the analysis parameters, raw traces, output signals, ROI definitions, and mean images are stored as attributes of the experiment object, and can be accessed as follows.
Mean image
The temporal-mean image for each trial is stored in experiment.means.
We can read out and plot the mean of one of the trials as follows.
End of explanation
"""
# Plot the mean image over all the trials
plt.figure(figsize=(7, 7))
plt.imshow(np.mean(experiment.means, axis=0), cmap="gray")
plt.title("Mean over all trials")
plt.show()
"""
Explanation: Plotting the mean image for each trial can be useful to see if there is motion drift between trials.
As a summary, you can also take the mean over all trials.
Some cells don't appear in every trial, so the overall mean may indicate the location of more cells than the mean image from a single trial.
End of explanation
"""
# Plot one ROI along with its neuropil regions
# Select which ROI and trial to plot
trial = 0
roi = 3
# Plot the mean image for the trial
plt.figure(figsize=(7, 7))
plt.imshow(experiment.means[trial], cmap="gray")
# Get current axes limits
XLIM = plt.xlim()
YLIM = plt.ylim()
# Check the number of neuropil regions
n_npil = len(experiment.roi_polys[roi, trial]) - 1
# Plot all the neuropil regions in yellow
for i_npil in range(1, n_npil + 1):
for contour in experiment.roi_polys[roi, trial][i_npil]:
plt.fill(
contour[:, 1],
contour[:, 0],
facecolor="none",
edgecolor="y",
alpha=0.6,
)
# Plot the ROI outline in red
for contour in experiment.roi_polys[roi, trial][0]:
plt.fill(
contour[:, 1],
contour[:, 0],
facecolor="none",
edgecolor="r",
alpha=0.6,
)
# Reset axes limits to be correct for the image
plt.xlim(XLIM)
plt.ylim(YLIM)
plt.title("ROI {}, and its {} neuropil regions".format(roi, experiment.nRegions))
plt.show()
"""
Explanation: ROI outlines
The ROI outlines, and the definitions of the surrounding neuropil regions added by FISSA to determine the contaminating signals, are stored in the experiment.roi_polys attribute.
For cell number c and TIFF number t, the set of ROIs for that cell and TIFF is located at
python
experiment.roi_polys[c, t][0][0] # user-provided ROI, converted to polygon format
experiment.roi_polys[c, t][n][0] # n = 1, 2, 3, ... the neuropil regions
Sometimes ROIs cannot be expressed as a single polygon (e.g. a ring-ROI, which needs a line for the outside and a line for the inside); in those cases several polygons are used to describe it as:
python
experiment.roi_polys[c, t][n][i] # i iterates over the series of polygons defining the ROI
As an example, we will plot the first ROI along with its surrounding neuropil subregions, overlaid on top of the mean image for one trial.
End of explanation
"""
# Plot all cell ROI locations
# Select which trial (TIFF index) to plot
trial = 0
# Plot the mean image for the trial
plt.figure(figsize=(7, 7))
plt.imshow(experiment.means[trial], cmap="gray")
# Plot each of the cell ROIs
for i_roi in range(len(experiment.roi_polys)):
# Plot border around ROI
for contour in experiment.roi_polys[i_roi, trial][0]:
plt.plot(
contour[:, 1],
contour[:, 0],
color=colors((i_roi * 2 + 1) % colors.N),
)
plt.show()
"""
Explanation: Similarly, we can plot the location of all 4 ROIs used in this experiment.
End of explanation
"""
# Plot sample trace
# Select the ROI and trial to plot
roi = 2
trial = 1
# Create the figure
plt.figure(figsize=(12, 6))
plt.plot(
experiment.raw[roi, trial][0, :],
lw=2,
label="Raw",
color=colors((roi * 2) % colors.N),
)
plt.plot(
experiment.result[roi, trial][0, :],
lw=2,
label="Decontaminated",
color=colors((roi * 2 + 1) % colors.N),
)
plt.title("ROI {}, Trial {}".format(roi, trial), fontsize=15)
plt.xlabel("Time (frame number)", fontsize=15)
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.grid()
plt.legend()
plt.show()
"""
Explanation: FISSA extracted traces
The final signals after separation can be found in experiment.result as follows.
For cell number c and TIFF number t, the extracted trace is given by:
python
experiment.result[c, t][0, :]
In experiment.result one can find the signals present in the cell ROI, ordered by how strongly they are present (relative to the surrounding regions). experiment.result[c, t][0, :] gives the most strongly present signal, and is considered the cell's "true" signal. [i, :] for i=1,2,3,... gives the other signals which are present in the ROI, but driven by other cells or neuropil.
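For instance, a small sketch (using only the attributes described above) that collects the primary trace of one cell across every trial into a list could look like:
python
c = 0  # cell index
true_traces = [experiment.result[c, t][0, :] for t in range(experiment.result.shape[1])]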
Before decontamination
The raw extracted signals can be found in experiment.raw in the same way. Now in experiment.raw[c, t][i, :], i indicates the region number, with i=0 being the cell, and i=1,2,3,... indicating the surrounding regions.
As an example, plotting the raw and extracted signals for the second trial for the third cell:
End of explanation
"""
# Plot all ROIs and trials
# Get the number of ROIs and trials
n_roi = experiment.result.shape[0]
n_trial = experiment.result.shape[1]
# Find the maximum signal intensities for each ROI
roi_max_raw = [
np.max([np.max(experiment.raw[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max_result = [
np.max([np.max(experiment.result[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max = np.maximum(roi_max_raw, roi_max_result)
# Plot our figure using subplot panels
plt.figure(figsize=(16, 10))
for i_roi in range(n_roi):
for i_trial in range(n_trial):
# Make subplot axes
i_subplot = 1 + i_trial * n_roi + i_roi
plt.subplot(n_trial, n_roi, i_subplot)
# Plot the data
plt.plot(
experiment.raw[i_roi][i_trial][0, :],
label="Raw",
color=colors((i_roi * 2) % colors.N),
)
plt.plot(
experiment.result[i_roi][i_trial][0, :],
label="Decontaminated",
color=colors((i_roi * 2 + 1) % colors.N),
)
# Labels and boiler plate
plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])
if i_roi == 0:
plt.ylabel(
"Trial {}\n\nSignal intensity\n(candela per unit area)".format(
i_trial + 1
)
)
if i_trial == 0:
plt.title("ROI {}".format(i_roi))
plt.legend()
if i_trial == n_trial - 1:
plt.xlabel("Time (frame number)")
plt.show()
"""
Explanation: We can similarly plot raw and decontaminated traces for every ROI and every trial.
End of explanation
"""
# Get the total number of regions
nRegions = experiment.nRegions
# Select the ROI and trial to plot
roi = 2
trial = 1
# Create the figure
plt.figure(figsize=(12, 12))
# Plot extracted traces for each neuropil subregion
plt.subplot(2, 1, 1)
# Plot trace of raw ROI signal
plt.plot(
experiment.raw[roi, trial][0, :],
lw=2,
label="Raw ROI signal",
color=colors((roi * 2) % colors.N),
)
# Plot traces from each neuropil region
for i_neuropil in range(1, nRegions + 1):
alpha = i_neuropil / nRegions
plt.plot(
experiment.raw[roi, trial][i_neuropil, :],
lw=2,
label="Neuropil region {}".format(i_neuropil),
color="k",
alpha=alpha,
)
plt.ylim([0, 125])
plt.grid()
plt.legend()
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.title("ROI {}, Trial {}, neuropil region traces".format(roi, trial), fontsize=15)
# Plot the ROI signal
plt.subplot(2, 1, 2)
# Plot trace of raw ROI signal
plt.plot(
experiment.raw[roi, trial][0, :],
lw=2,
label="Raw",
color=colors((roi * 2) % colors.N),
)
# Plot decontaminated signal matched to the ROI
plt.plot(
experiment.result[roi, trial][0, :],
lw=2,
label="Decontaminated",
color=colors((roi * 2 + 1) % colors.N),
)
plt.ylim([0, 125])
plt.grid()
plt.legend()
plt.xlabel("Time (frame number)", fontsize=15)
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.title("ROI {}, Trial {}, raw and decontaminated".format(roi, trial), fontsize=15)
plt.show()
"""
Explanation: The figure above shows the raw signal from the annotated ROI location (pale), and the result after decontaminating the signal with FISSA (dark).
The hues match the ROI locations drawn above.
Each column shows the results from one of the ROI, and each row shows the results from one of the three trials.
Comparing ROI signal to neuropil region signals
It can be very instructive to compare the signal in the central ROI with the surrounding neuropil regions. These can be found for cell c and trial t in experiment.raw[c, t][i, :], with i=0 being the cell, and i=1,2,3,... indicating the surrounding regions.
Below we compare directly the raw ROI trace, the decontaminated trace, and the surrounding neuropil region traces.
End of explanation
"""
sampling_frequency = 10 # Hz
experiment.calc_deltaf(freq=sampling_frequency)
"""
Explanation: df/f<sub>0</sub>
It is often useful to calculate the intensity of a signal relative to the baseline value, df/f<sub>0</sub>, for the traces.
This can be done with FISSA by calling the experiment.calc_deltaf method as follows.
End of explanation
"""
# Plot sample df/f0 trace
# Select the ROI and trial to plot
roi = 2
trial = 1
# Create the figure
plt.figure(figsize=(12, 6))
n_frames = experiment.deltaf_result[roi, trial].shape[1]
tt = np.arange(0, n_frames, dtype=np.float64) / sampling_frequency
plt.plot(
tt,
experiment.deltaf_raw[roi, trial][0, :],
lw=2,
label="Raw",
color=colors((roi * 2) % colors.N),
)
plt.plot(
tt,
experiment.deltaf_result[roi, trial][0, :],
lw=2,
label="Decontaminated",
color=colors((roi * 2 + 1) % colors.N),
)
plt.title("ROI {}, Trial {}".format(roi, trial), fontsize=15)
plt.xlabel("Time (s)", fontsize=15)
plt.ylabel(r"$\Delta f\,/\,f_0$", fontsize=15)
plt.grid()
plt.legend()
plt.show()
"""
Explanation: The sampling frequency is required because our process for determining f<sub>0</sub> involves applying a lowpass filter to the data.
Note that by default, f<sub>0</sub> is determined as the minimum across all trials (all TIFFs) to ensure that results are directly comparable between trials, but you can normalise each trial individually instead if you prefer by providing the parameter across_trials=False.
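For example, per-trial normalisation (using the across_trials parameter mentioned above) would be requested as:
python
experiment.calc_deltaf(freq=sampling_frequency, across_trials=False)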
Since FISSA is very good at removing contamination from the ROI signals, the minimum value on the decontaminated trace will typically be 0. Consequently, we use the minimum value of the (smoothed) raw signal to provide the f<sub>0</sub> from the raw trace for both the raw and decontaminated df/f<sub>0</sub>.
As we did above, we can plot the raw and decontaminated df/f<sub>0</sub> for each ROI in each trial.
End of explanation
"""
# Plot df/f0 for all ROIs and trials
# Find the maximum df/f0 values for each ROI
roi_max_raw = [
np.max(
[np.max(experiment.deltaf_raw[i_roi, i_trial][0]) for i_trial in range(n_trial)]
)
for i_roi in range(n_roi)
]
roi_max_result = [
np.max(
[
np.max(experiment.deltaf_result[i_roi, i_trial][0])
for i_trial in range(n_trial)
]
)
for i_roi in range(n_roi)
]
roi_max = np.maximum(roi_max_raw, roi_max_result)
# Plot our figure using subplot panels
plt.figure(figsize=(16, 10))
for i_roi in range(n_roi):
for i_trial in range(n_trial):
# Make subplot axes
i_subplot = 1 + i_trial * n_roi + i_roi
plt.subplot(n_trial, n_roi, i_subplot)
# Plot the data
n_frames = experiment.deltaf_result[i_roi, i_trial].shape[1]
tt = np.arange(0, n_frames, dtype=np.float64) / sampling_frequency
plt.plot(
tt,
experiment.deltaf_raw[i_roi][i_trial][0, :],
label="Raw",
color=colors((i_roi * 2) % colors.N),
)
plt.plot(
tt,
experiment.deltaf_result[i_roi][i_trial][0, :],
label="Decontaminated",
color=colors((i_roi * 2 + 1) % colors.N),
)
# Labels and boiler plate
plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])
if i_roi == 0:
plt.ylabel("Trial {}\n\n".format(i_trial + 1) + r"$\Delta f\,/\,f_0$")
if i_trial == 0:
plt.title("ROI {}".format(i_roi))
plt.legend()
if i_trial == n_trial - 1:
plt.xlabel("Time (s)")
plt.show()
"""
Explanation: We can also plot df/f<sub>0</sub> for the raw data to compare against the decontaminated signal for each ROI and each trial.
End of explanation
"""
# Define the folder where FISSA's outputs will be cached, so they can be
# quickly reloaded in the future without having to recompute them.
#
# This argument is optional; if it is not provided, FISSA will not save its
# results for later use.
#
# Note: you *must* use a different folder for each experiment,
# otherwise FISSA will load the results found in the folder provided instead
# of computing results for the new experiment.
output_folder = "fissa-example"
# Create a new experiment object set up to save results to the specified output folder
experiment = fissa.Experiment(images_location, rois_location, folder=output_folder)
"""
Explanation: The figure above shows the df/f<sub>0</sub> for the raw signal from the annotated ROI location (pale), and the result after decontaminating the signal with FISSA (dark).
For each figure, the baseline value f<sub>0</sub> is the same (taken from the raw signal).
The hues match the ROI locations and fluorescence intensity traces from above.
Each column shows the results from one of the ROI, and each row shows the results from one of the three trials.
Caching
After using FISSA to clean the data from an experiment, you will probably want to save the output for later use, so you don't have to keep re-running FISSA on the data all the time.
An option to cache the results is built into FISSA.
If you provide fissa.Experiment with a cache directory using the folder argument, it will cache results into that directory.
Later, if you create an experiment with the same folder argument, it will load the saved results from the cache instead of recomputing them.
End of explanation
"""
experiment.separate()
"""
Explanation: Because we have created a new experiment object, it is not yet populated with our results.
We need to run the separate routine again to generate the outputs.
But this time, our results will be saved to the directory named fissa-example for future reference.
End of explanation
"""
experiment.separate()
"""
Explanation: Calling the separate method again, or making a new Experiment with the same experiment folder name, will not re-run FISSA, because it can load the pre-computed results from the cache instead.
End of explanation
"""
experiment.separate(redo_prep=True, redo_sep=True)
"""
Explanation: If you need to force FISSA to ignore the cache and rerun the preparation and/or separation step, you can call it with redo_prep=True and/or redo_sep=True as appropriate.
End of explanation
"""
experiment.to_matfile()
"""
Explanation: Exporting to MATLAB
The results can easily be exported to a MATLAB-compatible MAT-file by calling the experiment.to_matfile() method, as follows.
The output file, "separated.mat", will appear in the output_folder we supplied to experiment when we created it.
End of explanation
"""
trial_of_interest = 1
print(experiment.images[trial_of_interest])
"""
Explanation: Loading the generated file (e.g. "output_folder/separated.mat") in MATLAB will provide you with all of FISSA's outputs.
These are structured similarly to experiment.raw and experiment.result described above, with a few small differences.
With the Python interface, the outputs are 2d numpy.ndarrays, each element of which is itself a 2d numpy.ndarray.
In comparison, when the output is loaded into MATLAB this becomes a 2d cell-array, each element of which is a 2d matrix.
Additionally, whilst Python indexes from 0, MATLAB indexes from 1 instead.
As a consequence of this, the results seen on Python for a given roi and trial experiment.result[roi, trial] correspond to the index S.result{roi + 1, trial + 1} on MATLAB.
Our first plot in this notebook can be replicated in MATLAB as follows:
octave
%% Plot example traces
% Load the FISSA output data
S = load('fissa-example/separated.mat')
% Select the third ROI, second trial
% (On Python, this would be roi = 2; trial = 1;)
roi = 3; trial = 2;
% Plot the raw and result traces for the ROI signal
figure;
hold on;
plot(S.raw{roi, trial}(1, :));
plot(S.result{roi, trial}(1, :));
title(sprintf('ROI %d, Trial %d', roi, trial));
xlabel('Time (frame number)');
ylabel('Signal intensity (candela per unit area)');
legend({'Raw', 'Result'});
grid on;
box on;
set(gca,'TickDir','out');
Assuming all ROIs are contiguous and described by a single contour, the mean image and ROI locations can be plotted in MATLAB as follows:
octave
%% Plot ROI locations overlaid on mean image
% Load the FISSA output data
S = load('fissa-example/separated.mat')
trial = 1;
figure;
hold on;
% Plot the mean image
imagesc(squeeze(S.means(trial, :, :)));
colormap('gray');
% Plot ROI locations
for i_roi = 1:size(S.result, 1);
contour = S.roi_polys{i_roi, trial}{1};
plot(contour(:, 2), contour(:, 1));
end
set(gca, 'YDir', 'reverse');
Addendum
Finding the TIFF files
If you find something noteworthy in one of the traces and need to backreference to the corresponding TIFF file, you can look up the path to the TIFF file with experiment.images.
End of explanation
"""
# Call FISSA with elevated verbosity
experiment = fissa.Experiment(images_location, rois_location, verbosity=2)
experiment.separate()
"""
Explanation: FISSA customisation settings
FISSA has several user-definable settings, which can be set when defining the fissa.Experiment instance.
Controlling verbosity
The level of verbosity of FISSA can be controlled with the verbosity parameter.
The default is verbosity=1.
If the verbosity parameter is higher, FISSA will print out more information while it is processing.
This can be helpful for debugging purposes.
The verbosity reaches its maximum at verbosity=6.
If verbosity=0, FISSA will run silently.
End of explanation
"""
# FISSA uses multiprocessing to speed up its processing.
# By default, it will spawn one worker per CPU core on your machine.
# However, if you have a lot of cores and not much memory, you may not
# be able to support so many workers simultaneously.
# In particular, this can be problematic during the data preparation step
# in which tiffs are loaded into memory.
# The default number of cores for the data preparation and separation steps
# can be changed as follows.
ncores_preparation = 4 # If None, uses all available cores
ncores_separation = None # if None, uses all available cores
# By default, FISSA uses 4 subregions for the neuropil region.
# If you have very dense data with a lot of different signals per unit area,
# you may wish to increase the number of regions.
nRegions = 8
# By default, each surrounding region has the same area as the central ROI.
# i.e. expansion = 1
# However, you may wish to increase or decrease this value.
expansion = 0.75
# The degree of signal sparsity can be controlled with the alpha parameter.
alpha = 0.02
# If you change the experiment parameters, you need to change the cache directory too.
# Otherwise FISSA will try to reload the results from the previous run instead of
# computing the new results. FISSA will throw an error if you try to load data which
# was generated with different analysis parameters to its parameters.
output_folder2 = output_folder + "_alt"
# Set up a FISSA experiment with these parameters
experiment = fissa.Experiment(
images_location,
rois_location,
output_folder2,
nRegions=nRegions,
expansion=expansion,
alpha=alpha,
ncores_preparation=ncores_preparation,
ncores_separation=ncores_separation,
)
# Extract the data with these new parameters.
experiment.separate()
"""
Explanation: Analysis parameters
The analysis performed by FISSA can be controlled with several parameters.
End of explanation
"""
# Plot one ROI along with its neuropil regions
# Select which ROI and trial to plot
trial = 0
roi = 3
# Plot the mean image for the trial
plt.figure(figsize=(7, 7))
plt.imshow(experiment.means[trial], cmap="gray")
# Get axes limits
XLIM = plt.xlim()
YLIM = plt.ylim()
# Check the number of neuropil regions
n_npil = len(experiment.roi_polys[roi, trial]) - 1
# Plot all the neuropil regions in yellow
for i_npil in range(1, n_npil + 1):
for contour in experiment.roi_polys[roi, trial][i_npil]:
plt.fill(
contour[:, 1],
contour[:, 0],
facecolor="none",
edgecolor="y",
alpha=0.6,
)
# Plot the ROI outline in red
for contour in experiment.roi_polys[roi, trial][0]:
plt.fill(
contour[:, 1],
contour[:, 0],
facecolor="none",
edgecolor="r",
alpha=0.6,
)
# Reset axes limits
plt.xlim(XLIM)
plt.ylim(YLIM)
plt.title("ROI {}, and its {} neuropil regions".format(roi, experiment.nRegions))
plt.show()
# Plot the new results
roi = 2
trial = 1
plt.figure(figsize=(12, 6))
plt.plot(
experiment.raw[roi, trial][0, :],
lw=2,
label="Raw",
color=colors((roi * 2) % colors.N),
)
plt.plot(
experiment.result[roi, trial][0, :],
lw=2,
label="Decontaminated",
color=colors((roi * 2 + 1) % colors.N),
)
plt.title("ROI {}, Trial {}".format(roi, trial), fontsize=15)
plt.xlabel("Time (frame number)", fontsize=15)
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.grid()
plt.legend()
plt.show()
"""
Explanation: We can plot the new results for our example trace from before. Although we doubled the number of neuropil regions around the cell, very little has changed for this example because there were not many sources of contamination.
However, there will be more of a difference if your data has more neuropil sources per unit area within the image.
End of explanation
"""
experiment.ncores_preparation = 8
experiment.alpha = 0.02
experiment.expansion = 0.75
"""
Explanation: Alternatively, these settings can be refined after creating the experiment object, as follows.
End of explanation
"""
experiment = fissa.Experiment(
images_location, rois_location, output_folder, lowmemory_mode=True
)
experiment.separate(redo_prep=True)
"""
Explanation: Loading data from large tiff files
By default, FISSA loads entire tiff files into memory at once and then manipulates all ROIs within the tiff.
This can sometimes be problematic when working with very large tiff files which can not be loaded into memory all at once.
If you have out-of-memory problems, you can activate FISSA's low memory mode, which will cause it to manipulate each tiff file frame-by-frame.
End of explanation
"""
from fissa.extraction import DataHandlerTifffile
# Define a custom datahandler class.
#
# By inheriting from DataHandlerTifffile, most methods are defined
# appropriately. In this case, we only need to overwrite the
# `image2array` method to work with our custom data format.
class DataHandlerCustom(DataHandlerTifffile):
@staticmethod
def image2array(image):
"""Open a given image file as a custom instance.
Parameters
----------
image : custom
Your image format (avi, hdf5, etc.)
Returns
-------
numpy.ndarray
A 3D array containing the data, shaped
``(frames, y_coordinate, x_coordinate)``.
"""
# Some custom code
pass
# Then pass an instance of this class to fissa.Experiment when creating
# a new experiment.
datahandler = DataHandlerCustom()
experiment = fissa.Experiment(
images_location,
rois_location,
datahandler=datahandler,
)
"""
Explanation: Handling custom formats
By default, FISSA can use tiff files or numpy arrays as its input image data, and numpy arrays or ImageJ zip files for the ROI definitions.
However, it is also possible to extend this functionality and integrate other data formats into FISSA in order to work with other custom and/or proprietary formats that might be used in your lab.
This is done by defining your own DataHandler class.
Your custom data handler should be a subclass of fissa.extraction.DataHandlerAbstract, and implement the following methods:
image2array(image) takes an image of whatever format and turns it into data (typically a numpy.ndarray).
getmean(data) calculates the 2D mean for a video defined by data.
rois2masks(rois, data) creates masks from the rois input, sized appropriately for the data.
extracttraces(data, masks) applies the masks to data in order to extract traces.
See fissa.extraction.DataHandlerAbstract for further description for each of the methods.
If you only need to handle a new image input format, which is converted to a numpy.ndarray, you may find it is easier to create a subclass of the default datahandler, fissa.extraction.DataHandlerTifffile.
In this case, only the image2array method needs to be overwritten and the other methods can be left as they are.
End of explanation
"""
|
javierfdr/credit-scoring-analysis
|
src/credit_notebook.ipynb
|
mit
|
%matplotlib inline
from classifiers import *
from dim_red import *
"""
Explanation: Fitting Linear and Non-Linear Models to solve the German credit risk scoring classification problem
Let's import the support libraries developed manually for this project and load the original dataset
End of explanation
"""
[X,y] = load_dataset('new-german-data.numeric',delim=',')
print X.shape
print y.shape
"""
Explanation: Loading German Credit scoring dataset transformed to use comma-separated values and printing numpy array dimensions
End of explanation
"""
pca = PCA(n_components=2)
pca.fit(X,y)
print pca.explained_variance_
plotPCA(X,y)
"""
Explanation: In order to take a first glance at the distribution of the data, the first two principal components are calculated from the original data using PCA and plotted in 2D. It can be observed that there are certain regions where data points of a specific class cluster together; however, no clear separation is discernible through the PCA analysis.
End of explanation
"""
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3, random_state=0)
clf = svm.SVC(kernel='linear', C=10, probability=True)
def feature_analysis_prec(X,y,clf):
scores = []
names = range(0,X.shape[1])
for i in range(X.shape[1]):
score = cross_val_score(clf, X[:, i:i+1], y, scoring="precision",
cv=ShuffleSplit(len(X), 3, .3))
scores.append((round(np.mean(score), 3), names[i]))
sorted_scores = sorted(scores, reverse=True)
print sorted_scores
return sorted_scores[0:4]
ss = feature_analysis_prec(X,y,clf)
"""
Explanation: In order to understand the relevance of the dataset's features, wrapper methods will be used to assess the influence of each feature on the final output. First the dataset will be split to obtain a training set and a test set, and then a simple linear SVM classifier will be built in order to get a first view of the influence of each attribute.
End of explanation
"""
X_r = X[:,[ss[0][1],ss[1][1],ss[2][1],ss[3][1]]]
pca = PCA(n_components=2)
pca.fit(X_r,y)
print pca.explained_variance_
plotPCA(X_r,y)
"""
Explanation: It can be seen that most of the features have some level of relevance for the precision of the classifier, which serves as an indication of the false-positive rate, a critical concern for credit scoring. To take an additional look at the ability of the features to represent the outcome, let's take the 4 most relevant features and calculate and plot PCA on them.
End of explanation
"""
from sklearn.metrics import classification_report
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3, random_state=0)
param_grid = [
{'C': [0.01,0.1, 1, 10, 100,1000], 'kernel': ['linear']}
]
clf = svm.SVC(kernel='linear', C=10)
clf = GridSearchCV(clf, param_grid, cv=5, scoring='f1')
clf.fit(X_train, y_train)
print("Best parameters combination")
print(clf.best_params_)
print("F1 scores on each combination:")
for params, mean_score, scores in clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print("Detailed classification report:")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
"""
Explanation: As expected, not much better representational ability is obtained from the principal components of the 4 most relevant features, since each feature adds a certain degree of value to predicting the final outcome.
Given the complexity of the feature space, more sophisticated models must be built in order to represent the nature of the data more accurately. In the following sections three different models are built using a linear SVM, an RBF-kernel SVM and Random Decision Forests. The latter two are expected to give better prediction rates (in terms of the f1 measure, the harmonic mean of precision and recall) at the expense of a higher risk of overfitting. To avoid this, a grid search is performed for each method to obtain the best model in terms of test-set accuracy, and cross-validation is performed on each parameter combination to obtain a more representative mean accuracy in each case.
Linear SVM
Let's split the dataset into a 70% training set and a 30% test set. Then let's obtain the best model through a cross-validated grid search.
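(As a quick reminder of the scoring criterion used below, the f1 score is the harmonic mean of precision and recall; the labels in this sketch are made up and not taken from the credit data.)
python
from sklearn.metrics import f1_score
# f1 = 2 * precision * recall / (precision + recall)
print(f1_score([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.8 on these toy labels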
End of explanation
"""
from sklearn.metrics import confusion_matrix
from __future__ import division
cm = confusion_matrix(y_test,y_pred)
plot_confusion_matrix(cm,['Approve','Deny'])
# brute confusion matrix values
print cm
TP = cm[0,0]
FN = cm[0,1]
FP = cm[1,0]
TN = cm[1,1]
# percentual confusion matrix values
total = cm.sum()
print cm / total
print("Good Accepted + Bad Rejected: %0.2f "% ((cm[0,0]+cm[1,1])/total))
print("Bad Accepted + Good Rejected: %0.2f "% ((cm[0,1]+cm[1,0])/total))
# storing linear SVM results
lsvm_cm = cm
# Associated cost - from cost matrix
cost = (TN*1) + (FP*5)
print "Associated cost: %0.2f" % cost
"""
Explanation: Plotting the confusion matrix on the results we obtain the following:
End of explanation
"""
from sklearn.metrics import classification_report
from sklearn.lda import LDA
param_grid = [
{'C': [0.01,0.1, 1, 10, 100,1000], 'kernel': ['rbf'], 'gamma' : [0.01,0.1,1,10,100,1000]}
]
clf = svm.SVC(kernel='rbf', C=10, gamma = 1)
clf = GridSearchCV(clf, param_grid, cv=5, scoring='f1')
clf.fit(X_train, y_train)
print("Best parameters combination")
print(clf.best_params_)
print("F1 scores on each combination:")
for params, mean_score, scores in clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print("Detailed classification report:")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
"""
Explanation: RBF Kernel SVM
Given the complexity of the feature space, evidenced in the initial plots, it is expected that a more sophisticated transformation of the feature space, such as the nonlinear transformation attempted by RBF Kernel SVM, would allow a better fit. We will reuse the same training/test splitting performed before in order to guarantee consistency. Again the best model will be obtained a cross validated grid search.
End of explanation
"""
from sklearn.metrics import confusion_matrix
from __future__ import division
cm = confusion_matrix(y_test,y_pred)
plot_confusion_matrix(cm,['Approve','Deny'])
# brute confusion matrix values
print cm
TP = cm[0,0]
FN = cm[0,1]
FP = cm[1,0]
TN = cm[1,1]
# percentual confusion matrix values
total = cm.sum()
print cm / total
print("Good Accepted + Bad Rejected: %0.2f "% ((cm[0,0]+cm[1,1])/total))
print("Bad Accepted + Good Rejected: %0.2f "% ((cm[0,1]+cm[1,0])/total))
# storing RBF Kernel SVM results
rbfsvm_cm = cm
# Associated cost - from cost matrix
cost = (TN*1) + (FP*5)
print "Associated cost: %0.2f" % cost
"""
Explanation: Plotting the confusion matrix on the results we obtain the following:
End of explanation
"""
from sklearn.metrics import classification_report
from scipy.stats import randint as sp_randint
param_grid = [{"max_depth": [3, 5,10,15,20,30,40,70],
"max_features": [1, 3, 10,15,20]
}
]
clf = RandomForestClassifier(max_features = 'auto', max_depth=10)
clf = GridSearchCV(clf, param_grid, cv=5, scoring='f1')
clf.fit(X_train, y_train)
print("Best parameters combination")
print(clf.best_params_)
print("F1 scores on each combination:")
for params, mean_score, scores in clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print("Detailed classification report:")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
# Please wait, this could take a while
from sklearn.metrics import confusion_matrix
from __future__ import division
cm = confusion_matrix(y_test,y_pred)
plot_confusion_matrix(cm,['Approve','Deny'])
# brute confusion matrix values
print cm
TP = cm[0,0]
FN = cm[0,1]
FP = cm[1,0]
TN = cm[1,1]
# percentual confusion matrix values
total = cm.sum()
print cm / total
print("Good Accepted + Bad Rejected: %0.2f "% ((cm[0,0]+cm[1,1])/total))
print("Bad Accepted + Good Rejected: %0.2f "% ((cm[0,1]+cm[1,0])/total))
# storing RDF results
rdf_cm = cm
# Associated cost - from cost matrix
cost = (TN*1) + (FP*5)
print "Associated cost: %0.2f" % cost
"""
Explanation: Random Decision Forests
This bagging algorithm produces an ensemble of decision trees capable of fitting very complex feature spaces. RDFs have proven to be consistently accurate across a wide range of problems. Since sufficiently deep trees within the ensemble can fit almost any feature space, a high risk of overfitting arises, so it is necessary to perform a thorough search of the parameter space. The same procedure as before is followed.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
N = 4
svm_linear = (lsvm_cm[0,0],lsvm_cm[0,1],lsvm_cm[1,0],lsvm_cm[1,1])
rbfsvm = (rbfsvm_cm[0,0],rbfsvm_cm[0,1],rbfsvm_cm[1,0],rbfsvm_cm[1,1])
rdf = (rdf_cm[0,0],rdf_cm[0,1],rdf_cm[1,0],rdf_cm[1,1])
ind = np.arange(N) # the x locations for the groups
width = 0.25 # the width of the bars: can also be len(x) sequence
fig = plt.figure()
ax = fig.add_subplot(111)
rects1 = ax.bar(ind, svm_linear, width, color='r')
rects2 = ax.bar(ind+width, rbfsvm, width, color='g')
rects3 = ax.bar(ind+(width*2), rdf, width, color='b')
# add some labels and a title
ax.set_ylabel('Number of examples')
ax.set_title('Number of examples per classification type')
ax.set_xticks(ind+width)
ax.set_xticklabels( ('True Positive', 'False Positive', 'False Negative', 'True Negative') )
ax.legend( (rects1[0], rects2[0],rects3[0]), ('LinearSVM', 'RBF-SVM','RDF') )
plt.show()
import numpy as np
import matplotlib.pyplot as plt
N = 4
total = lsvm_cm.sum()
lsvm_cm = lsvm_cm/total
rbfsvm_cm = rbfsvm_cm/total
rdf_cm = rdf_cm/total
svm_linear = (lsvm_cm[0,0],lsvm_cm[0,1],lsvm_cm[1,0],lsvm_cm[1,1])
rbfsvm = (rbfsvm_cm[0,0],rbfsvm_cm[0,1],rbfsvm_cm[1,0],rbfsvm_cm[1,1])
rdf = (rdf_cm[0,0],rdf_cm[0,1],rdf_cm[1,0],rdf_cm[1,1])
ind = np.arange(N) # the x locations for the groups
width = 0.25 # the width of the bars: can also be len(x) sequence
fig = plt.figure()
ax = fig.add_subplot(111)
rects1 = ax.bar(ind, svm_linear, width, color='r')
rects2 = ax.bar(ind+width, rbfsvm, width, color='g')
rects3 = ax.bar(ind+(width*2), rdf, width, color='b')
# add some labels and a title
ax.set_ylabel('Percentage from the total')
ax.set_title('Percentage from the total per classification type')
ax.set_xticks(ind+width)
ax.set_xticklabels( ('True Positive', 'False Positive', 'False Negative', 'True Negative') )
ax.legend( (rects1[0], rects2[0],rects3[0]), ('LinearSVM', 'RBF-SVM','RDF') )
plt.show()
"""
Explanation: Results Summary
The following graphs compare the results of the best models from the three approaches presented. The comparison focuses directly on the ability to discern whether a credit should be approved or rejected, and on the likelihood of wrongly granting a credit, which is a critical issue for this problem.
End of explanation
"""
|
makcedward/nlpaug
|
example/flow.ipynb
|
mit
|
import os
os.environ["MODEL_DIR"] = '../model'
"""
Explanation: Example of Flow Usage<a class="anchor" id="home"></a>:
Flow
Sequential
Sometimes
End of explanation
"""
import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw
import nlpaug.augmenter.sentence as nas
import nlpaug.flow as naf
from nlpaug.util import Action
text = 'The quick brown fox jumps over the lazy dog .'
print(text)
"""
Explanation: Config
End of explanation
"""
aug = naf.Sequential([
nac.RandomCharAug(action="insert"),
naw.RandomWordAug()
])
aug.augment(text)
"""
Explanation: Flow <a class="anchor" id="flow">
To make use of multiple augmenters, the Sequential and Sometimes pipelines are introduced to connect them.
Sequential Pipeline<a class="anchor" id="seq_pipeline">
Apply different augmenters sequentially
End of explanation
"""
aug = naf.Sequential([
nac.RandomCharAug(action="insert"),
naw.RandomWordAug()
])
aug.augment(text, n=3)
"""
Explanation: Generate multiple synthetic examples
End of explanation
"""
aug = naf.Sometimes([
nac.RandomCharAug(action="delete"),
nac.RandomCharAug(action="insert"),
naw.RandomWordAug()
])
aug.augment(text)
"""
Explanation: Sometimes Pipeline<a class="anchor" id="sometimes_pipeline">
Apply some augmenters randomly
End of explanation
"""
|
ozak/geopandas
|
examples/choropleths.ipynb
|
bsd-3-clause
|
%matplotlib inline
import geopandas as gpd
import matplotlib.pyplot as plt
# We use a PySAL example shapefile
import pysal as ps
pth = ps.examples.get_path("columbus.shp")
tracts = gpd.GeoDataFrame.from_file(pth)
print('Observations, Attributes:',tracts.shape)
tracts.head()
"""
Explanation: Choropleth classification schemes from PySAL for use with GeoPandas
<img src="http://pysal.readthedocs.io/en/latest/_static/images/socal_3.jpg" width="200" align="right" alt="PySAL image" title="PySAL image">
PySAL is a Spatial Analysis Library, which packages fast spatial algorithms used in various fields. These include Exploratory spatial data analysis, spatial inequality analysis, spatial analysis on networks, spatial dynamics, and many more.
It is used under the hood in geopandas when plotting measures with a set of colors. There are many ways to classify data into different bins, depending on a number of classification schemes.
<img src="http://alumni.media.mit.edu/~tpminka/courses/36-350.2001/lectures/day11/boston-kmeans.png" width="300">
For example, if we have 20 countries whose average annual temperature varies between 5C and 25C, we can classify them in 4 bins by:
* Quantiles
- Separates the rows into equal parts, 5 countries per bin.
* Equal Intervals
- Separates the measure's interval into equal parts, 5C per bin.
* Natural Breaks (Fisher-Jenks)
- This algorithm tries to split the rows into naturally occurring clusters. The number of rows per bin will depend on how the observations are distributed along the interval.
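A minimal sketch of how these schemes can be computed directly with PySAL (the temperature values here are randomly generated, and the pysal 1.x mapclassify module used later in this notebook is assumed):
python
import numpy as np
from pysal.esda.mapclassify import Quantiles, Equal_Interval, Natural_Breaks
temps = np.random.uniform(5, 25, 20)  # 20 hypothetical average annual temperatures
print(Quantiles(temps, k=4).bins)
print(Equal_Interval(temps, k=4).bins)
print(Natural_Breaks(temps, k=4).bins)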
End of explanation
"""
# Let's take a look at how the CRIME variable is distributed with a histogram
tracts['CRIME'].hist(bins=20)
plt.xlabel('CRIME\nResidential burglaries and vehicle thefts per 1000 households')
plt.ylabel('Number of neighbourhoods')
plt.title('Distribution of neighbourhoods by crime rate in Columbus, OH')
plt.show()
"""
Explanation: Plotting the CRIME variable
In this example, we are taking a look at neighbourhood-level statistics for the city of Columbus, OH. We'd like to have an idea of how the crime rate variable is distributed around the city.
From the shapefile's metadata:
CRIME: residential burglaries and vehicle thefts per 1000 households
End of explanation
"""
tracts.plot(column='CRIME', cmap='OrRd', edgecolor='k', legend=True)
"""
Explanation: Now let's see what it looks like without a classification scheme:
End of explanation
"""
# Splitting the data in three shows some spatial clustering around the center
tracts.plot(column='CRIME', scheme='quantiles', k=3, cmap='OrRd', edgecolor='k', legend=True)
# We can also see where the top and bottom halves are located
tracts.plot(column='CRIME', scheme='quantiles', k=2, cmap='OrRd', edgecolor='k', legend=True)
"""
Explanation: All 49 neighbourhoods are colored along a white-to-dark-red gradient, but the human eye can have a hard time comparing the colors of shapes that are distant from one another. In this case, it is especially hard to rank the peripheral districts colored in beige.
Instead, we'll classify them in color bins.
Classification by quantiles
QUANTILES will create attractive maps that place an equal number of observations in each class: If you have 30 counties and 6 data classes, you’ll have 5 counties in each class. The problem with quantiles is that you can end up with classes that have very different numerical ranges (e.g., 1-4, 4-9, 9-250).
End of explanation
"""
tracts.plot(column='CRIME', scheme='equal_interval', k=4, cmap='OrRd', edgecolor='k', legend=True)
# No legend here as we'd be out of space
tracts.plot(column='CRIME', scheme='equal_interval', k=12, cmap='OrRd', edgecolor='k')
"""
Explanation: Classification by equal intervals
EQUAL INTERVAL divides the data into equal size classes (e.g., 0-10, 10-20, 20-30, etc.) and works best on data that is generally spread across the entire range. CAUTION: Avoid equal interval if your data are skewed to one end or if you have one or two really large outlier values.
End of explanation
"""
# Compare this to the previous 3-bin figure with quantiles
tracts.plot(column='CRIME', scheme='fisher_jenks', k=3, cmap='OrRd', edgecolor='k', legend=True)
"""
Explanation: Classification by natural breaks
NATURAL BREAKS is a kind of “optimal” classification scheme that finds class breaks that will minimize within-class variance and maximize between-class differences. One drawback of this approach is that each dataset generates a unique classification solution, and if you need to make comparisons across maps, such as in an atlas or a series (e.g., one map each for 1980, 1990, 2000), you might want to use a single scheme that can be applied across all of the maps.
End of explanation
"""
def max_p(values, k):
"""
Given a list of values and `k` bins,
returns a list of their Maximum P bin number.
"""
from pysal.esda.mapclassify import Max_P_Classifier
binning = Max_P_Classifier(values, k=k)
return binning.yb
tracts['Max_P'] = max_p(tracts['CRIME'].values, k=5)
tracts.head()
tracts.plot(column='Max_P', cmap='OrRd', edgecolor='k', categorical=True, legend=True)
"""
Explanation: Other classification schemes in PySAL
Geopandas includes only the most commonly used classifiers found in PySAL. In order to use the others, you will need to add them as additional columns to your GeoDataFrame.
The max-p algorithm determines the number of regions (p) endogenously based on a set of areas, a matrix of attributes on each area and a floor constraint. The floor constraint defines the minimum bound that a variable must reach for each region; for example, a constraint might be the minimum population each region must have. max-p further enforces a contiguity constraint on the areas within regions.
End of explanation
"""
|
mohsinhaider/pythonbootcampacm
|
Objects and Data Structures/List Comprehensions.ipynb
|
mit
|
# Store even numbers from 0 to 20
even_lst = [num for num in range(21) if num % 2 == 0]
print(even_lst)
"""
Explanation: List Comprehensions and Generators
Python gives you more than just a programming language; it also encourages a way of writing elegant code. Pythonic code is code whose syntax mirrors the language's natural constructs.
Let's look at the basic syntax for a list comprehension:
some_list = [item for item in domain if .... ]
We store each item from the domain in the list, as long as it passes the condition in the if statement.
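For instance, a one-liner that keeps only the squares of the odd numbers below 10:
odd_squares = [n**2 for n in range(10) if n % 2 == 1]
print(odd_squares)  # [1, 9, 25, 49, 81]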
Example 1 Extract the even numbers from a given range.
End of explanation
"""
cash_value = 20
rsu_dict = {"Max":20, "Willie":13, "Joanna":14}
lst = [rsu_dict[name]*cash_value for name in rsu_dict]
print(lst)
my_dict = {"Ross":19, "Bernie":13, "Micah":15}
cash_value = 20
# [19*20, 13*20, 15*20]
cash_lst = [my_dict[key]*20 for key in my_dict]
print(cash_lst)
"""
Explanation: Example 2 Convert the restricted stock units (RSUs) an employee has in a company to their current cash value.
End of explanation
"""
rows = 'ABC'
cols = '123'
vowels = ('a', 'e', 'i', 'o', 'u')
sentence = 'cogito ergo sum'
words = sentence.split()
# Produce [A3, B2, C1]
number_letter_lst = [rows[element]+cols[2-element] for element in range(3)]
print(number_letter_lst)
row_col_lst = [rows[i]+cols[2-i] for i in range(3)]
# Produce [A1, B1, C1, A2, B2, C2, A3, B3, C3]
my_lst = [r+c for c in cols for r in rows]
print(my_lst)
"""
Explanation: Let's take a look at some values and see how we can produce certain outputs.
End of explanation
"""
# Simply accessing rows and cols in a comprehensions [A1, A2, A3, B1, B2, B3, C1, C2, C3]
# Non-Pythonic
lst = []
for r in rows:
for c in cols:
lst.append(r+c)
# Pythonic
lst = [r+c for r in rows for c in cols]
print(lst)
"""
Explanation: Generators
We can do more complex things than just basic list comprehensions. We can create more complex, succinct comprehensions by introducing generators into our statements. We have the following data to work with:
rows = 'ABC'
cols = '123'
In general, we want to access both the rows and cols at the same time. How do we do that?
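(As a quick aside, a generator expression looks like a list comprehension wrapped in parentheses, but it yields items lazily instead of building the whole list in memory.)
pair_gen = (r + c for r in 'ABC' for c in '123')
print(next(pair_gen))  # 'A1'
print(next(pair_gen))  # 'A2'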
End of explanation
"""
# let's figure this list out with normal syntax
lst = []
for r in (rows[i]+cols[i] for i in range(3)):
for c in (rows[2-i]+cols[i] for i in range(3)):
lst.append(r + 'x' + c)
print(lst)
# shortened
crossed_list = [x + " x " + y for x in (rows[i]+cols[i] for i in range(3)) for y in (rows[2-i]+cols[i] for i in range(3))]
print(crossed_list)
x = sorted(words, key=lambda x: len(x))
print(x)
"""
Explanation: Let's try Creating:
['A1 x C1', 'A1 x B2', 'A1 x A3', 'B2 x C1', 'B2 x B2', 'B2 x A3', 'C3 x C1', 'C3 x B2', 'C3 x A3']
Thought process: the first term (the diagonal A1, B2, C3) changes only once every three elements, so it comes from the outer generator and stays constant while the second term (the anti-diagonal C1, B2, A3) cycles through the inner generator...
End of explanation
"""
|
DavidDobr/icef_thesis
|
data/.ipynb_checkpoints/dobrinskiy_thesis_v2_october-Copy1-checkpoint.ipynb
|
gpl-3.0
|
# You should be running python3
import sys
print(sys.version)
import pandas as pd # http://pandas.pydata.org/
import numpy as np # http://numpy.org/
import statsmodels.api as sm # http://statsmodels.sourceforge.net/stable/index.html
import statsmodels.formula.api as smf
import statsmodels
print("Pandas Version: {}".format(pd.__version__)) # pandas version
print("StatsModels Version: {}".format(statsmodels.__version__)) # StatsModels version
"""
Explanation: This is a Jupyter notebook for David Dobrinskiy's HSE Thesis
How Venture Capital Affects Startups' Success
End of explanation
"""
# load the pwc dataset from azure
frame = pd.read_csv('pwc_moneytree.csv')
frame.head()
del frame['Grand Total']
frame.columns = ['year', 'type', 'q1', 'q2', 'q3', 'q4']
frame['year'] = frame['year'].fillna(method='ffill')
frame.head()
"""
Explanation: Let us look at the dynamics of total US VC investment
End of explanation
"""
deals_df = frame.iloc[0::2]
investments_df = frame.iloc[1::2]
# once separated, 'type' field is identical within each df
# let's delete it
del deals_df['type']
del investments_df['type']
deals_df.head()
investments_df.head()
def unstack_to_series(df):
"""
Takes q1-q4 in a dataframe and converts it to a series
input: a dataframe containing ['q1', 'q2', 'q3', 'q4']
ouput: a pandas series
"""
quarters = ['q1', 'q2', 'q3', 'q4']
d = dict()
for i, row in df.iterrows():
for q in quarters:
key = str(int(row['year'])) + q
d[key] = row[q]
# print(key, q, row[q])
return pd.Series(d)
deals = unstack_to_series(deals_df ).dropna()
investments = unstack_to_series(investments_df).dropna()
def string_to_int(money_string):
numerals = [c if c.isnumeric() else '' for c in money_string]
return int(''.join(numerals))
# convert deals from string to integers
deals = deals.apply(string_to_int)
deals.tail()
# investment in billions USD
# converts to integers - which is ok, since data is in dollars
investments_b = investments.apply(string_to_int)
# in python3 division automatically converts numbers to floats, so we don't lose precision
investments_b = investments_b / 10**9
# round data to 2 decimals
investments_b = investments_b.apply(round, ndigits=2)
investments_b.tail()
"""
Explanation: Deals and investments are in alternating rows of frame, let's separate them
End of explanation
"""
import matplotlib.pyplot as plt # http://matplotlib.org/
import matplotlib.patches as mpatches
import matplotlib.ticker as ticker
%matplotlib inline
# change matplotlib inline display size
# import matplotlib.pylab as pylab
# pylab.rcParams['figure.figsize'] = (8, 6) # that's default image size for this interactive session
fig, ax1 = plt.subplots()
ax1.set_title("VC historical trend (US Data)")
t = range(len(investments_b)) # need to substitute tickers for years later
width = t[1]-t[0]
y1 = investments_b
# create filled step chart for investment amount
ax1.bar(t, y1, width=width, facecolor='0.80', edgecolor='', label = 'Investment ($ Bln.)')
ax1.set_ylabel('Investment ($ Bln.)')
# set up xlabels with years
years = [str(year)[:-2] for year in deals.index][::4] # get years without quarter
ax1.set_xticks(t[::4]) # set 1 tick per year
ax1.set_xticklabels(years, rotation=50) # set tick names
ax1.set_xlabel('Year') # name X axis
# format Y1 tickers to $ billions
formatter = ticker.FormatStrFormatter('$%1.0f Bil.')
ax1.yaxis.set_major_formatter(formatter)
for tick in ax1.yaxis.get_major_ticks():
tick.label1On = False
tick.label2On = True
# create second Y2 axis for Num of Deals
ax2 = ax1.twinx()
y2 = deals
ax2.plot(t, y2, color = 'k', ls = '-', label = 'Num. of Deals')
ax2.set_ylabel('Num. of Deals')
# add annotation bubbles
ax2.annotate('1997-2000 dot-com bubble', xy=(23, 2100), xytext=(3, 1800),
bbox=dict(boxstyle="round4", fc="w"),
arrowprops=dict(arrowstyle="-|>",
connectionstyle="arc3,rad=0.2",
fc="w"),
)
ax2.annotate('2007-08 Financial Crisis', xy=(57, 800), xytext=(40, 1300),
bbox=dict(boxstyle="round4", fc="w"),
arrowprops=dict(arrowstyle="-|>",
connectionstyle="arc3,rad=-0.2",
fc="w"),
)
# add legend
ax1.legend(loc="best")
ax2.legend(bbox_to_anchor=(0.95, 0.88))
fig.tight_layout() # solves cropping problems when saving png
fig.savefig('vc_trend_3.png', dpi=250)
plt.show()
# load countries dataset (data for 2015)
# note: the original notebook loaded this from an Azure ML workspace (`ws`), which is not
# defined here, so we read the exported csv directly instead
country_data = pd.read_csv('country_data.csv')
country_data
def tex(df):
"""
Print dataframe contents in latex-ready format
"""
for line in df.to_latex().split('\n'):
print(line)
params = pd.DataFrame(country_data['Criteria'])
params.index = ['y'] + ['X'+str(i) for i in range(1, len(country_data))]
tex(params)
# set index
country_data = country_data.set_index('Criteria')
# convert values to floats (note: comas need to be replaced by dots for python conversion to work)
country_data.index = ['y'] + ['X'+str(i) for i in range(1, len(country_data))]
country_data
"""
Explanation: Plot data from MoneyTree report
http://www.pwcmoneytree.com
End of explanation
"""
const = pd.Series([1]*len(country_data.columns), index = country_data.columns, name = 'X0')
const
country_data = pd.concat([pd.DataFrame(const).T, country_data])
country_data = country_data.sort_index()
country_data
tex(country_data)
y = country_data.iloc[-1,:]
y
X = country_data.iloc[:-1, :].T
X
# Fit regression model
results = sm.OLS(y, X).fit()
# Inspect the results in latex doc, {tab:vc_ols_1}
print(results.summary())
# Inspect the results in latex doc, {tab:vc_ols_1}
print(results.summary().as_latex())
# equation for eq:OLS_1_coeffs in LaTeX
equation = 'Y ='
for i, coeff in results.params.iteritems():
sign = '+' if coeff >= 0 else '-'
equation += ' ' + sign + str(abs(round(coeff,2))) + '*' + i
print(equation)
# correlation table
corr = country_data.T.corr().iloc[1:,1:]
corr = corr.applymap(lambda x: round(x, 2))
corr
# corr table to latex
tex(corr)
import itertools
# set of unique parameter pairs
pairs = set([frozenset(pair) for pair in itertools.permutations(list(corr.index), 2)])
for pair in pairs:
pair = sorted(list(pair))
corr_pair = corr.loc[pair[0],pair[1]]
if corr_pair > 0.7:
print(pair, round(corr_pair, 2))
print('-'*40)
print('a')
for i in corr.columns:
for j in corr.columns:
if abs(corr.loc[i, j]) > 0.7 and i != j:
print(i+'~'+j, corr.loc[i, j])
"""
Explanation: prepare data for ols
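(A side note, assuming the same transposed regressor matrix X built below: statsmodels can append the intercept column automatically instead of constructing the X0 row by hand.)
python
X_with_const = sm.add_constant(X)  # adds a 'const' column equivalent to the manual X0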
End of explanation
"""
|
dmolina/es_intro_python
|
02-Basic-Python-Syntax.ipynb
|
gpl-3.0
|
# set the midpoint
midpoint = 5
# make two empty lists
lower = []; upper = []
# split the numbers into lower and upper
for i in range(10):
if (i < midpoint):
lower.append(i)
else:
upper.append(i)
print("lower:", lower)
print("upper:", upper)
"""
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
This notebook contains an excerpt from the Whirlwind Tour of Python by Jake VanderPlas; the content is available on GitHub.
The text and code are released under the CC0 license; see also the companion project, the Python Data Science Handbook.
<!--NAVIGATION-->
< How to Run Python Code | Contents | Basic Python Semantics: Variables and Objects >
A Quick Guide to Python
Python was conceived as a teaching language, but its ease of use and clear syntax have made it a favourite of beginners and experts alike. Its style is often described as pseudocode-like, and a Python script is often much easier to understand than an equivalent one written in C.
Take a look at the following example:
End of explanation
"""
x = 1 + 2 + 3 + 4 +\
5 + 6 + 7 + 8
"""
Explanation: The script is a bit silly, but it illustrates several aspects of Python syntax.
Comments Are Marked by #
The program starts with:
python
# set the midpoint
Single-line comments in Python start with (``#``).
A comment can occupy an entire line or sit at the end of a statement to describe it, for example:
python
x += 2  # shorthand for x = x + 2
Python has no multi-line comments like the ``/* ... */`` used in C and C++, although multi-line strings are sometimes used for that purpose.
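For example (an unassigned string is simply evaluated and discarded, so it can serve this role):
python
'''
This triple-quoted string is not assigned to anything,
so it acts as a makeshift multi-line comment.
'''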
End-of-Line Terminates a Statement
The next line in the script is
python
midpoint = 5
This is an assignment operation, where we've created a variable named midpoint and assigned it the value 5.
Notice that the end of this statement is simply marked by the end of the line.
This is in contrast to languages like C and C++, where every statement must end with a semicolon (;).
In Python, if you'd like a statement to continue to the next line, it is possible to use the "\" marker to indicate this:
End of explanation
"""
x = (1 + 2 + 3 + 4 +
5 + 6 + 7 + 8)
"""
Explanation: It is also possible to continue expressions on the next line within parentheses, without using the "\" marker:
End of explanation
"""
x=1+2
x = 1 + 2
x = 1 + 2
"""
Explanation: Most Python style guides recommend the second version of line continuation (within parentheses) to the first (use of the "\" marker).
Semicolon Can Optionally Terminate a Statement
Sometimes it can be useful to put multiple statements on a single line.
The next portion of the script is
python
lower = []; upper = []
This shows the example of how the semicolon (;) familiar in C can be used optionally in Python to put two statements on a single line.
Functionally, this is entirely equivalent to writing
python
lower = []
upper = []
Using a semicolon to put multiple statements on a single line is generally discouraged by most Python style guides, though occasionally it proves convenient.
Indentation: Whitespace Matters!
Next, we get to the main block of code:
Python
for i in range(10):
if i < midpoint:
lower.append(i)
else:
upper.append(i)
This is a compound control-flow statement including a loop and a conditional – we'll look at these types of statements in a moment.
For now, consider that this demonstrates what is perhaps the most controversial feature of Python's syntax: whitespace is meaningful!
In programming languages, a block of code is a set of statements that should be treated as a unit.
In C, for example, code blocks are denoted by curly braces:
C
// C code
for(int i=0; i<100; i++)
{
// curly braces indicate code block
total += i;
}
In Python, code blocks are denoted by indentation:
python
for i in range(100):
# indentation indicates code block
total += i
In Python, indented code blocks are always preceded by a colon (:) on the previous line.
The use of indentation helps to enforce the uniform, readable style that many find appealing in Python code.
But it might be confusing to the uninitiated; for example, the following two snippets will produce different results:
```python
>>> if x < 4:          >>> if x < 4:
...     y = x * 2      ...     y = x * 2
...     print(x)       ... print(x)
```
In the snippet on the left, ``print(x)`` is in the indented block, and will be executed only if ``x`` is less than ``4``.
In the snippet on the right, ``print(x)`` is outside the block, and will be executed regardless of the value of ``x``!
Python's use of meaningful whitespace often is surprising to programmers who are accustomed to other languages, but in practice it can lead to much more consistent and readable code than languages that do not enforce indentation of code blocks.
If you find Python's use of whitespace disagreeable, I'd encourage you to give it a try: as I did, you may find that you come to appreciate it.
Finally, you should be aware that the amount of whitespace used for indenting code blocks is up to the user, as long as it is consistent throughout the script.
By convention, most style guides recommend to indent code blocks by four spaces, and that is the convention we will follow in this report.
Note that many text editors like Emacs and Vim contain Python modes that do four-space indentation automatically.
Whitespace Within Lines Does Not Matter
While the mantra of meaningful whitespace holds true for whitespace before lines (which indicate a code block), white space within lines of Python code does not matter.
For example, all three of these expressions are equivalent:
End of explanation
"""
2 * (3 + 4)
"""
Explanation: Abusing this flexibility can lead to issues with code readability – in fact, abusing white space is often one of the primary means of intentionally obfuscating code (which some people do for sport).
Using whitespace effectively can lead to much more readable code,
especially in cases where operators follow each other – compare the following two expressions for exponentiating by a negative number:
python
x=10**-2
to
python
x = 10 ** -2
I find the second version with spaces much more easily readable at a single glance.
Most Python style guides recommend using a single space around binary operators, and no space around unary operators.
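For instance, a line like the following keeps single spaces around the binary operators while the unary minus hugs its operand:
python
x = 3
y = -x + 2 * (x - 1)  # binary operators spaced, unary minus not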
We'll discuss Python's operators further in Basic Python Semantics: Operators.
Parentheses Are for Grouping or Calling
In the previous code snippet, we see two uses of parentheses.
First, they can be used in the typical way to group statements or mathematical operations:
End of explanation
"""
print('first value:', 1)
print('second value:', 2)
"""
Explanation: They can also be used to indicate that a function is being called.
In the next snippet, the print() function is used to display the contents of a variable (see the sidebar).
The function call is indicated by a pair of opening and closing parentheses, with the arguments to the function contained within:
End of explanation
"""
L = [4,2,3,1]
L.sort()
print(L)
"""
Explanation: Some functions can be called with no arguments at all, in which case the opening and closing parentheses still must be used to indicate a function evaluation.
An example of this is the sort method of lists:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/test-institute-2/cmip6/models/sandbox-2/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-2', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: TEST-INSTITUTE-2
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
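# For illustration only (hypothetical values, not a real author record):
# DOC.set_author("Jane Doe", "jane.doe@example.org")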
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
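# For illustration only, using one of the example names quoted in the
# property description (hypothetical for this sandbox model):
# DOC.set_value("CAM 4.0")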
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
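# For illustration only, picking one of the valid choices listed above
# (hypothetical for this sandbox model):
# DOC.set_value("AGCM")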
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
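# For illustration only (hypothetical level count for this sandbox model):
# DOC.set_value(47)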
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
sz2472/foundations-homework
|
data and database/2016-06-21 NOTES.ipynb
|
mit
|
x= ["duck","aardvark","crocodile", "emu", "bee"]
x.sort()
x
### sorted by descending order
sorted(x,reverse=True)
### sorted by second letter:
#sorted(x, key=??)
def get_second_letter(s):
return s[1]
get_second_letter("cheese")
sorted(x,key=get_second_letter) #key is a parameter, value is a function:get_second_letter
type(get_second_letter)
"""
Explanation: Sorting lists
End of explanation
"""
# normal function: def name_of_the_function(parameter): return expression
def get_second_letters(s):
return s[1]
# the previous is the same as:
get_second_letter = lambda s:s[1]
type(lambda s:s[1])
print("hello")
sorted(x,key=lambda s:s[1])
"""
Explanation: lambda functions: a way of writing a function on a single line
End of explanation
"""
t= (5,10,15)
type(t)
t[0]
for item in t:
print(item*item)
#a tuple is like a list, but it can't be changed; it's called an "immutable" data type
#one benefit is exactly that: it can't be changed.
#another benefit is that tuples are memory-efficient.
hello=[1,2,3]
foo=(1,2,3)
import sys
sys.getsizeof(hello)
sys.getsizeof(foo)
"""
Explanation: tuple("rhymes with supple")
tuple is a kind of like a strict kind of list
End of explanation
"""
import re
test="one 1 two 2 three 3 four 4 five 5"
re.findall(r"\w+ \d",test)
for item in re.findall(r"\w+ \d",test):
x=item.split("")
print(x[0])
print(x[1])
test="one 1 two 2 three 3 four 4 five 5"
re.findall(r"(\w+) (\d)",test)
all_subjects = open("enronsubjects.txt").read()
for item in re.findall(r"(\d{3})-(\d{3})-(\d{4})", all_subjects):
print(item[0])
[item[0] for item in re.findall(r"(\d{3})-(\d{3})-(\d{4})", all_subjects)]
"""
Explanation: back to regular expressions for a moment
grouping with multiple matches in the same string
End of explanation
"""
re.findall(r"\$(\d+) ?(\w+)", all_subjects)
vals=[]
for item in re.findall(r"\$(\d+) ?([mMbBkK])", all_subjects):
multiplier=item[1].lower()
number_val=int(item[0])
if multiplier=='k':
number_val *= 1000
elif multiplier=='m':
number_val *= 1000000
elif multiplier=='b':
number_val *= 1000000000
vals.append(number_val)
sum(vals)
"""
Explanation: monetary amounts in the subject lines
match something like $10 m, k, b
End of explanation
"""
message = "this is a test, this is only a test"
message.replace("this","that").replace("test","walrus") #.replace() replace part of a string
re.findall(r"\d{3}-\d{3}-\d{4}", all_subjects)
message="This is a test; this is only a test."
re.sub(r"[Tt]his", "that", message) #.sub():substitute a pattern
re.sub(r"\b\w+\b", "WALRUS", message)
anon = re.sub(r"\d{3}-\d{3}-\d{4}", "555-555-5555", all_subjects)
re.findall(r".{,20}\d{3}-\d{3}-\d{4}.{,20}",anon)
anon = re.sub(r"(\d{3}-\d{3}-\d{4}", r"\1-\2-XXXX", all_subjects)
"""
Explanation: substitution with regular expressions
End of explanation
"""
from urllib.request import urlretrieve
urlretrieve("https://raw.githubusercontent.com/ledeprogram/data-and-databases/master/menupages-morningside-heights.html")
#urlretrieve(url, filename)
"""
Explanation: HTML to SQL
End of explanation
"""
from bs4 import BeautifulSoup
raw_html = open("menupages-morningside-heights.html").read()
soup = BeautifulSoup(raw_html, "html.parser")
#just the name
search_table=soup.find("table", {'class':'search-results'})
table_body=search_table.find('tbody')
for tr_tag in table_body.find_all('tr'):
name_address_tag = tr_tag.find('td', {'class':'name-address'})
a_tag = name_address_tag.find('a')
print(a_tag.string)
search_table=soup.find("table", {'class':'search-results'})
table_body=search_table.find('tbody')
for tr_tag in table_body.find_all('tr'):
# get the restaurant name from the a inside a td
name_address_tag = tr_tag.find('td', {'class':'name-address'})
a_tag = name_address_tag.find('a')
restaurant_name=a_tag.string
#get the price from the span if present
price_tag = tr_tag.find('td', {'class':'price'})
price_span_tag = price_tag.find('span')
if price_span_tag:
price=int(price_span_tag.string)
else:
price=0
print(restaurant_name, price)
def get_name(tag):
return"TEST RESTAURANT"
def get_price(tag):
return"999999"
search_table=soup.find("table", {'class':'search-results'})
table_body=search_table.find('tbody')
for tr_tag in table_body.find_all('tr'):
restaurant_name = get_name(tr_tag)
price = get_price(tr_tag)
print(restaurant_name, price)
def get_name(tr_tag):
name_address_tag = tr_tag.find('td', {'class':'name-address'})
a_tag = name_address_tag.find('a')
restaurant_name=a_tag.string
return restaurant_name
def get_price(tr_tag):
price_tag = tr_tag.find('td', {'class':'price'})
price_span_tag = price_tag.find('span')
if price_span_tag:
price=int(price_span_tag.string)
else:
price=0
return price
def get_cuisines(tr_tag):
all_td_tags=tr_tag.find_all('td')
cuisine_tag = all_td_tags[4]
print(cuisine_tag)
cuisines = cuisine_tag.string
if cuisines:
cuisines_list= cuisines.split(", ")
else:
cuisines_list = []
return cuisines_list
restaurants=[]
search_table=soup.find("table", {'class':'search-results'})
table_body=search_table.find('tbody')
for tr_tag in table_body.find_all('tr'):
restaurant_name = get_name(tr_tag)
price = get_price(tr_tag)
cuisines=get_cuisines(tr_tag)
rest_dict = {'name':restaurant_name, 'price':price, 'cuisines':cuisines}
restaurants.append(rest_dict)
restaurants
import pandas as pd
df = pd.DataFrame(restaurants)
df[df['price'] >2]
"""
Explanation: store:
* restaurant name
* price (number of $ signs)
* cuisine
research phase:
* every restaurant has a <tr> that is a child of the <table> tag with class search-results
* restaurants are in <td> tags with class name-address
* restaurant names are in an <a> tag inside that <td> tag
* restaurant price in a span inside a <td> tag with class price
* the cuisine of the restaurant is in a <td> tag with no class, the fifth <td> tag that is a child of the restaurant's <tr>
targets:
* list of dictionaries
[
{'name':"Brad's, 'price': 1,'cuisine':['coffee']},
{'name':"Cafe Nana", 'price':0, 'cuisines':['Middle Eastern','Kosher']},
...
]
End of explanation
"""
import pg8000
conn= pg8000.connect(database="menupages")
type(conn)
conn.rollback() # execute this whenever a SQL statement causes an error
cursor = conn.cursor()
"""
Explanation: Putting this stuff into SQL
"schema" -> desiging the tables
what table(s) do we need?
what should those tables have in them? (columns and data types)
data normalization normal form
entities
restaurant name, price, list of cuisines
which cuisines a restaurant is associated with
restaurant table:
'id' (unique integer identifying the restaurant)
'name'(string with the restaurant's name)
*'price'(integer that corresponds to the number of dollar signs)
cuisine table:
* 'restaurant_id' (number associated with restaurant)
* 'kind' (string that identifies the cuisine type itself)
...
sample entry from restaurant table
restaurant_id: 4
name: Brad's
price:1
...
...
sameple entry from cuisine table
restaurant_id: 4
kind: coffee-tea
restaurant_id: 4
kind: seafood
...
select restaurant.restaurant_id
from restaurant join cuisine on restaurant.restaurant_id=cuisine.restaurant_id
"set up ohase"-> creating database and creating tables "one time"->psql
"working with data phase"-> inserting records, selecting stuff->python
sql_data_types like
* int_Integer
*varvhar(n) string with length n
* numeric number/decimal
* "serial"-> ubteger that is automatically assigned
connect to the database
End of explanation
"""
cursor.execute("INSERT INTO restaurant(name,price) VALUES('Good Food Place',3)")
conn.commit()
cursor.execute("SELECT * FROM restaurant")
for item in cursor.fetchall():
print(item)
cursor.execute(
"INSERT INTO restaurant(name,price) VALUES('Palace of Vegan Nosh',3) RETURNING id")
results= cursor.fetchone() #return a list
conn.commit()
rowid=results[0]
rowid
"""
Explanation: cursor object:
* .execute() <- execute a SQL statement
* .fetchone() <- fetches the first record of the results of a statement (as a list)
* .fetchall() <- returns ALL the rows of the results of a statement (as a list)
End of explanation
"""
#will not work:
cursor.execute(
"INSERT INTO restaurant(name,price) VALUES ('Brad's', 1) RETURNING id")
rowid=cursor.fetchone()[0]
conn.commit()
#SQL Injection attack:
restaurant = "'Restaurant'); DELETE FROM restaurant;"
"""
Explanation: quoting and parameters in SQL
End of explanation
"""
rest_insert = "INSERT INTO restaurant(name, price) VALUES (%s, %s)" #%S placeholder
cursor.execute(rest_insert, ["Brad's", 1])
# pg8000 does the work: "INSERT INTO restaurant (name,price) VALUES ('Brad\'s',1)"
conn.commit()
"""
Explanation: turning a Python string into a properly "quoted" and "escaped" SQL statement yourself
is very weird, difficult and arcane - so let the driver do it with placeholder parameters instead.
End of explanation
"""
cursor.execute("INSERT INTO restaurant(name, price) VALUES(%s, %s) RETURNING id",["Test Restaurant",2])
rowid=cursor.fetchone()[0]
conn.commit()
rowid
# let's say Test Restaurant serves fondue and casseroles
cuisine_insert = "INSERT INTO cuisine(restaurant_id, kind) VALUES (%s, %s)"
cursor.execute(cuisine_insert, [rowid, "fondue"])
cursor.execute(cuisine_insert, [rowid,"casseroles"])
conn.commit()
### Insert Many restaurants
restaurants
# error: this fails because item['name'] is a bs4 NavigableString, not a plain string (see the explanation below)
rest_insert = "INSERT INTO restaurant(name,price)VALUES(%s,%s)"
for item in restaurants:
cursor.execute(rest_insert, [item['name'], item['price']])
conn.commit()
first=restaurants[0]
first
type(first['name'])
"""
Explanation: Insert a restaurant and its cuisines
End of explanation
"""
rest_insert = "INSERT INTO restaurant(name,price)VALUES(%s,%s)"
for item in restaurants:
cursor.execute(rest_insert, [str(item['name']), item['price']])
conn.commit()
# restaurants
# plan (pseudocode):
# step 1: for item in restaurants:
#             execute SQL to insert the restaurant, then commit()
# step 2: for each restaurant:
#             for each of its cuisines:
#                 execute SQL to insert the cuisine
"""
Explanation: so what happened? Why isn't this just a string?
whenever you use the .string attribute of a Beautiful Soup tag object, the type of that value is bs4.element.NavigableString
fortunately, there's an easy fix: use str(value) to convert that value into a regular Python string
End of explanation
"""
rest_insert="INSERT INTO restaurant(name,price)VALUES(%s,%s) RETURNING id"
cuisine_insert="INSERT INTO CUISINE(restaurant_id, kind)VALUES(%s,%s)"
for item in restaurants:
#insert restaurant, RETURNING id
print("inserting restaurant", item['name'])
cursor.execute(rest_insert, [str(item['name']), item['price']])
rowid=cursor.fetchone()[0]
for cuisine in item['cuisines']:
print(" -inserting cuisine",cuisine)
cursor.execute(cuisine_insert, [rowid, str(cuisine)])
#insert restaurant
conn.commit()
"""
Explanation: inserting both restaurants and their cuisines
End of explanation
"""
|
cwhanse/pvlib-python
|
docs/tutorials/forecast.ipynb
|
bsd-3-clause
|
%matplotlib inline
import matplotlib.pyplot as plt
# built in python modules
import datetime
import os
# python add-ons
import numpy as np
import pandas as pd
# for accessing UNIDATA THREDD servers
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
import pvlib
from pvlib.forecast import GFS, HRRR_ESRL, NAM, NDFD, HRRR, RAP
# Choose a location and time.
# Tucson, AZ
latitude = 32.2
longitude = -110.9
tz = 'America/Phoenix'
start = pd.Timestamp(datetime.date.today(), tz=tz) # today's date
end = start + pd.Timedelta(days=7) # 7 days from today
print(start, end)
"""
Explanation: Forecast Tutorial
This tutorial will walk through forecast data from Unidata forecast models using the forecast.py module within pvlib.
Table of contents:
1. Setup
2. Initialize and Test Each Forecast Model
This tutorial has been tested against the following package versions:
* Python 3.5.2
* IPython 5.0.0
* pandas 0.18.0
* matplotlib 1.5.1
* netcdf4 1.2.1
* siphon 0.4.0
It should work with other Python and Pandas versions. It requires pvlib >= 0.3.0 and IPython >= 3.0.
Authors:
* Derek Groenendyk (@moonraker), University of Arizona, November 2015
* Will Holmgren (@wholmgren), University of Arizona, November 2015, January 2016, April 2016, July 2016
Setup
End of explanation
"""
from pvlib.forecast import GFS, HRRR_ESRL, NAM, NDFD, HRRR, RAP
# GFS model, defaults to 0.5 degree resolution
fm = GFS()
# retrieve data
data = fm.get_data(latitude, longitude, start, end)
data[sorted(data.columns)]
data = fm.process_data(data)
data[['ghi', 'dni', 'dhi']].plot();
cs = fm.location.get_clearsky(data.index)
fig, ax = plt.subplots()
cs['ghi'].plot(ax=ax, label='ineichen')
data['ghi'].plot(ax=ax, label='gfs+larson')
ax.set_ylabel('ghi')
ax.legend();
fig, ax = plt.subplots()
cs['dni'].plot(ax=ax, label='ineichen')
data['dni'].plot(ax=ax, label='gfs+larson')
ax.set_ylabel('dni')
ax.legend();
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
data[sorted(data.columns)]
data['temp_air'].plot()
plt.ylabel('temperature (%s)' % fm.units['temp_air']);
cloud_vars = ['total_clouds', 'low_clouds', 'mid_clouds', 'high_clouds']
for varname in cloud_vars:
data[varname].plot()
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg')
plt.legend(bbox_to_anchor=(1.18,1.0));
total_cloud_cover = data['total_clouds']
total_cloud_cover.plot(color='r', linewidth=2)
plt.ylabel('Total cloud cover' + ' (%s)' % fm.units['total_clouds'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg');
"""
Explanation: GFS (0.5 deg)
End of explanation
"""
# GFS model at 0.25 degree resolution
fm = GFS(resolution='quarter')
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.25 deg')
plt.legend(bbox_to_anchor=(1.18,1.0));
data[sorted(data.columns)]
"""
Explanation: GFS (0.25 deg)
End of explanation
"""
fm = NAM()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('NAM')
plt.legend(bbox_to_anchor=(1.18,1.0));
data['ghi'].plot(linewidth=2, ls='-')
plt.ylabel('GHI W/m**2')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
data[sorted(data.columns)]
"""
Explanation: NAM
End of explanation
"""
fm = NDFD()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
total_cloud_cover = data['total_clouds']
temp = data['temp_air']
wind = data['wind_speed']
total_cloud_cover.plot(color='r', linewidth=2)
plt.ylabel('Total cloud cover' + ' (%s)' % fm.units['total_clouds'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('NDFD')
plt.ylim(0,100);
temp.plot(color='r', linewidth=2)
plt.ylabel('Temperature' + ' (%s)' % fm.units['temp_air'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
wind.plot(color='r', linewidth=2)
plt.ylabel('Wind Speed' + ' (%s)' % fm.units['wind_speed'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
data[sorted(data.columns)]
"""
Explanation: NDFD
End of explanation
"""
fm = RAP(resolution=20)
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
cloud_vars = ['total_clouds', 'high_clouds', 'mid_clouds', 'low_clouds']
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('RAP')
plt.legend(bbox_to_anchor=(1.18,1.0));
data[sorted(data.columns)]
"""
Explanation: RAP
End of explanation
"""
fm = HRRR()
data_raw = fm.get_data(latitude, longitude, start, end)
# The HRRR model pulls in u, v winds for 2 layers above ground (10 m, 80 m)
# They are labeled as _0, _1 in the raw data
data_raw[sorted(data_raw.columns)]
data = fm.get_processed_data(latitude, longitude, start, end)
cloud_vars = ['total_clouds', 'high_clouds', 'mid_clouds', 'low_clouds']
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('HRRR')
plt.legend(bbox_to_anchor=(1.18,1.0));
data['temp_air'].plot(color='r', linewidth=2)
plt.ylabel('Temperature' + ' (%s)' % fm.units['temp_air'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
data['wind_speed'].plot(color='r', linewidth=2)
plt.ylabel('Wind Speed' + ' (%s)' % fm.units['wind_speed'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
data[sorted(data.columns)]
"""
Explanation: HRRR
End of explanation
"""
# NBVAL_SKIP
fm = HRRR_ESRL()
# retrieve data
# NBVAL_SKIP
data = fm.get_processed_data(latitude, longitude, start, end)
# NBVAL_SKIP
cloud_vars = ['total_clouds','high_clouds','mid_clouds','low_clouds']
# NBVAL_SKIP
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('HRRR_ESRL')
plt.legend(bbox_to_anchor=(1.18,1.0));
# NBVAL_SKIP
data['ghi'].plot(linewidth=2, ls='-')
plt.ylabel('GHI W/m**2')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
"""
Explanation: HRRR (ESRL)
End of explanation
"""
from pvlib.pvsystem import PVSystem, retrieve_sam
from pvlib.modelchain import ModelChain
sandia_modules = retrieve_sam('SandiaMod')
sapm_inverters = retrieve_sam('cecinverter')
module = sandia_modules['Canadian_Solar_CS5P_220M___2009_']
inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208__208V_']
system = PVSystem(module_parameters=module,
inverter_parameters=inverter,
surface_tilt=latitude,
surface_azimuth=180)
# fx is a common abbreviation for forecast
fx_model = GFS()
fx_data = fx_model.get_processed_data(latitude, longitude, start, end)
# use a ModelChain object to calculate modeling intermediates
mc = ModelChain(system, fx_model.location)
# extract relevant data for model chain
mc.run_model(weather=fx_data)
mc.total_irrad.plot();
mc.cell_temperature.plot();
mc.ac.plot();
"""
Explanation: Quick power calculation
End of explanation
"""
|
calebmadrigal/radio-hacking-scripts
|
audio_signal_generation.ipynb
|
mit
|
# Imports and boilerplate to make graphs look better
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy
import wave
import random
from IPython.display import Audio
def setup_graph(title='', x_label='', y_label='', fig_size=None):
fig = plt.figure()
if fig_size != None:
fig.set_size_inches(fig_size[0], fig_size[1])
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
"""
Explanation: Generating audio
End of explanation
"""
# Let's view 1/20th of a second so we can actually see the wave
samples_per_second = 44100
frequency = 440
num_seconds = 1/20
sample_bitsize = 16
max_amplitude = int(2**sample_bitsize/2 - 1)
x = np.linspace(0, num_seconds, int(samples_per_second*num_seconds))
y = max_amplitude * np.sin(frequency * 2 * np.pi * x)
setup_graph(title='A440', x_label='time', y_label='amplitude', fig_size=(12,6))
plt.plot(x, y)
"""
Explanation: Simple tone generation
Let's start by generating a 440Hz tone.
For the normal sine wave, y = sin(2*pi*x), the period is 1.
But for a 1-second 440Hz tone, we want the period to be 1/440, not 1. So we can just multiply the 2*pi constant by 440 to get that.
End of explanation
"""
samples_per_second = 44100
frequency = 440
num_seconds = 3
num_channels = 1
sample_bitsize = 16
max_amplitude = int(2**sample_bitsize/2 - 1)
out_file = 'raw_data/a440.wav'
t = np.linspace(0, num_seconds, samples_per_second * num_seconds)
a440 = max_amplitude * np.sin(frequency * 2 * np.pi * t)
f = wave.open(out_file, 'wb')
f.setparams((num_channels, sample_bitsize // 8, samples_per_second, len(a440), "NONE", "Uncompressed"))
f.writeframes(np.array(a440, dtype=np.int16))
f.close()
"""
Explanation: Now let's generate a wav file
End of explanation
"""
Audio(url='./raw_data/a440.wav', autoplay=False)
"""
Explanation: Play it here
End of explanation
"""
MAX_AMP_16BIT = int(2**sample_bitsize/2 - 1)
def generate_wave(freq, len_in_sec=1, samp_rate=44100, amplitude=MAX_AMP_16BIT):
t = np.linspace(0, len_in_sec, samp_rate * len_in_sec)
sig = amplitude * np.sin(freq * 2 * np.pi * t)
return sig
def write_wav_file(file_path, wav_data, sample_rate=44100, num_channels=1):
f = wave.open(file_path, 'wb')
f.setparams((num_channels, 2, sample_rate, len(wav_data), "NONE", "Uncompressed"))
f.writeframes(np.array(wav_data, dtype=np.int16))
f.close()
"""
Explanation: Generalize a few functions
End of explanation
"""
HALF = 2**(1/12)
WHOLE = 2**(2/12)
MAJ_SCAL_MULTIPLIERS = [WHOLE, WHOLE, HALF, WHOLE, WHOLE, WHOLE, HALF]
tone_freq = 261.6 # Hz
c_maj_scale = np.array([], dtype=np.int16)
for mult in [1]+MAJ_SCAL_MULTIPLIERS:
tone_freq = tone_freq * mult
print('Note frequency: {}'.format(tone_freq))
tone_wave = generate_wave(tone_freq)
#notes.append(tone_wave)
c_maj_scale = np.append(c_maj_scale, tone_wave)
write_wav_file('./raw_data/c_major_scale.wav', c_maj_scale)
Audio(url='./raw_data/c_major_scale.wav', autoplay=False)
c_maj_scale_downsampled = [c_maj_scale[i] for i in range(0, len(c_maj_scale), 44100//16000)]
setup_graph(title='C major scale', x_label='time', y_label='freq', fig_size=(14,7))
_ = plt.specgram(c_maj_scale_downsampled, Fs=16000)
"""
Explanation: Generating the C-major scale
A couple important math relationships:
* The frequency of Middle C is: 261.6Hz
* The ratio between a half-step in the Chromatic scale is 2**(1/12)
* The major scale follows this step pattern: Whole Whole Half Whole Whole Whole Half
End of explanation
"""
def generate_note_with_harmonics(freq, num_harmonics=16, amplitude_list=(MAX_AMP_16BIT,)*16):
note = generate_wave(freq)*(1/num_harmonics)
for index, harmonic in enumerate([i for i in range(1, num_harmonics+1)]):
harmonic_wave = generate_wave(freq*harmonic, amplitude=amplitude_list[index])
note = note + harmonic_wave
return note
tone_freq = 261.6 # Hz
c_maj_scale = np.array([], dtype=np.int16)
amp_list = [MAX_AMP_16BIT*random.random()/(16+4) for i in range(16)]
for mult in [1]+MAJ_SCAL_MULTIPLIERS:
tone_freq = tone_freq * mult
print('Note frequency: {}'.format(tone_freq))
tone_wave = generate_note_with_harmonics(tone_freq, amplitude_list=amp_list)
c_maj_scale = np.append(c_maj_scale, tone_wave)
write_wav_file('./raw_data/c_major_scale_harmonics.wav', c_maj_scale)
Audio(url='./raw_data/c_major_scale_harmonics.wav', autoplay=False)
c_maj_scale_downsampled = [c_maj_scale[i] for i in range(0, len(c_maj_scale), 44100//16000)]
setup_graph(title='C major scale (with harmonics)', x_label='time', y_label='freq', fig_size=(14,7))
_ = plt.specgram(c_maj_scale_downsampled, Fs=16000)
"""
Explanation: Generate some harmonics
End of explanation
"""
|
the-deep-learners/nyc-ds-academy
|
notebooks/intro_to_tensorflow_times_a_million.ipynb
|
mit
|
import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
xs = np.linspace(0., 8., 8000000) # eight million points spaced evenly over the interval zero to eight
ys = 0.3*xs-0.8+np.random.normal(scale=0.25, size=len(xs)) # eight million labels given xs, m=0.3, b=-0.8, plus normally-distributed noise
fig, ax = plt.subplots()
data_subset = pd.DataFrame(list(zip(xs, ys)), columns=['x', 'y']).sample(n=1000)
_ = ax.scatter(data_subset.x, data_subset.y)
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
batch_size = 8
"""
Explanation: (Introduction to Tensorflow) * 10^6
In this notebook, we modify the tensor-fied intro to TensorFlow notebook to use placeholder tensors and feed in data from a data set of millions of points. This is a derivation of Jared Ostmeyer's Naked Tensor code.
End of explanation
"""
xs_placeholder = tf.placeholder(tf.float32, [batch_size])
ys_placeholder = tf.placeholder(tf.float32, [batch_size])
"""
Explanation: Define placeholder tensors of length batch_size whose values will be filled in during graph execution
End of explanation
"""
ys_model = m*xs_placeholder + b
total_error = tf.reduce_sum((ys_placeholder-ys_model)**2)
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(total_error)
initializer_op = tf.global_variables_initializer()
"""
Explanation: Define graph that incorporates placeholders
End of explanation
"""
with tf.Session() as sess:
sess.run(initializer_op)
n_batch = 1000
for iteration in range(n_batch):
random_indices = np.random.randint(len(xs), size=batch_size)
feed = {
xs_placeholder: xs[random_indices],
ys_placeholder: ys[random_indices]
}
sess.run(optimizer_operation, feed_dict=feed)
slope, intercept = sess.run([m, b])
slope
intercept
"""
Explanation: Sample from the full data set while running the session
End of explanation
"""
|
scruwys/and-the-award-goes-to
|
notebooks/prepare_data.ipynb
|
mit
|
import re
import pandas as pd
import numpy as np
pd.set_option('display.float_format', lambda x: '%.3f' % x)
nominations = pd.read_csv('../data/nominations.csv')
# clean out some obvious mistakes...
nominations = nominations[~nominations['film'].isin(['2001: A Space Odyssey', 'Oliver!', 'Closely Observed Train'])]
nominations = nominations[nominations['year'] >= 1980]
# scraper pulled in some character names instead of film names...
nominations.loc[nominations['film'] == 'Penny Lane', 'film'] = 'Almost Famous'
nominations.loc[nominations['film'] == 'Sister James', 'film'] = 'Doubt'
"""
Explanation: Preparing Scraped Data for Prediction
This notebook describes the process in which the raw films.csv and nominations.csv files are "wrangled" into a workable format for our classifier(s). At the time of this writing (February 25, 2017), the resulting dataset is only used in a decision tree classifier.
End of explanation
"""
wins = pd.pivot_table(nominations, values='winner', index=['year', 'category', 'film', 'name'], columns=['award'], aggfunc=np.sum)
wins = wins.fillna(0) # if a nominee wasn't in a specific ceremony, we just fill it as a ZERO.
wins.reset_index(inplace=True) # flattens the dataframe
wins.head()
"""
Explanation: Pivot Nominations
Since we pull in four award types, we know that each nominee can have a maximum of four line items. The nominations table is pivoted to ensure that each nomination has its own unique line while still maintaining a count of wins per award.
End of explanation
"""
oscars = nominations[nominations['award'] == 'Oscar'][['year', 'category', 'film', 'name']]
awards = pd.merge(oscars, wins, how='left', on=['year', 'category', 'name', 'film'])
awards.head()
"""
Explanation: Merge Oscars with Nominations
We only care about films that were nominated for an Academy Award. The pd.merge function is used to perform a left join between the oscars dataframe and the wins. In other words, we are pruning out any films that were never nominated for an Academy Award based on the join fields.
End of explanation
"""
films = pd.read_csv('../data/films.csv')
relevant_fields = [
'film',
'country',
'release_date',
'running_time',
'mpaa',
'box_office',
'budget',
'imdb_score',
'rt_audience_score',
'rt_critic_score',
'stars_count',
'writers_count'
]
df = pd.merge(awards, films[relevant_fields], how='left', on='film')
print "Total Observations:", len(df)
print
print "Observations with NaN fields:"
for column in df.columns:
l = len(df[df[column].isnull()])
if l != 0:
print len(df[df[column].isnull()]), "\t", column
"""
Explanation: Read in Films Dataframe
We pull the films.csv file into a dataframe called films. This is then merged to the awards dataframe from above. Note that we only include specific fields. Fields like metacritic_score and bom_worldwide have been excluded because too many null values exist, which would have an adverse effect on our model.
End of explanation
"""
### FIX RUN TIME ###
# df[df['running_time'].isnull()] # Hilary and Jackie
df.loc[df['film'] == 'Hilary and Jackie', 'running_time'] = '121 minutes'
df.loc[df['film'] == 'Fanny and Alexander', 'running_time'] = '121 minutes'
### FIX MPAA RATING ###
df = df.replace('NOT RATED', np.nan)
df = df.replace('UNRATED', np.nan)
df = df.replace('M', np.nan)
df = df.replace('NC-17', np.nan)
df = df.replace('APPROVED', np.nan)
# df[df['mpaa'].isnull()]
df.loc[df['film'].isin(['L.A. Confidential', 'In the Loop']), 'mpaa'] = 'R'
df.loc[df['film'].isin(['True Grit', 'A Room with a View']), 'mpaa'] = 'PG-13'
### FIX COUNTRY ###
# df[df['country'].isnull()] # Ulee's Gold, The Constant Gardner, Dave
df.loc[df['film'].isin(["Ulee's Gold", "Dave"]), 'country'] = 'United States'
df.loc[df['country'].isnull(), 'country'] = 'United Kingdom'
df.loc[df['country'] == 'Germany\\', 'country'] = 'Germany'
df.loc[df['country'] == 'United States & Australia', 'country'] = 'United States'
df['country'].unique()
### FIX STARS COUNT ###
# df[df['stars_count'].isnull()]
df.loc[df['film'].isin(['Before Sunset', 'Before Midnight']), 'stars_count'] = 2
df.loc[df['film'] == 'Dick Tracy', 'stars_count'] = 10
df.loc[df['stars_count'].isnull(), 'stars_count'] = 1
df = df[~df['release_date'].isin(['1970'])]
def to_numeric(value):
multiplier = 1
try:
value = re.sub(r'([$,])', '', str(value)).strip()
value = re.sub(r'\([^)]*\)', '', str(value)).strip()
if 'million' in value:
multiplier = 1000000
elif 'billion' in value:
multiplier = 1000000000
for replace in ['US', 'billion', 'million']:
value = value.replace(replace, '')
value = value.split(' ')[0]
if isinstance(value, str):
value = value.split('-')[0]
value = float(value) * multiplier
except:
return np.nan
return value
def to_runtime(value):
try:
return re.findall(r'\d+', value)[0]
except:
return np.nan
### Apply function to appropriate fields ###
for field in ['box_office', 'budget']:
df[field] = df[field].apply(to_numeric)
df['release_month'] = df['release_date'].apply(lambda y: int(y.split('-')[1]))
df['running_time'] = df['running_time'].apply(to_runtime)
### FIX BOX OFFICE ###
list(df[df['mpaa'].isnull()]['film'].unique())
# cleaned_box_offices = {
# 'Mona Lisa': 5794184,
# 'Testament': 2044982,
# 'Pennies from Heaven': 9171289,
# 'The Year of Living Dangerously': 10300000
# }
# for key, value in cleaned_box_offices.items():
# df.loc[df['film'] == key, 'box_office'] = value
# ### FIX BUDGET ###
# # df[(df['budget'].isnull())]['film'].unique()
# cleaned_budgets = {'Juno': 6500000, 'Blue Sky': 16000000, 'Pollock': 6000000 }
# for key, value in cleaned_budgets.items():
# df.loc[df['film'] == key, 'budget'] = value
"""
Explanation: So we obviously have some null values, which is disappointing. We'll take the time to clean these up.
End of explanation
"""
df = df[~df['mpaa'].isnull()]
df['produced_USA'] = df['country'].apply(lambda x: 1 if x == 'United States' else 0)
for column in df['mpaa'].unique():
df[column.replace('-', '')] = df['mpaa'].apply(lambda x: 1 if x == column else 0)
df['q1_release'] = df['release_month'].apply(lambda m: 1 if m <= 3 else 0)
df['q2_release'] = df['release_month'].apply(lambda m: 1 if m > 3 and m <= 6 else 0)
df['q3_release'] = df['release_month'].apply(lambda m: 1 if m > 6 and m <= 9 else 0)
df['q4_release'] = df['release_month'].apply(lambda m: 1 if m > 9 else 0)
df.to_csv('../data/analysis.csv', index=False)
del df['mpaa']
del df['country']
del df['release_date']
del df['release_month']
del df['budget']
for column in df.columns:
df = df[~df[column].isnull()]
df.to_csv('../data/prepared.csv', index=False)
"""
Explanation: Adding some more fields and removing remaining nulls
While we are pretty happy with our MPAA field, we can't input it into a predictive model as is. The decision tree would not know how to treat a string (e.g., "PG"). So, instead, we pivot those values into separate, boolean fields.
So instead of...
|film | mpaa |
|---------|-------|
|Raging Bull | R |
| Kramer vs. Kramer | PG |
We get...
| film | G | PG | PG13 | R |
|-------|---|----|------|---|
|Raging Bull | 0 | 0 | 0 | 1 |
| Kramer vs. Kramer | 0 | 1 | 0 | 0 |
This essentially "quantifies" the MPAA feature so that our algorithm can properly interpret it. Note that we perform a similar action for production country (just for the USA) and seasonality.
End of explanation
"""
|
svdwulp/da-programming-1
|
week_01_oefeningen_uitwerkingen.ipynb
|
gpl-2.0
|
## Exercise 1 - solution
for A in [False, True]:
    for B in [False, True]:
        print(A, B, not(A or B))
"""
Explanation: Data Analysis - Programming
Week 1
Exercises with worked solutions
Exercise 1. Write a Python program that produces the truth table of the following expression:
$\neg{(A \lor B)}$ (Quine's Dagger)
End of explanation
"""
## Exercise 3 - solution
# check: -(-A | -B) == A & B
for A in [False, True]:
    for B in [False, True]:
        print(A, B, not(not A or not B), A and B)
# check by computer
for A in [False, True]:
    for B in [False, True]:
        if not(not A or not B) != (A and B):
            print("The expression -(-A | -B) is not equal to (A & B) for A", A, "and B", B)
"""
Explanation: Exercise 2. The expressions from assignments 3 and 4 (in the slides) have names of their own:
$(\neg{A}) \lor B$ is also called implication and is written as $A \implies B$,
$(A \lor B) \land \neg{(A \land B)}$ is also called exclusive or (xor) and is written as $A \oplus B$.
<br/>The operator $\Leftrightarrow$ is called equivalence (equality).
The expression $A \Leftrightarrow B$ only yields True when the value of $A$ is equal to that of $B$.
Devise an expression in $A$ and $B$, using the operators or, and and not, that produces exactly the result of the equivalence operator, and check your expression with a Python program.
<table width="50%">
<tr><th>$A$</th><th>$B$</th><th>$A \Leftrightarrow B$</th></tr>
<tr><td>False</td><td>False</td><td>True</td></tr>
<tr><td>False</td><td>True</td><td>False</td></tr>
<tr><td>True</td><td>False</td><td>False</td></tr>
<tr><td>True</td><td>True</td><td>True</td></tr>
</table>
Exercise 2 - solution
The derivation below uses a slightly different notation for technical reasons:
| operator in mathematical notation | operator in alternative notation |
|:-:|:-:|
| $\neg{}$ | - |
| $\lor$ | \| |
| $\land$ | & |
Derivation of $A \Leftrightarrow B$ using $\neg{}$, $\lor$ and $\land$:
| A | B | A \| B | -(A \| B) | A & B | -(A \| B) \| (A & B) |
|:-:|:-:|:-:|:-:|:-:|:-:|
| 0 | 0 | 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 0 | 0 | 0 |
| 1 | 1 | 1 | 0 | 1 | 1 |
Exercise 3. Strictly speaking, the extra operators from exercise 2 are not necessary, because you can rebuild them with or, and and not.
In fact, you can even get by with only or and not!
Devise a combination of or and not whose truth table is the same as that of and. Check your expression with a Python program.
<table width="50%">
<tr><th>$A$</th><th>$B$</th><th>$A \land B$</th></tr>
<tr><td>False</td><td>False</td><td>False</td></tr>
<tr><td>False</td><td>True</td><td>False</td></tr>
<tr><td>True</td><td>False</td><td>False</td></tr>
<tr><td>True</td><td>True</td><td>True</td></tr>
</table>
<br/><br/>
Exercise 3 - solution
See the solution of exercise 2 for an explanation of the notation used.
Derivation of $A \land B$ using $\neg{}$ and $\lor$:
| A | B | -A | -B | -A \| -B | -(-A \| -B) | A & B |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 0 | 0 | 1 | 1 | 1 | 0 | 0 |
| 0 | 1 | 1 | 0 | 1 | 0 | 0 |
| 1 | 0 | 0 | 1 | 1 | 0 | 0 |
| 1 | 1 | 0 | 0 | 0 | 1 | 1 |
End of explanation
"""
## Exercise 4 - solution
# automated check, (A or B) -> (A or B)
# expression translated as -(A or B) or (A or B)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        expr = not(A or B) or (A or B)
        if not expr:
            is_tautology = False
if is_tautology:
    print("The expression (A or B) -> (A or B) is a tautology")
else:
    print("The expression (A or B) -> (A or B) is not a tautology")
# automated check, (A or B) -> (A and B)
# expression translated as -(A or B) or (A and B)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        expr = not(A or B) or (A and B)
        if not expr:
            is_tautology = False
if is_tautology:
    print("The expression (A or B) -> (A and B) is a tautology")
else:
    print("The expression (A or B) -> (A and B) is not a tautology")
# automated check, (-A -> B) and (-A -> -B)
# expression translated as (-(-A) or B) and (-(-A) or -B)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        expr = (not(not(A)) or B) and (not(not(A)) or not(B))
        if not expr:
            is_tautology = False
if is_tautology:
    print("The expression (-A -> B) and (-A -> -B) is a tautology")
else:
    print("The expression (-A -> B) and (-A -> -B) is not a tautology")
# automated check, ((A -> B) and (B -> C)) -> (A -> C)
# expression translated as -((-A or B) and (-B or C)) or (-A or C)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        for C in [False, True]:
            expr = not((not A or B) and (not B or C)) or (not A or C)
            if not expr:
                is_tautology = False
if is_tautology:
    print("The expression ((A -> B) and (B -> C)) -> (A -> C) is a tautology")
else:
    print("The expression ((A -> B) and (B -> C)) -> (A -> C) is not a tautology")
"""
Explanation: Exercise 4. A tautology is an expression that always evaluates to true, regardless of the values assigned to the variables.
The expression $A \lor \neg{A}$ is an example of a tautology.
Write Python programs to determine whether the expressions below are tautologies.
Note: the implication $A \implies B$ can be computed, as described in exercise 2, with $(\neg{A}) \lor B$
1. $(A \lor B) \implies (A \lor B)$
2. $(A \lor B) \implies (A \land B)$
3. $(\neg{A} \implies B) \land (\neg{A} \implies \neg{B})$
4. $((A \implies B) \land (B \implies C)) \implies (A \implies C)$
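For reference, the same checks can be written more compactly with a small helper built on itertools; this is only a sketch alongside the per-case checks in the accompanying code cell, shown here for expression 4:
```python
from itertools import product

def check_tautology(expr, n_vars):
    # evaluate the expression for every combination of truth values
    return all(expr(*values) for values in product([False, True], repeat=n_vars))

# ((A -> B) and (B -> C)) -> (A -> C), with X -> Y written as (not X) or Y
expr4 = lambda A, B, C: (not ((not A or B) and (not B or C))) or (not A or C)
print(check_tautology(expr4, 3))  # True
```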
End of explanation
"""
|
jinzishuai/learn2deeplearn
|
deeplearning.ai/C5.SequenceModel/Week1_RNN/assignment/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v1.ipynb
|
gpl-3.0
|
import numpy as np
from utils import *
import random
from random import shuffle
"""
Explanation: Character level language model - Dinosaurus land
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width:250;height:300px;">
</td>
</table>
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this dataset. (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
How to store text data for processing using an RNN
How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
How to build a character-level text generation recurrent neural network
Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in rnn_utils. Specifically, you have access to functions such as rnn_forward and rnn_backward which are equivalent to those you've implemented in the previous assignment.
End of explanation
"""
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
"""
Explanation: 1 - Problem Statement
1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
End of explanation
"""
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
"""
Explanation: The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the <EOS> (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.
End of explanation
"""
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
None
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
"""
Explanation: 1.2 - Overview of the model
Your model will have the following structure:
Initialize parameters
Run the optimization loop
Forward propagation to compute the loss function
Backward propagation to compute the gradients with respect to the loss function
Clip the gradients to avoid exploding gradients
Using the gradients, update your parameter with the gradient descent update rule.
Return the learned parameters
<img src="images/rnn.png" style="width:450;height:300px;">
<caption><center> Figure 1: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". </center></caption>
At each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$.
2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model.
2.1 - Clipping the gradients in the optimization loop
In this section you will implement the clip function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values.
In the exercise below, you will implement a function clip that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a maxValue (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone.
<img src="images/clip.png" style="width:400;height:150px;">
<caption><center> Figure 2: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. </center></caption>
Exercise: Implement the function below to return the clipped gradients of your dictionary gradients. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this hint for examples of how to clip in numpy. You will need to use the argument out = ....
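As a small illustration of the element-wise clipping that the hint points at (this is not the graded solution, just the NumPy call itself), clipping in place with out= looks like this:
```python
import numpy as np

gradient = np.array([[12.0, -3.0], [-15.0, 7.0]])
np.clip(gradient, -10, 10, out=gradient)   # clip in place to [-10, 10]
print(gradient)                            # [[ 10.  -3.] [-10.   7.]]
```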
End of explanation
"""
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
x = None
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = None
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = None
z = None
y = None
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = None
# Append the index to "indices"
None
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = None
x[None] = None
# Update "a_prev" to be "a"
a_prev = None
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
n, n_a = 20, 100
a0 = np.random.randn(n_a, 1)
i0 = 1 # first character is ix_to_char[i0]
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
"""
Explanation: Expected output:
<table>
<tr>
<td>
**gradients["dWaa"][1][2] **
</td>
<td>
10.0
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]**
</td>
<td>
-10.0
</td>
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td>
0.29713815361
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td>
[ 10.]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>
[ 8.45833407]
</td>
</tr>
</table>
2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
<img src="images/dinos3.png" style="width:500;height:300px;">
<caption><center> Figure 3: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>
Exercise: Implement the sample function below to sample characters. You need to carry out 4 steps:
Step 1: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$
Step 2: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a softmax() function that you can use.
Step 3: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use np.random.choice.
Here is an example of how to use np.random.choice():
python
np.random.seed(0)
p = np.array([0.1, 0.0, 0.7, 0.2])
index = np.random.choice([0, 1, 2, 3], p = p.ravel())
This means that you will pick the index according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
Step 4: The last step to implement in sample() is to overwrite the variable x, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
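A minimal sketch of the one-hot update in Step 4 (vocab_size and idx are placeholder values here, not the graded code):
```python
import numpy as np

vocab_size, idx = 27, 5          # placeholder values for illustration
x = np.zeros((vocab_size, 1))    # column vector of zeros
x[idx] = 1                       # set the sampled character's position to 1
```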
End of explanation
"""
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = None
# Backpropagate through time (≈1 line)
gradients, a = None
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = None
# Update parameters (≈1 line)
parameters = None
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
"""
Explanation: Expected output:
<table>
<tr>
<td>
**list of sampled indices:**
</td>
<td>
[18, 2, 26, 0]
</td>
</tr><tr>
<td>
**list of sampled characters:**
</td>
<td>
['r', 'b', 'z', '\n']
</td>
</tr>
</table>
3 - Building the language model
It is time to build the character-level language model for text generation.
3.1 - Gradient descent
In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:
Forward propagate through the RNN to compute the loss
Backward propagate through time to compute the gradients of the loss with respect to the parameters
Clip the gradients if necessary
Update your parameters using gradient descent
Exercise: Implement this optimization process (one step of stochastic gradient descent).
We provide you with the following functions:
```python
def rnn_forward(X, Y, a_prev, parameters):
""" Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in the backpropagation."""
....
return loss, cache
def rnn_backward(X, Y, parameters, cache):
""" Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states."""
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
""" Updates parameters using the Gradient Descent Update Rule."""
...
return parameters
```
End of explanation
"""
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (≈ 2 lines)
index = None
X = None
Y = None
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = None
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
"""
Explanation: Expected output:
<table>
<tr>
<td>
**Loss **
</td>
<td>
126.503975722
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]**
</td>
<td>
0.194709315347
</td>
<tr>
<td>
**np.argmax(gradients["dWax"])**
</td>
<td> 93
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td> -0.007773876032
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td> [-0.06809825]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>[ 0.01538192]
</td>
</tr>
<tr>
<td>
**a_last[4]**
</td>
<td> [-1.]
</td>
</tr>
</table>
3.2 - Training the model
Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 2000 iterations of stochastic gradient descent, you will sample a handful of names (7 by default, as set by dino_names) to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.
Exercise: Follow the instructions and implement model(). When examples[index] contains one dinosaur name (string), to create an example (X, Y), you can use this:
python
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
Note that we use: index= j % len(examples), where j = 1....num_iterations, to make sure that examples[index] is always a valid statement (index is smaller than len(examples)).
The first entry of X being None will be interpreted by rnn_forward() as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that Y is equal to X but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
End of explanation
"""
parameters = model(data, ix_to_char, char_to_ix)
"""
Explanation: Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
End of explanation
"""
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
"""
Explanation: Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like maconucon, marloralus and macingsersaurus. Your model hopefully also learned that dinosaur names tend to end in saurus, don, aura, tor, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, dromaeosauroides is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment had used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
<img src="images/mangosaurus.jpeg" style="width:250;height:300px;">
4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative.
A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence what should be a different character much later in this sequence. These long term dependencies were less important with dinosaur names, since the names were quite short.
<img src="images/shakespeare.jpg" style="width:500;height:400px;">
<caption><center> Let's become poets! </center></caption>
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
End of explanation
"""
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
"""
Explanation: To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets".
Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt you for an input (<40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
End of explanation
"""
|
SylvainCorlay/bqplot
|
examples/Tutorials/Object Model.ipynb
|
apache-2.0
|
from bqplot import (LinearScale, Axis, Figure, OrdinalScale,
LinearScale, Bars, Lines, Scatter)
# first, let's create two vectors x and y to plot using a Lines mark
import numpy as np
x = np.linspace(-10, 10, 100)
y = np.sin(x)
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label='X')
yax = Axis(scale=ys, orientation='vertical', label='Y')
# 3. Create a Lines mark by passing in the scales
# note that Lines object is stored in `line` which can be used later to update the plot
line = Lines(x=x, y=y, scales={'x': xs, 'y': ys})
# 4. Create a Figure object by assembling marks and axes
fig = Figure(marks=[line], axes=[xax, yax], title='Simple Line Chart')
# 5. Render the figure using display or just as is
fig
"""
Explanation: Object Model
bqplot is based on the Grammar of Graphics paradigm. The Object Model in bqplot gives the user full flexibility to build custom plots. This means the API is verbose but fully customizable.
The following are the steps to build a Figure in bqplot using the Object Model:
Build the scales for x and y quantities using the Scale classes (Scales map the data into pixels in the figure)
Build the marks using the Mark classes. Marks represent the core plotting objects (lines, scatter, bars, pies etc.). Marks take the scale objects created in step 1 as arguments
Build the axes for x and y scales
Finally, create a figure using the Figure class. Figure takes marks and axes as inputs. The Figure object is a widget (it inherits from DOMWidget) and can be rendered like any other Jupyter widget
Let's look at a simple example to understand these concepts:
End of explanation
"""
# first, let's create two vectors x and y to plot a bar chart
x = list('ABCDE')
y = np.random.rand(5)
# 1. Create the scales
xs = OrdinalScale() # note the use of ordinal scale to represent categorical data
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label='X', grid_lines='none') # no grid lines needed for x
yax = Axis(scale=ys, orientation='vertical', label='Y', tick_format='.0%') # note the use of tick_format to format ticks
# 3. Create a Bars mark by passing in the scales
# note that Bars object is stored in `bar` object which can be used later to update the plot
bar = Bars(x=x, y=y, scales={'x': xs, 'y': ys}, padding=.2)
# 4. Create a Figure object by assembling marks and axes
Figure(marks=[bar], axes=[xax, yax], title='Simple Bar Chart')
"""
Explanation: For creating other marks (like scatter, pie, bars, etc.), only step 3 needs to be changed. Let's look at a simple example that creates a bar chart:
End of explanation
"""
# first, let's create two vectors x and y
import numpy as np
x = np.linspace(-10, 10, 25)
y = 3 * x + 5
y_noise = y + 10 * np.random.randn(25) # add some random noise to y
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label='X')
yax = Axis(scale=ys, orientation='vertical', label='Y')
# 3. Create a Lines and Scatter marks by passing in the scales
# additional attributes (stroke_width, colors etc.) can be passed as attributes to the mark objects as needed
line = Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['green'], stroke_width=3)
scatter = Scatter(x=x, y=y_noise, scales={'x': xs, 'y': ys}, colors=['red'], stroke='black')
# 4. Create a Figure object by assembling marks and axes
# pass both the marks (line and scatter) as a list to the marks attribute
Figure(marks=[line, scatter], axes=[xax, yax], title='Scatter and Line')
"""
Explanation: Multiple marks can be rendered in a figure. It's as easy as passing a list of marks when constructing the Figure object
End of explanation
"""
|
eneskemalergin/OldBlog
|
_oldnotebooks/Inferential_Statistics.ipynb
|
mit
|
# Calling the binom module from scipy stats package
from scipy.stats import binom
# Plotting Function
import matplotlib.pyplot as plt
%matplotlib inline
x = list(range(7))
n, p = 6, 0.5
rv = binom(n, p)
plt.vlines(x, 0, rv.pmf(x), colors='r', linestyles='-', lw=1, label='Probability')
plt.legend(loc='best', frameon=False)
plt.xlabel("No. of instances")
plt.ylabel("Probability")
plt.show()
x = range(1001)
n, p = 1000, 0.4
rv = binom(n, p)
plt.vlines(x,0,rv.pmf(x), colors='g', linestyles='-', lw=1, label='Probability')
plt.legend(loc='best', frameon=True)
plt.xlabel("No. of instances")
plt.ylabel("Probability")
plt.show()
"""
Explanation: Inferential Statistics
Let's say you have collected the height of 1,000 people living in Hong Kong. The mean of their height would be descriptive statistics, but their mean height does not indicate that it's the average height of the whole of Hong Kong. Here, inferential statistics will help us in determining what the average height of the whole of Hong Kong would be, which is described in depth in this chapter.
Inferential statistics is all about describing the larger picture of the analysis with a limited set of data and deriving conclusions from it.
Distribution Types
Normal Distribution
The most common distribution, also known as the "Gaussian curve" or "bell curve".
The numbers in the plot are the number of standard deviations from the mean, which is zero.
A normal distribution from a binomial distribution:
Let's take a coin and flip it. The probability of getting a head or a tail is 50%. If you take the same coin and flip it six times, the probability of getting a head three times can be computed using the following formula:
$$
P(x) = \frac{n!}{x!(n-x)!}p^{x}q^{n-x}
$$
In the preceding formula, n is the number of times the coin is flipped, p is the probability of success, q is (1 - p), which is the probability of failure, and x is the number of successes desired.
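As a quick worked example of the formula (illustrative numbers only), the probability of getting exactly 3 heads in 6 fair flips can be computed by hand and checked against SciPy:
```python
from math import factorial
from scipy.stats import binom

n, x, p = 6, 3, 0.5
manual = factorial(n) / (factorial(x) * factorial(n - x)) * p**x * (1 - p)**(n - x)
print(manual)              # 0.3125
print(binom.pmf(x, n, p))  # same value from scipy
```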
End of explanation
"""
from scipy.stats import bernoulli
bernoulli.rvs(0.7, size=100)
"""
Explanation: Poisson Distribution
Models the number of independent occurrences within a fixed interval.
Used for count-based distributions.
$$
f(k;\lambda)=Pr(X = k)=\frac{\lambda^{k}e^{-\lambda}}{k!}
$$
Here, e is the Euler's number, k is the number of occurrences for which the probability is going to be determined, and lambda is the mean number of occurrences.
Example:
Let's understand this with an example. The number of cars that pass through a bridge in an hour is 20. What would be the probability of 23 cars passing through the bridge in an hour?
```Python
from scipy.stats import poisson
rv = poisson(20)
rv.pmf(23)
Result: 0.066881473662401172
```
With the Poisson function, we define the mean value, which is 20 cars. The rv.pmf function gives the probability, which is around 6%, that 23 cars will pass the bridge.
Bernoulli Distribution
Models an experiment with two possible outcomes: success or failure.
Success has a probability of p, and failure has a probability of 1 - p. A random variable that takes a 1 value in case of a success and 0 in case of failure is called a Bernoulli distribution. The probability distribution function can be written as:
$$
P(n)=\begin{cases}1-p & \text{for } n = 0 \\ p & \text{for } n = 1\end{cases}
$$
It can also be written like this:
$$
P(n)=p^n(1-p)^{1-n}
$$
The distribution function can be written like this:
$$
D(n) = \begin{cases}1-p & \text{for } n=0 \\ 1 & \text{for } n=1\end{cases}
$$
Example: Voting in an election is a good example of the Bernoulli distribution. A Bernoulli distribution can be generated using the bernoulli.rvs() function of the SciPy package.
End of explanation
"""
import numpy as np
class_score = np.random.normal(50, 10, 60).round()
plt.hist(class_score, 30, normed=True) # Number of breaks is 30
plt.show()
"""
Explanation: z-score
Expresses a value as the number of standard deviations it lies away from the mean.
$$
z = \frac{X - \mu}{\sigma}
$$
Here, X is the value in the distribution, μ is the mean of the distribution, and σ is the
standard deviation of the distribution.
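A tiny worked example of the formula, with illustrative numbers:
```python
# a value of 60 in a distribution with mean 50 and standard deviation 10
X, mu, sigma = 60, 50, 10
z = (X - mu) / sigma
print(z)  # 1.0, i.e. one standard deviation above the mean
```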
Example: A classroom has 60 students in it and they have just got their mathematics examination score. We simulate the score of these 60 students with a normal distribution using the following command:
End of explanation
"""
from scipy import stats
stats.zscore(class_score)
"""
Explanation: The score of each student can be converted to a z-score using the following functions:
End of explanation
"""
prob = 1 - stats.norm.cdf(1.334)
prob
"""
Explanation: So, a student with a score of 60 out of 100 has a z-score of 1.334. To make more sense of the z-score, we'll use the standard normal table.
This table helps in determining the probability of a score.
We would like to know what the probability of getting a score above 60 would be.
The standard normal table can help us in determining the probability of the occurrence of the score, but we do not have to perform the cumbersome task of finding the value by looking through the table and finding the probability. This task is made simple by the cdf function, which is the cumulative distribution function:
End of explanation
"""
stats.norm.ppf(0.80)
"""
Explanation: The cdf function gives the probability of getting values up to the z-score of 1.334, and subtracting it from one gives us the probability of getting a z-score above it. In other words, 0.09 is the probability of getting marks above 60.
Let's ask another question, "how many students made it to the top 20% of the class?"
Now, to get the z-score above which the top 20% of marks lie, we can use the ppf function in SciPy:
End of explanation
"""
(0.84 * class_score.std()) + class_score.mean()
"""
Explanation: The z-score at which the top 20% of marks begin is roughly 0.84; the following converts it back to a score in the distribution:
End of explanation
"""
zscore = ( 68 - class_score.mean() ) / class_score.std()
zscore
"""
Explanation: We multiply the z-score with the standard deviation and then add the result with the mean of the distribution. This helps in converting the z-score to a value in the distribution. The 55.83 marks means that students who have marks more than this are in the top 20% of the distribution.
The z-score is an essential concept in statistics, which is widely used. Now you can understand that it is basically used in standardizing any distribution so that it can be compared or inferences can be derived from it.
### p-value
A p-value is the probability of obtaining a result at least as extreme as the one observed, assuming that the null hypothesis is true.
If the p-value is equal to or less than the significance level (α), then the null hypothesis is inconsistent with the data and needs to be rejected.
Let's understand this concept with an example where the null hypothesis is that it is common for students to score 68 marks in mathematics.
Let's define the significance level at 5%. If the p-value is less than 5%, then the null hypothesis is rejected and it is not common to score 68 marks in mathematics.
Let's get the z-score of 68 marks:
End of explanation
"""
prob = 1 - stats.norm.cdf(zscore)
prob
"""
Explanation:
End of explanation
"""
zscore = (53-50)/3.0
zscore
prob = stats.norm.cdf(zscore)
prob
"""
Explanation: One-tailed and two-tailed tests
The example in the previous section was an instance of a one-tailed test where the null hypothesis is rejected or accepted based on one direction of the normal distribution.
In a two-tailed test, both the tails of the null hypothesis are used to test the hypothesis.
In a two-tailed test, when a significance level of 5% is used, then it is distributed equally in the both directions, that is, 2.5% of it in one direction and 2.5% in the other direction.
Let's understand this with an example. The mean score of the mathematics exam at a national level is 60 marks and the standard deviation is 3 marks.
The mean marks of a class are 53. The null hypothesis is that the mean marks of the class are similar to the national average. Let's test this hypothesis by first getting the z-score of the class mean:
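Since the test is two-tailed, the p-value should take both tails of the distribution into account; a sketch using SciPy (reusing the z-score computed in the accompanying cell):
```python
from scipy import stats

zscore = (53 - 50) / 3.0
p_two_tailed = 2 * (1 - stats.norm.cdf(abs(zscore)))
print(p_two_tailed)
```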
End of explanation
"""
height_data = np.array([ 186.0, 180.0, 195.0, 189.0, 191.0,
177.0, 161.0, 177.0, 192.0, 182.0,
185.0, 192.0, 173.0, 172.0, 191.0,
184.0, 193.0, 182.0, 190.0, 185.0,
181.0,188.0, 179.0, 188.0, 170.0, 179.0,
180.0, 189.0, 188.0, 185.0, 170.0,
197.0, 187.0,182.0, 173.0, 179.0,184.0,
177.0, 190.0, 174.0, 203.0, 206.0, 173.0,
169.0, 178.0,201.0, 198.0, 166.0,171.0, 180.0])
plt.hist(height_data, 30, normed=True, color='r')
plt.show()
# The mean of the distribution
height_data.mean()
"""
Explanation: Type 1 and Type 2 errors
Type 1 error is a type of error that occurs when there is a rejection of the null hypothesis when it is actually true. This kind of error is also called an error of the first kind and is equivalent to false positives.
Let's understand this concept using an example. There is a new drug that is being developed and it needs to be tested on whether it is effective in combating diseases. The null hypothesis is that it is not effective in combating diseases.
The significance level is kept at 5% so that the null hypothesis can be accepted confidently 95% of the time. However, 5% of the time, we'll reject the null hypothesis even though it should have been accepted, which means that even though the drug is ineffective, it is assumed to be effective.
The Type 1 error is controlled by controlling the significance level, which is alpha. Alpha is the highest probability to have a Type 1 error. The lower the alpha, the lower will be the Type 1 error.
The Type 2 error is the kind of error that occurs when we do not reject a null hypothesis that is false. This error is also called the error of the second kind and is equivalent to a false negative.
This kind of error occurs in a drug scenario when the drug is assumed to be ineffective but is actually it is effective.
These errors can be controlled one at a time: if one of the errors is lowered, the other one increases. Which error should be reduced depends on the use case and the problem statement the analysis is trying to address. In the case of this drug scenario, typically, the Type 1 error should be lowered, because it is better to ship a drug that is confidently effective.
Confidence Interval
A confidence interval is a type of interval statistics for a population parameter. The confidence interval helps in determining the interval at which the population mean can be defined.
Let's try to understand this concept with an example. Let's take a sample of the heights of men in Kenya and determine, with a 95% confidence interval, the average height of Kenyan men at a national level.
Let's take 50 men and their height in centimeters:
End of explanation
"""
stats.sem(height_data)
"""
Explanation: So, the average height of a man from the sample is 183.4 cm.
To determine the confidence interval, we'll now define the standard error of the mean.
The standard error of the mean is the standard deviation of the sample mean, which measures how far the sample mean is expected to deviate from the population mean. It is defined using the following formula:
$$
SE_{\overline{x}} = \frac{s}{\sqrt{n}}
$$
Here, s is the standard deviation of the sample, and n is the number of elements of the sample.
This can be calculated using the sem() function of the SciPy package:
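The formula can also be evaluated directly as a cross-check (a sketch reusing the height_data array defined earlier):
```python
import numpy as np

se = np.std(height_data, ddof=1) / np.sqrt(len(height_data))
print(se)   # should match stats.sem(height_data)
```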
End of explanation
"""
average_height = []
for i in range(30):
# Create a sample of 50 with mean 183 and standard deviation 10
sample50 = np.random.normal(183, 10, 50).round()
# Add the mean on sample of 50 into average_height list
average_height.append(sample50.mean())
# Plot it with 10 bars and normalization
plt.hist(average_height, 10, normed=True)
plt.show()
"""
Explanation: So, there is a standard error of the mean of 1.38 cm. The lower and upper limit of the confidence interval can be determined by using the following formula:
Upper/Lower limit = mean(height) ± z * SE(mean), where z = 1.96 for a 95% interval
For the upper limit:
183.24 + (1.96 * 1.38) = 185.94
For the lower limit:
183.24 - (1.96 * 1.38) = 180.53
A range of 1.96 standard errors on either side of the mean covers 95% of the area under the normal distribution.
We can confidently say that the population mean lies between 180.53 cm and 185.94 cm of height.
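The same interval can be computed in code (a sketch reusing height_data and stats.sem from the earlier cells):
```python
from scipy import stats

se = stats.sem(height_data)
lower = height_data.mean() - 1.96 * se
upper = height_data.mean() + 1.96 * se
print(lower, upper)
```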
New Example: Let's assume we take a sample of 50 people, record their height, and then repeat this process 30 times. We can then plot the averages of each sample and observe the distribution.
End of explanation
"""
average_height = []
for i in range(30):
# Create a sample of 50 with mean 183 and standard deviation 10
sample1000 = np.random.normal(183, 10, 1000).round()
average_height.append(sample1000.mean())
plt.hist(average_height, 10, normed=True)
plt.show()
"""
Explanation: You can observe that the sample means range from about 180 to 187 cm when we simulate the average height of samples of 50 men, repeated 30 times.
Let's see what happens when we sample 1000 men and repeat the process 30 times:
End of explanation
"""
mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2, 17.8,
16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4, 33.9, 21.5, 15.5,
15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,19.7, 15.0, 21.4]
hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 180, 180,
205, 215, 230, 66, 52, 65, 97, 150, 150, 245, 175, 66, 91, 113, 264,
175, 335, 109]
stats.pearsonr(mpg,hp)
"""
Explanation: As you can see, the sample means now vary only from about 182.4 cm to 183.5 cm. What does this mean?
It means that as the sample size increases, the standard error of the mean decreases, which also means that the confidence interval becomes narrower, and we can tell with certainty the interval that the population mean would lie on.
Correlation
In statistics, correlation quantifies the strength of the relationship between two random variables. The most commonly used correlation is the Pearson correlation, which is defined by the following:
$$
\rho_{X,Y} = \frac{cov(X,Y)}{\sigma_{x}\sigma_{y}} = \frac{E[(X - \mu_{X})(Y - \mu_{Y})]}{\sigma_{x}\sigma_{y}}
$$
The preceding formula defines the Pearson correlation as the covariance between X and Y divided by the product of the standard deviations of X and Y, or, equivalently, as the expected value of the product of the deviations of X and Y from their respective means, divided by the product of their standard deviations. Let's understand this with an example. Let's take the mileage and horsepower of various cars and see if there is a relation between the two. This can be achieved using the pearsonr function in the SciPy package:
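The formula can be checked by hand with NumPy (a sketch using the mpg and hp lists from the accompanying cell):
```python
import numpy as np

cov = np.cov(mpg, hp)[0, 1]                            # sample covariance of X and Y
r = cov / (np.std(mpg, ddof=1) * np.std(hp, ddof=1))   # divide by the two standard deviations
print(r)   # matches the first value returned by stats.pearsonr
```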
End of explanation
"""
plt.scatter(mpg, hp, color='r')
plt.show()
"""
Explanation: The first value of the output gives the correlation between the horsepower and the mileage
The second value gives the p-value.
So, the first value tells us that it is highly negatively correlated and the p-value tells us that there is significant correlation between them:
End of explanation
"""
stats.spearmanr(mpg, hp)
"""
Explanation: Let's look into another correlation called the Spearman correlation. The Spearman correlation applies to the rank order of the values, so it captures the monotonic relationship between the two distributions. It is useful for ordinal data (data that has an order, such as movie ratings or grades in class) and is not affected by outliers.
Let's get the Spearman correlation between the miles per gallon and horsepower. This can be achieved using the spearmanr() function in the SciPy package:
End of explanation
"""
mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8,
19.2, 17.8, 16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4,
33.9, 21.5, 15.5, 15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,
19.7, 15.0, 21.4, 120, 3]
hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180,
180, 180, 205, 215, 230, 66, 52, 65, 97, 150, 150, 245,
175, 66, 91, 113, 264, 175, 335, 109, 30, 600]
plt.scatter(mpg, hp)
plt.show()
"""
Explanation: We can see that the Spearman correlation is -0.89 and the p-value is significant.
Let's do an experiment in which we introduce a few outlier values in the data and see how the Pearson and Spearman correlation gets affected:
End of explanation
"""
stats.pearsonr(mpg, hp)
stats.spearmanr(mpg, hp)
"""
Explanation: From the plot, you can clearly make out the outlier values. Let's see how the correlations are affected for both the Pearson and Spearman correlations
End of explanation
"""
class1_score = np.array([45.0, 40.0, 49.0, 52.0, 54.0, 64.0, 36.0, 41.0, 42.0, 34.0])
class2_score = np.array([75.0, 85.0, 53.0, 70.0, 72.0, 93.0, 61.0, 65.0, 65.0, 72.0])
stats.ttest_ind(class1_score,class2_score)
"""
Explanation: We can clearly see that the Pearson correlation has been drastically affected by the outliers, dropping in magnitude from a correlation of 0.89 to 0.47.
The Spearman correlation did not get affected much as it is based on the order rather than the actual value in the data.
Z-test vs T-test
We have already done a few Z-tests before where we validated our null hypothesis.
A T-distribution is similar to a Z-distribution: it is centered at zero and has a basic bell shape, but it's shorter and flatter around the center than the Z-distribution.
The T-distribution's standard deviation is usually proportionally larger than the Z-distribution's, which is why you see fatter tails on each side.
The t distribution is usually used to analyze the population when the sample is small.
The Z-test is used to compare the population mean against a sample or compare the population mean of two distributions with a sample size greater than 30. An example of a Z-test would be comparing the heights of men from different ethnicity groups.
The T-test is used to compare the population mean against a sample, or compare the population mean of two distributions with a sample size less than 30, and when you don't know the population's standard deviation.
Let's do a T-test on two classes that are given a mathematics test and have 10 students in each class:
To perform the T-test, we can use the ttest_ind() function in the SciPy package:
End of explanation
"""
expected = np.array([6,6,6,6,6,6])
observed = np.array([7, 5, 3, 9, 6, 6])
"""
Explanation: The first value in the output is the calculated t-statistic, whereas the second value is the p-value, which shows that the two distributions are not identical.
The F distribution
The F distribution is also known as Snedecor's F distribution or the Fisher–Snedecor distribution.
An f statistic is given by the following formula:
$$
f = \frac{s_1^2/\sigma_1^2}{s_2^2/\sigma_2^2}
$$
Here, $s_1$ is the standard deviation of sample 1 of size $n_1$, $s_2$ is the standard deviation of sample 2 of size $n_2$, $\sigma_1$ is the population standard deviation corresponding to sample 1, and $\sigma_2$ is the population standard deviation corresponding to sample 2.
The distribution of all the possible values of the f statistic is called the F distribution; d1 and d2 denote its two degrees of freedom.
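The F distribution itself is available in scipy.stats; a small sketch of its probability density for a couple of illustrative degree-of-freedom pairs:
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import f

x = np.linspace(0.01, 5, 200)
for d1, d2 in [(5, 10), (20, 20)]:
    plt.plot(x, f.pdf(x, d1, d2), label='d1=%d, d2=%d' % (d1, d2))
plt.legend()
plt.show()
```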
The chi-square distribution
The chi-square statistics are defined by the following formula:
$$
X^2 = [(n-1)*s^2]/\sigma^2
$$
Here, n is the size of the sample, s is the standard deviation of the sample, and σ is the standard deviation of the population.
If we repeatedly take samples and define the chi-square statistics, then we can form a chi-square distribution, which is defined by the following probability density function:
$$
Y = Y_0 \cdot (\chi^2)^{(v/2-1)} \cdot e^{-\chi^2/2}
$$
Here, $Y_0$ is a constant that depends on the number of degrees of freedom, $\chi^2$ is the chi-square statistic, $v = n - 1$ is the number of degrees of freedom, and e is a constant equal to the base of the natural logarithm system.
$Y_0$ is defined so that the area under the chi-square curve is equal to one.
The Chi-square test can be used to test whether the observed data differs significantly from the expected data. Let's take the example of a die. The die is rolled 36 times, and the probability of each face turning up is 1/6. So, the expected and observed distributions are as follows:
End of explanation
"""
stats.chisquare(observed,expected)
"""
Explanation: The null hypothesis in the chi-square test is that the observed value is similar to the
expected value.
The chi-square can be performed using the chisquare function in the SciPy package:
End of explanation
"""
men_women = np.array([[100, 120, 60],[350, 200, 90]])
stats.chi2_contingency(men_women)
"""
Explanation: The first value is the chi-square value and the second value is the p-value, which is very high. This means that the null hypothesis is valid and the observed value is similar to the expected value.
The chi-square test of independence is a statistical test used to determine whether two categorical variables are independent of each other or not.
Let's take the following example to see whether there is a preference for a book based on the gender of people reading it.
The Chi-Square test of independence can be performed using the chi2_contingency function in the SciPy package:
End of explanation
"""
country1 = np.array([ 176., 201., 172., 179., 180., 188., 187., 184., 171.,
181., 192., 187., 178., 178., 180., 199., 185., 176.,
207., 177., 160., 174., 176., 192., 189., 187., 183.,
180., 181., 200., 190., 187., 175., 179., 181., 183.,
171., 181., 190., 186., 185., 188., 201., 192., 188.,
181., 172., 191., 201., 170., 170., 192., 185., 167.,
178., 179., 167., 183., 200., 185.])
country2 = np.array([177., 165., 185., 187., 175., 172.,179., 192.,169.,
167., 162., 165., 188., 194., 187., 175., 163., 178.,
197., 172., 175., 185., 176., 171., 172., 186., 168.,
178., 191., 192., 175., 189., 178., 181., 170., 182.,
166., 189., 196., 192., 189., 171., 185., 198., 181.,
167., 184., 179., 178., 193., 179., 177., 181., 174.,
171., 184., 156., 180., 181., 187.])
country3 = np.array([ 191.,173., 175., 200., 190.,191.,185.,190.,184.,190.,
191., 184., 167., 194., 195., 174., 171., 191.,
174., 177., 182., 184., 176., 180., 181., 186., 179.,
176., 186., 176., 184., 194., 179., 171., 174., 174.,
182., 198., 180., 178., 200., 200., 174., 202., 176.,
180., 163., 159., 194., 192., 163., 194., 183., 190.,
186., 178., 182., 174., 178., 182.])
stats.f_oneway(country1,country2,country3)
"""
Explanation: The first value is the chi-square value,
The second value is the p-value, which is very small, and means that there is an
association between the gender of people and the genre of the book they read.
The third value is the degrees of freedom.
The fourth value, which is an array, is the expected frequencies.
Anova
Analysis of Variance (ANOVA) is a statistical method used to test differences between two or more means. This test basically compares the means between groups and determines whether any of these means are significantly different from each other:
$$
H_0 : \mu_1 = \mu_2 = \mu_3 = ... = \mu_k
$$
ANOVA tells you whether at least one of the group means is significantly different from the others (though not which one). Let's take the heights of men from three different countries and see if their mean heights are significantly different:
End of explanation
"""
|
gaoshuming/udacity
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you will classify images from the CIFAR-10 dataset. The dataset contains airplanes, cats, dogs and other objects. You need to preprocess these images and then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You will apply what you have learned to build convolutional, max pooling, dropout and fully connected layers. Finally, you will see the neural network's predictions on sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset (Python version).
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 50
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, and so on. Each batch contains the labels and images for one of the following classes:
airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck
Understanding a dataset is also a required step before making predictions on the data. You can explore the code cell below by changing batch_id and sample_id. batch_id is the ID of a batch within the dataset (1 to 5). sample_id is the ID of an image and label pair within that batch.
Ask yourself: "What are all the possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Thinking about questions like these will help you preprocess the data and make better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
return (x / 255)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement the Preprocessing Functions
Normalize
In the cell below, implement the normalize function, which takes in image data x and returns it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The returned object should have the same shape as x.
End of explanation
"""
import numpy as np
from sklearn import preprocessing
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return np.eye(10)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot Encode
Just like the previous code cell, you will implement a function for preprocessing. This time, you will implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as one-hot encoded Numpy arrays. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value every time one_hot_encode is called. Make sure to save the map of encodings outside the function.
Hint: don't reinvent the wheel.
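One way to follow that hint is a sketch using scikit-learn, which this notebook already imports as preprocessing; fitting the binarizer once outside the function keeps the encoding consistent across calls:
python
lb = preprocessing.LabelBinarizer()
lb.fit(range(10))  # fix the mapping for labels 0-9 once, outside the function
def one_hot_encode(x):
    return lb.transform(x)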
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize the Data
As you saw while exploring the data, the order of the samples is already randomized. Randomizing them again wouldn't hurt, but it isn't necessary for this dataset.
Preprocess All the Data and Save It
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Checkpoint
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart it, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, *image_shape), name = "x")
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
    return tf.placeholder(tf.float32, shape = (None, n_classes), name = "y")  # float32 so the labels match the logits dtype in the cross-entropy loss
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = None, name = "keep_prob")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the Network
For this neural network, you need to build each layer into a function. Most of the code you've seen so far has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This lets us give you better feedback and catch simple mistakes with our unittests before you submit the project.
Note: If you find it hard to dedicate enough time to this course each week, we've provided a small shortcut for this project. For the next couple of problems, you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except for the "Convolution and Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's layers, so it's easy to pick up.
However, if you want to get the most out of this course, try to solve all the problems yourself without using anything from the TF Layers package. You can still use classes from other packages that happen to have the same names as ones in TF Layers! For example, instead of the TF Layers version of the conv2d class, tf.layers.conv2d, you can use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, the one-hot encoded labels, and the dropout keep probability. Implement the following functions:
Implement neural_net_image_input
Return a TF Placeholder
Set the shape using image_shape, with the batch size set to None
Name the TensorFlow placeholder "x" using the TensorFlow name parameter in TF Placeholder
Implement neural_net_label_input
Return a TF Placeholder
Set the shape using n_classes, with the batch size set to None
Name the TensorFlow placeholder "y" using the TensorFlow name parameter in TF Placeholder
Implement neural_net_keep_prob_input
Return a TF Placeholder for the dropout keep probability
Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in TF Placeholder
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
input_chanel = int(x_tensor.shape[3])
output_chanel = conv_num_outputs
    weight_shape = (*conv_ksize, input_chanel, output_chanel)  # (filter height, filter width, in channels, out channels)
    weight = tf.Variable(tf.random_normal(weight_shape, stddev = 0.1))  # convolution weights
    bias = tf.Variable(tf.zeros(output_chanel))  # bias term
    l_active = tf.nn.conv2d(x_tensor, weight, (1, *conv_strides, 1), 'SAME')
    l_active = tf.nn.bias_add(l_active, bias)
mx_layer = tf.nn.relu(l_active)
return tf.nn.max_pool(mx_layer, (1, *pool_ksize, 1), (1, *pool_strides, 1), 'VALID')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution and then max pooling:
Create the weight and bias using conv_ksize, conv_num_outputs, and the shape of x_tensor.
Apply a convolution to x_tensor using the weight and conv_strides.
We recommend you use the suggested padding, but you're welcome to use any other padding.
Add the bias.
Add a nonlinear activation to the convolution.
Apply max pooling using pool_ksize and pool_strides.
We recommend you use the suggested padding, but you're welcome to use any other padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
from functools import reduce
from operator import mul
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
_, *image_size = x_tensor.get_shape().as_list()
#print(*image_size)
return tf.reshape(x_tensor, (-1, reduce(mul, image_size)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be of shape (Batch Size, Flattened Image Size). Shortcut option: you can use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For a bigger challenge, only use other TensorFlow packages.
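For reference, a sketch of the shortcut version mentioned here (assuming the TF 1.x contrib layers package is available):
python
import tensorflow as tf
def flatten(x_tensor):
    # let the library compute the flattened image size for us
    return tf.contrib.layers.flatten(x_tensor)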
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
num_input = x_tensor.get_shape().as_list()[1]
weight_shape = (num_input, num_outputs)
#print(weight_shape)
weight = tf.Variable(tf.truncated_normal(weight_shape, stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
activation = tf.nn.bias_add(tf.matmul(x_tensor, weight), bias)
return tf.nn.relu(activation)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For a bigger challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
num_input = x_tensor.get_shape().as_list()[1] #not 0
weight_shape = (num_input, num_outputs)
weight = tf.Variable(tf.truncated_normal(weight_shape, stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.nn.bias_add(tf.matmul(x_tensor,weight),bias)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For a bigger challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this layer.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(x, 64, (3, 3), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, keep_prob)
x = conv2d_maxpool(x, 128, (3, 3), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, keep_prob)
# x has shape (batch, 8, 8, 128)
x = conv2d_maxpool(x, 256, (3, 3), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 512)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
# TODO: return output
return output(x, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create the Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to build this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers of the model using keep_prob
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
x for image input
y for labels
keep_prob for the dropout keep probability
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
global valid_features, valid_labels
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
loss = session.run( cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
prt = 'Loss: {:.4f} Accuracy: {:.4f}'
print(prt.format(loss, validation_accuracy, prec=3))
"""
Explanation: Show Stats
Implement the function print_stats to print the loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate the validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 200
batch_size = 128
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine's memory allows. Most people set it to a common memory size:
64
128
256
...
Set keep_probability to the probability of keeping a node when using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches, let's start with a single batch. This saves time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you're getting decent accuracy on a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. Your accuracy should be greater than 50%. If it isn't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
suryaavala/stockprediction
|
Crypto/btc/PrepareData - Technical Indicators.ipynb
|
mit
|
import pandas as pd
import matplotlib.pyplot as plt
def MACD(df,period1,period2,periodSignal):
    EMA1 = pd.DataFrame.ewm(df,span=period1).mean()
    EMA2 = pd.DataFrame.ewm(df,span=period2).mean()
    MACD = EMA1-EMA2
    Signal = pd.DataFrame.ewm(MACD,span=periodSignal).mean()  # use span for the signal EMA as well, matching the fast/slow EMAs
    Histogram = MACD-Signal
    return Histogram
def stochastics_oscillator(df,period):
l, h = pd.DataFrame.rolling(df, period).min(), pd.DataFrame.rolling(df, period).max()
k = 100 * (df - l) / (h - l)
return k
def ATR(df,period):
    '''
    True Range: the largest of
    - Method A: Current High less the current Low
    - Method B: Current High less the previous Close (absolute value)
    - Method C: Current Low less the previous Close (absolute value)
    '''
df['H-L'] = abs(df['High']-df['Low'])
df['H-PC'] = abs(df['High']-df['Price'].shift(1))
df['L-PC'] = abs(df['Low']-df['Price'].shift(1))
TR = df[['H-L','H-PC','L-PC']].max(axis=1)
return TR.to_frame()
"""
Explanation: The model in theory
We are going to use 4 features: The price itself and three extra technical indicators.
- MACD (Trend)
- Stochastics (Momentum)
- Average True Range (Volume)
Functions
Exponential Moving Average: Is a type of infinite impulse response filter that applies weighting factors which decrease exponentially. The weighting for each older datum decreases exponentially, never reaching zero.
<img src="https://www.bionicturtle.com/images/uploads/WindowsLiveWriterGARCHapproachandExponentialsmoothingEWMA_863image_16.png">
MACD: The Moving Average Convergence/Divergence oscillator (MACD) is one of the simplest and most effective momentum indicators available. The MACD turns two trend-following indicators, moving averages, into a momentum oscillator by subtracting the longer moving average from the shorter moving average.
<img src="http://i68.tinypic.com/289ie1l.png">
Stochastics oscillator: The Stochastic Oscillator is a momentum indicator that shows the location of the close relative to the high-low range over a set number of periods.
<img src="http://i66.tinypic.com/2vam3uo.png">
Average True Range: An indicator that measures volatility (NOT price direction). It is the largest of:
- Method A: Current High less the current Low
- Method B: Current High less the previous Close (absolute value)
- Method C: Current Low less the previous Close (absolute value)
<img src="http://d.stockcharts.com/school/data/media/chart_school/technical_indicators_and_overlays/average_true_range_atr/atr-1-trexam.png" width="400px">
Calculation:
<img src="http://i68.tinypic.com/e0kggi.png">
End of explanation
"""
df = pd.read_csv('BTCUSD.csv',usecols=[1,2,3,4])
df = df.iloc[::-1]
df["Price"] = (df["Price"].str.split()).apply(lambda x: float(x[0].replace(',', '')))
df["Open"] = (df["Open"].str.split()).apply(lambda x: float(x[0].replace(',', '')))
df["High"] = (df["High"].str.split()).apply(lambda x: float(x[0].replace(',', '')))
df["Low"] = (df["Low"].str.split()).apply(lambda x: float(x[0].replace(',', '')))
dfPrices = pd.read_csv('BTCUSD.csv',usecols=[1])
dfPrices = dfPrices.iloc[::-1]
dfPrices["Price"] = (dfPrices["Price"].str.split()).apply(lambda x: float(x[0].replace(',', '')))
dfPrices.head(2)
"""
Explanation: Read data
End of explanation
"""
price = dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)].as_matrix().ravel()
"""
Explanation: Plot
End of explanation
"""
prices = dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)].as_matrix().ravel()
plt.figure(figsize=(25,7))
plt.plot(prices,label='Test',color='black')
plt.title('Price')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: Price
End of explanation
"""
macd = MACD(dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)],12,26,9)
plt.figure(figsize=(25,7))
plt.plot(macd,label='macd',color='red')
plt.title('MACD')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: MACD
End of explanation
"""
stochastics = stochastics_oscillator(dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)],14)
plt.figure(figsize=(14,7))
#First 100 points because it's too dense
plt.plot(stochastics[0:100],label='Stochastics Oscillator',color='blue')
plt.title('Stochastics Oscillator')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: Stochastics Oscillator
End of explanation
"""
atr = ATR(df.iloc[len(df.index)-60:len(df.index)],14)
plt.figure(figsize=(21,7))
#First 100 points because it's too dense
plt.plot(atr[0:100],label='ATR',color='green')
plt.title('Average True Range')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: Average True Range
End of explanation
"""
dfPriceShift = dfPrices.shift(-1)
dfPriceShift.rename(columns={'Price':'PriceTarget'}, inplace=True)
dfPriceShift.head(2)
macd = MACD(dfPrices,12,26,9)
macd.rename(columns={'Price':'MACD'}, inplace=True)
stochastics = stochastics_oscillator(dfPrices,14)
stochastics.rename(columns={'Price':'Stochastics'}, inplace=True)
atr = ATR(df,14)
atr.rename(columns={0:'ATR'}, inplace=True)
final_data = pd.concat([dfPrices,dfPriceShift,macd,stochastics,atr], axis=1)
# Drop the rows with missing values (where the rolling indicators couldn't be computed yet); we still have plenty of datapoints left
final_data = final_data.dropna()
final_data.info()
final_data
final_data.to_csv('BTCUSD_TechnicalIndicators.csv',index=False)
"""
Explanation: Create complete DataFrame & Save Data
End of explanation
"""
|
danresende/deep-learning
|
sentiment_network/.ipynb_checkpoints/Sentiment Classification - Mini Project 5-checkpoint.ipynb
|
mit
|
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Project 2: Creating the Input/Output Data
End of explanation
"""
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
"""
Explanation: Understanding Neural Noise
End of explanation
"""
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: Project 4: Reducing Noise in our Input Data
End of explanation
"""
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
"""
Explanation: Analyzing Inefficiencies in our Network
End of explanation
"""
|
vvishwa/deep-learning
|
batch-norm/Batch_Normalization_Lesson.ipynb
|
mit
|
# Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: Batch Normalization – Lesson
What is it?
What are its benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.
Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.
For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network.
Likewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.
When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).
Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.
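For reference, the per-mini-batch calculation from the paper can be sketched as follows (m is the mini-batch size, epsilon is a small constant for numerical stability, and gamma and beta are learned scale and shift parameters):
$$
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
$$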
Benefits of Batch Normalization<a id="benefits"></a>
Batch normalization optimizes network training. It has been shown to have several benefits:
1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall.
2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train.
3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.
4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again.
5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.
6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network.
7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.
Batch Normalization in TensorFlow<a id="implementation_1"></a>
This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow.
The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.
End of explanation
"""
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
"""
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
"""
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
"""
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
"""
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
"""
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
"""
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
# After training, report accuracy against the validation data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
"""
Tests a trained model against the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
:param include_individual_predictions: bool (default False)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
"""
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
# the population estimates that were calculated while training the model.
pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
"""
Explanation: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code:
This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.
It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
End of explanation
"""
def plot_training_accuracies(*args, **kwargs):
"""
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train`
the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
"""
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
"""
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
"""
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number of and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
"""
Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code:
1. Layers with batch normalization do not include a bias term.
2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)
3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.
4. We add the normalization before calling the activation function.
In addition to that code, the training step is wrapped in the following with statement:
python
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.
Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
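To tie these pieces together, here is a condensed, self-contained sketch of the wiring described above. It is only an illustration of the pattern, not the notebook's NeuralNet class, and the variable names are our own:
python
import tensorflow as tf
is_training = tf.placeholder(tf.bool, name='is_training')
inputs = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 10])
# A batch-normalized hidden layer: linear step with no bias term,
# then normalization, then the activation function.
weights = tf.Variable(tf.truncated_normal([784, 100], stddev=0.05))
linear_output = tf.matmul(inputs, weights)
normalized = tf.layers.batch_normalization(linear_output, training=is_training)
hidden = tf.nn.relu(normalized)
logits = tf.layers.dense(hidden, 10)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
# Wrap the training step so the moving averages of mean and variance get updated.
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
# At run time, feed is_training=True for training steps and False for inference, e.g.
# sess.run(train_step, feed_dict={inputs: batch_xs, labels: batch_ys, is_training: True})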
We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.
Batch Normalization Demos<a id='demos'></a>
This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier.
We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.
Code to support testing
The following two functions support the demos we run in the notebook.
The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.
The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu)
"""
Explanation: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
"""
Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its maximum accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.sigmoid)
"""
Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)
In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: With the number of layers we're using and this small learning rate, the network using a sigmoid activation function takes a long time to start learning. It eventually makes progress, but it needs over 45 thousand batches just to get over 80% accuracy. With batch normalization, the network gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid)
"""
Explanation: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
"""
Explanation: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
End of explanation
"""
train_and_test(False, 2, tf.nn.relu)
"""
Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid)
"""
Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
"""
Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.relu)
"""
Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.sigmoid)
"""
Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
End of explanation
"""
train_and_test(True, 1, tf.nn.sigmoid)
"""
Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
End of explanation
"""
train_and_test(True, 2, tf.nn.sigmoid)
"""
Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure: Batch Normalization Doesn't Fix Everything
Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.
This section includes two examples that show runs when batch normalization did not help at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
"""
Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.
Batch Normalization: A Detailed Look<a id='implementation_2'></a>
The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization.
In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$
$$
\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i
$$
We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.
$$
\sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2
$$
Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch.
Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is typically higher than the variance of any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account.
At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate.
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.
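To make these equations concrete, here is a small NumPy-only sketch (an illustration of the math, not code from this notebook) that applies them to one batch of layer outputs, with gamma and beta left at their initial values of one and zero:
python
import numpy as np
linear_output = np.array([[1.0, 2.0, 0.5],
                          [0.0, 1.5, 2.5],
                          [2.0, 0.5, 1.0],
                          [1.0, 1.0, 1.0]])   # a "batch" of 4 examples, 3 nodes
epsilon = 0.001
gamma, beta = np.ones(3), np.zeros(3)         # learned scale and shift
mu = linear_output.mean(axis=0)               # batch mean, per node
var = linear_output.var(axis=0)               # batch variance, per node
x_hat = (linear_output - mu) / np.sqrt(var + epsilon)
y = gamma * x_hat + beta
print(y.mean(axis=0))                         # approximately 0 for each node
print(y.std(axis=0))                          # approximately 1 for each node
Each column (node) of y now has mean zero and standard deviation close to one, which is exactly what the normalization step produces before gamma and beta rescale and shift it.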
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
The next section shows you how to implement the math directly.
Batch normalization without the tf.layers package
Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.
However, if you would like to implement batch normalization at a lower level, the following code shows you how.
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.
1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.
End of explanation
"""
def batch_norm_test(test_training_accuracy):
"""
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
"""
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
"""
Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:
It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.
It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.
Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.
TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block.
The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.
tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.
We use the tf.nn.moments function to calculate the batch mean and variance.
2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization:
python
if self.use_batch_norm:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:
python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:
python
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)
return gamma * normalized_linear_output + beta
And replace this line in batch_norm_inference:
python
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)
return gamma * normalized_linear_output + beta
As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$:
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
And the second line is a direct translation of the following equation:
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you.
Why the difference between training and inference?
In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?
First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).
End of explanation
"""
batch_norm_test(True)
"""
Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
End of explanation
"""
batch_norm_test(False)
"""
Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.
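A quick way to see why a batch of one always normalizes to zero (a standalone NumPy illustration, not part of the network code):
python
import numpy as np
single_example = np.array([[3.7, -1.2, 250.0]])      # a batch of size 1
mu = single_example.mean(axis=0)                     # equals the example itself
var = single_example.var(axis=0)                     # all zeros
print((single_example - mu) / np.sqrt(var + 0.001))  # [[0. 0. 0.]]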
To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training.
So in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference using the population statistics it calculated during training.
End of explanation
"""
|
MartyWeissman/Python-for-number-theory
|
PwNT Notebook 1.ipynb
|
gpl-3.0
|
2 + 3
2 * 3
5 - 11
5 / 11
"""
Explanation: Part 1. Computing with Python.
What is the difference between Python and a calculator? We begin this first lesson by showing how Python can be used as a calculator, and we move into some of the basic programming language constructs: data types, variables, lists, and loops.
This programming lesson complements Chapter 0 (Foundations) in An Illustrated Theory of Numbers.
Table of Contents
Python as a calculator
Calculating with booleans
Declaring variables
Ranges
Iterating over a range
<a id='calculator'></a>
Python as a calculator
Different kinds of data are stored as different types in Python. For example, if you wish to work with integers, your data is typically stored as an int. A real number might be stored as a float. There are types for booleans (True/False data), strings (like "Hello World!"), and many more we will see.
A more complete reference for Python's numerical types and arithmetic operations can be found in the official Python documentation. The official Python tutorial is also a great place to start.
Python allows you to perform arithmetic operations: addition, subtraction, multiplication, and division, on numerical types. The operation symbols are +, -, *, and /. Evaluate each of the following cells to see how Python performs operations on integers. To evaluate the cell, click anywhere within the cell to select it (a selected cell will probably have a thick <span style="color:green">green</span> line on its left side) and use the keyboard shortcut Shift-Enter to evaluate. As you go through this and later lessons, try to predict what will happen when you evaluate the cell before you hit Shift-Enter.
End of explanation
"""
5.0 / 11.0
"""
Explanation: The results are probably not surprising, except for the last one. Try the following for contrast.
End of explanation
"""
-12 // 5
"""
Explanation: That seems better. So what is going on? Python interprets the input number 5 as an int (integer) and 5.0 as a float. "Float" stands for "floating point number"; floating point numbers are decimal approximations to real numbers. The word "float" refers to the fact that the decimal (or binary, for computers) point can float around (as in 1.2345 or 12.345 or 123.45 or 1234.5 or 0.00012345). There are deep computational issues related to how computers handle decimal approximations, and you can read about the IEEE standards if you're interested.
The distinct results of 5/11 and 5.0/11.0 show that Python operates on different types in different ways. The creators of Python decided that the result of integer division should be an integer: specifically, the integer obtained by rounding down the quotient.
You might disagree with this decision... it is confusing at first! In fact the designers of Python changed their mind. If you're using Python 3.x, then 5/11 will give the float 0.4545..., but in versions 2.x the result of 5/11 is an int. We assume that you are using Version 2.x throughout the tutorial!
To be safe, however, you can use the modified operation // for integer division. This yields the same result in Python 2.x and 3.x.
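One detail worth noting (an aside, not in the original text): // always rounds down, toward negative infinity, so for negative operands the result can be one less than you might expect.
python
print(-12 // 5)   # -3, not -2: the quotient -2.4 is rounded down
print(12 // 5)    # 2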
End of explanation
"""
(3 + 4) * 5
3 + (4 * 5)
3 + 4 * 5 # What do you think will be the result? Remember PEMDAS?
"""
Explanation: Why use integer division // and why use floating point division? In practice, integer division is typically a faster operation. So if you only need the rounded result (and that will often be the case), use integer division. It will run much faster than carrying out floating point division then manually rounding down.
Observe that floating point operations involve approximation. The result of 5.0/11.0 might not be what you expect in the last digit. Over time, especially with repeated operations, floating point approximation errors can add up!
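As a small illustration (an aside, not in the original text), even a sum as simple as 0.1 + 0.2 is not stored exactly:
python
0.1 + 0.2 == 0.3   # False! The float sum is very slightly larger than 0.3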
Python allows you to group expressions with parentheses, and follows the order of operations that you learn in school.
End of explanation
"""
# An empty cell. Have fun!
"""
Explanation: Now is a good time to try a few computations of your own, in the empty cell below. You can type any Python commands you want in the empty cell. If you want to insert a new cell into this notebook, it takes two steps:
1. Click to the left of any existing cell. This should make a <span style="color:blue">blue</span> bar appear to the left of the cell.
2. Use the keyboard shortcut a to insert a new cell above the blue-selected cell or b to insert a new cell below the blue-selected cell.
You can also use the keyboard shortcut x to delete a blue-selected cell... be careful!
End of explanation
"""
23 // 5 # Integer division
23 % 5 # The remainder after division
"""
Explanation: For number theory, division with remainder is an operation of central importance. Integer division provides the quotient, and the operation % provides the remainder. It's a bit strange that the percent symbol is used for the remainder, but this dates at least to the early 1970s and has become standard across computer languages.
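The quotient and remainder always fit together according to the identity a == (a // b) * b + (a % b). Here is a quick check (an aside, not in the original notebook):
python
a = 23
b = 5
a == (a // b) * b + (a % b)   # True: the quotient and remainder reassemble a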
End of explanation
"""
divmod(23,5)
type(divmod(23,5))
"""
Explanation: Note in the code above, there are little "comments". To place a short comment on a line of code, just put a hashtag # at the end of the line of code, followed by your comment.
Python gives a single command for division with remainder. Its output is a tuple.
End of explanation
"""
type(3)
type(3.0)
type('Hello')
type([1,2,3])
"""
Explanation: All data in Python has a type, but a common complaint about Python is that types are a bit concealed "under the hood". But they are not far under the hood. Anyone can find out the type of some data with a single command.
End of explanation
"""
3 + 3
3.0 + 3.0
'Hello' + 'World!'
[1,2,3] + [4,5,6]
3 + 3.0
3 + 'Hello!'
# An empty cell. Have fun!
"""
Explanation: The key to careful computation in Python is always being aware of the type of your data, and knowing how Python operates differently on data of different types.
End of explanation
"""
3 * 'Hello!'
0 * 'Hello!'
2 * [1,2,3]
"""
Explanation: As you can see, addition (the + operator) is interpreted differently in the contexts of numbers, strings, and lists. The designers of Python allowed us to add numbers of different types: if you try to operate on an int and a float, the int will typically be coerced into a float in order to perform the operation. But the designers of Python did not give meaning to the addition of a number with a string, for example. That's why you probably received a TypeError after trying the above line.
On the other hand, Python does interpret multiplication of a natural number with a string or a list.
End of explanation
"""
# Practice cell
"""
Explanation: Can you create a string with 100 A's (like AAA...)? Use an appropriate operation in the cell below.
End of explanation
"""
2**1000
2.0**1000
"""
Explanation: Exponents in Python are given by the ** operator. The following lines compute 2 to the 1000th power, in two different ways.
End of explanation
"""
type(2**1000)
type(2.0**1000)
# An empty cell. Have fun!
"""
Explanation: As before, Python interprets an operation (**) differently in different contexts. When given integer input, Python evaluates 2**1000 exactly. The result is an integer type called a long integer. A nice fact about Python, for number theorists, is that it handles exact integers of arbitrary length! Many other programming languages (like C++) will give an error message if integers get too large in the midst of a computation. The letter "L" at the end of the result indicates that Python is treating the integer as a long integer.
The reason that Python uses two different types, int and long, for integers is that computer hardware has built-in functionality for arithmetic of somewhat small integers (typically integers with magnitude up to 2 to the 31st or 2 to the 63rd power). So for small integers, Python will exploit the speedy hardware in computations, while for very large integers, Python will rely on its own routines.
For scientific applications, one often wants to keep track of only a certain number of significant digits. If one computes the floating point exponent 2.0**1000, the result is a decimal approximation. It is still a float. The expression "e+301" stands for "multiplied by 10 to the 301st power", i.e., Python uses scientific notation for large floats.
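For example (an aside), the notation 2.5e3 simply means 2.5 multiplied by 10 to the 3rd power:
python
2.5e3 == 2500.0   # True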
End of explanation
"""
3 > 2
type(3 > 2)
10 < 3
2.4 < 2.4000001
32 >= 32
32 >= 31
2 + 2 == 4
"""
Explanation: Now is a good time for reflection. Double-click in the cell below to answer the given questions. Cells like this one are used for text rather than Python code. Text is entered using markdown, but you can typically just enter text as you would in any text editor without problems. Press shift-Enter after editing a markdown cell to complete the editing process.
Note that a dropdown menu in the toolbar above the notebook allows you to choose whether a cell is Markdown or Code (or a few other things), if you want to add or remove markdown/code cells.
Exercises
What data types have you seen, and what kinds of data are they used for? Can you remember them without looking back?
How is division / interpreted differently for different types of data?
How is multiplication * interpreted differently for different types of data?
Why does Python have different types for short and long integers?
Double-click this markdown cell to edit it, and answer the exercises.
<a id='booleans'></a>
Calculating with booleans
A boolean (type bool) is the smallest possible piece of data. While an int can be any integer, positive or negative, a boolean can only be one of two things: True or False. In this way, booleans are useful for storing the answers to yes/no questions.
Questions about (in)equality of numbers are answered in Python by operations with numerical input and boolean output. Here are some examples. A more complete reference is in the official Python documentation.
End of explanation
"""
# Write your code here.
"""
Explanation: Which number is bigger: $23^{32}$ or $32^{23}$? Use the cell below to answer the question!
End of explanation
"""
63 % 7 == 0 # Is 63 divisible by 7?
101 % 2 == 0 # Is 101 even?
"""
Explanation: The expressions <, >, <=, >= are interpreted here as operations with numerical input and boolean output. The symbol == (two equal symbols!) gives a True result if the numbers are equal, and False if the numbers are not equal. An extremely common typo is to confuse = with ==. But the single equality symbol = has an entirely different meaning, as we shall see.
Using the remainder operator % and equality, we obtain a divisibility test.
End of explanation
"""
# Your code goes here.
"""
Explanation: Use the cell below to determine whether 1234567890 is divisible by 3.
End of explanation
"""
True and False
True or False
True or True
not True
"""
Explanation: Booleans can be operated on by the standard logical operations and, or, not. In ordinary English usage, "and" and "or" are conjunctions, while here in Boolean algebra, "and" and "or" are operations with Boolean inputs and Boolean output. The precise meanings of "and" and "or" are given by the following truth tables.
| and   | True  | False |
|-------|-------|-------|
| True  | True  | False |
| False | False | False |

| or    | True  | False |
|-------|-------|-------|
| True  | True  | True  |
| False | True  | False |
End of explanation
"""
(2 > 3) and (3 > 2)
(1 + 1 == 2) or (1 + 1 == 3)
not (-1 + 1 >= 0)
2 + 2 == 4
2 + 2 != 4 # For "not equal", Python uses the operation `!=`.
2 + 2 != 5 # Is 2+2 *not* equal to 5?
not (2 + 2 == 5) # The same as above, but a bit longer to write.
"""
Explanation: Use the truth tables to predict the result (True or False) of each of the following, before evaluating the code.
End of explanation
"""
# Experiment here.
"""
Explanation: Experiment below to see how Python handles a double or triple negative, i.e., something with a not not.
End of explanation
"""
False * 100
True + 13
"""
Explanation: Python does give an interpretation to arithmetic operations with booleans and numbers. Try to guess this interpretation with the following examples. Change the examples to experiment!
End of explanation
"""
# Use this space to work on the exercises
"""
Explanation: This ability of Python to interpret operations based on context is a mixed blessing. On one hand, it leads to handy shortcuts -- quick ways of writing complicated programs. On the other hand, it can lead to code that is harder to read, especially for a Python novice. Good programmers aim for code that is easy to read, not just short!
The Zen of Python is a series of 20 aphorisms for Python programmers. The first seven are below.
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Exercises
Did you look at the truth tables closely? Can you remember, from memory, what True or False equals, or what True and False equals?
How might you easily remember the truth tables? How do they resemble the standard English usage of the words "and" and "or"?
If you wanted to know whether a number, like 2349872348723, is a multiple of 7 but not a multiple of 11, how might you write this in one line of Python code?
You can chain together and commands, e.g., with an expression like True and True and True (which would evaluate to True). You can also group booleans, e.g., with True and (True or False).
Experiment to figure out the order of operations (and, or, not) for booleans.
The operation xor means "exclusive or". Its truth table is: True xor True = False, False xor False = False, True xor False = True, and False xor True = True. How might you implement xor in terms of the usual and, or, and not?
End of explanation
"""
e = 2.71828
"""
Explanation: <a id='variables'></a>
Declaring variables
A central feature of programming is the declaration of variables. When you declare a variable, you are storing data in the computer's memory and you are assigning a name to that data. Both storage and name-assignment are carried out with the single equality symbol =.
End of explanation
"""
e * e
type(e)
"""
Explanation: With this command, the float 2.71828 is stored somewhere inside your computer, and Python can access this stored number by the name "e" thereafter. So if you want to compute "e squared", a single command will do.
End of explanation
"""
my_number = 17
my_number < 23
"""
Explanation: You can use just about any name you want for a variable, but your name must start with a letter, must not contain spaces, and your name must not be an existing Python word. Characters in a variable name can include letters (uppercase and lowercase) and numbers and underscores _.
So e is a valid name for a variable, but type is a bad name. It is very tempting for beginners to use very short abbreviation-style names for variables (like dx or vbn). But resist that temptation and use more descriptive names for variables, like difference_x or very_big_number. This will make your code readable by you and others!
There are different style conventions for variable names. We use lowercase names, with underscores separating words, roughly following Google's style conventions for Python code.
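For instance (an illustrative aside), compare the following two names for the same value; the second is much easier to understand when you reread your code later:
python
vbn = 2**100              # short, but cryptic
very_big_number = 2**100  # descriptive and readable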
End of explanation
"""
my_number = 3.14
"""
Explanation: After you declare a variable, its value remains the same until it is changed. You can change the value of a variable with a simple assignment. After the above lines, the value of my_number is 17.
End of explanation
"""
S = 0
S = S + 1
S = S + 2
S = S + 3
print S
"""
Explanation: This command reassigns the value of my_number to 3.14. Note that it changes the type too! It effectively overrides the previous value and replaces it with the new value.
Often it is useful to change the value of a variable incrementally or recursively. Python, like many programming languages, allows one to assign variables in a self-referential way. What do you think the value of S will be after the following four lines?
End of explanation
"""
my_number = 17
new_number = my_number + 1
my_number = 3.14
"""
Explanation: The first line S = 0 is the initial declaration: the value 0 is stored in memory, and the name S is assigned to this value.
The next line S = S + 1 looks like nonsense, as an algebraic sentence. But reading = as assignment rather than equality, you should read the line S = S + 1 as assigning the value S + 1 to the name S. When Python interprets S = S + 1, it carries out the following steps.
Compute the value of the right side, S+1. (The value is 1, since S was assigned the value 0 in the previous line.)
Assign this value to the left side, S. (Now S has the value 1.)
Well, this is a slight lie. Python probably does something more efficient, when given the command S = S + 1, since such operations are hard-wired in the computer and the Python interpreter is smart enough to take the most efficient route. But at this level, it is most useful to think of a self-referential assignment of the form X = expression(X) as a two step process as above.
Compute the value of expression(X).
Assign this value to X.
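As an aside (not part of the original lesson), Python also provides the shorthand += for this kind of self-referential update, along with -=, *=, and similar operators:
python
S = 0
S += 1    # exactly the same as S = S + 1
S += 2
S += 3
print S   # prints 6, just like the loop above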
Now consider the following three commands.
End of explanation
"""
print my_number
print new_number
"""
Explanation: What are the values of the variables my_number and new_number, after the execution of these three lines?
To access these values, you can use the print command.
End of explanation
"""
# Use this space to work on the exercises.
"""
Explanation: Python is an interpreted language, which means (roughly) that Python carries out commands line-by-line from top to bottom. So consider the three lines
python
my_number = 17
new_number = my_number + 1
my_number = 3.14
Line 1 sets the value of my_number to 17. Line 2 sets the value of new_number to 18. Line 3 sets the value of my_number to 3.14. But Line 3 does not change the value of new_number at all.
(This will become confusing and complicated later, as we study mutable and immutable types.)
Exercises
What is the difference between = and == in the Python language?
If the variable x has value 3, and you then evaluate the Python command x = x * x, what will be the value of x after evaluation?
Imagine you have two variables a and b, and you want to switch their values. How could you do this in Python?
End of explanation
"""
type([1,2,3])
type(['Hello',17])
"""
Explanation: <a id='ranges'></a>
Ranges
Python stands out for the central role played by lists. A list is what it sounds like -- a list of data. Data within a list can be of any type. Multiple types are possible within the same list! The basic syntax for a list is to use brackets to enclose the list items and commas to separate the list items.
End of explanation
"""
type((1,2,3))
"""
Explanation: There is another type called a tuple that we will rarely use. Tuples use parentheses for enclosure instead of brackets.
End of explanation
"""
range(10)
"""
Explanation: The range command is a flexible and powerful way to produce lists of integers. The simplest single-input form of the range command produces a list starting at zero of a given length.
End of explanation
"""
range(3,10)
range(-4,5)
"""
Explanation: A more complicated two-input form of the range command produces a list of integers starting at a given number, and terminating before another given number.
End of explanation
"""
len([2,4,6])
len(range(10))
len(range(10,100)) # Can you figure out the length, before evaluating?
"""
Explanation: This is a common source of difficulty for Python beginners. While the first parameter (-4) is the starting point of the list, the list ends just before the second parameter (5). This takes some getting used to, but experienced Python programmers grow to like this convention.
The length of a list can be accessed by the len command.
End of explanation
"""
range(1,10,2)
range(11,30,2)
range(-4,5,3)
range(10,100,17)
"""
Explanation: The final variant of the range command (for now) is the three-parameter command of the form range(a,b,s). This produces a list like range(a,b), but with a "step size" of s. In other words, it produces a list of integers, beginning at a, increasing by s from one entry to the next, and going up to (but not including) b. It is best to experiment a bit to get the feel for it!
End of explanation
"""
range(10,0,-1)
"""
Explanation: This can be used for descending lists too, and observe that the final number b in range(a,b,s) is not included.
End of explanation
"""
range(10,100,7) # What list will this create? It won't answer the question...
range(14,100,7) # Starting at 14 gives the multiples of 7.
len(range(14,100,7)) # Gives the length of the list, and answers the question!
"""
Explanation: How many multiples of 7 are between 10 and 100? We can find out pretty quickly with the range command and the len command (to count).
End of explanation
"""
# Use this space to work on the exercises.
"""
Explanation: Exercises
If a and b are integers, what is the length of the list range(a,b)?
Use a range command to produce the list [1,2,3,4,5,6,7,8,9,10].
Create the list [1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5] with a single range command and another operation.
How many multiples of 3 are there between 300 and 3000?
End of explanation
"""
for n in [1,2,3,4,5]:
print n*n
for s in ['I','Am','Python']:
print s + "!"
"""
Explanation: <a id='iterating'></a>
Iterating over a range
Computers are excellent at repetitive, reliable tasks. If we wish to perform a similar computation many times over, a computer is a great tool. Here we look at a common and simple way to carry out a repetitive computation: the "for loop". The "for loop" iterates through items in a list, carrying out some action for each item. Two examples will illustrate.
End of explanation
"""
n = 1
print n*n
n = 2
print n*n
n = 3
print n*n
n = 4
print n*n
n = 5
print n*n
"""
Explanation: The first loop, unraveled, carries out the following sequence of commands.
End of explanation
"""
P = 1
for n in range(1,6):
P = P * n
print P
"""
Explanation: But the "for loop" is more efficient and more readable to programmers. Indeed, it saves the repetition of writing the same command print n*n over and over again. It also makes transparent, from the beginning, the range of values that n is assigned to.
When you read and write "for loops", you should consider how they look unraveled -- that is, how Python will carry out the loop. And when you find yourself faced with a repetitive task, you might consider whether it may be wrapped up in a for loop.
Try to unravel the loop below, and predict the result, before evaluating the code.
End of explanation
"""
P = 1
for n in range(1,6):
P = P * n
print "n is",n,"and P is",P
print P
"""
Explanation: This might have been difficult! So what if you want to trace through the loop, as it goes? Sometimes, especially when debugging, it's useful to inspect every step of the loop to see what Python is doing. We can inspect the loop above, by inserting a print command within the scope of the loop.
End of explanation
"""
print "My favorite number is",17
"""
Explanation: Here we have used the print command with strings and numbers together. In Python 2.x, you can print multiple things on the same line by separating them by commas. The "things" can be strings (enclosed by single or double-quotes) and numbers (int, float, etc.).
End of explanation
"""
P = 1
n = 1
P = P * n
print "n is",n,"and P is",P
n = 2
P = P * n
print "n is",n,"and P is",P
n = 3
P = P * n
print "n is",n,"and P is",P
n = 4
P = P * n
print "n is",n,"and P is",P
n = 5
P = P * n
print "n is",n,"and P is",P
print P
"""
Explanation: If we unravel the loop above, the linear sequence of commands interpreted by Python is the following.
End of explanation
"""
P = 1
for n in range(1,6):
P = P * n # this command is in the scope of the loop.
print "n is",n,"and P is",P # this command is in the scope of the loop too!
print P
"""
Explanation: Let's analyze the loop syntax in more detail.
python
P = 1
for n in range(1,6):
P = P * n # this command is in the scope of the loop.
print "n is",n,"and P is",P # this command is in the scope of the loop too!
print P
The "for" command ends with a colon :, and the next two lines are indented. The colon and indentation are indicators of scope. The scope of the for loop begins after the colon, and includes all indented lines. The scope of the for loop is what is repeated in every step of the loop (in addition to the reassignment of n).
End of explanation
"""
P = 1
for n in range(1,6):
P = P * n
print "n is",n,"and P is",P
print P
"""
Explanation: If we change the indentation, it changes the scope of the for loop. Predict what the following loop will do, by unraveling, before evaluating it.
End of explanation
"""
for x in [1,2,3]:
for y in ['a', 'b']:
print x,y
"""
Explanation: Scopes can be nested by nesting indentation. What do you think the following loop will do? Can you unravel it?
End of explanation
"""
# Insert your loop here.
"""
Explanation: How might you create a nested loop which prints 1 a then 2 a then 3 a then 1 b then 2 b then 3 b? Try it below.
End of explanation
"""
|