repo_name | path | license | content
---|---|---|---
chrismcginlay/crazy-koala | jupyter/04_making_decisions_introduction.ipynb | gpl-3.0 |
temperature = float(input("Please enter the temperature: "))
if temperature<15:
    print("It is too cold.")
    print("Turn up the heating.")
"""
Explanation: Making Decisions - Introduction
Your programs so far always carry out the same commands every time the programs are run. Most programs need to be able to carry out different commands – for example in most programs you can choose commands from menus.
This is done with the Python if command.
The Basic if - a Single Result
If the condition temperature<15 is True, Python will carry out the two indented print commands.
The sign < means 'less than' or 'fewer than'.
Python needs you to 'indent' everything belonging to the if.
The best way to indent is to press the TAB key, just to the left of the letter Q.
The 'condition' part of an if will work out as either True or False (see the short example below).
Please now run the following program a couple of times, first with a low temperature below 15 and then with a temperature above 15.
End of explanation
"""
temperature = float(input("Please enter the temperature: "))
if temperature<15:
    print("It is too cold.")
    print("Turn up the heating.")
else:
    print("You can turn the heating off now")
"""
Explanation: If you've done it right, you should see a message to turn up the heating the first time, but no message the second time.
if..else - One of Two Results
Using else, if the condition is true, Python will carry out the first block of indented commands. If the condition is false, Python will carry out the second block of indented commands. Only one block will ever be executed.
Please now run the following program a couple of times, just like before. This time you should see a new message with a temperature above 15.
End of explanation
"""
temperature = float(input("Please enter the temperature: "))
if temperature<15:
    print("It is too cold.")
    print("Turn up the heating.")
elif temperature<25:
    print("Please turn the heating off")
    print("Please turn the air-con off")
else:
    print("It is too hot.")
    print("Please turn on the air-con")
"""
Explanation: If you've done it right, you should see an appropriate message for your selected temperatures.
if..elif..else - One of Three Results
When the first if condition is true, Python will carry out the first block of indented commands. If the condition is false, Python will have a look at the second condition. If that condition is true, Python will carry out the second block of indented commands. If neither of these conditions is true, the last block of commands will be executed. Only one of the three blocks will ever be executed.
Please now run the following program three times, just like before. This time you should see another new message with a temperature above 25.
End of explanation
"""
temperature = float(input("Please enter the temperature: "))
if temperature<15:
    print("It is too cold.")
    print("Turn up the heating.")
elif temperature<25:
    print("Please turn the heating off")
    print("Please turn the air-con off")
elif temperature<35:
    print("It is too hot.")
    print("Please turn on the air-con")
else:
    print("HMGW: Holey Moley Global Warming")
    print("Break out the barbecue!")
"""
Explanation: If you've done it right, you should see the following outputs, one at a time:
Please enter the temperature: 12
It is too cold.
Turn up the heating.
Please enter the temperature: 20
Please turn the heating off
Please turn the air-con off
Please enter the temperature: 30
It is too hot.
Please turn on the air-con
if..elif..elif..else - One of Four (or More!) Results
When the first if condition is true, Python will carry out the first block of indented commands. If the condition is false, Python will have a look at the second condition. If that condition is true, Python will carry out the second block of indented commands.
If the second condition is also false, Python will look at the third condition. If the third condition is true, the third block will be executed.
If none of these conditions is true, the very last block of commands will be executed. Only one of the many blocks will ever be executed.
Please now run the following program four times, just like before. This time you should see another new message with a temperature above 35 (see the worked example below).
End of explanation
"""
| gsentveld/lunch_and_learn | notebooks/Data_Exploration.ipynb | mit |
import os
from dotenv import load_dotenv, find_dotenv
# find .env automagically by walking up directories until it's found
dotenv_path = find_dotenv()
# load up the entries as environment variables
load_dotenv(dotenv_path)
"""
Explanation: Exploring the files with Pandas
Many statistical Python packages can deal with NumPy arrays.
NumPy arrays, however, are not always easy to use.
Pandas is a package that provides a dataframe interface, similar to what R uses as its main data structure.
Since Pandas has become so popular, many packages accept both pd.DataFrames and NumPy arrays (a short example follows below).
End of explanation
"""
PROJECT_DIR = os.path.dirname(dotenv_path)
RAW_DATA_DIR = PROJECT_DIR + os.environ.get("RAW_DATA_DIR")
INTERIM_DATA_DIR = PROJECT_DIR + os.environ.get("INTERIM_DATA_DIR")
files=os.environ.get("FILES").split()
print("Project directory is : {0}".format(PROJECT_DIR))
print("Raw data directory is : {0}".format(RAW_DATA_DIR))
print("Interim directory is : {0}".format(INTERIM_DATA_DIR))
"""
Explanation: First some environment variables
We now use the files that are stored in the RAW directory.
If we decide to change the data format by changing names, adding features, creating summary data frames etc., we will save those files in the INTERIM directory (a small sketch of this workflow follows below).
End of explanation
"""
# The following jupyter notebook magic makes the plots appear in the notebook.
# If you run in batch mode, you have to save your plots as images.
%matplotlib inline
# matplotlib.pyplot is traditionally imported as plt
import matplotlib.pyplot as plt
# Pandas is traditionaly imported as pd.
import pandas as pd
from pylab import rcParams
# some display options to size the figures. feel free to experiment
pd.set_option('display.max_columns', 25)
rcParams['figure.figsize'] = (17, 7)
"""
Explanation: Importing pandas and matplotlib.pyplot
End of explanation
"""
#family=pd.read_csv(RAW_DATA_DIR+'/familyxx.csv')
#persons=pd.read_csv(RAW_DATA_DIR+'/personsx.csv')
samadult=pd.read_csv(RAW_DATA_DIR+'/samadult.csv')
samadult.columns.values.tolist()
features=[x for x in samadult.columns.values.tolist() if x.startswith('ALDURA')]
import numpy as np
np.sum(samadult.WKDAYR.notnull() & (samadult['WKDAYR']<900))
np.sum(samadult.ALDURA17.notnull() & (samadult['ALDURA17']<90) )
features=[
'ALDURA3',
#'ALDURA4',
#'ALDURA6',
#'ALDURA7',
#'ALDURA8',
'ALDURA11',
#'ALDURA17',
#'ALDURA20',
#'ALDURA21',
#'ALDURA22',
#'ALDURA23',
#'ALDURA24',
#'ALDURA27',
#'ALDURA28',
'ALDURA29',
'ALDURA33']
target='ALDURA17'
ADD_INDICATORS=False
ADD_POLYNOMIALS=True
LOG_X=True
LOG_Y=False
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
np.random.seed(42)
reg=LinearRegression()
data=samadult[samadult.ALDURA17.notnull() & (samadult['ALDURA17']<90)]
X=data[features]
X.shape
# turn "years since" values into the "nth" year,
# then fill missing values with 0
X=X+1
X=X.fillna(0)
if LOG_X:
    X=np.log1p(X)
if ADD_INDICATORS:
    indicator_names=[x+"_I" for x in features]
    indicators=pd.DataFrame()
    for feature in features:
        indicators[feature+"_I"]=data[feature].notnull().astype(int)
    X=pd.concat([X, indicators], axis=1, join_axes=[X.index])
from sklearn.preprocessing import PolynomialFeatures
if ADD_POLYNOMIALS:
    poly=PolynomialFeatures(degree=2, interaction_only=True)
    X=poly.fit_transform(X)
X.shape
y=data[target]
y=y+1
y=y.fillna(0)
if LOG_Y:
    y=np.log1p(y)
y.head()
reg.fit(X,y)
y_pred=reg.predict(X)
score=r2_score(y, y_pred)
import matplotlib.pyplot as plt
plt.plot(y,y_pred,marker='.', linestyle='None', alpha=0.5 )
plt.xlabel('Y Train')
plt.ylabel('Y Predict')
plt.show()
score
from sklearn.linear_model import Ridge
ridge=Ridge(alpha=0.7, normalize=True)
ridge.fit(X,y)
y_pred=ridge.predict(X)
def display_plot(cv_scores, cv_scores_std):
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    ax.plot(alpha_space, cv_scores)
    std_error = cv_scores_std / np.sqrt(10)
    ax.fill_between(alpha_space, cv_scores + std_error, cv_scores - std_error, alpha=0.2)
    ax.set_ylabel('CV Score +/- Std Error')
    ax.set_xlabel('Alpha')
    ax.axhline(np.max(cv_scores), linestyle='--', color='.5')
    ax.set_xlim([alpha_space[0], alpha_space[-1]])
    ax.set_xscale('log')
    plt.show()
# Import necessary modules
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
# Setup the array of alphas and lists to store scores
alpha_space = np.logspace(-4, 0, 50)
ridge_scores = []
ridge_scores_std = []
# Create a ridge regressor: ridge
ridge = Ridge(normalize=True)
# Compute scores over range of alphas
for alpha in alpha_space:
    # Specify the alpha value to use: ridge.alpha
    ridge.alpha = alpha
    # Perform 10-fold CV: ridge_cv_scores
    ridge_cv_scores = cross_val_score(ridge, X, y, cv=10)
    # Append the mean of ridge_cv_scores to ridge_scores
    ridge_scores.append(np.mean(ridge_cv_scores))
    # Append the std of ridge_cv_scores to ridge_scores_std
    ridge_scores_std.append(np.std(ridge_cv_scores))
# Display the plot
display_plot(ridge_scores, ridge_scores_std)
from sklearn.decomposition import PCA
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
fig = plt.figure(4, figsize=(8, 6))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=40, azim=20)
plt.cla()
pca = PCA(n_components=3)
pca.fit(X)
X_pca = pca.transform(X)
kmean=KMeans(n_clusters=4)
kmean.fit(X_pca)
y_lab=kmean.labels_
# Reorder the labels to have colors matching the cluster results
ax.scatter(X_pca[:, 0], X_pca[:, 1], X_pca[:, 2], label=y_lab,c=y_lab+1, cmap=plt.cm.spectral)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
plt.legend(bbox_to_anchor=(0, 1), loc='upper right', ncol=7)
plt.show()
y_lab
pca = PCA(n_components=2)
pca.fit(X)
X_pca2 = pca.transform(X)
kmean=KMeans(n_clusters=4)
kmean.fit(X_pca2)
y_lab2=kmean.labels_
#plt.cla()
#plt.figure()
markers=[',',]
case=1
x_special=X_pca2[y_lab2==case]
c_special=y_lab2[y_lab2==case]
x_other=X_pca2[y_lab2!=case]
c_other=y_lab2[y_lab2!=case]
plt.scatter(x_special[:,0],x_special[:,1], c=c_special, marker='+')
plt.scatter(x_other[:,0],x_other[:,1], c=c_other, marker='.')
plt.show()
y_lab2[:5]
for i in range(0,4):
    y_case=y[y_lab2==i]
    lab_mean=np.mean(y_case)
    lab_std=np.std(y_case)
    lab_perc=np.percentile(y_case, [2.5, 97.5])
    print("For case {}, the mean is {} and the std = {} and the 95% confidence interval = {}".format(i,lab_mean, lab_std, lab_perc))
"""
Explanation: Reading a file in Pandas
Reading a CSV file is really easy in Pandas. There are several formats that Pandas can deal with.
|Format Type|Data Description|Reader|Writer|
|---|---|---|---|
|text|CSV|read_csv|to_csv|
|text|JSON|read_json|to_json|
|text|HTML|read_html|to_html|
|text|Local clipboard|read_clipboard|to_clipboard|
|binary|MS Excel|read_excel|to_excel|
|binary|HDF5 Format|read_hdf|to_hdf|
|binary|Feather Format|read_feather|to_feather|
|binary|Msgpack|read_msgpack|to_msgpack|
|binary|Stata|read_stata|to_stata|
|binary|SAS|read_sas ||
|binary|Python Pickle Format|read_pickle|to_pickle|
|SQL|SQL|read_sql|to_sql|
|SQL|Google Big Query|read_gbq|to_gbq|
We will use <code>pd.read_csv()</code>.
As you will see, the Jupyter notebook prints out a very nice rendition of the resulting DataFrame object (a small read/write round trip is sketched below).
End of explanation
"""
| qutip/qutip-notebooks | examples/control-pulseoptim-Lindbladian.ipynb | lgpl-3.0 |
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import datetime
from qutip import Qobj, identity, sigmax, sigmay, sigmaz, sigmam, tensor
from qutip.superoperator import liouvillian, sprepost
from qutip.qip import hadamard_transform
import qutip.logging_utils as logging
logger = logging.get_logger()
#Set this to None or logging.WARN for 'quiet' execution
log_level = logging.INFO
#QuTiP control modules
import qutip.control.pulseoptim as cpo
example_name = 'Lindblad'
"""
Explanation: Calculation of control fields for Lindbladian dynamics using L-BFGS-B algorithm
Christian Arenz (christianarenz.ca@gmail.com), Alexander Pitchford (alex.pitchford@gmail.com)
Example to demonstrate using the control library to determine control pulses using the ctrlpulseoptim.optimize_pulse function. The (default) L-BFGS-B algorithm is used to optimise the pulse to
minimise the fidelity error, which in this case is given by the 'Trace difference' norm.
This is an open quantum system example, with a single qubit subject to an amplitude damping channel. The target evolution is the Hadamard gate. For a $d$ dimensional quantum system in general we represent the Lindbladian
as a $d^2 \times d^2$ dimensional matrix by creating the Liouvillian superoperator (a quick dimension check is sketched below). Here this is done for the Lindbladian that describes the amplitude damping channel. Similarly, the control generators acting on the qubit are also converted to superoperators. The initial and target maps also need to be in superoperator form.
The user can experiment with the strength of the amplitude damping by changing the gamma variable value. If the rate is sufficiently small then the target fidelity can be achieved within the given tolerance. The drift Hamiltonian and control generators can also be swapped and changed to experiment with controllable and uncontrollable setups.
The user can experiment with the timeslicing by changing the number of timeslots and/or the total time for the evolution.
Different initial (starting) pulse types can be tried.
The initial and final pulses are displayed in a plot.
For more background on the pulse optimisation see:
QuTiP overview - Optimal Control
End of explanation
"""
Sx = sigmax()
Sy = sigmay()
Sz = sigmaz()
Sm = sigmam()
Si = identity(2)
#Hadamard gate
had_gate = hadamard_transform(1)
# Hamiltonian
Del = 0.1 # Tunnelling term
wq = 1.0 # Energy of the 2-level system.
H0 = 0.5*wq*sigmaz() + 0.5*Del*sigmax()
#Amplitude damping#
#Damping rate:
gamma = 0.1
L0 = liouvillian(H0, [np.sqrt(gamma)*Sm])
#sigma X control
LC_x = liouvillian(Sx)
#sigma Y control
LC_y = liouvillian(Sy)
#sigma Z control
LC_z = liouvillian(Sz)
#Drift
drift = L0
#Controls - different combinations can be tried
ctrls = [LC_z, LC_x]
# Number of ctrls
n_ctrls = len(ctrls)
# start point for the map evolution
E0 = sprepost(Si, Si)
# target for map evolution
E_targ = sprepost(had_gate, had_gate)
"""
Explanation: Defining the physics
End of explanation
"""
# Number of time slots
n_ts = 10
# Time allowed for the evolution
evo_time = 2
"""
Explanation: Defining the time evolution parameters
End of explanation
"""
# Fidelity error target
fid_err_targ = 1e-3
# Maximum iterations for the optimisation algorithm
max_iter = 200
# Maximum (elapsed) time allowed in seconds
max_wall_time = 30
# Minimum gradient (sum of gradients squared)
# as this tends to 0 -> a local minimum has been found
min_grad = 1e-20
"""
Explanation: Set the conditions which will cause the pulse optimisation to terminate
End of explanation
"""
# pulse type alternatives: RND|ZERO|LIN|SINE|SQUARE|SAW|TRIANGLE|
p_type = 'RND'
"""
Explanation: Set the initial pulse type
End of explanation
"""
#Set to None to suppress output files
f_ext = "{}_n_ts{}_ptype{}.txt".format(example_name, n_ts, p_type)
"""
Explanation: Give an extension for output files
End of explanation
"""
# Note that this call will take the defaults
# dyn_type='GEN_MAT'
# This means that matrices that describe the dynamics are assumed to be
# general, i.e. the propagator can be calculated using:
# expm(combined_dynamics*dt)
# prop_type='FRECHET'
# and the propagators and their gradients will be calculated using the
# Frechet method, i.e. an exact gradient
# fid_type='TRACEDIFF'
# and that the fidelity error, i.e. distance from the target, is given
# by the trace of the difference between the target and evolved operators
result = cpo.optimize_pulse(drift, ctrls, E0, E_targ, n_ts, evo_time,
                            fid_err_targ=fid_err_targ, min_grad=min_grad,
                            max_iter=max_iter, max_wall_time=max_wall_time,
                            out_file_ext=f_ext, init_pulse_type=p_type,
                            log_level=log_level, gen_stats=True)
"""
Explanation: Run the optimisation
End of explanation
"""
result.stats.report()
print("Final evolution\n{}\n".format(result.evo_full_final))
print("********* Summary *****************")
print("Initial fidelity error {}".format(result.initial_fid_err))
print("Final fidelity error {}".format(result.fid_err))
print("Final gradient normal {}".format(result.grad_norm_final))
print("Terminated due to {}".format(result.termination_reason))
print("Number of iterations {}".format(result.num_iter))
print("Completed in {} HH:MM:SS.US".format(datetime.timedelta(seconds=result.wall_time)))
"""
Explanation: Report the results
End of explanation
"""
fig1 = plt.figure()
ax1 = fig1.add_subplot(2, 1, 1)
ax1.set_title("Initial control amps")
ax1.set_xlabel("Time")
ax1.set_ylabel("Control amplitude")
for j in range(n_ctrls):
    ax1.step(result.time,
             np.hstack((result.initial_amps[:, j], result.initial_amps[-1, j])),
             where='post')
ax2 = fig1.add_subplot(2, 1, 2)
ax2.set_title("Optimised Control Sequences")
ax2.set_xlabel("Time")
ax2.set_ylabel("Control amplitude")
for j in range(n_ctrls):
    ax2.step(result.time,
             np.hstack((result.final_amps[:, j], result.final_amps[-1, j])),
             where='post')
fig1.tight_layout()
"""
Explanation: Plot the initial and final amplitudes
End of explanation
"""
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Versions
End of explanation
"""
| dismalpy/dismalpy | doc/notebooks/local_linear_trend.ipynb | bsd-2-clause |
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import dismalpy as dp
import matplotlib.pyplot as plt
"""
Explanation: State space modeling: Local Linear Trends
This notebook describes how to extend the state space classes to create and estimate a custom model. Here we develop a local linear trend model.
The Local Linear Trend model has the form (see Durbin and Koopman 2012, Chapter 3.2 for all notation and details):
$$
\begin{align}
y_t & = \mu_t + \varepsilon_t \qquad & \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \\
\mu_{t+1} & = \mu_t + \nu_t + \xi_t & \xi_t \sim N(0, \sigma_\xi^2) \\
\nu_{t+1} & = \nu_t + \zeta_t & \zeta_t \sim N(0, \sigma_\zeta^2)
\end{align}
$$
It is easy to see that this can be cast into state space form as:
$$
\begin{align}
y_t & = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \mu_t \\ \nu_t \end{pmatrix} + \varepsilon_t \\
\begin{pmatrix} \mu_{t+1} \\ \nu_{t+1} \end{pmatrix} & = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{pmatrix} \mu_t \\ \nu_t \end{pmatrix} + \begin{pmatrix} \xi_t \\ \zeta_t \end{pmatrix}
\end{align}
$$
Notice that much of the state space representation is composed of known values; in fact the only parts in which parameters to be estimated appear are in the variance / covariance matrices:
$$
\begin{align}
H_t & = \begin{bmatrix} \sigma_\varepsilon^2 \end{bmatrix} \\
Q_t & = \begin{bmatrix} \sigma_\xi^2 & 0 \\ 0 & \sigma_\zeta^2 \end{bmatrix}
\end{align}
$$
End of explanation
"""
"""
Univariate Local Linear Trend Model
"""
class LocalLinearTrend(dp.ssm.MLEModel):
    def __init__(self, endog):
        # Model order
        k_states = k_posdef = 2

        # Initialize the statespace
        super(LocalLinearTrend, self).__init__(
            endog, k_states=k_states, k_posdef=k_posdef,
            initialization='approximate_diffuse',
            loglikelihood_burn=k_states
        )

        # Initialize the matrices
        self['design'] = np.array([1, 0])
        self['transition'] = np.array([[1, 1],
                                       [0, 1]])
        self['selection'] = np.eye(k_states)

        # Cache some indices
        self._state_cov_idx = ('state_cov',) + np.diag_indices(k_posdef)

    @property
    def param_names(self):
        return ['sigma2.measurement', 'sigma2.level', 'sigma2.trend']

    @property
    def start_params(self):
        return [np.std(self.endog)]*3

    def transform_params(self, unconstrained):
        return unconstrained**2

    def untransform_params(self, constrained):
        return constrained**0.5

    def update(self, params, transformed=True):
        params = super(LocalLinearTrend, self).update(params, transformed)

        # Observation covariance
        self['obs_cov',0,0] = params[0]

        # State covariance
        self[self._state_cov_idx] = params[1:]
"""
Explanation: To take advantage of the existing infrastructure, including Kalman filtering and maximum likelihood estimation, we create a new class which extends from dismalpy.ssm.MLEModel. There are a number of things that must be specified:
k_states, k_posdef: These two parameters must be provided to the base classes in initialization. They inform the statespace model about the size of, respectively, the state vector, above $\begin{pmatrix} \mu_t & \nu_t \end{pmatrix}'$, and the state error vector, above $\begin{pmatrix} \xi_t & \zeta_t \end{pmatrix}'$. Note that the dimension of the endogenous vector does not have to be specified, since it can be inferred from the endog array.
update: The method update, with argument params, must be specified (it is used when fit() is called to calculate the MLE). It takes the parameters and fills them into the appropriate state space matrices. For example, below, the params vector contains variance parameters $\begin{pmatrix} \sigma_\varepsilon^2 & \sigma_\xi^2 & \sigma_\zeta^2\end{pmatrix}$, and the update method must place them in the observation and state covariance matrices. More generally, the parameter vector might be mapped into many different places in all of the statespace matrices.
statespace matrices: by default, all state space matrices (obs_intercept, design, obs_cov, state_intercept, transition, selection, state_cov) are set to zeros. Values that are fixed (like the ones in the design and transition matrices here) can be set in initialization, whereas values that vary with the parameters should be set in the update method. Note that it is easy to forget to set the selection matrix, which is often just the identity matrix (as it is here), but not setting it will lead to a very different model (one where there is not a stochastic component to the transition equation).
start params: start parameters must be set, even if it is just a vector of zeros, although often good start parameters can be found from the data. Maximum likelihood estimation by gradient methods (as employed here) can be sensitive to the starting parameters, so it is important to select good ones if possible. Here it does not matter too much (although, as variances, they shouldn't be set to zero).
initialization: in addition to the defined state space matrices, all state space models must be initialized with the mean and variance for the initial distribution of the state vector. If the distribution is known, initialize_known(initial_state, initial_state_cov) can be called, or if the model is stationary (e.g. an ARMA model), initialize_stationary can be used. Otherwise, initialize_approximate_diffuse is a reasonable generic initialization (exact diffuse initialization is not yet available). Since the local linear trend model is not stationary (it is composed of random walks) and since the distribution is not generally known, we use initialize_approximate_diffuse below.
The above are the minimum necessary for a successful model. There are also a number of things that do not have to be set, but which may be helpful or important for some applications:
transform / untransform: when fit is called, the optimizer in the background will use gradient methods to select the parameters that maximize the likelihood function. By default it uses unbounded optimization, which means that it may select any parameter value. In many cases, that is not the desired behavior; variances, for example, cannot be negative. To get around this, the transform method takes the unconstrained vector of parameters provided by the optimizer and returns a constrained vector of parameters used in likelihood evaluation. untransform provides the reverse operation.
param_names: this internal method can be used to set names for the estimated parameters so that e.g. the summary provides meaningful names. If not present, parameters are named param0, param1, etc.
End of explanation
"""
# Load Dataset
# Note: dataset from http://www.ssfpack.com/CKbook.html
df = pd.read_table('data/NorwayFinland.txt', skiprows=1, header=None)
df.columns = ['date', 'nf', 'ff']
df.index = pd.date_range(start='%d-01-01' % df.date[0], end='%d-01-01' % df.iloc[-1, 0], freq='AS')
# Log transform
df['lff'] = np.log(df['ff'])
# Setup the model
mod = LocalLinearTrend(df['lff'])
# Fit it using MLE (recall that we are fitting the three variance parameters)
res = mod.fit()
print(res.summary())
"""
Explanation: Using this simple model, we can estimate the parameters from a local linear trend model. The following example is from Commandeur and Koopman (2007), section 3.4., modeling motor vehicle fatalities in Finland.
End of explanation
"""
# Perform prediction and forecasting
predict = res.get_prediction()
forecast = res.get_forecast('2014')
fig, ax = plt.subplots(figsize=(10,4))
# Plot the results
df['lff'].plot(ax=ax, style='k.', label='Observations')
predict.predicted_mean.plot(ax=ax, label='One-step-ahead Prediction')
predict_ci = predict.conf_int(alpha=0.05)
ax.fill_between(predict_ci.index[2:], predict_ci.ix[2:, 0], predict_ci.ix[2:, 1], alpha=0.1)
forecast.predicted_mean.plot(ax=ax, style='r', label='Forecast')
forecast_ci = forecast.conf_int()
ax.fill_between(forecast_ci.index, forecast_ci.ix[:, 0], forecast_ci.ix[:, 1], alpha=0.1)
# Cleanup the image
ax.set_ylim((4, 8));
legend = ax.legend(loc='lower left');
"""
Explanation: Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date.
End of explanation
"""
| ES-DOC/esdoc-jupyterhub | notebooks/dwd/cmip6/models/sandbox-1/ocean.ipynb | gpl-3.0 |
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
basp/notes
|
3dgfx.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: 3dgfx the math
This is pretty much a collection of notes mostly inspired by Computer Graphics, Fall 2009. Yeah it's an old course but it's very good and covers a lot of essentials at a fast pace.
This is by no means a substitute for watching it yourself; please do if you're trying to figure out this stuff. This is more like a companion guide where essential materials are made real by executable examples where possible and relevant.
We will focus only on the math. Any programming (besides the code in this book) will be out of scope. The target audience is experienced programmers who already have (or should have) seen this stuff and need an extended cheat sheet in the form of a refresher.
"All the meat I've eaten. I forgive myself"
End of explanation
"""
def plot_line(p1, p2, style='b', **kwargs):
p1x, p1y = p1
p2x, p2y = p2
plt.plot([p1x, p2x], [p1y, p2y], style, **kwargs)
P1 = np.array([1, 1])
P2 = np.array([3, 2])
t = 0.38
Pt = P1 + t * (P2 - P1)
plot_line(P1, P2, label=r'$t(P_2 - P_1) = t\vec{v}$')
plt.plot(*P1, 'ko')
plt.plot(*P2, 'ko')
plt.plot(*Pt, 'ro')
plt.legend()
ax = plt.axes()
ax.set_xlim(0, 4)
ax.set_ylim(0, 3)
ax.annotate('$P_1$', (P1[0], P1[1]), xytext=(P1[0], P1[1] - 0.20))
ax.annotate('$P_2$', (P2[0], P2[1]), xytext=(P2[0], P2[1] + 0.10))
ax.annotate('$P_{t=%.2f}$' % t, (Pt[0], Pt[1]), xytext=(Pt[0], Pt[1] - 0.20))
"""
Explanation: vector space
A vector space (also called a linear space) is a collection of objects called vectors which may be added together and multiplied by numbers (called scalars in this context). The operations of multiplication and addition must satisfy certain requirements called axioms.
A vector space over a field $F$ is a set $V$ together with two operations that satisfy the eight axioms below. Elements of $V$ are commonly called vectors. Elements of $F$ are commonly called scalars.
The first operation, called vector addition or just addition, takes any two vectors $\vec{v}$ and $\vec{w}$ and assigns them to a third vector commonly written as $\vec{v} + \vec{w}$ and called the sum of these two vectors. The second operation, scalar multiplication, takes any scalar $a$ and any vector $\vec{v}$ and produces a new vector $a\vec{v}$.
axioms
In the list below, let $\vec{u}$, $\vec{v}$ and $\vec{w}$ be arbitrary vectors in $V$, and $a$ and $b$ scalars in $F$.
associativity of addition
$\vec{u} + (\vec{v} + \vec{w}) = (\vec{u} + \vec{v}) + \vec{w}$
communativity of addition
$\vec{u} + \vec{w} = \vec{w} + \vec{u}$
identity element of addition
There exists an element $0 \in V$, called the zero vector such that $\vec{v} + 0 = \vec{v}$ for all $\vec{v} \in V$.
inverse elements of addition
For every element $\vec{v} \in V$ there exists an element $-\vec{v} \in V$ called the additive inverse of $\vec{v}$ such that $\vec{v} + (-\vec{v}) = 0$.
compatibility of scalar multiplication with field multiplication
$a(b\vec{v}) = (ab)\vec{v}$
identity element of scalar multiplication
$1\vec{v} = \vec{v}$, where $1$ denotes the multiplicative identity in $F$.
distributivity of scalar multiplication with respect to vector addition
$a(\vec{u} + \vec{v}) = a\vec{u} + a\vec{v}$
distributivity of scalar multiplication with respect to field addition
$(a + b)\vec{v} = a\vec{v} + b\vec{v}$
When the scalar field $F$ is the real numbers $\mathbb{R}$, the vector space is called a real vector space. When the scalar field is complex numbers, the vector space is called a complex vector space.
affine space
In an affine space, there is no distinguished point that serves as an origin. Hence, no vector has a fixed origin and no vector can be uniquely associated to a point. In an affine space, there are instead displacement vectors also called translation vectors or simply translations, between two points of the space.
Thus it makes sense to subtract two points of the space, giving a translation vector, but it does not make sense to add two points of the space.
$\vec{v} = P_2 - P_1$
Likewise, it makes sense to add a vector to a point of an affine space, resulting in a new point translated from the starting point by that vector.
$P_2 = P_1 + \vec{v}$
Note that we can interpolate any point $P_t$ on the line through points $P_1$ and $P_2$ by scaling the translation vector with a factor ${t}$.
$P_t = P_1 + t\vec{v} = P_1 + t(P_2 - P_1)$
This is demonstrated in the plot below.
End of explanation
"""
P1 = 0, 1
P2 = 2, 5
P3 = 4, 2
P = 1.8, 2.9
def triangle_example1():
plot_line(P1, P2)
plot_line(P2, P3)
plot_line(P3, P1)
plt.plot(*P1, 'ko')
plt.plot(*P2, 'ko')
plt.plot(*P3, 'ko')
plt.plot(*P, 'ro')
ax = plt.axes()
ax.set_xlim(-0.5, 4.5)
ax.set_ylim(0.5, 5.5)
ax.annotate('$P_1$', P1, xytext=(P1[0] - 0.2, P1[1] + 0.2))
ax.annotate('$P_2$', P2, xytext=(P2[0] + 0.2, P2[1] + 0))
ax.annotate('$P_3$', P3, xytext=(P3[0] - 0.1, P3[1] - 0.3))
ax.annotate('$P$', P, xytext=(P[0] + 0.1, P[1] + 0.2))
ax.set_aspect(1.0)
"""
Explanation: We can also write this differently:
$P_t = P_1 + t(P_2 - P_1) = (1 - t)P_1 + tP_2$.
We can see this by refactoring it:
$(1 - t)P_1 + tP_2 = P_1 - tP_1 + tP_2 = P_1 + t(P_2 - P_1)$.
The benefit of writing it like $(1 - t)P_1 + tP_2$ is that now we have something that is known as an affine combination.
affine combination
An affine combination, also sometimes called an affine sequence, of vectors ${x_1}, \ldots, {x_n}$ is a vector $\underset{i=1}{\overset{n}{\sum}}{\alpha_i}\cdot{x_i} = {\alpha_1}{x_1} + {\alpha_2}{x_2} + \cdots + {\alpha_n}{x_n}$ called a linear combination of ${x_1}, \ldots, {x_n}$ in which the sum of the coefficients is 1, thus: $\underset{i=1}{\overset{n}{\sum}}{\alpha_i} = 1$.
Here the vectors ${x_i}$ are elements of a given vector space $V$ over a field $K$ and the coefficients ${\alpha_i}$ are scalars in $K$. This concept is important, for example, in Euclidean geometry.
The act of taking an affine combination commutes with any affine transformation $T$ in the sense that $T{\underset{i=1}{\overset{n}{\sum}}}{\alpha_i}\cdot{x_i} = \underset{i=1}{\overset{n}{\sum}}{\alpha_i}\cdot{T}{x_i}$.
In particular, any affine combination of the [fixed points](https://en.wikipedia.org/wiki/Fixed_point_(mathematics)) of a given affine transformation $T$ is also a fixed point of $T$, so the set of fixed points of $T$ forms an affine subspace (in 3D: a line or a plane, and the trivial cases, a point or the whole space).
The important takeaway from the mumbo jumbo above is that, if we have some kind of thing that resembles $t_1{P_1} + t_2{P_2} + \cdots + t_n{P_n}$ and $t_1 + t_2 + \cdots + t_n = 1$ then we're dealing with an affine combination.
interlude: thinking about affine space
As a thought experiment it's fun to stop and think about what it means to be in affine space.
First, we cannot just describe a point and say where it is. In affine space a point can only be described by its position relative to the points it is combined with.
We also found that we can use triangles to define a point in two-dimensional affine space. It would make sense that we could use lines to represent points in one-dimensional affine space. If a triangle is $P_t = t_1{P_1} + t_2{P_2} + t_3{P_3}$ then it will also make sense that a line will be $P_t = t_1{P_1} + t_2{P_2}$.
Even with the small amount of computation we have done, we can already see that the affine space model is a really nice way of thinking about points and vectors.
barycentric coordinates
In the context of a triangle, barycentric coordinates are also known as area coordinates or areal coordinates, because the coordinates of $P$ with respect to triangle $ABC$ are equivalent to the (signed) ratios of the areas of $PBC$, $PCA$ and $PAB$ to the area of the reference triangle $ABC$.
Areal and trilinear coordinates are used for similar purposes in geometry.
In order to figure out what this all means, we'll start with a triangle ${P_1}{P_2}{P_3}$ and a point ${P}$ inside of it. The code below defines these points and a triangle_example1 function so we can re-use it. The function just plots the triangle, the points and some annotations.
End of explanation
"""
plt.figure(figsize=(6,6))
triangle_example1()
"""
Explanation: And now we call the triangle_example1 function to plot it:
End of explanation
"""
plt.figure(figsize=(6,6))
triangle_example1()
f = lambda x: 1/4 * x + 1
x = np.linspace(-1, 5, 5)
# FIXME: Let's just do it engineering style for now
f_x = f(x) - f(P[0]) + P[1]
plt.plot(x, f_x, 'r--')
ax = plt.axes()
ax.annotate(r'$t_2$', (0.25, 2))
ax.annotate(r'$t_3$', (1.25, 2.45))
"""
Explanation: We have all the components to express $P$ in terms of the points of the triangle ${P_1}{P_2}{P_3}$.
We start at $P_1$ and go some amount $t_2$ in the direction of the vector $P_2 - P_1$. We'll end up at $P_1 + t_2(P_2 - P_1)$. From there we go some amount $t_3$ in the direction of the vector $P_3 - P_1$ and with the proper amounts (or ratios) of $t_2$ and $t_3$ we should be able to end up at $P$.
To visualize it we'll use a slightly different plot. The dashed line goes through point $P$ and runs in the direction of $P_3 - P_1$. We start at $P_1$ and go in the direction of the $P_2 - P_1$ vector until we end up at the dashed line. From there we just have to follow said dashed line until we end up at $P$.
End of explanation
"""
|
deepmind/acme
|
examples/quickstart.ipynb
|
apache-2.0
|
environment_library = 'gym' # @param ['dm_control', 'gym']
"""
Explanation: Acme: Quickstart
Guide to installing Acme and training your first D4PG agent.
<a href="https://colab.research.google.com/github/deepmind/acme/blob/master/examples/quickstart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Select your environment library
End of explanation
"""
!pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
"""
Explanation: Installation
Install Acme
End of explanation
"""
if environment_library == 'dm_control':
import distutils.util
import subprocess
if subprocess.run('nvidia-smi').returncode:
raise RuntimeError(
'Cannot communicate with GPU. '
'Make sure you are using a GPU Colab runtime. '
'Go to the Runtime menu and select Choose runtime type.')
mujoco_dir = "$HOME/.mujoco"
print('Installing OpenGL dependencies...')
!apt-get update -qq
!apt-get install -qq -y --no-install-recommends libglew2.0 > /dev/null
print('Downloading MuJoCo...')
BASE_URL = 'https://github.com/deepmind/mujoco/releases/download'
MUJOCO_VERSION = '2.1.1'
MUJOCO_ARCHIVE = (
f'mujoco-{MUJOCO_VERSION}-{distutils.util.get_platform()}.tar.gz')
!wget -q "{BASE_URL}/{MUJOCO_VERSION}/{MUJOCO_ARCHIVE}"
!wget -q "{BASE_URL}/{MUJOCO_VERSION}/{MUJOCO_ARCHIVE}.sha256"
check_result = !shasum -c "{MUJOCO_ARCHIVE}.sha256"
if _exit_code:
raise RuntimeError(
'Downloaded MuJoCo archive is corrupted (checksum mismatch)')
print('Unpacking MuJoCo...')
MUJOCO_DIR = '$HOME/.mujoco'
!mkdir -p "{MUJOCO_DIR}"
!tar -zxf {MUJOCO_ARCHIVE} -C "{MUJOCO_DIR}"
# Configure dm_control to use the EGL rendering backend (requires GPU)
%env MUJOCO_GL=egl
print('Installing dm_control...')
# Version 0.0.416848645 is the first one to support MuJoCo 2.1.1.
!pip install -q dm_control>=0.0.416848645
print('Checking that the dm_control installation succeeded...')
try:
from dm_control import suite
env = suite.load('cartpole', 'swingup')
pixels = env.physics.render()
except Exception as e:
raise e from RuntimeError(
'Something went wrong during installation. Check the shell output above '
'for more information.\n'
'If using a hosted Colab runtime, make sure you enable GPU acceleration '
'by going to the Runtime menu and selecting "Choose runtime type".')
else:
del suite, env, pixels
!echo Installed dm_control $(pip show dm_control | grep -Po "(?<=Version: ).+")
elif environment_library == 'gym':
!pip install gym
"""
Explanation: Install the environment library
End of explanation
"""
!sudo apt-get install -y xvfb ffmpeg
!pip install imageio
!pip install PILLOW
!pip install pyvirtualdisplay
"""
Explanation: Install visualization packages
End of explanation
"""
import IPython
from acme import environment_loop
from acme import specs
from acme import wrappers
from acme.agents.tf import d4pg
from acme.tf import networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
import numpy as np
import sonnet as snt
# Import the selected environment lib
if environment_library == 'dm_control':
from dm_control import suite
elif environment_library == 'gym':
import gym
# Imports required for visualization
import pyvirtualdisplay
import imageio
import base64
# Set up a virtual display for rendering.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
"""
Explanation: Import Modules
End of explanation
"""
if environment_library == 'dm_control':
environment = suite.load('cartpole', 'balance')
elif environment_library == 'gym':
environment = gym.make('MountainCarContinuous-v0')
environment = wrappers.GymWrapper(environment) # To dm_env interface.
else:
raise ValueError(
"Unknown environment library: {};".format(environment_library) +
"choose among ['dm_control', 'gym'].")
# Make sure the environment outputs single-precision floats.
environment = wrappers.SinglePrecisionWrapper(environment)
# Grab the spec of the environment.
environment_spec = specs.make_environment_spec(environment)
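# Optional sanity check (not part of the original quickstart): printing the spec
# shows the observation/action/reward specs the agent will be built against.
# print(environment_spec)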
"""
Explanation: Load an environment
We can now load an environment. In what follows we'll create an environment and grab the environment's specifications.
End of explanation
"""
#@title Build agent networks
# Get total number of action dimensions from action spec.
num_dimensions = np.prod(environment_spec.actions.shape, dtype=int)
# Create the shared observation network; here simply a state-less operation.
observation_network = tf2_utils.batch_concat
# Create the deterministic policy network.
policy_network = snt.Sequential([
networks.LayerNormMLP((256, 256, 256), activate_final=True),
networks.NearZeroInitializedLinear(num_dimensions),
networks.TanhToSpec(environment_spec.actions),
])
# Create the distributional critic network.
critic_network = snt.Sequential([
# The multiplexer concatenates the observations/actions.
networks.CriticMultiplexer(),
networks.LayerNormMLP((512, 512, 256), activate_final=True),
networks.DiscreteValuedHead(vmin=-150., vmax=150., num_atoms=51),
])
# Create a logger for the agent and environment loop.
agent_logger = loggers.TerminalLogger(label='agent', time_delta=10.)
env_loop_logger = loggers.TerminalLogger(label='env_loop', time_delta=10.)
# Create the D4PG agent.
agent = d4pg.D4PG(
environment_spec=environment_spec,
policy_network=policy_network,
critic_network=critic_network,
observation_network=observation_network,
sigma=1.0,
logger=agent_logger,
checkpoint=False
)
# Create a loop connecting this agent to the environment created above.
env_loop = environment_loop.EnvironmentLoop(
environment, agent, logger=env_loop_logger)
"""
Explanation: Create a D4PG agent
End of explanation
"""
# Run a `num_episodes` training episodes.
# Rerun this cell until the agent has learned the given task.
env_loop.run(num_episodes=100)
"""
Explanation: Run a training loop
End of explanation
"""
# Create a simple helper function to render a frame from the current state of
# the environment.
if environment_library == 'dm_control':
def render(env):
return env.physics.render(camera_id=0)
elif environment_library == 'gym':
def render(env):
return env.environment.render(mode='rgb_array')
else:
raise ValueError(
"Unknown environment library: {};".format(environment_library) +
"choose among ['dm_control', 'gym'].")
def display_video(frames, filename='temp.mp4'):
"""Save and display video."""
# Write video
with imageio.get_writer(filename, fps=60) as video:
for frame in frames:
video.append_data(frame)
# Read video and display the video
video = open(filename, 'rb').read()
b64_video = base64.b64encode(video)
video_tag = ('<video width="320" height="240" controls alt="test" '
'src="data:video/mp4;base64,{0}">').format(b64_video.decode())
return IPython.display.HTML(video_tag)
"""
Explanation: Visualize an evaluation loop
Helper functions for rendering and visualization
End of explanation
"""
timestep = environment.reset()
frames = [render(environment)]
while not timestep.last():
# Simple environment loop.
action = agent.select_action(timestep.observation)
timestep = environment.step(action)
# Render the scene and add it to the frame stack.
frames.append(render(environment))
# Save and display a video of the behaviour.
display_video(np.array(frames))
"""
Explanation: Run and visualize the agent in the environment for an episode
End of explanation
"""
|
analysiscenter/dataset
|
examples/tutorials/05_creating_CNN.ipynb
|
apache-2.0
|
import sys
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import PIL
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
# the following line is not required if BatchFlow is installed as a python package.
sys.path.append('../..')
from batchflow import D, B, V, C, R, P
from batchflow.utils import plot_images
from batchflow.opensets import MNIST
from batchflow.models.tf import TFModel
from batchflow.models.torch import TorchModel
from batchflow.models.metrics import ClassificationMetrics
plt.style.use('ggplot')
"""
Explanation: Convolutional Neural Networks With BatchFlow
Now it's time to talk about convolutional neural networks and in this notebook you will find out how to do:
* data augmentation;
* early stopping.
End of explanation
"""
mnist = MNIST()
"""
Explanation: You don't need to implement a MNIST dataset. It is already done for you.
End of explanation
"""
config = {
'model': TorchModel,
'channels': 'first'}
# or for TensorFlow model
# config = {
# 'model': TFModel,
# 'channels': 'last'}
"""
Explanation: We can use deep learning frameworks such as TensorFlow or PyTorch to make a neural network. These frameworks have a lot of differences under the hood. Batchflow allows us not to dive deep into each of them and use the same model configuration, thereby allowing us to build framework-agnostic models.
But before that, we should define the model class 'model' and the channels position 'channels' (for TensorFlow models - 'last', for PyTorch models - 'first') in the config.
There are also predefined models of both frameworks. You can use them without additional configuration.
Model configuration
End of explanation
"""
model_config = {
'inputs/images/shape': B.image_shape,
'inputs/labels/classes': D.num_classes,
'initial_block/inputs': 'images',
'body': {'layout': 'cna cna cna',
'filters': [16, 32, 64],
'kernel_size': [7, 5, 3],
'strides': 2},
'head': {'layout': 'Pf',
'units': 10},
'loss': 'ce',
'optimizer': 'Adam',
'output': dict(predicted=['proba', 'labels'])
}
"""
Explanation: As we already learned from the previous tutorials, first of all you have to define model configuration and create train and test pipelines.
A little bit about the structure of batchflow model:
* initial_block - block containing the input layers;
* body - the main part of the model;
* head - outputs layers, like global average pooling or dense layers.
Let's create a dict with configuration for our model — model_config. This dict is used when model is initialized. You can override default parameters or add new parameters by typing in a model_config key like 'body/layout' and params to this key. Similar way use it in the key 'initial_block/inputs' or 'head/units'.
The main parameter of each architecture is 'layout'. It is a sequence of letters, each letter meaning operation. For example, operations in our model:
* c - convolution layer,
* b - batch normalization,
* a - activation,
* P - global pooling,
* f - dense layer (fully connected).
In our configuration, 'body/filters' and 'body/kernel_size' are lists whose length equals the number of convolutions; they store individual parameters for each convolution. 'body/strides' is an integer, so the same value is used for all convolutional layers.
You can read more in the docs.
End of explanation
"""
def custom_filter(image, kernel_weights=None):
""" Apply filter with custom kernel to image
Parameters
----------
kernel_weights: np.array
Weights of kernel.
Returns
-------
filtered image """
if kernel_weights is None:
kernel_weights = np.ones((3,3))
kernel_weights[1][1] = 10
kernel = PIL.ImageFilter.Kernel(kernel_weights.shape, kernel_weights.ravel())
return image.filter(kernel)
"""
Explanation: Train pipeline
We define our custom function for data augmentation.
End of explanation
"""
train_pipeline = (
mnist.train.p
.init_variable('loss_history', default=[])
.init_model('dynamic', C('model'), 'conv', config=model_config)
.apply_transform(custom_filter, src='images', p=0.8)
.shift(offset=P(R('randint', 8, size=2)), p=0.8)
.rotate(angle=P(R('uniform', -10, 10)), p=0.8)
.scale(factor=P(R('uniform', 0.8, 1.2, size=R([1, 2]))), preserve_shape=True, p=0.8)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.train_model('conv', fetches='loss', images=B('images'), targets=B('labels'),
save_to=V('loss_history', mode='a'))
) << config
"""
Explanation: When the config is defined, the next step is to create a pipeline. Note that rotate and scale are methods of the ImagesBatch class. You can see all available augmentations in the images tutorial.
In contrast to them, apply_transform is a method of the base Batch class. It is worth mentioning because it runs our custom_filter function in parallel. Read the docs for more about the parallel method.
End of explanation
"""
validation_pipeline = (
mnist.test.p
.init_variable('predictions')
.init_variable('metrics', default=None)
.import_model('conv', train_pipeline)
.apply_transform(custom_filter, src='images', p=0.8)
.shift(offset=P(R('randint', 8, size=2)), p=0.8)
.rotate(angle=P(R('uniform', -10, 10)), p=0.8)
.scale(factor=P(R('uniform', 0.8, 1.2, size=R([1, 2]))), preserve_shape=True, p=0.8)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.predict_model('conv', images=B('images'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics(ClassificationMetrics, targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a'))
) << config
"""
Explanation: Validation pipeline
Testing on the augmented data
End of explanation
"""
MAX_ITER = 500
FREQUENCY = N_LAST = 20
batch_size = 128
for curr_iter in tqdm(range(1, MAX_ITER + 1)):
train_pipeline.next_batch(batch_size)
validation_pipeline.next_batch(batch_size)
if curr_iter % FREQUENCY == 0:
metrics = validation_pipeline.v('metrics')
accuracy = metrics[-N_LAST:].evaluate('accuracy')
#Early stopping
if accuracy > 0.9:
print('Early stop on {} iteration. Accuracy: {}'.format(curr_iter, accuracy))
break
"""
Explanation: Training process
We introduce early stopping to terminate model training once the average accuracy over the last few validation batches exceeds 90 percent.
End of explanation
"""
plt.figure(figsize=(15, 5))
plt.plot(train_pipeline.v('loss_history'))
plt.xlabel("Iterations"), plt.ylabel("Loss")
plt.show()
"""
Explanation: Take a look at the loss history during training.
End of explanation
"""
inference_pipeline = (mnist.test.p
.init_variables('proba', 'labels')
.import_model('conv', train_pipeline)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.predict_model('conv', images=B('images'),
fetches=['predicted_proba', 'predicted_labels'],
save_to=[V('proba'), V('labels')])) << config
"""
Explanation: Results
Our network is ready for inference. Now we don't use data augmentations. Let's take a look at the predictions.
End of explanation
"""
batch = inference_pipeline.next_batch(12, shuffle=True)
plot_images(np.squeeze(batch.images), batch.labels,
batch.pipeline.v('proba'), ncols=4, figsize=(30, 35))
"""
Explanation: It's always interesting to look at the images, so let's draw them.
End of explanation
"""
|
BryanCutler/spark
|
python/docs/source/getting_started/quickstart.ipynb
|
apache-2.0
|
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
"""
Explanation: Quickstart
This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of RDDs. When Spark transforms data, it does not immediately compute the transformation but plans how to compute later. When actions such as collect() are explicitly called, the computation starts.
This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself on a live notebook here.
There is also other useful information in Apache Spark documentation site, see the latest version of Spark SQL and DataFrames, RDD Programming Guide, Structured Streaming Programming Guide, Spark Streaming Programming Guide and Machine Learning Library (MLlib) Guide.
PySpark applications start with initializing SparkSession which is the entry point of PySpark as below. In case of running it in PySpark shell via <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users.
End of explanation
"""
from datetime import datetime, date
import pandas as pd
from pyspark.sql import Row
df = spark.createDataFrame([
Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)),
Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)),
Row(a=4, b=5., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0))
])
df
"""
Explanation: DataFrame Creation
A PySpark DataFrame can be created via pyspark.sql.SparkSession.createDataFrame typically by passing a list of lists, tuples, dictionaries and pyspark.sql.Rows, a pandas DataFrame and an RDD consisting of such a list.
pyspark.sql.SparkSession.createDataFrame takes the schema argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data.
Firstly, you can create a PySpark DataFrame from a list of rows
End of explanation
"""
df = spark.createDataFrame([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
], schema='a long, b double, c string, d date, e timestamp')
df
"""
Explanation: Create a PySpark DataFrame with an explicit schema.
End of explanation
"""
pandas_df = pd.DataFrame({
'a': [1, 2, 3],
'b': [2., 3., 4.],
'c': ['string1', 'string2', 'string3'],
'd': [date(2000, 1, 1), date(2000, 2, 1), date(2000, 3, 1)],
'e': [datetime(2000, 1, 1, 12, 0), datetime(2000, 1, 2, 12, 0), datetime(2000, 1, 3, 12, 0)]
})
df = spark.createDataFrame(pandas_df)
df
"""
Explanation: Create a PySpark DataFrame from a pandas DataFrame
End of explanation
"""
rdd = spark.sparkContext.parallelize([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
])
df = spark.createDataFrame(rdd, schema=['a', 'b', 'c', 'd', 'e'])
df
"""
Explanation: Create a PySpark DataFrame from an RDD consisting of a list of tuples.
End of explanation
"""
# All DataFrames above yield the same results.
df.show()
df.printSchema()
"""
Explanation: The DataFrames created above all have the same results and schema.
End of explanation
"""
df.show(1)
"""
Explanation: Viewing Data
The top rows of a DataFrame can be displayed using DataFrame.show().
End of explanation
"""
spark.conf.set('spark.sql.repl.eagerEval.enabled', True)
df
"""
Explanation: Alternatively, you can enable spark.sql.repl.eagerEval.enabled configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via spark.sql.repl.eagerEval.maxNumRows configuration.
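For instance, the number of rows shown eagerly can be capped like this (a minimal sketch; the row count below is an arbitrary illustrative choice, not a recommended value):
```python
# Cap eagerly-displayed output at 10 rows (arbitrary illustrative value).
spark.conf.set('spark.sql.repl.eagerEval.maxNumRows', 10)
df
```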
End of explanation
"""
df.show(1, vertical=True)
"""
Explanation: The rows can also be shown vertically. This is useful when rows are too long to show horizontally.
End of explanation
"""
df.columns
df.printSchema()
"""
Explanation: You can see the DataFrame's schema and column names as follows:
End of explanation
"""
df.select("a", "b", "c").describe().show()
"""
Explanation: Show the summary of the DataFrame
End of explanation
"""
df.collect()
"""
Explanation: DataFrame.collect() collects the distributed data to the driver side as the local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side.
End of explanation
"""
df.take(1)
"""
Explanation: In order to avoid throwing an out-of-memory exception, use DataFrame.take() or DataFrame.tail().
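For instance, the tail variant collects rows from the end of the DataFrame instead of the head (a small sketch using the df defined above):
```python
# Collect only the last row to the driver.
df.tail(1)
```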
End of explanation
"""
df.toPandas()
"""
Explanation: PySpark DataFrame also provides the conversion back to a pandas DataFrame to leverage pandas APIs. Note that toPandas also collects all data into the driver, which can easily cause an out-of-memory error when the data is too large to fit on the driver side.
End of explanation
"""
df.a
"""
Explanation: Selecting and Accessing Data
PySpark DataFrame is lazily evaluated, so simply selecting a column does not trigger the computation; it returns a Column instance.
End of explanation
"""
from pyspark.sql import Column
from pyspark.sql.functions import upper
type(df.c) == type(upper(df.c)) == type(df.c.isNull())
"""
Explanation: In fact, most column-wise operations return Columns.
End of explanation
"""
df.select(df.c).show()
"""
Explanation: These Columns can be used to select columns from a DataFrame. For example, DataFrame.select() takes Column instances and returns another DataFrame.
End of explanation
"""
df.withColumn('upper_c', upper(df.c)).show()
"""
Explanation: Assign a new Column instance.
End of explanation
"""
df.filter(df.a == 1).show()
"""
Explanation: To select a subset of rows, use DataFrame.filter().
End of explanation
"""
import pandas
from pyspark.sql.functions import pandas_udf
@pandas_udf('long')
def pandas_plus_one(series: pd.Series) -> pd.Series:
# Simply plus one by using pandas Series.
return series + 1
df.select(pandas_plus_one(df.a)).show()
"""
Explanation: Applying a Function
PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also the latest Pandas UDFs and Pandas Function APIs. For instance, the example below allows users to directly use the APIs in a pandas Series within Python native function.
End of explanation
"""
def pandas_filter_func(iterator):
for pandas_df in iterator:
yield pandas_df[pandas_df.a == 1]
df.mapInPandas(pandas_filter_func, schema=df.schema).show()
"""
Explanation: Another example is DataFrame.mapInPandas, which allows users to directly use the APIs of a pandas DataFrame without any restriction such as on the result length.
End of explanation
"""
df = spark.createDataFrame([
['red', 'banana', 1, 10], ['blue', 'banana', 2, 20], ['red', 'carrot', 3, 30],
['blue', 'grape', 4, 40], ['red', 'carrot', 5, 50], ['black', 'carrot', 6, 60],
['red', 'banana', 7, 70], ['red', 'grape', 8, 80]], schema=['color', 'fruit', 'v1', 'v2'])
df.show()
"""
Explanation: Grouping Data
PySpark DataFrame also provides a way of handling grouped data by using the common split-apply-combine strategy.
It groups the data by a certain condition, applies a function to each group, and then combines them back into the DataFrame.
End of explanation
"""
df.groupby('color').avg().show()
"""
Explanation: Grouping and then applying the avg() function to the resulting groups.
End of explanation
"""
def plus_mean(pandas_df):
return pandas_df.assign(v1=pandas_df.v1 - pandas_df.v1.mean())
df.groupby('color').applyInPandas(plus_mean, schema=df.schema).show()
"""
Explanation: You can also apply a Python native function against each group by using pandas APIs.
End of explanation
"""
df1 = spark.createDataFrame(
[(20000101, 1, 1.0), (20000101, 2, 2.0), (20000102, 1, 3.0), (20000102, 2, 4.0)],
('time', 'id', 'v1'))
df2 = spark.createDataFrame(
[(20000101, 1, 'x'), (20000101, 2, 'y')],
('time', 'id', 'v2'))
def asof_join(l, r):
return pd.merge_asof(l, r, on='time', by='id')
df1.groupby('id').cogroup(df2.groupby('id')).applyInPandas(
asof_join, schema='time int, id int, v1 double, v2 string').show()
"""
Explanation: Co-grouping and applying a function.
End of explanation
"""
df.write.csv('foo.csv', header=True)
spark.read.csv('foo.csv', header=True).show()
"""
Explanation: Getting Data in/out
CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats that are faster to read and write.
There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest Spark SQL, DataFrames and Datasets Guide in Apache Spark documentation.
CSV
End of explanation
"""
df.write.parquet('bar.parquet')
spark.read.parquet('bar.parquet').show()
"""
Explanation: Parquet
End of explanation
"""
df.write.orc('zoo.orc')
spark.read.orc('zoo.orc').show()
"""
Explanation: ORC
End of explanation
"""
df.createOrReplaceTempView("tableA")
spark.sql("SELECT count(*) from tableA").show()
"""
Explanation: Working with SQL
DataFrame and Spark SQL share the same execution engine, so they can be used interchangeably and seamlessly. For example, you can register the DataFrame as a table and run SQL queries on it as below:
End of explanation
"""
@pandas_udf("integer")
def add_one(s: pd.Series) -> pd.Series:
return s + 1
spark.udf.register("add_one", add_one)
spark.sql("SELECT add_one(v1) FROM tableA").show()
"""
Explanation: In addition, UDFs can be registered and invoked in SQL out of the box:
End of explanation
"""
from pyspark.sql.functions import expr
df.selectExpr('add_one(v1)').show()
df.select(expr('count(*)') > 0).show()
"""
Explanation: These SQL expressions can directly be mixed and used as PySpark columns.
End of explanation
"""
|
rainyear/pytips
|
Tips/2016-04-08-Descriptor.ipynb
|
mit
|
a = 1
b = 2
print("a + b = {}".format(a+b))
# 相当于
print("a.__add__(b) = {}".format(a.__add__(b)))
"""
Explanation: Python Descriptors
This post is mainly about three commonly used built-in methods: property(), staticmethod(), and classmethod().
In the design of the Python language, ordinary syntactic operations are ultimately turned into method calls, for example:
End of explanation
"""
class Int:
ctype = "Class::Int"
def __init__(self, val):
self._val = val
a = Int(1)
print(a.ctype)
"""
Explanation: A descriptor in Python is the protocol that turns getting, setting, and deleting an object's attributes into method calls:
```py
descr.get(self, obj, type=None) --> value
descr.set(self, obj, value) --> None
descr.delete(self, obj) --> None
```
For example, to get an attribute of an object, we can access it via o.x:
End of explanation
"""
class Int:
ctype = "Class::Int"
def __init__(self, val):
self._val = val
def __getattribute__(self, name):
print("👿 doesn't want to give `{}' to you!".format(name))
return "🐍"
a = Int(2)
print(a.ctype)
"""
Explanation: Looking up an attribute's value with the . syntax actually calls the object.__getattribute__(self, name) method:
End of explanation
"""
class Str:
def __init__(self, val):
self._val = val
def __get__(self, name, ctype=None):
print("You can __get__ anything from here!")
return self._val
class Int:
ctype = Str("Class::Int")
def __init__(self, val):
self._val = val
def __getattribute__(self, name):
return type(self).__dict__[name].__get__(None, type(self))
a = Int(2)
print(a.ctype)
"""
Explanation: The __getattribute__(self, name) method here essentially converts the dot-style attribute lookup into the descr.__get__(self, key) call defined by the descriptor protocol:
End of explanation
"""
class Str:
def __init__(self, val):
self._val = val
def __get__(self, name, ctype=None):
print("You can __get__ anything from here!")
return self._val
def __set__(self, name, val):
print("You can __set__ anything to me!")
self._val = val
class Int:
ctype = Str("Class::Int")
def __init__(self, val):
self._val = val
a = Int(3)
print(a.ctype)
a.ctype = "Class::Float"
print(a.ctype)
"""
Explanation: Here a.ctype = (Int.__dict__['ctype']).__get__(None, Int), i.e. the value of the ctype attribute is obtained through the descriptor. In the same way, you can also set an attribute's value via descr.__set__(self, obj, val):
End of explanation
"""
class Int:
def __init__(self, val):
self._val = val
self._ctype = None
def get_ctype(self):
print("INFO: You can get `ctype`")
return self._ctype
def set_ctype(self, val):
print("INFO: You're setting `ctype` =", val)
self._ctype=val
ctype = property(fget=get_ctype, fset=set_ctype, doc="Property `ctype`")
a = Int(4)
print(a.ctype)
a.ctype = "Class::Int"
print(a.ctype)
"""
Explanation: Turning these get and set operations into method calls gives us a way to slip extra behaviour into them as they happen. Something this useful has, of course, already joined the deluxe line-up of built-in functions, namely the familiar
property()
classmethod()
staticmethod()
property
The property(fget=None, fset=None, fdel=None, doc=None) method simplifies the operations above:
End of explanation
"""
class Int:
_ctype = None
def __init__(self, val):
self._val = val
@property
def ctype(self):
print("INFO: You can get `ctype` from me!")
return self._ctype
@ctype.setter
def ctype(self, val):
print("INFO: You're setting `ctype` =", val)
self._ctype = val
a = Int(5)
print(a.ctype)
a.ctype = "Class::Int"
print(a.ctype)
"""
Explanation: Obviously, a more convenient way is to use property as a decorator:
End of explanation
"""
class Int:
def __init__(self, val):
self._val = val
def _get_ctype(self=None):
print("INFO: You can get `ctype` from here!")
return "Class::Int"
@staticmethod
def get_ctype():
print("INFO: You can get `ctype` from here!")
return "Class::StaticInt"
a = Int(6)
print(a._get_ctype())
print(Int._get_ctype())
print(a.get_ctype())
print(Int.get_ctype())
"""
Explanation: staticmethod & classmethod
As the names suggest, property covers all the operations on attributes; if we want to work with methods of a class instead, we need staticmethod and classmethod. staticmethod turns a method into a static method, i.e. one accessible from both the class and its instances. Without staticmethod we could achieve the same thing with the awkward workaround below:
End of explanation
"""
class Int:
_ctype = ""
def __init__(self, val):
self._val = val
@classmethod
def set_ctype(klass, t):
klass._ctype = t
return "{}.ctype = {}".format(klass.__name__, t)
a = Int(7)
print(a.set_ctype("Class::Int"))
print(Int.set_ctype("Class::Float"))
b = Int(8)
print(b._ctype)
"""
Explanation: As you can see, a static method is independent of both the class and the instance, so it no longer needs (and cannot take) the self argument; conversely, when we need the method to keep a reference to the class (rather than the instance), we use classmethod:
End of explanation
"""
class Property(object):
"Emulate PyProperty_Type() in Objects/descrobject.c"
def __init__(self, fget=None, fset=None, fdel=None, doc=None):
self.fget = fget
self.fset = fset
self.fdel = fdel
if doc is None and fget is not None:
doc = fget.__doc__
self.__doc__ = doc
def __get__(self, obj, objtype=None):
if obj is None:
return self
if self.fget is None:
raise AttributeError("unreadable attribute")
return self.fget(obj)
def __set__(self, obj, value):
if self.fset is None:
raise AttributeError("can't set attribute")
self.fset(obj, value)
def __delete__(self, obj):
if self.fdel is None:
raise AttributeError("can't delete attribute")
self.fdel(obj)
def getter(self, fget):
return type(self)(fget, self.fset, self.fdel, self.__doc__)
def setter(self, fset):
return type(self)(self.fget, fset, self.fdel, self.__doc__)
def deleter(self, fdel):
return type(self)(self.fget, self.fset, fdel, self.__doc__)
class StaticMethod(object):
"Emulate PyStaticMethod_Type() in Objects/funcobject.c"
def __init__(self, f):
self.f = f
def __get__(self, obj, objtype=None):
return self.f
class ClassMethod(object):
"Emulate PyClassMethod_Type() in Objects/funcobject.c"
def __init__(self, f):
self.f = f
def __get__(self, obj, klass=None):
if klass is None:
klass = type(obj)
def newfunc(*args):
return self.f(klass, *args)
return newfunc
"""
Explanation: Summary
Python's descriptors define a rule for implementing attribute (and method) access, assignment, and related operations through method calls; this rule makes it easy to reach into a program's internals and take control. property/staticmethod/classmethod are therefore implemented at a lower level in Python (e.g. in C for CPython). If you want to dig deeper into how they work, see the tutorial in the reference link, which includes Python implementations of these three built-in methods; I have copied them over here for easy reference.
References
Descriptor HowTo Guide
End of explanation
"""
|
wmfschneider/CHE30324
|
Homework/HW5-soln.ipynb
|
gpl-3.0
|
import numpy as np
import matplotlib.pyplot as plt
E = []
l = 1.4e-10 #m
hbar = 1.05457e-34 #J*s
m = 9.109e-31 #kg
N = [1,3,5,7,9] #N = number of C-C bonds
for n in range (1,7):
for i in N:
e = (n**2*np.pi**2*hbar**2*6.2415e18)/(2*m*(i*l)**2)
E.append(e)
plt.scatter(N,E[0:5], label = "n=1")
plt.scatter(N,E[5:10], label = "n=2")
plt.scatter(N,E[10:15], label = "n=3")
plt.scatter(N,E[15:20], label = "n=4")
plt.scatter(N,E[20:25], label = "n=5")
plt.scatter(N,E[25:30], label = "n=6")
plt.xticks(N, ["Ethylene", "Butadiene", "Hexatriene", "Octatriene", "Decapentaene"], rotation = 'vertical')
plt.ylabel('Energy (eV)')
plt.title('Energies of n = 1-6 Particle in a Box')
plt.legend()
plt.show()
"""
Explanation: Chem 30324, Spring 2019, Homework 5
Due February 25, 2020
Real-world particle-in-a-box.
A one-dimensional particle-in-a-box is a simple but plausible model for the π electrons of a conjugated alkene, like butadiene ($C_4H_6$, shown here). Suppose all the C–C bonds in a polyene are 1.4 Å long and the polyenes are perfectly linear.
<img src="https://github.com/wmfschneider/CHE30324/blob/master/Homework/imgs/HW5-1.png?raw=1" width="360">
1. Plot out the energies of the $n = 1 – 6$ particle-in-a-box states for ethylene (2 carbon chain), butadiene (4 carbon chain), hexatriene (6 carbon chain), octatetraene (8 carbon chain), and decapentaene (10 carbon chain). What happens to the spacing between energy levels as the molecule gets longer?
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
l = 1.4*3 #Angstrom
x = np.linspace(0,l,100)
psi = (2/l)**.5*np.sin(2*np.pi*x/l) #normalized wave function
plt.plot(x,psi,label = '$\Psi_2(x)$')
plt.plot(x,psi**2,label = '$|\Psi_2(x)|^2$')
plt.xlim(0,l)
plt.xlabel('$x(\AA)$')
plt.ylabel('$\Psi_2(x)(\AA^{-1/2})$, $|\Psi_2(x)|^2(\AA^{-1})$')
plt.title('Wavefunction and Probability Distribution of n=2 Butadiene in a Box')
plt.axhline(y=0, color = 'k', linestyle = '--')
plt.axvline(x=l/4, color = 'k', linestyle = '--')
plt.axvline(x=3*l/4, color = 'k', linestyle = '--')
plt.annotate('Most Probable\n Location',xy=(l/4,0),xytext=(1.1,.25), arrowprops = dict(facecolor = 'black'))
plt.annotate('Most Probable Location',xy=(3*l/4,0),xytext=(1.5,.65), arrowprops = dict(facecolor = 'black'))
plt.annotate('Node Location and\nAverage Location',xy=(l/2,0),xytext=(1.5,-.3), arrowprops = dict(facecolor = 'black'))
plt.legend()
plt.show()
"""
Explanation: 2. Plot out the normalized $n = 2$ particle-in-a-box wavefunction for an electron in butadiene and the normalized $n = 2$ probability distribution. Indicate on the plots the most probable location(s) of the electron, the average location of the electron, and the positions of any nodes.
End of explanation
"""
import numpy as np
l = 3*1.4e-10 #m, length of the box
hbar = 1.05457e-34 #J*s
me = 9.109e-31 #kg
E13 = ((3**2-1**2)*np.pi**2*hbar**2/2/me/l**2)*6.2415e18
E23 = ((3**2-2**2)*np.pi**2*hbar**2/2/me/l**2)*6.2415e18
lambda13 = 1240/E13 #nm
lambda23 = 1240/E23 #nm
print('From n=1 to n=3, light must have wavelength = {0:.2f}nm. \nFrom n=2 to n=3, light must have wavelength = {1:.2f}nm.'.format(lambda13,lambda23))
"""
Explanation: 3. Butadiene has 4 π electrons, and we will learn later that in its lowest energy state, two of these are in the $n = 1$ and two in the $n = 2$ levels. Compare the wavelength of light (in nm) necessary to promote (“excite”) one electron from either of these levels to the empty $n = 3$ level.
End of explanation
"""
import numpy as np
# Heat of Formation data from NIST
ethylene = 52.4 #kJ/mol
butadiene = 108.8 #kJ/mol
print('According to NIST, the energy of reaction =', butadiene - 2*ethylene, 'kJ/mol.')
l = 1.4e-10 #m, length of C-C bond
hbar = 1.05457e-34 #J*s
me = 9.109e-31 #kg
ethyl_n = 4*1**2 #4 n1 electrons
buta_n = 2*(1**2+2**2) #2 n1 + 1 n2 electrons
Erxn = (buta_n*np.pi**2*hbar**2)/(2*me*(3*l)**2) - (ethyl_n*np.pi**2*hbar**2)/(2*me*l**2) #J/molecule
E_rxn = Erxn*6.022e23/1000 #kJ/mol
print('Using the particle in a box method, the energy of reaction =',round(E_rxn,1),'kJ/mol. ')
print('This model isn\'t perfect because the potential is not zero or infinite in real life, \n and the model ignores interactions between the nuclei and electrons.')
"""
Explanation: 4. The probability of an electron jumping between two energy states by emitting or absorbing light is proportional to the square of the “transition dipole,” given by the integral $\lvert\langle\psi_{initial}\lvert \hat{x}\rvert\psi_{final}\rangle\rvert^2$. Contrast the relative probabilities of an electron jumping from $n = 1$ to $n = 3$ and from $n = 2$ to $n = 3$ levels. Can you propose any general rules about “allowed” and "forbidden" jumps?
$|\langle\psi_1|\hat{x}|\psi_3\rangle|^2$ = $(\int_{0}^{L} \frac{sin(\pi x)}{L} * x * \frac{sin(3\pi x)}{L} dx)^2$
If we assume L = 1 m and integrate, this integral is equal to zero. Therefore, it is "forbidden".
$|\langle\psi_2|\hat{x}|\psi_3\rangle|^2$ = $(\int_{0}^{L} \frac{sin(2\pi x)}{L} * x * \frac{sin(3\pi x)}{L} dx)^2$
If we assume L = 1 m and integrate, this integral is not equal to zero. Therefore, it is "allowed".
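A quick way to check these two integrals is to evaluate them symbolically (a minimal sketch assuming L = 1 as above; the normalization constants are omitted because they cannot change whether the integral vanishes):
```python
# Symbolic check of the two transition-dipole integrals with L = 1.
from sympy import symbols, sin, pi, integrate
x = symbols('x')
print(integrate(sin(pi*x)*x*sin(3*pi*x), (x, 0, 1)))    # 0, so 1 -> 3 is "forbidden"
print(integrate(sin(2*pi*x)*x*sin(3*pi*x), (x, 0, 1)))  # -24/(25*pi**2) != 0, so 2 -> 3 is "allowed"
```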
5. Consider the reaction of two ethylene molecules to form butadiene:
<img src="https://github.com/wmfschneider/CHE30324/blob/master/Homework/imgs/HW5-2.png?raw=1" width="360">
As a very simple estimate, you could take the energy of each molecule as the sum of the energies of its π electrons, allowing only two electrons per energy level. Again taking each C—C bond as 1.4 Å long and treating the π electrons as particles in a box, calculate the total energy of an ethylene and a butadiene molecule within this model (in kJ/mol), and from these calculate the net reaction energy. Compare your results to the experimental reaction enthalpy. How well did the model do?
End of explanation
"""
import numpy as np
MN = 14 #amu
MO = 16 #amu
k = 1594.8 #N/m
l = 1.15077 #Angstroms
conv = 6.022e26 #amu to kg
h = 6.626e-34 # m^2 kg/s
c = 299792458 #m/s, Speed of Light
N = 6.022e23 #molecules/mole
Mred = 1/(1/MN+1/MO) #amu
print('The reduced mass is',round(Mred,2), 'amu.')
vibfreq = 1/(2*np.pi)*np.sqrt(k/Mred*conv/(c**2)/(100**2)) #cm^-1
print('The harmonic vibrational frequency is',round(vibfreq,2), 'cm^-1.')
E0 = .5*h*vibfreq*c*100*N/1000
print('The zero point vibrational energy is', round(E0,2),'kJ/mol.')
"""
Explanation: 6. This particle-in-a-box model has many flaws, not the least of which is that the ends of the polyene “box” are not infinitely high potential walls. In a somewhat better model the π electrons would travel in a finite-depth potential well. State two things that would change from the infinite depth to the finite depth model.
When the wall potential drops from infinite to a finite value:
The number of bound states/levels drops from infinite to finite (the electrons can eventually escape the box at high enough energy).
Electrons can tunnel into the previously forbidden region.
The energies of the bound states decrease.
Quantum mechanics of vibrating NO.
The diatomic nitric oxide (NO) is an unusual and important molecule. It has an odd number of electrons, which is a rarity for a stable molecule. It acts as a signaling molecule in the body, helping to regulate blood pressure, is a primary pollutant from combustion, and is a key constituent of smog. It exists in several isotopic forms, but the most common, ${}^{14}$N= ${}^{16}$O, has a bond length of 1.15077 Å and vibrational force constant of 1594.8 N/m.
7. Compute the reduced mass $\mu$ (amu), harmonic vibrational frequency (cm$^{-1}$), and zero point vibrational energy (kJ/mol) of ${}^{14}$N= ${}^{16}$O. Recall $1/\mu=1/M_\text{N} + 1/M_\text{O}$.
End of explanation
"""
MN = 14 #amu
MO = 16 #amu
k = 1594.8 #N/m
hbar = 1.05457e-34 #J*s
conv = 6.022e26 #amu to kg
l = 1.15077e-10 #m
alpha = (hbar**2/Mred/k*conv)**0.25 #m
rmax = l+alpha #m
rmin = l-alpha #m
print('Classical bond length maximum is %e m.'%(rmax))
print('Classical bond length minimum is %e m.'%(rmin))
"""
Explanation: 8. Calculate the classical minimum and maximum values of the $^{14}$N=$^{16}$O bond length for a molecule in the ground vibrational state. Hint: Calculate the classical limits on $x$, the value of $x$ at which the kinetic energy is 0 and thus the total energy equals the potential energy.
End of explanation
"""
from sympy import *
a = 1 # in this case, a can be any number
x = Symbol('x')
prob_inside = integrate(1/a/sqrt(pi)*exp(-x**2/a**2), (x, -a, a))  # probability within the classical limits
print('The probability of being inside the classical limits is:')
pprint(prob_inside)
print('Therefore, the probability of being outside the classical limits is')
pprint(1 - prob_inside)
print('This is approximately 0.1573.')
"""
Explanation: 9. The normalized ground vibrational wavefunction of N=O can be written
$$\Psi_{\upsilon=0}(x) = \left ({\frac{1}{\alpha\sqrt{\pi}}}\right )^{1/2}e^{-x^2/2\alpha^2}, \quad x = R-R_{eq}, \quad \alpha = \left ({\frac{\hbar^2}{\mu k}}\right )^{1/4}$$
where $x = R-R_{eq}$. Calculate the probability for an ${}^{14}N={}^{16}O$ molecule to have a bond length outside the classical limits. This is an example of quantum mechanical tunneling.
End of explanation
"""
import numpy as np
h = 6.626e-34 # J*s
c = 299792458 #m/s, Speed of Light
T = 273 #K
k = 1.38e-23 #J/K
P = []
for v in [0,1,2,3]:
E = (v+0.5)*h*c*vibfreq*100
P.append(np.exp(-E/k/T))
print('The population of v = [0,1,2,3] is [%.2f,%.2e,%.2e,%.2e]'%(P[0]/sum(P),P[1]/sum(P),P[2]/sum(P),P[3]/sum(P)))
"""
Explanation: 10. The gross selection rule for whether light can excite a vibration of a molecule is that the dipole moment of the molecule must change as it vibrates. Based on this criterion, do you expect NO to exhibit an absorption vibrational spectrum?
NO will exhibit an infrared spectrum. Because the molecule is heteronuclear (two ends are not the same), it has a dipole moment. Stretching the bond will change the dipole moment, so the molecule satisfies the gross selection rule.
11. The specific selection rule for whether light can excite a vibration of a molecule is that $\Delta v = \pm 1$. At ambient temperature, what initial and final vibrational states would contribute most significantly to an NO vibrational spectrum? Justify your answer. (Hint: What does the Boltzmann distribution say about the probability to be in each $\nu$ state?)
At 273 K, the most occupied vibrational state is v = 0. Therefore, it will contribute most significantly to the NO spectrum.
Quantitatively, we can prove this using the Boltzmann distribution:
End of explanation
"""
import numpy as np
del_E = 0.05*1.60218e-19 #J
h = 6.626e-34 #Planck constant in m^2 kg / s
m = 1.66054e-27 #kg
freq = del_E/h #/s
k = (2*np.pi*freq)**2*m #kg/m/s^2
print('Force constant k is ',round(k,2),'N/m')
"""
Explanation: 12. Based on your answers to questions 10 and 11, what do you expect the vibrational spectrum of an ${}^{14}N={}^{16}O$ molecule to look like? If it has a spectrum, in what region of the spectrum does it absorb (e.g., ultraviolet, x-ray, ...)?
There is only one peak, corresponding to the v=0 to v=1 transition. The vibrational frequency is 1904 cm^-1, which is in the infrared (IR) region.
Two-dimensional harmonic oscillator
Imagine an H atom embedded in a two-dimensional sheet of MoS$_2$. The H atom vibrates like a two-dimensional harmonic oscillator with mass 1 amu and force constants $k_x$ and $k_y$ in the two directions.
13. Write down the Schrödinger equation for the vibrating H atom. Remember to include any boundary conditions on the solutions.
$-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,y)}{\partial x^2}-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,y)}{\partial y^2}+\frac{1}{2}k_xx^2\psi(x,y)+\frac{1}{2}k_yy^2\psi(x,y)=E\psi(x,y)$, where $m$ is the mass of the H atom (1 amu).
$\lim_{x\rightarrow\pm\infty} \psi(x,y)=0\qquad \lim_{y\rightarrow\pm\infty} \psi(x,y)=0$
14. The Schrödinger equation is seperable, so the wavefunctions are products of one-dimensional wavefunctions and the eigenenergies are sums of corresponding one-dimensional energies. Derive an expression for the H atom vibrational energy states, assuming $k_x = k_y/4 = k$.
Because it is separable, energies in $x$ and $y$ are additive.
$E = E_x + E_y = (v_x+\frac{1}{2})h\nu_x + (v_y+\frac{1}{2})h\nu_y$
$\nu_x= \frac{1}{2\pi}\sqrt{\frac{k}{m}}\qquad \nu_y= \frac{1}{2\pi}\sqrt{\frac{4k}{m}} = 2\nu_x$
$E = (v_x+\tfrac{1}{2})h\nu + 2(v_y+\tfrac{1}{2})h\nu = (v_x+2v_y+\tfrac{3}{2})h\nu, \quad \nu=\frac{1}{2\pi}\sqrt{\frac{k}{m}}$
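As a quick sanity check of the spacing used in question 15 below, the lowest few levels implied by this formula can be enumerated directly (a hypothetical snippet, not part of the assignment; energies are in units of $h\nu$):
```python
# Enumerate the lowest 2-D oscillator levels, E/(h*nu) = vx + 2*vy + 3/2.
levels = sorted({vx + 2*vy + 1.5 for vx in range(5) for vy in range(5)})
print(levels[:4])  # [1.5, 2.5, 3.5, 4.5]: the gap between the first and second level is h*nu
```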
15. A spectroscopic experiment reveals that the spacing between the first and second energy levels is 0.05 eV. What is $k$, in N/m?
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
|
apache-2.0
|
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
# Install seaborn for pretty visualizations
!pip3 install --quiet seaborn
# Install SentencePiece package
# SentencePiece package is needed for Universal Sentence Encoder Lite. We'll
# use it for all the text processing and sentence feature ID lookup.
!pip3 install --quiet sentencepiece
from absl import logging
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow_hub as hub
import sentencepiece as spm
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
"""
Explanation: Universal Sentence Encoder Lite demo
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
<td> <a href="https://tfhub.dev/google/universal-sentence-encoder-lite/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">TF Hub モデルを参照</a> </td>
</table>
This Colab illustrates how to use the Universal Sentence Encoder Lite for sentence similarity tasks. This module is very similar to the Universal Sentence Encoder, with the only difference being that you need to run SentencePiece processing on your input sentences.
The Universal Sentence Encoder makes getting sentence-level embeddings as easy as it has historically been to look up the embeddings for individual words. The sentence embeddings can then be used not only to compute sentence-level semantic similarity, but also to improve performance on downstream classification tasks with less supervised training data.
Getting started
Setup
End of explanation
"""
module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-lite/2")
input_placeholder = tf.sparse_placeholder(tf.int64, shape=[None, None])
encodings = module(
inputs=dict(
values=input_placeholder.values,
indices=input_placeholder.indices,
dense_shape=input_placeholder.dense_shape))
"""
Explanation: Load the module from TF-Hub
End of explanation
"""
with tf.Session() as sess:
spm_path = sess.run(module(signature="spm_path"))
sp = spm.SentencePieceProcessor()
with tf.io.gfile.GFile(spm_path, mode="rb") as f:
sp.LoadFromSerializedProto(f.read())
print("SentencePiece model loaded at {}.".format(spm_path))
def process_to_IDs_in_sparse_format(sp, sentences):
# An utility method that processes sentences with the sentence piece processor
# 'sp' and returns the results in tf.SparseTensor-similar format:
# (values, indices, dense_shape)
ids = [sp.EncodeAsIds(x) for x in sentences]
max_len = max(len(x) for x in ids)
dense_shape=(len(ids), max_len)
values=[item for sublist in ids for item in sublist]
indices=[[row,col] for row in range(len(ids)) for col in range(len(ids[row]))]
return (values, indices, dense_shape)
"""
Explanation: Load the SentencePiece model from the TF-Hub module
The SentencePiece model is stored in the module's assets. It has to be loaded in order to initialize the processor.
End of explanation
"""
# Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
values, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages)
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
message_embeddings = session.run(
encodings,
feed_dict={input_placeholder.values: values,
input_placeholder.indices: indices,
input_placeholder.dense_shape: dense_shape})
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
"""
Explanation: Test the module with a few examples
End of explanation
"""
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
def run_and_plot(session, input_placeholder, messages):
values, indices, dense_shape = process_to_IDs_in_sparse_format(sp,messages)
message_embeddings = session.run(
encodings,
feed_dict={input_placeholder.values: values,
input_placeholder.indices: indices,
input_placeholder.dense_shape: dense_shape})
plot_similarity(messages, message_embeddings, 90)
"""
Explanation: Semantic Textual Similarity (STS) task example
The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of their encodings.
End of explanation
"""
messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
"Is paleo better than keto?",
# Asking about age
"How old are you?",
"what is your age?",
]
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
run_and_plot(session, input_placeholder, messages)
"""
Explanation: Similarity visualized
Here we show the similarity in a heat map. The final graph is an 11x11 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentences i and j.
End of explanation
"""
import pandas
import scipy
import math
def load_sts_dataset(filename):
# Loads a subset of the STS dataset into a DataFrame. In particular both
# sentences and their human rated similarity score.
sent_pairs = []
with tf.gfile.GFile(filename, "r") as f:
for line in f:
ts = line.strip().split("\t")
# (sent_1, sent_2, similarity_score)
sent_pairs.append((ts[5], ts[6], float(ts[4])))
return pandas.DataFrame(sent_pairs, columns=["sent_1", "sent_2", "sim"])
def download_and_load_sts_data():
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = load_sts_dataset(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"))
sts_test = load_sts_dataset(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"))
return sts_dev, sts_test
sts_dev, sts_test = download_and_load_sts_data()
"""
Explanation: Evaluation: STS (Semantic Textual Similarity) Benchmark
The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements.
Download data
End of explanation
"""
sts_input1 = tf.sparse_placeholder(tf.int64, shape=(None, None))
sts_input2 = tf.sparse_placeholder(tf.int64, shape=(None, None))
# For evaluation we use exactly normalized rather than
# approximately normalized.
sts_encode1 = tf.nn.l2_normalize(
module(
inputs=dict(values=sts_input1.values,
indices=sts_input1.indices,
dense_shape=sts_input1.dense_shape)),
axis=1)
sts_encode2 = tf.nn.l2_normalize(
module(
inputs=dict(values=sts_input2.values,
indices=sts_input2.indices,
dense_shape=sts_input2.dense_shape)),
axis=1)
sim_scores = -tf.acos(tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1))
"""
Explanation: Build the evaluation graph
End of explanation
"""
#@title Choose dataset for benchmark
dataset = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
values1, indices1, dense_shape1 = process_to_IDs_in_sparse_format(sp, dataset['sent_1'].tolist())
values2, indices2, dense_shape2 = process_to_IDs_in_sparse_format(sp, dataset['sent_2'].tolist())
similarity_scores = dataset['sim'].tolist()
def run_sts_benchmark(session):
"""Returns the similarity scores"""
scores = session.run(
sim_scores,
feed_dict={
sts_input1.values: values1,
sts_input1.indices: indices1,
sts_input1.dense_shape: dense_shape1,
sts_input2.values: values2,
sts_input2.indices: indices2,
sts_input2.dense_shape: dense_shape2,
})
return scores
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
scores = run_sts_benchmark(session)
pearson_correlation = scipy.stats.pearsonr(scores, similarity_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
"""
Explanation: Evaluate sentence embeddings
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.20/_downloads/d52b5321a00f5cf4d4be975019fb541b/plot_morph_surface_stc.ipynb
|
bsd-3-clause
|
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os
import mne
from mne.datasets import sample
print(__doc__)
"""
Explanation: Morph surface source estimate
This example demonstrates how to morph an individual subject's
:class:mne.SourceEstimate to a common reference space. We achieve this using
:class:mne.SourceMorph. Pre-computed data will be morphed based on
a spherical representation of the cortex computed using the spherical
registration of FreeSurfer <tut-freesurfer>
(https://surfer.nmr.mgh.harvard.edu/fswiki/SurfaceRegAndTemplates) [1]_. This
transform will be used to morph the surface vertices of the subject towards the
reference vertices. Here we will use 'fsaverage' as a reference space (see
https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage).
The transformation will be applied to the surface source estimate. A plot
depicting the successful morph will be created for the spherical and inflated
surface representation of 'fsaverage', overlaid with the morphed surface
source estimate.
References
.. [1] Greve D. N., Van der Haegen L., Cai Q., Stufflebeam S., Sabuncu M.
R., Fischl B., Brysbaert M.
A Surface-based Analysis of Language Lateralization and Cortical
Asymmetry. Journal of Cognitive Neuroscience 25(9), 1477-1492, 2013.
<div class="alert alert-info"><h4>Note</h4><p>For background information about morphing see `ch_morph`.</p></div>
End of explanation
"""
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
"""
Explanation: Setup paths
End of explanation
"""
# Read stc from file
stc = mne.read_source_estimate(fname_stc, subject='sample')
"""
Explanation: Load example data
End of explanation
"""
morph = mne.compute_source_morph(stc, subject_from='sample',
subject_to='fsaverage',
subjects_dir=subjects_dir)
"""
Explanation: Setting up SourceMorph for SourceEstimate
In MNE surface source estimates represent the source space simply as
lists of vertices (see
tut-source-estimate-class).
This list can either be obtained from
:class:mne.SourceSpaces (src) or from the stc itself.
Since the default spacing (resolution of surface mesh) is 5 and
subject_to is set to 'fsaverage', :class:mne.SourceMorph will use
default ico-5 fsaverage vertices to morph, which are the special
values [np.arange(10242)] * 2.
<div class="alert alert-info"><h4>Note</h4><p>This is not generally true for other subjects! The set of vertices
used for ``fsaverage`` with ico-5 spacing was designed to be
special. ico-5 spacings for other subjects (or other spacings
for fsaverage) must be calculated and will not be consecutive
integers.</p></div>
If src was not defined, the morph will actually not be precomputed, because
we lack the vertices from that we want to compute. Instead the morph will
be set up and when applying it, the actual transformation will be computed on
the fly.
Initialize SourceMorph for SourceEstimate
End of explanation
"""
stc_fsaverage = morph.apply(stc)
"""
Explanation: Apply morph to (Vector) SourceEstimate
The morph will be applied to the source estimate data, by giving it as the
first argument to the morph we computed above.
End of explanation
"""
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# As spherical surface
brain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',
font_size=16)
"""
Explanation: Plot results
End of explanation
"""
brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)
# Add title
brain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',
font_size=16)
"""
Explanation: As inflated surface
End of explanation
"""
stc_fsaverage = mne.compute_source_morph(stc,
subjects_dir=subjects_dir).apply(stc)
"""
Explanation: Reading and writing SourceMorph from and to disk
An instance of SourceMorph can be saved, by calling
:meth:morph.save <mne.SourceMorph.save>.
This method allows for specification of a filename under which the morph
will be save in ".h5" format. If no file extension is provided, "-morph.h5"
will be appended to the respective defined filename::
>>> morph.save('my-file-name')
Reading a saved source morph can be achieved by using
:func:mne.read_source_morph::
>>> morph = mne.read_source_morph('my-file-name-morph.h5')
Once the environment is set up correctly, no information such as
subject_from or subjects_dir needs to be provided, since it can be
inferred from the data, and the morph will go to 'fsaverage' by default. SourceMorph
can further be used without creating an instance and assigning it to a
variable. Instead :func:mne.compute_source_morph and
:meth:mne.SourceMorph.apply can be
easily chained into a handy one-liner. Taking this together the shortest
possible way to morph data directly would be:
End of explanation
"""
|
nbokulich/short-read-tax-assignment
|
ipynb/simulated-community/taxonomy-assignment.ipynb
|
bsd-3-clause
|
from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.simulated_communities import copy_expected_composition
from tax_credit.framework_functions import (parameter_sweep,
generate_per_method_biom_tables,
move_results_to_repository)
"""
Explanation: Taxonomy assignment of simulated communities
This notebook demonstrates how to assign taxonomy to communities simulated from natural compositions. These data are stored in the precomputed-results directory in tax-credit and this notebook does not need to be re-run unless it is being used to test additional simulated communities or taxonomy assignment methods.
End of explanation
"""
# Project directory
project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment/")
# Directory containing reference sequence databases
reference_database_dir = join(project_dir, 'data', 'ref_dbs')
# simulated communities directory
sim_dir = join(project_dir, "data", "simulated-community")
"""
Explanation: First, set the location of the tax-credit repository, the reference databases, and the simulated community directory
End of explanation
"""
dataset_reference_combinations = [
# (community_name, ref_db)
('sake', 'gg_13_8_otus'),
('wine', 'unite_20.11.2016')
]
reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim250.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'unite_20.11.2016' : (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_BITSf-B58S3r_trim250.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv'))}
"""
Explanation: In the following cell, we define the simulated communities that we want to use for taxonomy assignment. The directory for each dataset is located in sim_dir, and contains the files simulated-seqs.fna that were previously generated in the dataset generation notebook.
End of explanation
"""
results_dir = expandvars("$HOME/Desktop/projects/simulated-community/")
"""
Explanation: Assign taxonomy to simulated community sequences
First, set the results directory, where we will put temporary results.
End of explanation
"""
method_parameters_combinations = { # probabalistic classifiers
'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0]},
# global alignment classifiers
'uclust': {'min_consensus_fraction': [0.51, 0.76, 1.0],
'similarity': [0.8, 0.9],
'uclust_max_accepts': [1, 3, 5]},
# local alignment classifiers
'sortmerna': {'sortmerna_e_value': [1.0],
'min_consensus_fraction': [0.51, 0.76, 1.0],
'similarity': [0.8, 0.9],
'sortmerna_best_N_alignments ': [1, 3, 5],
'sortmerna_coverage' : [0.8, 0.9]},
'blast' : {'blast_e_value' : [0.0000000001, 0.001, 1, 1000]}
}
"""
Explanation: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
Assignment Using QIIME 1 or Command-Line Classifiers
Here we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in python-2, we must also activate a separate environment in which QIIME 1 has been installed. If any environmental variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.
End of explanation
"""
command_template = "source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000"
commands = parameter_sweep(sim_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='simulated-seqs.fna')
"""
Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to the following format:
{0} = output directory
{1} = input data
{2} = output destination
{3} = reference taxonomy
{4} = method name
{5} = other parameters
End of explanation
"""
print(len(commands))
commands[0]
"""
Explanation: A quick sanity check...
End of explanation
"""
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
"""
Explanation: ... and finally we are ready to run.
End of explanation
"""
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'simulated-seqs_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, sim_dir, biom_input_fn='simulated-composition.biom')
"""
Explanation: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
"""
precomputed_results_dir = join(project_dir, "data", "precomputed-results", "simulated-community")
method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
"""
Explanation: Move result files to repository
Add results to the short-read-tax-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and method_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
End of explanation
"""
copy_expected_composition(sim_dir, dataset_reference_combinations, precomputed_results_dir)
"""
Explanation: Add expected composition bioms to repository
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
machine-learning/flatten_a_matrix.ipynb
|
mit
|
# Load library
import numpy as np
"""
Explanation: Title: Flatten A Matrix
Slug: flatten_a_matrix
Summary: How to flatten a matrix in Python.
Date: 2017-09-02 12:00
Category: Machine Learning
Tags: Vectors Matrices Arrays
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
"""
Explanation: Create Matrix
End of explanation
"""
# Flatten matrix
matrix.flatten()
"""
Explanation: Flatten Matrix
End of explanation
"""
|
tensorflow/lucid
|
notebooks/feature-visualization/any_number_channels.ipynb
|
apache-2.0
|
import numpy as np
import tensorflow as tf
import lucid.modelzoo.vision_models as models
from lucid.misc.io import show
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
model = models.InceptionV1()
model.load_graphdef()
"""
Explanation: Arbitrary number of channels parametrization
This notebook uses the new param.image parametrization that takes any number of channels.
End of explanation
"""
def arbitrary_channels_to_rgb(*args, channels=None, **kwargs):
channels = channels or 10
full_im = param.image(*args, channels=channels, **kwargs)
r = tf.reduce_mean(full_im[...,:channels//3]**2, axis=-1)
g = tf.reduce_mean(full_im[...,channels//3:2*channels//3]**2, axis=-1)
b = tf.reduce_mean(full_im[...,2*channels//3:]**2, axis=-1)
return tf.stack([r,g,b], axis=-1)
def grayscale_image_to_rgb(*args, **kwargs):
"""Takes same arguments as image"""
output = param.image(*args, channels=1, **kwargs)
return tf.tile(output, (1,1,1,3))
"""
Explanation: Testing params
The following params are introduced to test the new param.image parametrization by going back to three channels for the existing modelzoo models
End of explanation
"""
_ = render.render_vis(model, "mixed4a_pre_relu:476", param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
"""
Explanation: Arbitrary channels parametrization
The arbitrary_channels_to_rgb helper above calls param.image and then reduces the arbitrary number of channels to 3 for visualizing with modelzoo models.
End of explanation
"""
_ = render.render_vis(model, "mixed4a_pre_relu:476", param_f=lambda:grayscale_image_to_rgb(128))
"""
Explanation: Grayscale parametrization
The grayscale_image_to_rgb helper above creates a param.image with a single channel and then tiles it 3 times for visualizing with modelzoo models.
End of explanation
"""
_ = render.render_vis(model, objectives.deepdream("mixed4a_pre_relu"), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
_ = render.render_vis(model, objectives.channel("mixed4a_pre_relu", 360), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
_ = render.render_vis(model, objectives.neuron("mixed4a_pre_relu", 476), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
_ = render.render_vis(model, objectives.deepdream("mixed4a_pre_relu"), param_f=lambda:grayscale_image_to_rgb(128))
_ = render.render_vis(model, objectives.channel("mixed4a_pre_relu", 360), param_f=lambda:grayscale_image_to_rgb(128))
_ = render.render_vis(model, objectives.neuron("mixed4a_pre_relu", 476), param_f=lambda:grayscale_image_to_rgb(128))
"""
Explanation: Testing different objectives
Different objectives applied to both parametrizations.
End of explanation
"""
|
ericmjl/systems-microbiology-hiv
|
Problem Set (Solutions).ipynb
|
mit
|
# This cell loads the data and cleans it for you, and log10 transforms the drug resistance values.
# Remember to run this cell if you want to have the data loaded into memory.
DATA_HANDLE = 'drug_data/hiv-protease-data.csv' # specify the relative path to the protease drug resistance data
N_DATA = 8 # specify the number of columns in the CSV file that are drug resistance measurements.
CONSENSUS = 'sequences/hiv-protease-consensus.fasta' # specify the relative path to the HIV protease consensus sequence
data, drug_cols, feat_cols = cf.read_data(DATA_HANDLE, N_DATA)
consensus_map = cf.read_consensus(CONSENSUS)
data = cf.clean_data(data, feat_cols, consensus_map)
for name in drug_cols:
data[name] = data[name].apply(np.log10)
data.head()
"""
Complete the function below to compute the correlation score.
Use the scipy.stats.pearsonr(x, y) function to find the correlation score between two arrays of things.
You do not need to type the whole name, as I have imported the pearsonr name for you, so you only have to do:
pearsonr(x, y)
Procedure:
1. Select two columns' names to compare.
2. Make sure to drop NaN values. the pearsonr function cannot deal with NaN values.
(Refer to the Lecture notebook if you forgot how to do this.)
3. Pass the data in to pearsonr().
"""
def corr_score(drug1, drug2):
### BEGIN SOLUTION
# Get the subset of data
subset = data[[drug1, drug2]].dropna()
# Return the pearsonr score.
return pearsonr(subset[drug1], subset[drug2])
### END SOLUTION
assert corr_score('IDV', 'FPV') == (0.79921991532901282, 2.6346448659104859e-306)
assert corr_score('ATV', 'FPV') == (0.82009597442033089, 2.5199367322520278e-231)
assert corr_score('NFV', 'DRV') == (0.69148264851159791, 4.0640711263961111e-82)
assert corr_score('LPV', 'SQV') == (0.76682619729899326, 4.2705737581002648e-234)
"""
Explanation: Problem Set on Machine Learning
Problem 1
Identify an academic literature reference that describes the PhenoSense assay. Paste the URL to the PubMed article below, and write a 1-2 sentence summary on what is measured in the assay, and how it relates to drug resistance.
Compare and contrast it with the plaque reduction assay as mentioned in the literature - what would be one advantage of the plaque reduction assay that is lacking in PhenoSense, and vice versa?
Answer
The PhenoSense assay is a luciferase-based assay. Greater luminescence indicates weaker resistance.
Plaque reduction assay: Actually measures viral load, whereas PhenoSense doesn't.
PhenoSense: High throughput and fairly accurate, whereas PRA is labor-intensive.
Problem 2
Write code below to calculate the correlation between two drugs' resistance profiles. Identify the protease drugs for which the two drugs' resistance values are correlated.
Speculate as to why they would be correlated.
End of explanation
"""
# Fill in the code here to clean the data.
def return_cleaned_data(drug_name, data):
# Select the subsets of columns of interest.
# Fade out the drug_name and feat_cols variables
cols_of_interest = []
cols_of_interest.append(drug_name)
cols_of_interest.extend(feat_cols)
subset = data[cols_of_interest].dropna() # fade out .dropna()
Y = subset[drug_name] # fade out drug_name, fade out .apply(np.log10)
X = subset[feat_cols]
# We call on a custom function to binarize the sequence feature matrix.
# You can inspect the code in the custom_funcs.py file.
lb = LabelBinarizer()
lb.fit(list('CHIMSVAGLPTRFYWDNEQK'))
X_binarized = pd.DataFrame()
for col in X.columns:
binarized_cols = lb.transform(X[col])
for i, c in enumerate(lb.classes_):
X_binarized[col + '_' + c] = binarized_cols[:,i]
return X_binarized, Y
X_binarized, Y = return_cleaned_data('FPV', data)
len(X_binarized), len(Y)
num_estimators = [10, 30, 50, 80, 100, 300, 500, 800] # fill in the number of estimators to try here.
models = {'Random Forest':RandomForestRegressor,
'Ada Boost':AdaBoostRegressor,
'Gradient Boost':GradientBoostingRegressor,
'Extra Trees':ExtraTreesRegressor} # fill in the other models here
# Initialize a dictionary to hold the models' MSE values.
mses = dict()
for model_name, model in models.items():
mses[model_name] = dict()
for n in num_estimators:
mses[model_name][n] = 0
# Iterate over the models, and number of estimators.
for model_name, model in models.items():
for n_est in num_estimators:
### Begin Here
print(model_name, n_est)
# Set up the cross-validation iterator
cv_iterator = cv.ShuffleSplit(len(X_binarized), test_size=0.3, n_iter=5)
# Initialize the model
m = model(n_estimators=n_est) # fill in the parameters
# Collect the cross-validation scores. Remember that mse will be negative, and needs to
# be transformed to be positive.
cv_scores = cv.cross_val_score(m, X_binarized, Y, cv=cv_iterator, scoring='mean_squared_error')
### End Here
# Store the mean MSEs.
mses[model_name][n_est] = np.mean(-cv_scores)
# When you're done, run the following cell to make your plot.
pd.DataFrame(mses).plot()
plt.xlabel('Num Estimators')
plt.ylabel('MSE')
"""
Explanation: Question: Which two drugs are most correlated?
Answer: FPV and DRV
Question: Why might they be correlated? (Hint: you can look online for what they look like.)
Answer: They have very similar chemical structures.
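One way to verify which pair is most correlated is to sweep all drug pairs with the corr_score function and drug_cols list defined above (a sketch, not part of the graded solution):
```python
# Rank all drug pairs by the Pearson correlation of their resistance profiles.
from itertools import combinations
pair_scores = {(d1, d2): corr_score(d1, d2)[0] for d1, d2 in combinations(drug_cols, 2)}
print(max(pair_scores, key=pair_scores.get))  # the most correlated pair
```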
Problem 3
Fill in the code below to plot the relationship between number of estimators (X-axis) and the MSE value for each of the estimators.
Try 10, 30, 50, 80, 100, 300, 500 and 800 estimators.
Use the ShuffleSplit iterator with cross-validation.
Use mean of at least 5 cross-validated MSE scores.
End of explanation
"""
# Load in the data and binarize it.
proteases = [s for s in SeqIO.parse('sequences/HIV1-protease.fasta', 'fasta') if len(s) == 99]
alignment = MultipleSeqAlignment(proteases)
proteases_df = pd.DataFrame(np.array([list(rec) for rec in alignment], str))
proteases_df.index = [s.id for s in proteases]
proteases_df.columns = [i for i in range(1, 100)]
X_global = cf.binarize_seqfeature(proteases_df)
# Train your model here, with optimized parameters for best MSE minimization.
### BEGIN
model = RandomForestRegressor() # put your best model here.
model.fit(X_binarized, Y)
preds = model.predict(X_global)
plt.hist(preds)
### END
"""
Explanation: Question: Given the data above, consider this question from the viewpoint of a data scientist/data analyst. What factors do you need to consider when tweaking model parameters?
Answers:
Gains in model accuracy/performance vs. time it takes to run.
Problem 4
Pick the best model from above, and re-train it on the dataset again. Refer to the Lecture notebook for a version of the code that may help here!
Now, use it to make predictions on the global HIV protease dataset.
Plot the global distribution of drug resistance values.
End of explanation
"""
|
grfiv/MNIST
|
svm.scikit/svm_rbf.scikit_random_gridsearch.ipynb
|
mit
|
from __future__ import division
import os, time, math, csv
import cPickle as pickle
import matplotlib.pyplot as plt
import numpy as np
from print_imgs import print_imgs # my own function to print a grid of square images
from sklearn.preprocessing import StandardScaler
from sklearn.utils import shuffle
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import RandomizedSearchCV
from sklearn.metrics import classification_report, confusion_matrix
np.random.seed(seed=1009)
%matplotlib inline
#%qtconsole
"""
Explanation: MNIST digit recognition using SVC with RBF in scikit-learn
> Using RANDOMIZED grid search, find optimal parameters
See Comparing randomized search and grid search for hyperparameter estimation for a discussion of using a randomized grid search rather than an exhaustive one. The statement made there is that "The result in parameter settings is quite similar, while the run time for randomized search is dramatically lower. The performance is slightly worse for the randomized search, though this is most likely a noise effect and would not carry over to a held-out test set."
My process was to iteratively narrow the bounds of the grid search so that fewer duds showed up in the random search. Narrowing the end points and increasing the density can improve precision, but I'm not sure at what point greater precision no longer matters in a stochastic domain, nor am I certain that the C/gamma tradeoff is strictly monotone linear.
End of explanation
"""
file_path = '../data/'
DESKEWED = True
if DESKEWED:
train_img_filename = 'train-images_deskewed.csv'
test_img_filename = 't10k-images_deskewed.csv'
else:
train_img_filename = 'train-images.csv'
test_img_filename = 't10k-images.csv'
train_label_filename = 'train-labels.csv'
test_label_filename = 't10k-labels.csv'
"""
Explanation: Where's the data?
End of explanation
"""
portion = 1.0 # set to 1.0 for all of it, less than 1.0 for less
"""
Explanation: How much of the data will we use?
End of explanation
"""
# read trainX
with open(file_path + train_img_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
trainX = np.ascontiguousarray(data, dtype = np.float64)
if portion < 1.0:
trainX = trainX[:portion*trainX.shape[0]]
# scale trainX
scaler = StandardScaler()
scaler.fit(trainX) # find mean/std for trainX
trainX = scaler.transform(trainX) # scale trainX with trainX mean/std
# read trainY
with open(file_path + train_label_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
trainY = np.ascontiguousarray(data, dtype = np.int8)
if portion < 1.0:
trainY = trainY[:portion*trainY.shape[0]].ravel()
# shuffle trainX & trainY
trainX, trainY = shuffle(trainX, trainY, random_state=0)
print("trainX shape: {0}".format(trainX.shape))
print("trainY shape: {0}\n".format(trainY.shape))
print(trainX.flags)
"""
Explanation: Read the training images and labels
End of explanation
"""
# read testX
with open(file_path + test_img_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
testX = np.ascontiguousarray(data, dtype = np.float64)
if portion < 1.0:
testX = testX[:portion*testX.shape[0]]
# scale testX
testX = scaler.transform(testX) # scale testX with trainX mean/std
# read testY
with open(file_path + test_label_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
testY = np.ascontiguousarray(data, dtype = np.int8)
if portion < 1.0:
testY = testY[:portion*testY.shape[0]].ravel()
# shuffle testX, testY
testX, testY = shuffle(testX, testY, random_state=0)
print("testX shape: {0}".format(testX.shape))
print("testY shape: {0}".format(testY.shape))
"""
Explanation: Read the test images and labels
End of explanation
"""
print_imgs(images = trainX,
actual_labels = trainY.ravel(),
predicted_labels = trainY.ravel(),
starting_index = np.random.randint(0, high=trainY.shape[0]-36, size=1)[0],
size = 6)
"""
Explanation: Use the smaller, fewer images for testing
Print a sample
End of explanation
"""
# default parameters for SVC
# ==========================
default_svc_params = {}
default_svc_params['C'] = 1.0 # penalty
default_svc_params['class_weight'] = None # Set the parameter C of class i to class_weight[i]*C
# set to 'auto' for unbalanced classes
default_svc_params['gamma'] = 0.0 # Kernel coefficient for 'rbf', 'poly' and 'sigmoid'
default_svc_params['kernel'] = 'rbf' # 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable
default_svc_params['shrinking'] = True # Whether to use the shrinking heuristic.
default_svc_params['probability'] = False # Whether to enable probability estimates.
default_svc_params['tol'] = 0.001 # Tolerance for stopping criterion.
default_svc_params['cache_size'] = 200 # size of the kernel cache (in MB).
default_svc_params['max_iter'] = -1 # limit on iterations within solver, or -1 for no limit.
default_svc_params['random_state'] = 1009
default_svc_params['verbose'] = False
default_svc_params['degree'] = 3 # 'poly' only
default_svc_params['coef0'] = 0.0 # 'poly' and 'sigmoid' only
# set parameters for the estimator
svc_params = dict(default_svc_params)
svc_params['cache_size'] = 2000
# the classifier
svc_clf = SVC(**svc_params)
"""
Explanation: SVC Default Parameter Settings
End of explanation
"""
t0 = time.time()
# search grid
# ===========
search_grid = dict(C = np.logspace( 0, 3, 50),
gamma = np.logspace(-5, -3, 50))
# stratified K-Fold indices
# =========================
SKFolds = StratifiedKFold(y = trainY.ravel(),
n_folds = 3,
indices = None,
shuffle = True,
random_state = 1009)
# default parameters for RandomizedSearchCV
# =========================================
default_random_params = {}
default_random_params['scoring'] = None
default_random_params['fit_params'] = None # dict of parameters to pass to the fit method
default_random_params['n_jobs'] = 1
default_random_params['pre_dispatch'] = '2*n_jobs' # memory is copied this many times
# reduce if you're running into memory problems
default_random_params['iid'] = True # assume the folds are iid
default_random_params['refit'] = True # Refit the best estimator with the entire dataset
default_random_params['cv'] = None
default_random_params['verbose'] = 0
default_random_params['random_state'] = None
default_random_params['n_iter'] = 10
# set parameters for the randomized grid search
# =============================================
random_params = dict(default_random_params)
random_params['verbose'] = 1
random_params['random_state'] = 1009
random_params['cv'] = SKFolds
random_params['n_jobs'] = -1 # -1 => use all available cores
# one core per fold
# for each point in the grid
random_params['n_iter'] = 100 # choose this many random combinations of parameters
# from the 'search_grid'
# perform the randomized parameter grid search
# ============================================
random_search = RandomizedSearchCV(estimator = svc_clf,
param_distributions = search_grid,
**random_params)
random_search.fit(trainX, trainY.ravel())
print(random_search)
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
"""
Explanation: RANDOMIZED grid search
End of explanation
"""
from operator import itemgetter
# how many duds?
mean_score_list = [score.mean_validation_score for score in random_search.grid_scores_]
print("\nProportion of random scores below 98%: {0:.2f}\n".format(sum(np.array(mean_score_list)<0.98)/len(mean_score_list)))
# what do the best ones look like?
for score in sorted(random_search.grid_scores_, key=itemgetter(1), reverse=True)[:10]:
print score
"""
Explanation: Analyze the results of the parameter pairs randomly selected
End of explanation
"""
from matplotlib.colors import Normalize
class MidpointNormalize(Normalize):
"""Utility function to move the midpoint of a colormap to be around the values of interest."""
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
# --------------------------------------------------------------------------------
# skip this many parameter values on the display axes
tick_step_size_C = math.ceil(len(search_grid['C']) / 15)
tick_step_size_gamma = math.ceil(len(search_grid['gamma']) / 15)
# create 'heatmap'
# ================
# a C x gamma matrix; initially all zeros (black)
heatmap = np.zeros((len(search_grid['C']), len(search_grid['gamma'])))
# for each score, find the index in 'heatmap' of the 'C' and 'gamma' values
# at that index intersection put the mean score
for score in random_search.grid_scores_:
# index of C and gamma in 'search_grid'
ceeinx = search_grid['C'].tolist().index(score[0]['C'])
gaminx = search_grid['gamma'].tolist().index(score[0]['gamma'])
heatmap[ceeinx, gaminx] = score[1]
# display the heatmap
# ===================
plt.figure(figsize=(10, 8))
plt.subplots_adjust(left=.2, right=0.95, bottom=0.15, top=0.95)
plt.imshow(heatmap, interpolation='nearest', cmap=plt.cm.hot,
norm=MidpointNormalize(vmin=0.2, midpoint=0.92))
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
# label the axes
plt.xticks(np.arange(0, len(search_grid['gamma']), tick_step_size_gamma),
search_grid['gamma'][::tick_step_size_gamma],
rotation=45)
plt.yticks(np.arange(0, len(search_grid['C']), tick_step_size_C),
search_grid['C'][::tick_step_size_C])
# cross hairs
ceeinx = search_grid['C'].tolist().index(random_search.best_params_['C'])
plt.axhline(y=ceeinx)
gaminx = search_grid['gamma'].tolist().index(random_search.best_params_['gamma'])
plt.axvline(x=gaminx)
plt.title('Parameter-pair accuracy')
plt.show()
print("\nThe best parameters are %s\nwith a score of %0.2f, misclass of %0.4f"
% (random_search.best_params_, random_search.best_score_, 1-random_search.best_score_))
"""
Explanation: Heatmap of the accuracy of the C and gamma pairs chosen in the grid search
see http://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html
This script was extensively modified to work with the score results from RandomizedSearchCV
End of explanation
"""
target_names = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
predicted_values = random_search.predict(testX)
y_true, y_pred = testY.ravel(), predicted_values
print(classification_report(y_true, y_pred, target_names=target_names))
def plot_confusion_matrix(cm,
target_names,
title='Proportional Confusion matrix',
cmap=plt.cm.Paired):
"""
given a confusion matrix (cm), make a nice plot
see the skikit-learn documentation for the original done for the iris dataset
"""
plt.figure(figsize=(8, 6))
    plt.imshow(cm / cm.sum(axis=1)[:, np.newaxis], interpolation='nearest', cmap=cmap)  # normalise each row (true class) to proportions
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# --------------------------------------------------------------------------------------------
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(testY)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
"""
Explanation: Predict the test set and analyze the result
End of explanation
"""
t0 = time.time()
from sklearn.learning_curve import learning_curve
from sklearn.cross_validation import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : integer, cross-validation generator, optional
If an integer is passed, it is the number of folds (defaults to 3).
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
"""
plt.figure(figsize=(8, 6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
# --------------------------------------------------------------------------------
C_gamma = "C=" + str(np.round(random_search.best_params_['C'],4)) + \
", gamma=" + str(np.round(random_search.best_params_['gamma'],6))
plot_learning_curve(estimator = random_search.best_estimator_,
title = "Learning Curves (SVM, RBF, " + C_gamma + ")",
X = trainX,
y = trainY.ravel(),
ylim = (0.85, 1.01),
cv = ShuffleSplit(n = trainX.shape[0],
n_iter = 5,
test_size = 0.2,
random_state = 0),
n_jobs = 8)
plt.show()
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
"""
Explanation: Learning Curves
see http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html
The red line shows how well we fit the training data. The larger the score, the lower the bias. We expect the red line to start very near to 1.0 since we ought to be able to fit just a few points very well. We expect the red line to decline slightly since more points to fit requires a more complex model.
The green line shows the accuracy of the predictions of the test set. We expect it to start much lower than the red line but to increase continuously as the amount of training data used to create the model grows. An appropriate algorithm, correctly parameterized, should push the green line higher and higher as we train with more training data. The best case is for the red line to decline only very slightly from 1.0 and for the green line to rise to intersect the red line.
A red line that starts below 1.0 and/or declines steeply indicates bias: a model that does not even fit the data it already knows the answer for. In addition to reviewing whether the algorithm is appropriate and whether it is optimally parameterized, you may consider ways to increase the number of useful predictor variables.
A red line that hugs the top but for which the green line does not rise to meet it indicates overfitting.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/thu/cmip6/models/sandbox-2/landice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
drgmk/sdf
|
examples/explore_results.ipynb
|
mit
|
import requests
import pickle
"""
Explanation: Explore sdf output
sdf generates a large amount of information during fitting. Most of this is saved in a database that isn't visible on the web, and also in pickle files that can be found for each model under the "..." link.
A simpler output is the json files under the "model" link; there is less detail here, but they are sufficient for plotting.
To just explore the output you can probably avoid installing the sdf package.
End of explanation
"""
r = requests.get('http://drgmk.com/sdb/seds/masters/'
'sdb-v2-132436.10-513016.1/public/sdb-v2-132436.10-513016.1-mnest/phoenix_m+modbb_disk_r_.json')
d = r.json()
for k in d.keys():
print(k, type(d[k]))
"""
Explanation: json output
To explore json output we don't need any special packages. Either download directly with requests, or open with the json module.
End of explanation
"""
s = requests.get('http://drgmk.com/sdb/seds/masters/'
'sdb-v2-132436.10-513016.1/public/sdb-v2-132436.10-513016.1-mnest/phoenix_m+modbb_disk_r_.pkl')
r = pickle.loads(s.content)
# print the model component fluxes for the NIRCAM bands
print(f'filter: {r.model_comps}, total')
for i,f in enumerate(r.all_filters):
if 'NIRCAM' in f:
print(f, r.all_comp_phot[:,i], r.all_phot[i])
"""
Explanation: The information contained in the json is largely related to the observational data, e.g. photometry and models in the observed bands.
There are also spectra for each model component.
pickle output
To explore the pickle data we need the pickle package. There is a tonne of information saved here, including fluxes for the models in all bands known to sdf.
End of explanation
"""
|
AllenDowney/ModSim
|
soln/chap20.ipynb
|
gpl-2.0
|
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
"""
Explanation: Chapter 20
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
"""
from modsim import State
init = State(y=381, v=0)
"""
Explanation: So far the differential equations we've worked with have been first
order, which means they involve only first derivatives. In this
chapter, we turn our attention to second order ODEs, which can involve
both first and second derivatives.
We'll revisit the falling penny example from
Chapter xxx, and use run_solve_ivp to find the position and velocity of the penny as it falls, with and without air resistance.
Newton's second law of motion
First order ODEs can be written
$$\frac{dy}{dx} = G(x, y)$$
where $G$ is some function of $x$ and $y$ (see http://modsimpy.com/ode). Second order ODEs can be written
$$\frac{d^2y}{dx^2} = H(x, y, \frac{dy}{dt})$$
where $H$ is a function of $x$, $y$, and $dy/dx$.
In this chapter, we will work with one of the most famous and useful
second order ODEs, Newton's second law of motion:
$$F = m a$$
where $F$ is a force or the total of a set of forces, $m$ is the mass of a moving object, and $a$ is its acceleration.
Newton's law might not look like a differential equation, until we
realize that acceleration, $a$, is the second derivative of position,
$y$, with respect to time, $t$. With the substitution
$$a = \frac{d^2y}{dt^2}$$
Newton's law can be written
$$\frac{d^2y}{dt^2} = F / m$$
And that's definitely a second order ODE.
In general, $F$ can be a function of time, position, and velocity.
Of course, this "law" is really a model in the sense that it is a
simplification of the real world. Although it is often approximately
true:
It only applies if $m$ is constant. If mass depends on time,
position, or velocity, we have to use a more general form of
Newton's law (see http://modsimpy.com/varmass).
It is not a good model for very small things, which are better
described by another model, quantum mechanics.
And it is not a good model for things moving very fast, which are
better described by yet another model, relativistic mechanics.
However, for medium-sized things with constant mass, moving at
medium-sized speeds, Newton's model is extremely useful. If we can
quantify the forces that act on such an object, we can predict how it
will move.
Dropping pennies
As a first example, let's get back to the penny falling from the Empire State Building, which we considered in
Chapter xxx. We will implement two models of this system: first without air resistance, then with.
Given that the Empire State Building is 381 m high, and assuming that
the penny is dropped from a standstill, the initial conditions are:
End of explanation
"""
g = 9.8
"""
Explanation: where y is height above the sidewalk and v is velocity.
The units m and s are from the units object provided by Pint:
The only system parameter is the acceleration of gravity:
End of explanation
"""
t_end = 10
dt = 0.1
"""
Explanation: In addition, we'll specify the duration of the simulation and the step
size:
End of explanation
"""
from modsim import System
system = System(init=init, g=g, t_end=t_end, dt=dt)
"""
Explanation: With these parameters, the number of time steps is 100, which is good
enough for many problems. Once we have a solution, we will increase the
number of steps and see what effect it has on the results.
We need a System object to store the parameters:
End of explanation
"""
def slope_func(t, state, system):
y, v = state
dydt = v
dvdt = -system.g
return dydt, dvdt
"""
Explanation: Now we need a slope function, and here's where things get tricky. As we have seen, run_solve_ivp can solve systems of first order ODEs, but Newton's law is a second order ODE. However, if we recognize that
Velocity, $v$, is the derivative of position, $dy/dt$, and
Acceleration, $a$, is the derivative of velocity, $dv/dt$,
we can rewrite Newton's law as a system of first order ODEs:
$$\frac{dy}{dt} = v$$
$$\frac{dv}{dt} = a$$
And we can translate those
equations into a slope function:
End of explanation
"""
dydt, dvdt = slope_func(0, system.init, system)
print(dydt)
print(dvdt)
"""
Explanation: The first parameter, state, contains the position and velocity of the
penny. The last parameter, system, contains the system parameter g,
which is the magnitude of acceleration due to gravity.
The second parameter, t, is time. It is not used in this slope
function because none of the factors of the model are time dependent. I include it anyway because this function will be called by run_solve_ivp, which always provides the same arguments,
whether they are needed or not.
The rest of the function is a straightforward translation of the
differential equations, with the substitution $a = -g$, which indicates that acceleration due to gravity is in the direction of decreasing $y$. slope_func returns a sequence containing the two derivatives.
Before calling run_solve_ivp, it is a good idea to test the slope
function with the initial conditions:
End of explanation
"""
from modsim import run_solve_ivp
results, details = run_solve_ivp(system, slope_func)
details
results.head()
"""
Explanation: The result is 0 m/s for velocity and 9.8 m/s$^2$ for acceleration. Now we call run_solve_ivp like this:
End of explanation
"""
from modsim import decorate
results.y.plot()
decorate(xlabel='Time (s)',
ylabel='Position (m)')
"""
Explanation: results is a TimeFrame with two columns: y contains the height of
the penny; v contains its velocity.
We can plot the results like this:
End of explanation
"""
t_end = results.index[-1]
results.y[t_end]
"""
Explanation: Since acceleration is constant, velocity increases linearly and position decreases quadratically; as a result, the height curve is a parabola.
The last value of results.y is negative, which means we ran the simulation too long.
End of explanation
"""
from modsim import crossings
t_crossings = crossings(results.y, 0)
t_crossings
"""
Explanation: One way to solve this problem is to use the results to
estimate the time when the penny hits the sidewalk.
The ModSim library provides crossings, which takes a TimeSeries and a value, and returns a sequence of times when the series passes through the value. We can find the time when the height of the penny is 0 like this:
End of explanation
"""
def event_func(t, state, system):
y, v = state
return y
"""
Explanation: The result is an array with a single value, 8.818 s. Now, we could run
the simulation again with t_end = 8.818, but there's a better way.
Events
As an option, run_solve_ivp can take an event function, which
detects an "event", like the penny hitting the sidewalk, and ends the
simulation.
Event functions take the same parameters as slope functions, state,
t, and system. They should return a value that passes through 0
when the event occurs. Here's an event function that detects the penny
hitting the sidewalk:
End of explanation
"""
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details
"""
Explanation: The return value is the height of the penny, y, which passes through
0 when the penny hits the sidewalk.
We pass the event function to run_solve_ivp like this:
End of explanation
"""
t_end = results.index[-1]
t_end
y, v = results.iloc[-1]
print(y)
print(v)
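# A small aside (a sketch, not in the original text): convert the landing speed from m/s
# to km/h, which is where the "more than 300 km/h" figure quoted below comes from.
print(abs(v) * 3600 / 1000, 'km/h')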
"""
Explanation: Then we can get the flight time and final velocity like this:
End of explanation
"""
# Solution
r_0 = 150e9 # 150 million km in m
v_0 = 0
init = State(r=r_0,
v=v_0)
# Solution
radius_earth = 6.37e6 # meters
radius_sun = 696e6 # meters
r_final = radius_sun + radius_earth
r_final
r_0 / r_final
t_end = 1e7 # seconds
system = System(init=init,
G=6.674e-11, # N m^2 / kg^2
m1=1.989e30, # kg
m2=5.972e24, # kg
r_final=radius_sun + radius_earth,
t_end=t_end)
# Solution
def universal_gravitation(state, system):
"""Computes gravitational force.
state: State object with distance r
system: System object with m1, m2, and G
"""
r, v = state
G, m1, m2 = system.G, system.m1, system.m2
force = G * m1 * m2 / r**2
return force
# Solution
universal_gravitation(init, system)
# Solution
def slope_func(t, state, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `m2`
returns: derivatives of y and v
"""
y, v = state
m2 = system.m2
force = universal_gravitation(state, system)
dydt = v
dvdt = -force / m2
return dydt, dvdt
# Solution
slope_func(0, system.init, system)
# Solution
def event_func(t, state, system):
r, v = state
return r - system.r_final
# Solution
event_func(0, init, system)
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details
# Solution
t_event = results.index[-1]
t_event
# Solution
from modsim import units
seconds = t_event * units.second
days = seconds.to(units.day)
# Solution
results.index /= 60 * 60 * 24
# Solution
results.r /= 1e9
# Solution
results.r.plot(label='r')
decorate(xlabel='Time (day)',
ylabel='Distance from sun (million km)')
"""
Explanation: If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.
So it's a good thing there is air resistance.
Summary
But air resistance...
Exercises
Exercise: Here's a question from the web site Ask an Astronomer:
"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."
Use run_solve_ivp to answer this question.
Here are some suggestions about how to proceed:
Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.
When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.
Express your answer in days, and plot the results as millions of kilometers versus days.
If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.
You might also be interested to know that it's actually not that easy to get to the Sun.
End of explanation
"""
%psource crossings
"""
Explanation: Under the hood
solve_ivp
Here is the source code for crossings so you can see what's happening under the hood:
End of explanation
"""
|
DamienIrving/ocean-analysis
|
development/dask_iris.ipynb
|
mit
|
import warnings
warnings.filterwarnings('ignore')
import glob
import iris
from iris.experimental.equalise_cubes import equalise_attributes
import iris.coord_categorisation
infiles = glob.glob('/g/data/ua6/DRSv3/CMIP5/CCSM4/historical/mon/ocean/r1i1p1/thetao/latest/thetao_Omon_CCSM4_historical_r1i1p1_??????-??????.nc')
infiles.sort()
cube_list = iris.load(infiles)
cube_list
equalise_attributes(cube_list)
cube = cube_list.concatenate_cube()
iris.coord_categorisation.add_year(cube, 'time')
"""
Explanation: Using dask with iris
Below I'm attempting to calculate the annual mean timeseries for data sufficiently large that without using dask I get a memory error.
Prepare cube
End of explanation
"""
from dask.distributed import LocalCluster
from dask.distributed import Client
cluster = LocalCluster(n_workers=4)
cluster
client = Client(cluster)
client
test = cube.aggregated_by(['year'], iris.analysis.MEAN)
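# Possible follow-up (a sketch; the output filename is an assumption): save the annual
# means to disk and then shut down the dask workers. If the aggregation was computed
# lazily, the save is what actually evaluates it.
iris.save(test, 'thetao_Omon_CCSM4_historical_r1i1p1_annual_mean.nc')
client.close()
cluster.close()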
"""
Explanation: Using dask for the memory intensive calculation
End of explanation
"""
|
antoinecarme/sklearn_explain
|
doc/sklearn_reason_codes.ipynb
|
bsd-3-clause
|
from sklearn import datasets
import pandas as pd
ds = datasets.load_breast_cancer();
NC = 4
lFeatures = ds.feature_names[0:NC]
df = pd.DataFrame(ds.data[:,0:NC] , columns=lFeatures)
df['TGT'] = ds.target
df.sample(6, random_state=1960)
"""
Explanation: Model Explanation for Classification Models
This document describes the usage of a classification model to provide an explanation for a given prediction.
Model explanation provides the ability to interpret the effect of the predictors on the composition of an individual score. These predictors can then be ranked according to their contribution in the final score (leading to a positive or negative decision).
Model explanation has always been used in credit risk applications in the presence of regulatory settings. The credit company is expected to give the customer the main (top n) reasons why the credit application was rejected (also known as reason codes).
Model explanation was also recently introduced by the European Union’s new General Data Protection Regulation (GDPR, https://arxiv.org/pdf/1606.08813.pdf) to add the possibility to control the increasing use of machine learning algorithms in routine decision-making processes.
The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them.
Sample scikit-learn Classification Model
Here, we will use a scikit-learn classification model on a standard dataset (breast cancer detection model).
The dataset used contains 30 predictor variables (numerical features) and one binary target (dependent variable). For practical reasons, we will restrict our study to the first 4 predictors in this document.
End of explanation
"""
from sklearn.linear_model import *
clf = RidgeClassifier(random_state = 1960)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df[lFeatures].values, df['TGT'].values, test_size=0.2, random_state=1960)
clf.fit(X_train , y_train)
"""
Explanation: For the classification task, we will build a ridge regression model, and train it on a part of the full dataset
End of explanation
"""
coefficients = dict(zip(ds.feature_names, [clf.coef_.ravel()[i] for i in range(clf.coef_.shape[1])]))
df_var_importance = pd.DataFrame()
df_var_importance['variable'] = list(coefficients.keys())
df_var_importance['importance'] = df_var_importance['variable'].apply(coefficients.get)
%matplotlib inline
df_var_importance.plot('variable' , ['importance'], kind='bar')
df_var_importance.head()
"""
Explanation: This is a standard linear model, that assigns a coefficient to each predictor value, these coefficients can be seen as global importance indicators for the predictors.
End of explanation
"""
df['Score'] = clf.decision_function(df[lFeatures].values)
df['Decision'] = clf.predict(df[lFeatures].values)
df.sample(6, random_state=1960)
"""
Explanation: To put it simply, this is a global view of all the individuals. The most important variable is 'mean radius': the higher the radius of the tumor, the higher the score of being malignant. On the opposite side, the higher the 'mean perimeter' is, the lower the score.
Model Explanation
The goal here is to be able, for a given individual, the impact of each predictor on the final score.
For our model, the score is a linear combination of predictor values:
$$ Score = \alpha_1 X_1 + \alpha_2 X_2 + \alpha_3 X_3 + \alpha_4 X_4 + \beta $$
One can see $\alpha_1 X_1$ as the contribution of the predictor $X_1$, $\alpha_2 X_2$ as the contribution of the predictor $X_2$, etc
These contributions can be seen as partial scores and their sum is the final score (used to assign positive or negative decision).
The intercept $\beta$ being constant, it can be ignored when analyzing individual effects.
In scikit-learn, the score is computed by the decision_function method of the classifier; an individual is classified as positive if the score has a positive value.
End of explanation
"""
for col in lFeatures:
lContrib = df[col] * coefficients[col]
df[col + '_Effect'] = lContrib - lContrib.mean()
df.sample(6, random_state=1960)
"""
Explanation: Predictor Effects
Predictor effects describe the impact of specific predictor values on the partial score. For example, some values of a predictor can increase or decrease the partial score (and hence the score) by 10 or more points and change the negative decision to a positive one.
The effect reflects how a specific predictor increases the score (above the mean contribution of this variable).
End of explanation
"""
import numpy as np
reason_codes = np.argsort(df[[col + '_Effect' for col in lFeatures]].values, axis=1)
df_rc = pd.DataFrame(reason_codes, columns=['reason_' + str(NC-c) for c in range(NC)])
df_rc = df_rc[list(reversed(df_rc.columns))]
df = pd.concat([df , df_rc] , axis=1)
for c in range(NC):
df['reason_' + str(c+1)] = df['reason_' + str(c+1)].apply(lambda x : lFeatures[x])
df.sample(6, random_state=1960)
df[['reason_' + str(NC-c) for c in range(NC)]].describe()
"""
Explanation: The previous sample shows that the first individual lost 1.148856 score points due to the feature $X_1$, gained 2.076852 with the feature $X_3$, etc.
Reason Codes
The reason codes are a user-oriented representation of the decision making process. These are the predictors ranked by their effects.
End of explanation
"""
|
snowicecat/umich-eecs445-f16
|
handsOn_lecture17_clustering-mixtures-em/handsOn_lecture17_clustering-mixtures-em.ipynb
|
mit
|
%matplotlib inline
from matplotlib import pyplot as plt;
import matplotlib as mpl;
import numpy as np;
"""
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
End of explanation
"""
def coin_likelihood(roll, bias):
# P(X | Z, theta)
numHeads = roll.count("H");
flips = len(roll);
return pow(bias, numHeads) * pow(1-bias, flips-numHeads);
def coin_marginal_likelihood(rolls, biasA, biasB):
# P(X | theta)
trials = [];
for roll in rolls:
h = roll.count("H");
t = roll.count("T");
likelihoodA = coin_likelihood(roll, biasA);
likelihoodB = coin_likelihood(roll, biasB);
trials.append(np.log(0.5 * (likelihoodA + likelihoodB)));
return sum(trials);
def plot_coin_likelihood(rolls, thetas=None):
# grid
xvals = np.linspace(0.01,0.99,100);
yvals = np.linspace(0.01,0.99,100);
X,Y = np.meshgrid(xvals, yvals);
# compute likelihood
Z = [];
for i,r in enumerate(X):
z = []
for j,c in enumerate(r):
z.append(coin_marginal_likelihood(rolls,c,Y[i][j]));
Z.append(z);
# plot
plt.figure(figsize=(10,8));
C = plt.contour(X,Y,Z,150);
cbar = plt.colorbar(C);
plt.title(r"Likelihood $\log p(\mathcal{X}|\theta_A,\theta_B)$", fontsize=20);
plt.xlabel(r"$\theta_A$", fontsize=20);
plt.ylabel(r"$\theta_B$", fontsize=20);
# plot thetas
if thetas is not None:
thetas = np.array(thetas);
plt.plot(thetas[:,0], thetas[:,1], '-k', lw=2.0);
plt.plot(thetas[:,0], thetas[:,1], 'ok', ms=5.0);
plot_coin_likelihood([ "HTTTHHTHTH", "HHHHTHHHHH",
"HTHHHHHTHH", "HTHTTTHHTT", "THHHTHHHTH"]);
"""
Explanation: Coin Flipping and EM
Let's work our way through the example presented in lecture.
Model and EM theory review
First, let's review the model:
Suppose we have two coins, each with unknown bias $\theta_A$ and $\theta_B$, and collect data in the following way:
Repeat for $n=1, \dots, N$:
Choose a random coin $z_n$.
Flip this same coin $M$ times.
Record only the total number $x_n$ of heads.
The corresponding probabilistic model is a mixture of binomials:
$$
\begin{align}
\theta &= (\theta_A, \theta_B) &
&&\text{fixed coin biases} \
Z_n &\sim \mathrm{Uniform}\{A, B\} & \forall\, n=1,\dots,N
&&\text{coin indicators} \
X_n | Z_n, \theta &\sim \Bin[\theta_{Z_n}, M] & \forall\, n=1,\dots,N
&&\text{head count}
\end{align}
$$
and the corresponding graphical model is:
<img src="coinflip-graphical-model.png">
The complete data log-likelihood for a single trial $(x_n,z_n)$ is
$$
\log p(x_n, z_n | \theta) = \log p(z_n) + \log p(x_n | z_n, \theta)
$$
Here, $P(z_n)=\frac{1}{2}$. The remaining term is
$$
\begin{align}
\log p(x_n | z_n, \theta)
&= \log \binom{M}{x_n} \theta_{z_n}^{x_n} (1-\theta_{z_n})^{M-x_n} \
&= \log \binom{M}{x_n} + x_n \log\theta_{z_n} + (M-x_n)\log(1-\theta_{z_n})
\end{align}
$$
Likelihood Plot
There aren't many latent variables, so we can plot $\log P(\X|\theta_A,\theta_B)$ directly!
End of explanation
"""
def e_step(n_flips, theta_A, theta_B):
"""Produce the expected value for heads_A, tails_A, heads_B, tails_B
over n_flips given the coin biases"""
# Replace dummy values with your implementation
heads_A, tails_A, heads_B, tails_B = n_flips, 0, n_flips, 0
return heads_A, tails_A, heads_B, tails_B
def m_step(heads_A, tails_A, heads_B, tails_B):
"""Produce the values for theta that maximize the expected number of heads/tails"""
# Replace dummy values with your implementation
theta_A, theta_B = 0.5, 0.5
return theta_A, theta_B
def coin_em(n_flips, theta_A=None, theta_B=None, maxiter=10):
# Initial Guess
theta_A = theta_A or random.random();
theta_B = theta_B or random.random();
thetas = [(theta_A, theta_B)];
# Iterate
for c in range(maxiter):
print("#%d:\t%0.2f %0.2f" % (c, theta_A, theta_B));
heads_A, tails_A, heads_B, tails_B = e_step(n_flips, theta_A, theta_B)
theta_A, theta_B = m_step(heads_A, tails_A, heads_B, tails_B)
thetas.append((theta_A,theta_B));
return thetas, (theta_A,theta_B);
rolls = [ "HTTTHHTHTH", "HHHHTHHHHH", "HTHHHHHTHH",
"HTHTTTHHTT", "THHHTHHHTH" ];
thetas, _ = coin_em(rolls, 0.1, 0.3, maxiter=6);
"""
Explanation: For reference (and to make it easier to avoid looking at the solution from lecture :)), here is how to frame EM in terms of the coin flip example:
E-Step
The E-Step involves writing down an expression for
$$
\begin{align}
E_q[\log p(\X, Z | \theta )]
&= E_q[\log p(\X | Z, \theta) p(Z)] \
&= E_q[\log p(\X | Z, \theta)] + \log p(Z) \
\end{align}
$$
The $\log p(Z)$ term is constant wrt $\theta$, so we ignore it.
Recall $q \equiv q_{\theta_t} = p(Z | \X,\theta_t)$
$$
\begin{align}
E_q[\log p(\X | Z, \theta)]
&= \sum_{n=1}^N E_q \bigg[
x_n \log \theta_{z_n} + (M-x_n) \log (1-\theta_{z_n})
\bigg] + \text{const.}
\
&= \sum_{n=1}^N q_\vartheta(z_n=A)
\bigg[ x_n \log \theta_A + (M-x_n) \log (1-\theta_A) \bigg] \
&+ \sum_{n=1}^N q_\vartheta(z_n=B)
\bigg[ x_n \log \theta_B + (M-x_n) \log (1-\theta_B) \bigg]
+ \text{const.}
\end{align}
$$
M-Step
Let $a_n = q(z_n = A)$ and $b_n = q(z_n = B)$. Taking derivatives with respect to $\theta_A$ and $\theta_B$, we obtain the following update rules:
$$
\theta_A = \frac{\sum_{n=1}^N a_n x_n}{\sum_{n=1}^N a_n M}
\qquad
\theta_B = \frac{\sum_{n=1}^N b_n x_n}{\sum_{n=1}^N b_n M}
$$
Interpretation: For each coin, examples are weighted according to the probability that they belong to that coin. Observing $M$ flips is equivalent to observing $a_n M$ effective flips.
Problem: implement EM for Coin Flips
Using the definitions above and an outline below, fill in the missing pieces.
End of explanation
"""
plot_coin_likelihood(rolls, thetas)
"""
Explanation: Plotting convergence
Once you have a working implementation, re-run the code below and you should be able to produce a convergence plot of the estimated thetas, which should move towards 0.5, 0.8.
End of explanation
"""
import random
def generate_sample(num_flips, prob_choose_a=0.5, a_bias=0.5, b_bias=0.5):
which_coin = random.random()
if which_coin < prob_choose_a:
return "".join(['H' if random.random() < a_bias else 'T' for i in range(num_flips)])
else:
return "".join(['H' if random.random() < b_bias else 'T' for i in range(num_flips)])
[generate_sample(10),
generate_sample(10, prob_choose_a=0.2, a_bias=0.9),
generate_sample(10, prob_choose_a=0.9, a_bias=0.2, b_bias=0.9)]
"""
Explanation: Working with different parameters
Let's explore some different values for the parameters of our model.
Here's a method to generate a sample for different values of $Z$ (for the purposes of this example, the probability of choosing $A$), $\theta_A$, and $\theta_B$:
End of explanation
"""
flips = [] # your code goes here
"""
Explanation: Unequal chance of selecting a coin
Use generate_sample to produce a new dataset where:
coin A is 90% biased towards heads
coin B is 30% biased towards heads
the probability of choosing B is 70%
End of explanation
"""
# your code goes here
"""
Explanation: Use your EM implementation and plot its progress on the likelihood plot to see how well it estimates the true latent parameters underlying the coin flips.
End of explanation
"""
|
ajfriend/cyscs
|
tutorial_parallel.ipynb
|
mit
|
import scs
from concurrent import futures
num_problems = 20
m = 1000 # size of L1 problem
data = [scs.examples.l1(m, seed=i) for i in range(num_problems)]
"""
Explanation: Calling SCS in Parallel
In this notebook, we set up a list of several SCS problems and map scs.solve over that list
to solve each of the problems.
Our first attempt uses Python's builtin map function, which operates in serial, solving one problem at a time.
The second attempt uses concurrent.futures.ProcessPoolExecutor to solve the problems in parallel, using separate Python processes.
The final attempt uses concurrent.futures.ThreadPoolExecutor to solve in parallel, using separate threads.
When running arbitrary Python code, the ThreadPoolExecutor approach may suffer due to the Python GIL, which prevents multiple threads from executing Python bytecode at the same time. However, SCS is able to release the GIL when running its underlying C code, allowing it to achieve true parallelism and performance similar to ProcessPoolExecutor.
The ThreadPool approach may be preferable to ProcessPool because it doesn't require launching separate Python interpreters for each process, and does not need to serialize data to communicate it between processes.
This notebook uses the concurrent.futures library, which is new to Python 3.2, but has been backported to Python 2.5 and above through the futures library on PyPI.
Generate data
We first generate a number of SCS problems.
End of explanation
"""
workers = 4 # number of threads/processes
def solve(x):
return scs.solve(*x, verbose=False)
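# Side note (a sketch): the text suggests the number of processors as a first guess for
# `workers`; this just reports the machine's CPU count without overriding the value above.
import multiprocessing
print('CPU count on this machine:', multiprocessing.cpu_count())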
"""
Explanation: We define a solve function to map over our problem data.
We set verbose=False because verbose printing can hinder performance, because the GIL needs to be reacquired for each print.
We define a function instead of a lambda because ProcessPoolExecutor can't serialize lambdas.
We set the number of workers to 4 in this example, which will set the number of threads or processes in the parallel examples. Setting the number of workers to be the number of processors on your system is a good first guess, but some experimentation may be required to find the optimal setting.
End of explanation
"""
%%time
a = list(map(solve, data))
"""
Explanation: Serial solve with map
We observe the solvetime in serial, using the builtin Python map function.
End of explanation
"""
%%time
with futures.ProcessPoolExecutor(workers) as ex:
a = list(ex.map(solve, data))
"""
Explanation: Parallel solve with processes
We observe the parallel solvetime, using ProcessPoolExecutor.map().
End of explanation
"""
%%time
with futures.ThreadPoolExecutor(workers) as ex:
a = list(ex.map(solve, data))
"""
Explanation: Parallel solve with threads
We observe the parallel solvetime, using ThreadPoolExecutor.map().
We achieve similar performance to the processes example because SCS releases the GIL when calling its underlying C solver code. Threads can be more lightweight than processes because they do not need to launch separate Python interpreters, and do not need to serialize data to communicate between processes.
However, in this case, it doesn't seem to help much.
End of explanation
"""
def form_workspace(x):
return scs.Workspace(*x, verbose=False)
def workspace_solve(work):
return work.solve()
"""
Explanation: SCS Workspace in parallel
We can also form scs.Workspace objects in parallel, and use them to solve problems in parallel.
Below, we define two functions to form and solve with Workspace, which we'll use in our serial and parallel maps.
End of explanation
"""
%%time
workspaces = list(map(form_workspace, data))
%%time
with futures.ThreadPoolExecutor(workers) as ex:
workspaces = list(ex.map(form_workspace, data))
"""
Explanation: Initialize Workspace
We can compare the initialization time (which involves a matrix factorization) for the Workspace objects when we perform it in serial and in parallel.
End of explanation
"""
%%time
results = list(map(workspace_solve, workspaces))
%%time
with futures.ThreadPoolExecutor(workers) as ex:
results = list(ex.map(workspace_solve, workspaces))
"""
Explanation: Workspace.solve()
We can also compare serial and parallel calls of workspace.solve().
Note that we can't use ProcessPoolExecutor here, because it can't serialize SCS Workspace objects.
ThreadPoolExecutor eliminates the need for serialization.
End of explanation
"""
|
shreyas111/Multimedia_CS523_Project1
|
Style_Transfer_Saving_Input_Output_Images.ipynb
|
mit
|
from IPython.display import Image, display
Image('images/15_style_transfer_flowchart.png')
"""
Explanation: Style Transfer
Our Changes:
Added code for saving the input content and style images. Also added code for saving the output mixed image
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import PIL.Image
"""
Explanation: Imports
End of explanation
"""
tf.__version__
import vgg16
"""
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
"""
# vgg16.data_dir = 'vgg16/'
vgg16.maybe_download()
"""
Explanation: The VGG-16 model is downloaded from the internet. This is the default directory where you want to save the data-files. The directory will be created if it does not exist.
End of explanation
"""
def load_image(filename, max_size=None):
image = PIL.Image.open(filename)
if max_size is not None:
# Calculate the appropriate rescale-factor for
# ensuring a max height and width, while keeping
# the proportion between them.
factor = max_size / np.max(image.size)
# Scale the image's height and width.
size = np.array(image.size) * factor
# The size is now floating-point because it was scaled.
# But PIL requires the size to be integers.
size = size.astype(int)
# Resize the image.
image = image.resize(size, PIL.Image.LANCZOS)
print(image)
# Convert to numpy floating-point array.
return np.float32(image)
"""
Explanation: Helper-functions for image manipulation
This function loads an image and returns it as a numpy array of floating-points. The image can be automatically resized so the largest of the height or width equals max_size.
End of explanation
"""
def save_image(image, filename):
# Ensure the pixel-values are between 0 and 255.
image = np.clip(image, 0.0, 255.0)
# Convert to bytes.
image = image.astype(np.uint8)
# Write the image-file in jpeg-format.
with open(filename, 'wb') as file:
PIL.Image.fromarray(image).save(file, 'jpeg')
"""
Explanation: Save an image as a jpeg-file. The image is given as a numpy array with pixel-values between 0 and 255.
End of explanation
"""
def plot_image_big(image):
# Ensure the pixel-values are between 0 and 255.
image = np.clip(image, 0.0, 255.0)
# Convert pixels to bytes.
image = image.astype(np.uint8)
# Convert to a PIL-image and display it.
display(PIL.Image.fromarray(image))
def plot_images(content_image, style_image, mixed_image):
# Create figure with sub-plots.
fig, axes = plt.subplots(1, 3, figsize=(10, 10))
# Adjust vertical spacing.
fig.subplots_adjust(hspace=0.1, wspace=0.1)
# Use interpolation to smooth pixels?
smooth = True
# Interpolation type.
if smooth:
interpolation = 'sinc'
else:
interpolation = 'nearest'
# Plot the content-image.
# Note that the pixel-values are normalized to
# the [0.0, 1.0] range by dividing with 255.
ax = axes.flat[0]
ax.imshow(content_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Content")
# Plot the mixed-image.
ax = axes.flat[1]
ax.imshow(mixed_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Mixed")
# Plot the style-image
ax = axes.flat[2]
ax.imshow(style_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Style")
# Remove ticks from all the plots.
for ax in axes.flat:
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
"""
Explanation: This function plots a large image. The image is given as a numpy array with pixel-values between 0 and 255.
This function plots the content-, mixed- and style-images.
End of explanation
"""
def mean_squared_error(a, b):
return tf.reduce_mean(tf.square(a - b))
def create_content_loss(session, model, content_image, layer_ids):
"""
Create the loss-function for the content-image.
Parameters:
session: An open TensorFlow session for running the model's graph.
model: The model, e.g. an instance of the VGG16-class.
content_image: Numpy float array with the content-image.
layer_ids: List of integer id's for the layers to use in the model.
"""
# Create a feed-dict with the content-image.
feed_dict = model.create_feed_dict(image=content_image)
# Get references to the tensors for the given layers.
layers = model.get_layer_tensors(layer_ids)
# Calculate the output values of those layers when
# feeding the content-image to the model.
values = session.run(layers, feed_dict=feed_dict)
# Set the model's graph as the default so we can add
# computational nodes to it. It is not always clear
# when this is necessary in TensorFlow, but if you
# want to re-use this code then it may be necessary.
with model.graph.as_default():
# Initialize an empty list of loss-functions.
layer_losses = []
# For each layer and its corresponding values
# for the content-image.
for value, layer in zip(values, layers):
# These are the values that are calculated
# for this layer in the model when inputting
# the content-image. Wrap it to ensure it
# is a const - although this may be done
# automatically by TensorFlow.
value_const = tf.constant(value)
# The loss-function for this layer is the
# Mean Squared Error between the layer-values
# when inputting the content- and mixed-images.
# Note that the mixed-image is not calculated
# yet, we are merely creating the operations
# for calculating the MSE between those two.
loss = mean_squared_error(layer, value_const)
# Add the loss-function for this layer to the
# list of loss-functions.
layer_losses.append(loss)
# The combined loss for all layers is just the average.
# The loss-functions could be weighted differently for
# each layer. You can try it and see what happens.
total_loss = tf.reduce_mean(layer_losses)
return total_loss
def gram_matrix(tensor):
shape = tensor.get_shape()
# Get the number of feature channels for the input tensor,
# which is assumed to be from a convolutional layer with 4-dim.
num_channels = int(shape[3])
# Reshape the tensor so it is a 2-dim matrix. This essentially
# flattens the contents of each feature-channel.
matrix = tf.reshape(tensor, shape=[-1, num_channels])
# Calculate the Gram-matrix as the matrix-product of
# the 2-dim matrix with itself. This calculates the
# dot-products of all combinations of the feature-channels.
gram = tf.matmul(tf.transpose(matrix), matrix)
return gram
def create_style_loss(session, model, style_image, layer_ids):
"""
Create the loss-function for the style-image.
Parameters:
session: An open TensorFlow session for running the model's graph.
model: The model, e.g. an instance of the VGG16-class.
style_image: Numpy float array with the style-image.
layer_ids: List of integer id's for the layers to use in the model.
"""
# Create a feed-dict with the style-image.
feed_dict = model.create_feed_dict(image=style_image)
# Get references to the tensors for the given layers.
layers = model.get_layer_tensors(layer_ids)
layerIdCount=len(layer_ids)
print('count of layer ids:',layerIdCount)
# Set the model's graph as the default so we can add
# computational nodes to it. It is not always clear
# when this is necessary in TensorFlow, but if you
# want to re-use this code then it may be necessary.
with model.graph.as_default():
# Construct the TensorFlow-operations for calculating
# the Gram-matrices for each of the layers.
gram_layers = [gram_matrix(layer) for layer in layers]
# Calculate the values of those Gram-matrices when
# feeding the style-image to the model.
values = session.run(gram_layers, feed_dict=feed_dict)
# Initialize an empty list of loss-functions.
layer_losses = []
# For each Gram-matrix layer and its corresponding values.
for value, gram_layer in zip(values, gram_layers):
# These are the Gram-matrix values that are calculated
# for this layer in the model when inputting the
# style-image. Wrap it to ensure it is a const,
# although this may be done automatically by TensorFlow.
value_const = tf.constant(value)
# The loss-function for this layer is the
# Mean Squared Error between the Gram-matrix values
# for the content- and mixed-images.
# Note that the mixed-image is not calculated
# yet, we are merely creating the operations
# for calculating the MSE between those two.
loss = mean_squared_error(gram_layer, value_const)
# Add the loss-function for this layer to the
# list of loss-functions.
layer_losses.append(loss)
# The combined loss for all layers is just the average.
# The loss-functions could be weighted differently for
# each layer. You can try it and see what happens.
total_loss = tf.reduce_mean(layer_losses)
return total_loss
def create_denoise_loss(model):
loss = tf.reduce_sum(tf.abs(model.input[:,1:,:,:] - model.input[:,:-1,:,:])) + \
tf.reduce_sum(tf.abs(model.input[:,:,1:,:] - model.input[:,:,:-1,:]))
return loss
def style_transfer(content_image, style_image,
content_layer_ids, style_layer_ids,
weight_content=1.5, weight_style=10.0,
weight_denoise=0.3,
num_iterations=120, step_size=10.0):
"""
Use gradient descent to find an image that minimizes the
loss-functions of the content-layers and style-layers. This
should result in a mixed-image that resembles the contours
of the content-image, and resembles the colours and textures
of the style-image.
Parameters:
content_image: Numpy 3-dim float-array with the content-image.
style_image: Numpy 3-dim float-array with the style-image.
content_layer_ids: List of integers identifying the content-layers.
style_layer_ids: List of integers identifying the style-layers.
weight_content: Weight for the content-loss-function.
weight_style: Weight for the style-loss-function.
weight_denoise: Weight for the denoising-loss-function.
num_iterations: Number of optimization iterations to perform.
step_size: Step-size for the gradient in each iteration.
"""
# Create an instance of the VGG16-model. This is done
# in each call of this function, because we will add
# operations to the graph so it can grow very large
# and run out of RAM if we keep using the same instance.
model = vgg16.VGG16()
# Create a TensorFlow-session.
session = tf.InteractiveSession(graph=model.graph)
# Print the names of the content-layers.
print("Content layers:")
print(model.get_layer_names(content_layer_ids))
print('Content Layers:',content_layer_ids)
print()
# Print the names of the style-layers.
print("Style layers:")
print(model.get_layer_names(style_layer_ids))
print('Style Layers:',style_layer_ids)
print()
# Print the input parameters to the function
print('Weight Content:',weight_content)
print('Weight Style:',weight_style)
print('Weight Denoise:',weight_denoise)
print('Number of Iterations:',num_iterations)
print('Step Size:',step_size)
print()
# Create the loss-function for the content-layers and -image.
loss_content = create_content_loss(session=session,
model=model,
content_image=content_image,
layer_ids=content_layer_ids)
# Create the loss-function for the style-layers and -image.
loss_style = create_style_loss(session=session,
model=model,
style_image=style_image,
layer_ids=style_layer_ids)
# Create the loss-function for the denoising of the mixed-image.
loss_denoise = create_denoise_loss(model)
# Create TensorFlow variables for adjusting the values of
# the loss-functions. This is explained below.
adj_content = tf.Variable(1e-10, name='adj_content')
adj_style = tf.Variable(1e-10, name='adj_style')
adj_denoise = tf.Variable(1e-10, name='adj_denoise')
# Initialize the adjustment values for the loss-functions.
session.run([adj_content.initializer,
adj_style.initializer,
adj_denoise.initializer])
# Create TensorFlow operations for updating the adjustment values.
# These are basically just the reciprocal values of the
# loss-functions, with a small value 1e-10 added to avoid the
# possibility of division by zero.
update_adj_content = adj_content.assign(1.0 / (loss_content + 1e-10))
update_adj_style = adj_style.assign(1.0 / (loss_style + 1e-10))
update_adj_denoise = adj_denoise.assign(1.0 / (loss_denoise + 1e-10))
# This is the weighted loss-function that we will minimize
# below in order to generate the mixed-image.
# Because we multiply the loss-values with their reciprocal
# adjustment values, we can use relative weights for the
# loss-functions that are easier to select, as they are
# independent of the exact choice of style- and content-layers.
loss_combined = weight_content * adj_content * loss_content + \
weight_style * adj_style * loss_style + \
weight_denoise * adj_denoise * loss_denoise
# Use TensorFlow to get the mathematical function for the
# gradient of the combined loss-function with regard to
# the input image.
gradient = tf.gradients(loss_combined, model.input)
# List of tensors that we will run in each optimization iteration.
run_list = [gradient, update_adj_content, update_adj_style, \
update_adj_denoise]
# The mixed-image is initialized with random noise.
# It is the same size as the content-image.
mixed_image = np.random.rand(*content_image.shape) + 128
for i in range(num_iterations):
# Create a feed-dict with the mixed-image.
feed_dict = model.create_feed_dict(image=mixed_image)
# Use TensorFlow to calculate the value of the
# gradient, as well as updating the adjustment values.
grad, adj_content_val, adj_style_val, adj_denoise_val \
= session.run(run_list, feed_dict=feed_dict)
# Reduce the dimensionality of the gradient.
grad = np.squeeze(grad)
# Scale the step-size according to the gradient-values.
step_size_scaled = step_size / (np.std(grad) + 1e-8)
# Update the image by following the gradient.
mixed_image -= grad * step_size_scaled
# Ensure the image has valid pixel-values between 0 and 255.
mixed_image = np.clip(mixed_image, 0.0, 255.0)
# Print a little progress-indicator.
print(". ", end="")
# Display status once every 10 iterations, and the last.
if (i % 10 == 0) or (i == num_iterations - 1):
print()
print("Iteration:", i)
#Print adjustment weights for loss-functions.
msg = "Weight Adj. for Content: {0:.2e}, Style: {1:.2e}, Denoise: {2:.2e}"
print(msg.format(adj_content_val, adj_style_val, adj_denoise_val))
# Plot the content-, style- and mixed-images.
plot_images(content_image=content_image,
style_image=style_image,
mixed_image=mixed_image)
#Saving the mixed image after every 10 iterations
filename='images/outputs_StyleTransfer/Mixed_Iteration' + str(i) +'.jpg'
print(filename)
save_image(mixed_image, filename)
print()
print("Final image:")
plot_image_big(mixed_image)
# Close the TensorFlow session to release its resources.
session.close()
# Return the mixed-image.
return mixed_image
"""
Explanation: Loss Functions
These helper-functions create the loss-functions that are used in optimization with TensorFlow.
This function creates a TensorFlow operation for calculating the Mean Squared Error between the two input tensors.
End of explanation
"""
content_filename = 'images/download.jpg'
content_image = load_image(content_filename, max_size=None)
filenamecontent='images/outputs_StyleTransfer/Content.jpg'
print(filenamecontent)
save_image(content_image, filenamecontent)
"""
Explanation: Example
This example shows how to transfer the style of various images onto a portrait.
First we load the content-image which has the overall contours that we want in the mixed-image.
End of explanation
"""
style_filename = 'images/style4.jpg'
style_image = load_image(style_filename, max_size=None)
filenamestyle='images/outputs_StyleTransfer/Style.jpg'
print(filenamestyle)
save_image(style_image, filenamestyle)
"""
Explanation: Then we load the style-image which has the colours and textures we want in the mixed-image.
End of explanation
"""
content_layer_ids = [4,6]
"""
Explanation: Then we define a list of integers which identify the layers in the neural network that we want to use for matching the content-image. These are indices into the layers in the neural network. For the VGG16 model, the 5th layer (index 4) seems to work well as the sole content-layer.
End of explanation
"""
# The VGG16-model has 13 convolutional layers.
# This selects all those layers as the style-layers.
style_layer_ids = list(range(13))
# You can also select a sub-set of the layers, e.g. like this:
# style_layer_ids = [1, 2, 3, 4]
"""
Explanation: Then we define another list of integers for the style-layers.
End of explanation
"""
%%time
img = style_transfer(content_image=content_image,
style_image=style_image,
content_layer_ids=content_layer_ids,
style_layer_ids=style_layer_ids,
weight_content=1.5,
weight_style=10.0,
weight_denoise=0.3,
num_iterations=150,
step_size=10.0)
# Function for printing output image
filename='images/outputs_StyleTransfer/Mixed.jpg'
save_image(img, filename)
"""
Explanation: Now perform the style-transfer. This automatically creates the appropriate loss-functions for the style- and content-layers, and
then performs a number of optimization iterations. This will gradually create a mixed-image which has similar contours as the content-image, with the colours and textures being similar to the style-image.
This can be very slow on a CPU!
End of explanation
"""
|
srcole/qwm
|
burrito/.ipynb_checkpoints/Burrito_bootcamp-checkpoint.ipynb
|
mit
|
# These commands control inline plotting
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np # Useful numeric package
import scipy as sp # Useful statistics package
import matplotlib.pyplot as plt # Plotting package
"""
Explanation: San Diego Burrito Analytics: Bootcamp 2016
Scott Cole
15 Sept 2016
This notebook characterizes the data collected from consuming burritos from Don Carlos during Neuro bootcamp.
Outline
Load data into python
Use a Pandas dataframe
View data
Print some metadata
Hypothesis tests
California burritos vs. Carnitas burritos
Don Carlos 1 vs. Don Carlos 2
Bonferroni correction
Distributions
Distributions of each burrito quality
Tests for normal distribution
Correlations
Hunger vs. Overall rating
Correlation matrix
Assumptions discussion
0. Import libraries into Python
End of explanation
"""
import pandas as pd # Dataframe package
filename = './burrito_bootcamp.csv'
df = pd.read_csv(filename)
"""
Explanation: 1. Load data into a Pandas dataframe
End of explanation
"""
df
"""
Explanation: View raw data
End of explanation
"""
print 'Number of burritos:', df.shape[0]
print 'Average burrito rating:', df['overall'].mean()
print 'Reviewers: '
print np.array(df['Reviewer'])
"""
Explanation: Brief metadata
End of explanation
"""
def burritotypes(x, types = {'California':'cali', 'Carnitas':'carnita', 'Carne asada':'carne asada',
'Soyrizo':'soyrizo', 'Shredded chicken':'chicken'}):
import re
T = len(types)
Nmatches = {}
for b in x:
matched = False
for t in types.keys():
re4str = re.compile('.*'+types[t]+'.*', re.IGNORECASE)
if np.logical_and(re4str.match(b) is not None, matched is False):
try:
Nmatches[t] +=1
except KeyError:
Nmatches[t] = 1
matched = True
if matched is False:
try:
Nmatches['other'] +=1
except KeyError:
Nmatches['other'] = 1
return Nmatches
typecounts = burritotypes(df.Burrito)
plt.figure(figsize=(6,6))
ax = plt.axes([0.1, 0.1, 0.65, 0.65])
# The slices will be ordered and plotted counter-clockwise.
labels = typecounts.keys()
fracs = typecounts.values()
explode=[.1]*len(typecounts)
patches, texts, autotexts = plt.pie(fracs, explode=explode, labels=labels,
autopct=lambda(p): '{:.0f}'.format(p * np.sum(fracs) / 100), shadow=False, startangle=0)
# The default startangle is 0, which would start
# the Frogs slice on the x-axis. With startangle=90,
# everything is rotated counter-clockwise by 90 degrees,
# so the plotting starts on the positive y-axis.
plt.title('Types of burritos',size=30)
for t in texts:
t.set_size(20)
for t in autotexts:
t.set_size(20)
autotexts[0].set_color('w')
"""
Explanation: What types of burritos have been rated?
End of explanation
"""
# California burritos vs. Carnitas burritos (see the sketch below for one possible approach)
# TODO
# Don Carlos 1 vs. Don Carlos 2
# TODO
# Bonferroni correction
# TODO
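# A hedged sketch of the first comparison (one possible approach, not the
# authors' solution). It assumes the 'Burrito' and 'overall' columns used
# elsewhere in this notebook; the Don Carlos 1 vs. 2 test would follow the
# same pattern with a location column.
from scipy import stats
cali = df[df.Burrito.str.contains('cali', case=False, na=False)]['overall'].dropna()
carnitas = df[df.Burrito.str.contains('carnita', case=False, na=False)]['overall'].dropna()
t, p = stats.ttest_ind(cali, carnitas)
n_tests = 2  # number of hypothesis tests run, for a Bonferroni correction
print('t = %.3f, uncorrected p = %.3f, Bonferroni-corrected p = %.3f'
      % (t, p, min(p * n_tests, 1.0)))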
"""
Explanation: 2. Hypothesis tests
End of explanation
"""
import math
def metrichist(metricname):
if metricname == 'Volume':
bins = np.arange(.375,1.225,.05)
xticks = np.arange(.4,1.2,.1)
xlim = (.4,1.2)
else:
bins = np.arange(-.25,5.5,.5)
xticks = np.arange(0,5.5,.5)
xlim = (-.25,5.25)
plt.figure(figsize=(5,5))
n, _, _ = plt.hist(df[metricname].dropna(),bins,color='k')
plt.xlabel(metricname + ' rating',size=20)
plt.xticks(xticks,size=15)
plt.xlim(xlim)
plt.ylabel('Count',size=20)
plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15)
plt.tight_layout()
m_Hist = ['Hunger','Volume','Tortilla','Temp','Meat','Fillings',
'Meat:filling','Uniformity','Salsa','Synergy','Wrap','overall']
for m in m_Hist:
metrichist(m)
"""
Explanation: 3. Burrito dimension distributions
Distribution of each burrito quality
End of explanation
"""
# TODO: test each metric for normality (one possible sketch below)
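# A hedged sketch of one possible approach (not the authors' solution): run a
# D'Agostino-Pearson normality test on each metric from the histogram list above.
from scipy import stats
for m in m_Hist:
    vals = df[m].dropna()
    k2, p = stats.normaltest(vals)
    print('%-14s p = %.3f%s' % (m, p, '  (deviates from normal?)' if p < .05 else ''))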
"""
Explanation: Test for normal distribution
End of explanation
"""
|
bundgus/python-playground
|
jupyter-notebook-playground/P4DS4D; 16; Outliers.ipynb
|
mit
|
import numpy as np
from scipy.stats.stats import pearsonr
np.random.seed(101)
normal = np.random.normal(loc=0.0, scale= 1.0, size=1000)
print 'Mean: %0.3f Median: %0.3f Variance: %0.3f' % (np.mean(normal), np.median(normal), np.var(normal))
outlying = normal.copy()
outlying[0] = 50.0
print 'Mean: %0.3f Median: %0.3f Variance: %0.3f' % (np.mean(outlying), np.median(outlying), np.var(outlying))
print "Pearson's correlation coefficient: %0.3f p-value: %0.3f" % pearsonr(normal,outlying)
"""
Explanation: Considering Outliers and Novelty Detection
End of explanation
"""
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
"""
Explanation: Finding more things that can go wrong with your data
Understanding the difference between anomalies and novel data
Examining a Fast and Simple Univariate Method
End of explanation
"""
X,y = diabetes.data, diabetes.target
import pandas as pd
pd.options.display.float_format = '{:.2f}'.format
df = pd.DataFrame(X)
print df.describe()
%matplotlib inline
import matplotlib.pyplot as plt
import pylab as pl
box_plots = df.boxplot(return_type='dict')
"""
Explanation: Samples total 442<BR>
Dimensionality 10<BR>
Features real, -.2 < x < .2<BR>
Targets integer 25 - 346<BR>
End of explanation
"""
from sklearn.preprocessing import StandardScaler
Xs = StandardScaler().fit_transform(X)
o_idx = np.where(np.abs(Xs)>3)
# .any(1) method will avoid duplicating
print df[(np.abs(Xs)>3).any(1)]
"""
Explanation: Leveraging the Gaussian distribution
End of explanation
"""
from scipy.stats.mstats import winsorize
Xs_w = winsorize(Xs, limits=(0.05, 0.95))
Xs_c = Xs.copy()
Xs_c[o_idx] = np.sign(Xs_c[o_idx]) * 3
"""
Explanation: Making assumptions and checking out
End of explanation
"""
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
from pandas.tools.plotting import scatter_matrix
dim_reduction = PCA()
Xc = dim_reduction.fit_transform(scale(X))
print 'variance explained by the first 2 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[:2]*100))
print 'variance explained by the last 2 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[-2:]*100))
df = pd.DataFrame(Xc, columns=['comp_'+str(j) for j in range(10)])
first_two = df.plot(kind='scatter', x='comp_0', y='comp_1', c='DarkGray', s=50)
last_two = df.plot(kind='scatter', x='comp_8', y='comp_9', c='DarkGray', s=50)
print 'variance explained by the first 3 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[:3]*100))
scatter_first = scatter_matrix(pd.DataFrame(Xc[:,:3], columns=['comp1','comp2','comp3']),
alpha=0.3, figsize=(15, 15), diagonal='kde', marker='o', grid=True)
scatter_last = scatter_matrix(pd.DataFrame(Xc[:,-2:], columns=['comp9','comp10']),
alpha=0.3, figsize=(15, 15), diagonal='kde', marker='o', grid=True)
outlying = (Xc[:,-1] < -0.3) | (Xc[:,-2] < -1.0)
print df[outlying]
"""
Explanation: Developing A Multivariate Approach
Using principal component analysis
End of explanation
"""
from sklearn.cluster import DBSCAN
DB = DBSCAN(eps=2.5, min_samples=25)
DB.fit(Xc)
from collections import Counter
print Counter(DB.labels_)
print df[DB.labels_==-1]
"""
Explanation: Using cluster analysis
End of explanation
"""
from sklearn import svm
outliers_fraction = 0.01 #
nu_estimate = 0.95 * outliers_fraction + 0.05
auto_detection = svm.OneClassSVM(kernel="rbf", gamma=0.01, degree=3, nu=nu_estimate)
auto_detection.fit(Xc)
evaluation = auto_detection.predict(Xc)
print df[evaluation==-1]
inliers = Xc[evaluation==+1,:]
outliers = Xc[evaluation==-1,:]
from matplotlib import pyplot as plt
import pylab as pl
inlying = plt.plot(inliers[:,0],inliers[:,1], 'o', markersize=2, color='g', alpha=1.0, label='inliers')
outlying = plt.plot(outliers[:,0],outliers[:,1], 'o', markersize=5, color='k', alpha=1.0, label='outliers')
plt.scatter(outliers[:,0],
outliers[:,1],
s=100, edgecolors="k", facecolors="none")
plt.xlabel('Component 1 ('+str(round(dim_reduction.explained_variance_ratio_[0],3))+')')
plt.ylabel('Component 2'+'('+str(round(dim_reduction.explained_variance_ratio_[1],3))+')')
plt.xlim([-7,7])
plt.ylim([-6,6])
plt.legend((inlying[0],outlying[0]),('inliers','outliers'),numpoints=1,loc='best')
plt.title("")
plt.show()
"""
Explanation: Automating outlier detection with SVM
End of explanation
"""
|
adico-somoto/deep-learning
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
# Convert words to ids
source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in line.split()] for line in target_text.split('\n')]
for i in range(len(source_id_text)):
target_id_text[i].append(target_vocab_to_int['<EOS>'])
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32,[None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learn_rate = tf.placeholder(tf.float32, name='learn_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learn_rate, keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
#print(target_data.shape[1])
#demonstration_outputs = np.reshape(range(batch_size * int(target_data.shape[1])), (batch_size, int(target_data.shape[1])))
#sess = tf.InteractiveSession()
#print(target_data)
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
#print(sess.run(dec_input, {'Placeholder:0': demonstration_outputs})[:2])
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
"""
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# Encoder
# Use a basic LSTM cell
lstm = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(rnn_size)
## Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
# Stack up multiple LSTM layers, for deep learning
#cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return enc_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
# Add dropout to the cell
#drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
#train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
# drop, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
# Add dropout to the cell
#drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
#inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(drop, infer_decoder_fn, scope=decoding_scope)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# Decoder Embedding
#dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
#dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
# Decoder RNNs
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
# Decoder RNNs
#dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(rnn_size)] * num_layers)
#dec_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
with tf.variable_scope("decoding") as decoding_scope:
#dlt = decoding_layer_train(encoder_state, dec_drop, dec_embed_input, sequence_length, decoding_scope,
# output_fn, keep_prob)
dlt = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
#decoding_scope.reuse_variables()
#dli = decoding_layer_infer(encoder_state, dec_drop, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
# sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
dli = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return dlt, dli
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
enc = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_in = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_in)
out = decoding_layer(dec_embed_input, dec_embeddings, enc, target_vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 100
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 100
decoding_embedding_size = 100
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 1.0
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
lower = [word.lower() for word in sentence.split()]
words = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in lower]
return words
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
GoogleCloudPlatform/ai-platform-samples
|
ai-explanations-local-experience.ipynb
|
apache-2.0
|
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
"""
Explanation: AI Explanations: Local Experience
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/tree/main/notebooks/samples/explanations/tf2/ai-explanations-local-experience.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/tree/main/notebooks/samples/explanations/tf2/ai-explanations-local-experience.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This tutorial shows how to generate explanations for a locally deployed model using the AI Platform Explanations service. The AI Platform Explanations service returns feature attributions for your deployed model: explanations of how much each feature contributed to the prediction.
The local experience feature is only accessible from an AI Platform notebook running the TensorFlow 2.3 framework. It will not run in Colab, on your laptop, or on an AI Platform instance with an earlier version of TensorFlow.
Dataset
The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets.
Objective
The goal of this tutorial is to understand how to use the AI Explanations service to obtain explanations for predictions from a locally deployed image (flower) classification model. For image models, AI Explanations returns an image in which the pixels that signaled your model's prediction the most are highlighted.
This tutorial focuses more on deploying the model to AI Platform with Explanations than on the design of the model itself.
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
AI Platform for:
Prediction
Explanation: AI Explanations comes at no extra charge to prediction prices. However, explanation requests take longer to process than normal predictions, so heavy usage of Explanations along with auto-scaling may result in more nodes being started and thus more charges
Cloud Storage for:
Storing model files for deploying to Cloud AI Platform
Learn about AI Platform
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Before you begin
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type
This tutorial assumes you are running the notebook either in Colab or Cloud AI Platform Notebooks.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
Enable the AI Platform Training & Prediction and Compute Engine APIs.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Project ID
If you don't know your project ID, you might be able to get it using the gcloud command by executing the second code cell below.
End of explanation
"""
REGION = 'us-central1' #@param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You can
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session and append it to the names of the resources created in this tutorial.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "_xai_metadata_" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
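# Optional sanity check (not part of the original notebook): list the bucket
# to confirm it was created and that you have access to it.
! gsutil ls -al gs://$BUCKET_NAME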
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
import os
import json
import random
import numpy as np
import PIL
import tensorflow as tf
import googleapiclient
from matplotlib import pyplot as plt
from base64 import b64encode
# should be >= 2.1
print("Tensorflow version " + tf.__version__)
if tf.__version__ < "2.1":
raise Exception("TF 2.1 or greater is required")
AUTO = tf.data.experimental.AUTOTUNE
print("AUTO", AUTO)
"""
Explanation: Import libraries
Import the libraries for this tutorial.
End of explanation
"""
import explainable_ai_sdk as xai
"""
Explanation: Import the Explanability SDK.
End of explanation
"""
# GCS location of TFRecords for flowers dataset
GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec'
# Input size (Height, Width) for the Model
IMAGE_SIZE = [192, 192]
# Class labels for prediction
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names)
# Batch size for training
BATCH_SIZE = 32
# Validation split for training
VALIDATION_SPLIT = 0.2
# Split data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
random.shuffle(filenames)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print("Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(len(filenames), len(training_filenames), len(validation_filenames)))
validation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
steps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps))
"""
Explanation: Download and preprocess the data
This section shows how to download the flower images, use the tf.data API to create a data input pipeline, and split the data into training and validation sets.
End of explanation
"""
# @title display utilities [RUN ME]
def dataset_to_numpy_util(dataset, N):
dataset = dataset.batch(N)
if tf.executing_eagerly():
# In eager mode, iterate in the Dataset directly.
for images, labels in dataset:
numpy_images = images.numpy()
numpy_labels = labels.numpy()
break
else:
        # In non-eager mode, we must get the TF node that
        # yields the next item and run it in a tf.Session.
get_next_item = dataset.make_one_shot_iterator().get_next()
with tf.Session() as ses:
numpy_images, numpy_labels = ses.run(get_next_item)
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
label = np.argmax(label, axis=-1) # one-hot to class number
correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number
correct = (label == correct_label)
return "{} [{}{}{}]".format(CLASSES[label], str(correct), ', shoud be ' if not correct else '',
CLASSES[correct_label] if not correct else ''), correct
def display_one_flower(image, title, subplot, red=False):
plt.subplot(subplot)
plt.axis('off')
plt.imshow(image)
plt.title(title, fontsize=16, color='red' if red else 'black')
return subplot + 1
def display_9_images_from_dataset(dataset):
subplot = 331
plt.figure(figsize=(13, 13))
images, labels = dataset_to_numpy_util(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[np.argmax(labels[i], axis=-1)]
subplot = display_one_flower(image, title, subplot)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_9_images_with_predictions(images, predictions, labels):
subplot = 331
plt.figure(figsize=(13, 13))
for i, image in enumerate(images):
title, correct = title_from_label_and_target(predictions[i], labels[i])
subplot = display_one_flower(image, title, subplot, not correct)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot % 10 == 1: # set up the subplots on the first call
plt.subplots(figsize=(10, 10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model ' + title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
"""
Explanation: The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model.
If you're running this from Colab the cell is hidden. You can look at the code by right clicking on the cell --> "Form" --> "Show form" if you'd like to see it.
End of explanation
"""
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = tf.image.decode_jpeg(example['image'], channels=3)
image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range
image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size will be needed for TPU
one_hot_class = tf.sparse.to_dense(example['one_hot_class'])
one_hot_class = tf.reshape(one_hot_class, [5])
return image, one_hot_class
def load_dataset(filenames):
# Read data from TFRecords
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=16, num_parallel_calls=AUTO) # faster
dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)
return dataset
"""
Explanation: Read images and labels from TFRecords
In this dataset, each image is stored as a tf.Example in a TFRecord file, which is a serialized format for storing and retrieving training data from disk. To learn more about tf.Example and TFRecords, you can view the Tensorflow documentation here.
For our purpose, each image and corresponding label is encoded as a tf.Example as:
image: JPG compressed image bytes
class: the label (class) as a scalar (integer) value
one_hot_class: the label (class) as a one-hot encoded vector
Our helper function read_tfrecord takes an individual tf.Example, deserializes it using the specification defined in the variable features, extracts the image bytes and corresponding label, and JPEG-decodes (uncompresses) the image bytes.
Our helper function load_dataset creates a tf.data.Dataset from the corresponding TFRecords specified by the parameter filenames.
End of explanation
"""
display_9_images_from_dataset(load_dataset(training_filenames))
"""
Explanation: Use the visualization utility function provided earlier to preview flower images with their labels.
End of explanation
"""
def get_batched_dataset(filenames):
dataset = load_dataset(filenames)
dataset = dataset.cache() # This dataset fits in RAM
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)
# For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets
return dataset
def get_training_dataset():
return get_batched_dataset(training_filenames)
def get_validation_dataset():
return get_batched_dataset(validation_filenames)
some_flowers, some_labels = dataset_to_numpy_util(load_dataset(validation_filenames), BATCH_SIZE)
"""
Explanation: Create training and validation datasets
Next, we will use our helper functions from the previous code cell to load the training and validation data into tf.data.Datasets, and set dataset properties for:
cache(): Reads the dataset once from disk, which is then held in memory during training.
repeat(): Enables multiple passes over the dataset for training more than a single epoch.
shuffle(): The number of examples to randomly shuffle at a time.
batch(): The batch size for training.
prefetch(): With AUTO setting, the dataset iterator will auto determine when/how many examples to prefetch in parallel while feeding examples for training.
Finally, we will prefetch a batch of examples from the validation dataset to use later for prediction and explainability, which we store as some_flowers and some_labels.
End of explanation
"""
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, GlobalAveragePooling2D, BatchNormalization
from tensorflow.keras.optimizers import Adam
model = Sequential([
# Stem
Conv2D(kernel_size=3, filters=16, padding='same', activation='relu', input_shape=[*IMAGE_SIZE, 3]),
BatchNormalization(),
Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=96, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'),
BatchNormalization(),
# 1x1 Reduction
Conv2D(kernel_size=1, filters=32, padding='same', activation='relu'),
BatchNormalization(),
# Classifier
GlobalAveragePooling2D(),
Dense(5, activation='softmax')
])
model.compile(
optimizer=Adam(lr=0.005, decay=0.98),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
"""
Explanation: Build, train, and evaluate the model
This section shows how to build, train, evaluate, and get local predictions from a model by using the TF.Keras Sequential API.
Note that when we loaded and deserialized the tf.Example records, we extracted the one-hot-encoded version of the label. So when we compile() the model, we use categorical_crossentropy for the loss function.
End of explanation
"""
EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75%
history = model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=get_validation_dataset(), validation_steps=validation_steps)
print(history.history['val_accuracy'])
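# Optionally visualize the training curves with the display_training_curves utility
# defined earlier (this cell is my addition; it assumes the TF 2.x history keys
# 'loss'/'val_loss' and 'accuracy'/'val_accuracy').
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 211)
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 212)
plt.show()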
"""
Explanation: Train the model
Train this on a GPU if you have access (in Colab, from the menu select Runtime --> Change runtime type). On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes.
After the model is trained, we will print out the validation accuracy per epoch from the model's training history.
End of explanation
"""
# Randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(BATCH_SIZE)
some_flowers, some_labels = (some_flowers[permutation], some_labels[permutation])
# Get predictions from our sample batch
predictions = model.predict(some_flowers, batch_size=16)
print('Predictions', np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist(), '\n')
# Get evaluations (true labels) from our sample batch
evaluations = model.evaluate(some_flowers, some_labels, batch_size=16)
print('Evaluation [val_loss, val_acc]', evaluations)
display_9_images_with_predictions(some_flowers, predictions, some_labels)
"""
Explanation: Visualize local predictions
Get predictions on your local model and visualize the images with their predicted labels, using the visualization utility function provided earlier.
End of explanation
"""
export_path = 'gs://' + BUCKET_NAME + '/explanations/mymodel'
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(192, 192))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
with tf.device("cpu:0"):
decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.float32)
return {"numpy_inputs": decoded_images} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function([tf.TensorSpec(shape=[None, 192, 192, 3], dtype=tf.float32, name="numpy_inputs")])
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
tf.saved_model.save(model, export_path, signatures={
'serving_default': serving_fn,
'xai_preprocess': preprocess_fn, # Required for XAI
'xai_model': m_call # Required for XAI
})
"""
Explanation: Export the model as a TF 2.x SavedModel
When using TensorFlow 2.x, you export the model as a SavedModel and upload it to Cloud Storage. During export, you need to define a serving function to convert data to the format your model expects. If you send encoded data to AI Platform, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
Serving function for image data
Sending base 64 encoded image data to AI Platform is more space efficient. Since this deployed model expects input data as raw bytes, you need to ensure that the b64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is part of the model's graph (instead of upstream on a CPU).
When you send a prediction or explanation request, the request goes to the serving function (serving_fn), which preprocesses the b64 bytes into raw numpy bytes (preprocess_fn). At this point, the data can be passed to the model (m_call).
End of explanation
"""
loaded = tf.saved_model.load(export_path)
input_name = list(loaded.signatures['xai_model'].structured_input_signature[1].keys())[0]
print(input_name)
output_name = list(loaded.signatures['xai_model'].structured_outputs.keys())[0]
print(output_name)
preprocess_name = list(loaded.signatures['xai_preprocess'].structured_input_signature[1].keys())[0]
print(preprocess_name)
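# As an extra check (my addition), you can also inspect the serving function's own
# input signature, which the metadata builder would otherwise use by default.
print(loaded.signatures['serving_default'].structured_input_signature)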
"""
Explanation: Get signatures
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When you subsequently generate metadata for explanations, you will need the signature of the input layer to override the metadata builder's default behavior of using the serving function's input signature.
End of explanation
"""
random_baseline = np.random.rand(192, 192, 3)
"""
Explanation: Generate explanation metadata
In order to deploy this model to AI Platform Explanations, we need to create an explanation_metadata.json file with information about our model inputs, outputs, and baseline. We can automatically generate and export this file by using Metadata builder from the AI Platform Explainability SDK.
Input baseline
For image models, using [0,1] as your input baseline represents black and white images. In this case we're using np.random to generate the baseline because our training images contain a lot of black and white (i.e. daisy petals).
Let's start by first creating the input baseline.
End of explanation
"""
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
# By default models inputs and outputs are added to metadata.
builder = xai.metadata.tf.v2.SavedModelMetadataBuilder(export_path, signature_name='xai_model')
"""
Explanation: Metadata builder
Next, you will create an instance of SavedModelMetadataBuilder, with the parameters:
model_path: path to the SavedModel format stored model. In this case, the path is the model stored in your Cloud Storage bucket, specified by export_path.
signature_name: overrides looking at the default signature to find the input and output layers for explanations.
End of explanation
"""
# We add actual float image to explain as b64 string can't be explained.
builder.set_image_metadata(input_name, input_baselines=[random_baseline.tolist()], visualization={'type': 'Pixels', 'overlay_type': 'grayscale'})
"""
Explanation: Next, we set some parameters for generating the metadata with the method set_image_metadata, with the parameters:
input_name: the signature of the input layer to override using the serving function.
input_baseline: the input baseline.
visualization: how the explanation data should be visualized.
End of explanation
"""
# Save the model with metadata to our GCS export path.
export_path = builder.save_model_with_metadata(export_path)
"""
Explanation: Upload the metadata
The last step in generating the metadata for explanations is to upload the generated metadata JSON to the location of the stored model, using the method save_model_with_metadata.
End of explanation
"""
lm = xai.load_model_from_local_path(export_path, xai.XraiConfig())
"""
Explanation: Local experience - predictions and explanations
Now that you have an AI Platform model with the corresponding explanation metadata, we can export the model to a local instance of an AI Platform notebook and make local predictions with the corresponding explanations.
The first step is to export the model locally. We will use the AI Platform Explainability SDK method load_model_from_local_path(), with the parameters:
model_path: The GCS location of the AI Platform model.
config: The type of XAI explanations to instrument the local model for.
End of explanation
"""
# Download test flowers from public bucket
! mkdir flowers
! gsutil -m cp gs://flowers_model/test_flowers/* ./flowers
# Resize the images to what your model is expecting (192,192)
test_filenames = []
for i in os.listdir('flowers'):
img_path = 'flowers/' + i
with PIL.Image.open(img_path) as ex_img:
resize_img = ex_img.resize([192, 192])
resize_img.save(img_path)
test_filenames.append(img_path)
"""
Explanation: Get and prepare test images
To prepare the test images:
Download a small sample of images from the flowers dataset -- just enough for a multiple instance prediction (3 in total).
Resize the images to match the input shape (192, 192) of the model.
Save the resized images back to disk.
End of explanation
"""
# Prepare your prediction JSON to send to your Cloud model
instances = []
for image_path in test_filenames:
img_bytes = tf.io.read_file(image_path)
b64str = b64encode(img_bytes.numpy()).decode('utf-8')
instances.append({preprocess_name: {'b64': b64str}})
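# Optionally persist the request as newline-delimited JSON, e.g. for testing with
# `gcloud ai-platform predict --json-instances`. The file name below is my choice,
# not part of the original tutorial.
with open('instances.json', 'w') as f:
    for instance in instances:
        f.write(json.dumps(instance) + '\n')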
"""
Explanation: Format your explanation request
Prepare a batch of prediction instances, each a JSON object with the base64-encoded image bytes keyed by the preprocessing signature's input name.
End of explanation
"""
explanations = lm.explain(instances)
"""
Explanation: Make prediction with explanation
Now that you have a local version of your model on your AI Platform notebook, we can make a prediction from the three test images and get corresponding explanations using the explain() method for your model.
End of explanation
"""
explanations[0].visualize_attributions()
explanations[1].visualize_attributions()
explanations[2].visualize_attributions()
"""
Explanation: Visualize the predictions and explanations
Let's now visualize the explanations for the three test images from the response object to the explain() call. This response object will return a list, where each element is the explanation data for the corresponding test image. You will use the method visualize_attributions() to visualize the explanation.
End of explanation
"""
import datetime
MODEL = 'flowers' + TIMESTAMP
print(MODEL)
# Create the model if it doesn't exist yet (you only need to run this once)
! gcloud ai-platform models create $MODEL --enable-logging --regions=us-central1
"""
Explanation: Deploy model to AI Explanations
This section shows how to use gcloud to deploy the model to AI Explanations, using two different explanation methods for image models. After the model is deployed, we will export a local copy of the deployed model to your AI Platform notebook instance and repeat making a prediction and corresponding explanations.
Create the model
End of explanation
"""
XRAI_VERSION='xrai'
# Create the XRAI version with gcloud
! gcloud beta ai-platform versions create $XRAI_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.2 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method xrai \
--num-integral-steps 25
# Make sure the XRAI model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL
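# (Optional) An Integrated Gradients version could be deployed in much the same way.
# This sketch is my addition and is left commented out, since this tutorial only
# deploys the XRAI version; note that the cleanup cell at the end references an
# IG_VERSION, which is only defined if you run something like the following.
# IG_VERSION = 'ig'
# ! gcloud beta ai-platform versions create $IG_VERSION \
#   --model $MODEL \
#   --origin $export_path \
#   --runtime-version 2.2 \
#   --framework TENSORFLOW \
#   --python-version 3.7 \
#   --machine-type n1-standard-4 \
#   --explanation-method integrated-gradients \
#   --num-integral-steps 25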
"""
Explanation: Create explainable model versions
For image models, we offer two choices for explanation methods:
* Integrated Gradients (IG)
* XRAI
You can find more info on each method in the documentation. In this tutorial, you will deploy for the XRAI explanation method.
Creating the version will take ~5-10 minutes. Note that your first deploy may take longer.
Deploy an XRAI model
End of explanation
"""
lm = xai.load_model_from_ai_platform(PROJECT_ID, MODEL, XRAI_VERSION)
"""
Explanation: Export deployed model with explainability to your local notebook instance
Finally, you will repeat the same explainability task, but this time the exported model will be the deployed model version (versus the SavedModel loaded from Cloud Storage previously). You export the deployed model using the load_model_from_ai_platform() method of the AI Explainability SDK.
End of explanation
"""
explanations = lm.explain(instances)
"""
Explanation: Make prediction with explanation
Now that you have a local version of your deployed model on your AI Platform notebook, we can repeat making a prediction from the three test images and get corresponding explanations using the explain() method for your model.
End of explanation
"""
explanations[0].visualize_attributions()
explanations[1].visualize_attributions()
explanations[2].visualize_attributions()
"""
Explanation: Visualize the predictions and explanations
Let's now visualize the explanations for the three test images from the response object to the explain() call. This response object will return a list, where each element is the explanation data for the corresponding test image. You will use the method visualize_attributions() to visualize the explanation.
End of explanation
"""
# Delete model version resource
! gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL
! gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL
# Delete model resource
! gcloud ai-platform models delete $MODEL --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following
commands:
End of explanation
"""
|
pyzos/pyzos
|
Examples/jupyter_notebooks/00_Enhancing_the_ZOS_API_Interface.ipynb
|
mit
|
from __future__ import print_function
import os
import sys
import numpy as np
from IPython.display import display, Image, YouTubeVideo
import matplotlib.pyplot as plt
# Imports for using ZOS API in Python directly with pywin32
# (not required if using PyZOS)
from win32com.client.gencache import EnsureDispatch, EnsureModule
from win32com.client import CastTo, constants
# Import for using ZOS API in Python using PyZOS
import pyzos.zos as zos
"""
Explanation: <font color='#A52A2A'>Enhancing ZOS API using PyZOS library</font>
End of explanation
"""
# Set this variable to True or False to use ZOS API with
# the PyZOS library or without it respectively.
USE_PYZOS = True
"""
Explanation: <font color='#008000'>Reference</font>
We will use the following two Zemax knowledgebase articles as base material for this discussion. Especially, the code from the first article is used to compare and illustrate the enhanced features of PyZOS library.
"How to build and optimize a singlet using ZOS-API with Python," Zemax KB, 12/16/2015, Thomas Aumeyr.
"Interfacing to OpticStudio from Mathematica," Zemax KB, 05/03/2015, David.
End of explanation
"""
# a screenshot video of the feature:
display(YouTubeVideo("ot5CrjMXc_w", width=900, height=600))
"""
Explanation: <font color='#008000'>Enhancements and capabilities provided by PyZOS</font>
Visible user-interface for the headless standalone ZOS-API COM application
Single-step initialization of ZOS API Interface and instantiation of an Optical System
Introspection of properties of ZOS objects on Tab press
Ability to override methods of existing ZOS objects
Tab completion and introspection of API constants
Ability to add custom methods and properties to any ZOS-API object
<font color='#005078'>1. Visible user-interface for the headless standalone ZOS-API COM application</font>
The PyZOS library provides three functions (instance methods of OpticalSystem) for the sync-with-ui mechanism.
The sync-with-ui mechanism may be turned on during the instantiation of the OpticalSystem object using the parameter sync_ui set to True (see the screen-shot video below) or using the method zSyncWithUI() of the OpticalSystem instance at other times.
The other two functions
zPushLens() -- for copying a lens from the headless ZOS COM server (invisible) to a running user-interface application (visible)
zGetRefresh()-- for copying a lens from the running user-interface application (visible) to the headless ZOS COM server (invisible).
enable the user to interact with a running OpticStudio user-interface and observe the changes made through the API instantly in the user-interface.
(If you cannot see the video in the frame below, or if the resolution is not good enough within this notebook, here is a direct link to the Youtube video: https://www.youtube.com/watch?v=ot5CrjMXc_w&feature=youtu.be)
End of explanation
"""
if USE_PYZOS:
osys = zos.OpticalSystem() # Directly get the Primary Optical system
else:
# using ZOS API directly with pywin32
EnsureModule('ZOSAPI_Interfaces', 0, 1, 0)
connect = EnsureDispatch('ZOSAPI.ZOSAPI_Connection')
app = connect.CreateNewApplication() # The Application
osys = app.PrimarySystem # Optical system (primary)
# common
osys.New(False)
"""
Explanation: <font color='#005078'>2. Single-step initialization of ZOS API Interface and instantiation of an Optical System</font>
The first enhancement is the complete elimination of boiler-plate code to get create and initialize an optical system object and get started. We just need to create an instance of a PyZOS OpticalSystem to get started. The optical system can be either a sequential or a non-sequential system.
End of explanation
"""
if USE_PYZOS:
print(osys.pTheApplication)
"""
Explanation: It may seem that if we are using PyZOS, then the application is not available. In fact, it is available through a property of the PyZOS OpticalSystem object:
End of explanation
"""
Image('./images/00_01_property_attribute.png')
if USE_PYZOS:
sdir = osys.pTheApplication.pSamplesDir
else:
sdir = osys.TheApplication.SamplesDir
sdir
"""
Explanation: <font color='#005078'>3. Introspection of properties of ZOS objects on <kbd>Tab</kbd> press</font>
Because of the way property attributes are mapped by the PyWin32 library, the properties are not introspectable (visible) on <kbd>Tab</kbd> press in smart editors such as IPython. Only the methods are shown upon pressing the <kbd>Tab</kbd> button (see figure below). Note that although the properties are not "visible" they are still accessible.
PyZOS enhances the user experience by showing both the method and the properties of the ZOS object (by creating a wrapped object and delegating the attribute access to the underlying ZOS API COM objects). In addition, the properties are easily identified by the prefix <font color='magenta'><b>p</b></font> in front of the property attribute names.
End of explanation
"""
osys.TheApplication.SamplesDir # note that the properties doesn't have 'p' prefix in the names
"""
Explanation: Note that the above enhancement doesn't come at the cost of code-breaks. If you have already written applications that interfaced directly using pywin32, i.e. the code accessed properties as CoatingDir, SamplesDir, etc., the application should run even with PyZOS library, as shown below:
End of explanation
"""
file_out = os.path.join(sdir, 'invalid_directory',
'Single Lens Example wizard+EFFL.zmx')
try:
osys.SaveAs(file_out)
except Exception as err:
print(repr(err))
file_out = os.path.join(sdir, 'Sequential', 'Objectives',
'Single Lens Example wizard+EFFL.zmx')
osys.SaveAs(file_out)
# Aperture
if USE_PYZOS:
sdata = osys.pSystemData
sdata.pAperture.pApertureValue = 40
else:
sdata = osys.SystemData
sdata.Aperture.ApertureValue = 40
# Set Field data
if USE_PYZOS:
field = sdata.pFields.AddField(0, 5.0, 1.0)
print('Number of fields =', sdata.pFields.pNumberOfFields)
else:
field = sdata.Fields.AddField(0, 5.0, 1.0)
print('Number of fields =', sdata.Fields.NumberOfFields)
"""
Explanation: <font color='#005078'>4. Ability to override methods of existing ZOS objects</font>
There are some reasons why we may want to override some of the methods provided by ZOS. As an example, consider the SaveAs method of OpticalSystem that accepts a filename. If the filename is invalid, the SaveAs method doesn't raise any exception/error; instead, the lens data is saved to the default file "Lens.zmx".
This function in PyZOS has been overridden to raise an exception if the directory path is not valid, as shown below:
End of explanation
"""
Image('./images/00_02_constants.png')
# Setting wavelength using wavelength preset
if USE_PYZOS:
sdata.pWavelengths.SelectWavelengthPreset(zos.Const.WavelengthPreset_d_0p587);
else:
sdata.Wavelengths.SelectWavelengthPreset(constants.WavelengthPreset_d_0p587);
"""
Explanation: <font color='#005078'>5. <kbd>Tab</kbd> completion and introspection of API constants</font>
The ZOS API provides a large set of enumerations that are accessible as constants through the constants object of PyWin32. However, they are not introspectable using the <kbd>Tab</kbd> key. PyZOS automatically retrieves the constants and makes them introspectable as shown below:
End of explanation
"""
# Set Lens data Editor
if USE_PYZOS:
osys.zInsertNewSurfaceAt(1)
osys.zInsertNewSurfaceAt(1)
osys.zSetSurfaceData(1, thick=10, material='N-BK7', comment='front of lens')
osys.zSetSurfaceData(2, thick=50, comment='rear of lens')
osys.zSetSurfaceData(3, thick=350, comment='Stop is free to move')
else:
lde = osys.LDE
lde.InsertNewSurfaceAt(1)
lde.InsertNewSurfaceAt(1)
surf1 = lde.GetSurfaceAt(1)
surf2 = lde.GetSurfaceAt(2)
surf3 = lde.GetSurfaceAt(3)
surf1.Thickness = 10.0
surf1.Comment = 'front of lens'
surf1.Material = 'N-BK7'
surf2.Thickness = 50.0
surf2.Comment = 'rear of lens'
surf3.Thickness = 350.0
surf3.Comment = 'Stop is free to move'
"""
Explanation: <font color='#005078'>6. Ability to add custom methods and properties to any ZOS API object</font>
PyZOS allows us to easily extend the functionality of any ZOS API object by adding custom methods and properties, supporting the idea of developing a useful library over time. In the following block of code we have added custom methods zInsertNewSurfaceAt() and zSetSurfaceData() to the OpticalSystem object.
(Please note there is <u>no</u> implication that one cannot build a common set of functions without using PyZOS. Here, we only show that PyZOS allows us to add methods to the ZOS objects. How to add new methods to PyZOS objects will be explained later.)
End of explanation
"""
Image('./images/00_03_extendiblity_custom_functions.png')
# Setting solves - Make thickness and radii variable
# nothing to demonstrate in particular in this block of code
if USE_PYZOS:
osys.pLDE.GetSurfaceAt(1).pRadiusCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(1).pThicknessCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(2).pRadiusCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(2).pThicknessCell.MakeSolveVariable()
osys.pLDE.GetSurfaceAt(3).pThicknessCell.MakeSolveVariable()
else:
surf1.RadiusCell.MakeSolveVariable()
surf1.ThicknessCell.MakeSolveVariable()
surf2.RadiusCell.MakeSolveVariable()
surf2.ThicknessCell.MakeSolveVariable()
surf3.ThicknessCell.MakeSolveVariable()
# Setting up the default merit function
# this code block again shows that we can create add custom methods
# based on our requirements
if USE_PYZOS:
osys.zSetDefaultMeritFunctionSEQ(ofType=0, ofData=1, ofRef=0, rings=2, arms=0, grid=0,
useGlass=True, glassMin=3, glassMax=15, glassEdge=3,
useAir=True, airMin=0.5, airMax=1000, airEdge=0.5)
else:
mfe = osys.MFE
wizard = mfe.SEQOptimizationWizard
wizard.Type = 0 # RMS
wizard.Data = 1 # Spot Radius
wizard.Reference = 0 # Centroid
wizard.Ring = 2 # 3 Rings
wizard.Arm = 0 # 6 Arms
wizard.IsGlassUsed = True
wizard.GlassMin = 3
wizard.GlassMax = 15
wizard.GlassEdge = 3
wizard.IsAirUsed = True
wizard.AirMin = 0.5
wizard.AirMax = 1000
wizard.AirEdge = 0.5
wizard.IsAssumeAxialSymmetryUsed = True
wizard.CommonSettings.OK()
"""
Explanation: The custom functions are introspectable, and they are identified by the prefix <font color='magenta'><b>z</b></font> to their names as shown in the following figure.
End of explanation
"""
Image('./images/00_04_extendiblity_required_methods.png')
"""
Explanation: Here we can demonstrate another strong reason why we may need to add methods to ZOS objects when using the ZOS API with the pywin32 library. The problem is illustrated in the figure below. According to the ZOS API manual, the MFE object (IMeritFunctionEditor) should have the methods AddRow(), DeleteAllRows(), DeleteRowAt(), DeleteRowsAt(), InsertRowAt() and GetRowAt(), which it inherits from the IEditor object. However, due to the way pywin32 handles inheritance, these methods (defined in the base class) are apparently not available to the derived class object [1].
End of explanation
"""
# Add operand
if USE_PYZOS:
mfe = osys.pMFE
operand1 = mfe.InsertNewOperandAt(1)
operand1.ChangeType(zos.Const.MeritOperandType_EFFL)
operand1.pTarget = 400.0
operand1.pWeight = 1.0
else:
operand1 = mfe.InsertNewOperandAt(1)
operand1.ChangeType(constants.MeritOperandType_EFFL)
operand1.Target = 400.0
operand1.Weight = 1.0
# Local optimization
if USE_PYZOS:
local_opt = osys.pTools.OpenLocalOptimization()
local_opt.pAlgorithm = zos.Const.OptimizationAlgorithm_DampedLeastSquares
local_opt.pCycles = zos.Const.OptimizationCycles_Automatic
local_opt.pNumberOfCores = 8
local_opt.RunAndWaitForCompletion()
local_opt.Close()
else:
local_opt = osys.Tools.OpenLocalOptimization()
local_opt.Algorithm = constants.OptimizationAlgorithm_DampedLeastSquares
local_opt.Cycles = constants.OptimizationCycles_Automatic
local_opt.NumberOfCores = 8
base_tool = CastTo(local_opt, 'ISystemTool')
base_tool.RunAndWaitForCompletion()
base_tool.Close()
# save the latest changes to the file
osys.Save()
"""
Explanation: In order to solve the above problem in PyZOS, currently we "add" these methods to the derived (and wrapped) objects and delegate the calls to the base class. (Probably there is a more intelligent method of solving this ... which will not require so much code re-writing!)
End of explanation
"""
%matplotlib inline
osys2 = zos.OpticalSystem()
# load a lens into the Optical System
lens = 'Cooke 40 degree field.zmx'
zfile = os.path.join(sdir, 'Sequential', 'Objectives', lens)
osys2.LoadFile(zfile, False)
osys2.pSystemName
# check the aperture
osys2.pSystemData.pAperture.pApertureValue
# a more detailed information about the pupil
osys2.pLDE.GetPupil()
# Thickness of a surface
surf6 = osys2.pLDE.GetSurfaceAt(6)
surf6.pThickness
# Thickness of surface through custom added method
osys2.zGetSurfaceData(6).thick
# Open Analysis windows in the system currently
num_analyses = osys2.pAnalyses.pNumberOfAnalyses
for i in range(num_analyses):
print(osys2.pAnalyses.Get_AnalysisAtIndex(i+1).pGetAnalysisName)
#mtf analysis
fftMtf = osys2.pAnalyses.New_FftMtf() # open a new FFT MTF window
fftMtf
fftMtf_settings = fftMtf.GetSettings()
fftMtf_settings
# Set the maximum frequency to 160 lp/mm
fftMtf_settings.pMaximumFrequency = 160.0
# run the analysis
fftMtf.ApplyAndWaitForCompletion()
# results
fftMtf_results = fftMtf.GetResults() # returns an <pyzos.zosutils.IAR_ object
# info about the result data
print('Number of data grids:', fftMtf_results.pNumberOfDataGrids)
print('Number of data series:', fftMtf_results.pNumberOfDataSeries)
ds = fftMtf_results.GetDataSeries(1)
ds.pDescription
ds.pNumSeries
ds.pSeriesLabels
"""
Explanation: <font color='#008000'>Create a second optical system to load a standard lens for FFT MTF analysis</font>
End of explanation
"""
dsXdata = ds.pXData
dsYdata = ds.pYData
freq = np.array(dsXdata.pData)
mtf = np.array(dsYdata.pData) # shape = (len(freq) , ds.pNumSeries)
# build a function to plot the FFTMTF
def plot_FftMtf(optical_system, max_freq=160.0):
fftMtf = optical_system.pAnalyses.New_FftMtf()
fftMtf.GetSettings().pMaximumFrequency = max_freq
fftMtf.ApplyAndWaitForCompletion()
fftMtf_results = fftMtf.GetResults()
fig, ax = plt.subplots(1,1, figsize=(8,6))
num_dataseries = fftMtf_results.pNumberOfDataSeries
col = ['#0080FF', '#F52080', '#00CC60', '#B96F20', '#1f77b4',
'#ff7f0e', '#2ca02c', '#8c564b', '#00BFFF', '#FF8073']
for i in range(num_dataseries):
ds = fftMtf_results.GetDataSeries(i)
dsXdata = ds.pXData
dsYdata = ds.pYData
freq = np.array(dsXdata.pData)
mtf = np.array(dsYdata.pData) # shape = (len(freq) , ds.pNumSeries)
ax.plot(freq[::5], mtf[::5, 0], color=col[i], lw=1.5, label=ds.pDescription) # tangential
ax.plot(freq[::5], mtf[::5, 1], '--', color=col[i], lw=2) # sagittal
ax.set_xlabel('Spatial Frequency (lp/mm)')
ax.set_ylabel('FFT MTF')
ax.legend()
plt.text(0.85, -0.1,u'\u2014 {}'.format(ds.pSeriesLabels[0]), transform=ax.transAxes)
plt.text(0.85, -0.15,'-- {}'.format(ds.pSeriesLabels[1]), transform=ax.transAxes)
plt.grid()
plt.show()
# FFT MTF of Optical System 2
plot_FftMtf(osys2)
# FFT MTF of Optical System 1
plot_FftMtf(osys, 100)
# Close the application
if USE_PYZOS:
app = osys.pTheApplication
app.CloseApplication()
del app
"""
Explanation: Since these objects have not been wrapped (at the time of this writing), we will wrap them first
End of explanation
"""
|
philmui/datascience
|
lecture07.big.data/lecture07.3.trends.ipynb
|
mit
|
yrs = [str(yr) for yr in range(2002, 2016)]
"""
Explanation: We are only interested in the year range from 2002 to 2015.
End of explanation
"""
export_df = df[(df['trade_type'] == 'Export') &
(df['partner'] == 'EXT_EU28')
].loc[['EU28', 'UK']][yrs]
export_df.head(4)
"""
Explanation: Let's select the following records:
1. Exports only
2. Exports from EU28 and the UK to partners outside the EU28 (partner code EXT_EU28)
End of explanation
"""
export_df = export_df.T
export_df.head(4)
"""
Explanation: Let's transpose this to get 2 columns of time-series data:
End of explanation
"""
export_df = export_df.rename(columns={'EU28': 'EU28_TO_EXT', 'UK': 'UK_TO_EXT'})
export_df.head(4)
"""
Explanation: Let's rename the columns to clarify that they hold exports from these entities:
End of explanation
"""
int_df = df[(df['trade_type'] == 'Export') &
(df['partner'] == 'EU28')
].loc[['EU28', 'UK']][yrs]
int_df.head(4)
int_df = int_df.T
int_df.head(4)
"""
Explanation: Now, let's get the corresponding export columns for the UK and EU28 to partners inside the EU28
End of explanation
"""
export_df = pd.concat([export_df, int_df], axis=1)
export_df.head(4)
export_df = export_df.rename(columns={'EU28': 'EU28_TO_INT',
'UK' : 'UK_TO_INT'})
export_df.head(4)
"""
Explanation: Let's now combine these 2 new columns with the external (EXT_EU28) export columns for the UK and EU28
End of explanation
"""
export_df.plot(legend=False)
export_df.plot()
export_df[['UK_TO_EXT', 'UK_TO_INT']].plot()
"""
Explanation: Trends
Let's now plot to see any trends
End of explanation
"""
from bokeh.plotting import figure, output_file, show
from bokeh.layouts import gridplot
TOOLS = 'resize,pan,wheel_zoom,box_zoom,reset,hover'
p = figure(tools=TOOLS, x_range=(2002, 2015), y_range=(200000, 500000),
title="UK Import Export Trends from 2002-2014")
p.yaxis.axis_label = "Value in $1000"
p.line(yrs, export_df['UK_TO_EXT'], color='#A6CEE3', legend='UK_TO_EXT')
p.line(yrs, export_df['UK_TO_INT'], color='#B2DF8A', legend='UK_TO_INT')
p.legend.location = 'top_left'
output_file("uk_grade.html", title="UK Trade from 2002-2014")
# open a browser
show(p)
"""
Explanation: Interactive Plot
End of explanation
"""
df = df[~ df.index.isin(['EU28'])]
df.head(4)
pct_change_df = df.copy()
"""
Explanation: Outliers
Let's look at the % change.
First, let's remove the aggregate sum (identified by the aggregate key 'EU28'). Remember that we have already set the index to "geo".
End of explanation
"""
for yr in yrs:
pct_change_df[yr] = (df[yr] - df[str(int(yr)-1)]) / df[str(int(yr)-1)]
pct_change_df.head(4)
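# Equivalent one-liner (my addition): pandas can compute the same year-over-year
# change directly, assuming the year columns from '2001' through '2015' are present
# and numeric.
alt_pct_change = df[[str(y) for y in range(2001, 2016)]].pct_change(axis=1)[yrs]
alt_pct_change.head(4)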
"""
Explanation: Recall that the yrs values are of type "str" even though they represent year numbers.
End of explanation
"""
[(yr, abs(pct_change_df[yr].max() - pct_change_df[yr].min(0))) for yr in yrs]
"""
Explanation: Which year has the largest spread?
End of explanation
"""
pct_change_df['2010'].std()
pct_change_df['2010'].mean()
"""
Explanation: 2010 seems to have a big spread in % change compared to other recent years.
Let's find some outliers by using standard deviations.
End of explanation
"""
pct_change_df[pct_change_df['2010'].abs() >=
(pct_change_df['2010'].mean() + 2*pct_change_df['2010'].std())]
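# An alternative way to flag outliers (my addition), using explicit z-scores.
# Note this is a slightly different criterion from the cell above, which compares
# |value| against mean + 2*std.
z2010 = (pct_change_df['2010'] - pct_change_df['2010'].mean()) / pct_change_df['2010'].std()
pct_change_df[z2010.abs() > 2]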
"""
Explanation: Let's define outliers as those more than 2 standard deviations from the mean.
End of explanation
"""
pct_change_df['2010'].sort_values()
"""
Explanation: Looks like these 3 countries are outliers defined as having % change > 2 standard deviations from their means.
Let's use sorting to see the range of values for 2010
End of explanation
"""
pct_change_df[pct_change_df['2010'] < 0]
"""
Explanation: There are very few countries with negative % change values for 2010. Let's separate out those values.
End of explanation
"""
pct_change_df[pct_change_df['2010'] > 0.4]
"""
Explanation: Looks like Greece, Hungary, and Ireland all shrank in imports for 2010. Luxembourg shrank in both imports & exports in 2010.
Also looks like very few countries have % change values > 0.4. Let's examine those values for 2010.
End of explanation
"""
|
Jim00000/Numerical-Analysis
|
7_Boundary_Value_Problems.ipynb
|
unlicense
|
# Import modules
import numpy as np
import scipy
"""
Explanation: ★ Boundary Value Problems ★
End of explanation
"""
def ode_rkf45(f, a, b, y0, h = 1e-3, tol = 1e-6):
    # Adaptive Runge-Kutta-Fehlberg (RKF45): the embedded 4th- and 5th-order
    # estimates are compared to control the step size h.
    w = y0.astype(np.float64)
    t = a
    while(t < b):
        h = min(h, b - t)  # do not step past the right endpoint
        s1 = f(t, w)
        hs1 = h * s1
        s2 = f(t + h / 4, w + hs1 / 4)
        hs2 = h * s2
        s3 = f(t + 3 / 8 * h, w + 3 / 32 * hs1 + 9 / 32 * hs2)
        hs3 = h * s3
        s4 = f(t + 12 / 13 * h, w + 1932 / 2197 * hs1 - 7200 / 2197 * hs2 + 7296 / 2197 * hs3)
        hs4 = h * s4
        s5 = f(t + h, w + 439 / 216 * hs1 - 8 * hs2 + 3680 / 513 * hs3 - 845 / 4104 * hs4)
        hs5 = h * s5
        s6 = f(t + h / 2, w - 8 / 27 * hs1 + 2 * hs2 - 3544 / 2565 * hs3 + 1859 / 4104 * hs4 - 11 / 40 * hs5)
        z = w + h * (16 / 135 * s1 + 6656 / 12825 * s3 + 28561 / 56430 * s4 - 9 / 50 * s5 + 2 / 55 * s6)  # order 5
        w4 = w + h * (25 / 216 * s1 + 1408 / 2565 * s3 + 2197 / 4104 * s4 - s5 / 5)                       # order 4
        e = np.abs(z - w4)
        if np.all(e / np.abs(w4) < tol):
            # accept the step and keep the higher-order estimate (local extrapolation)
            w = z
            t += h
        else:
            # reject the step and retry with a smaller (scalar) step size
            h = 0.8 * np.min(pow(tol * np.abs(w4) / e, 1 / 5)) * h
    return w
from scipy.optimize import fsolve
f = lambda t, w : np.array([w[1], 4 * w[0]])
def F(s):
    s = np.ravel(s)[0]  # fsolve passes a length-1 array; extract the scalar shooting slope
    return (ode_rkf45(f, 0, 1, np.array([1.0, s])) - 3)[0]
x = fsolve(F, 0)
print(x)
from scipy.integrate import ode
from scipy.optimize import fsolve
def F(s):
a = 0
ya = 1.0
b = 1
step = 100
dt = (b - a) / step
f = lambda t, w : np.array([w[1], 4 * w[0]])
r = ode(f).set_integrator('dopri5').set_initial_value(np.array([ya, s]), 0)
while r.successful() and step > 0:
x = r.integrate(r.t + dt)[0]
step -= 1
return x - 3
x = fsolve(F, 0)
print(x)
"""
Explanation: 7.1 Shooting Method
Example
Apply the shooting method to the boundary value problem
$$
\left\{\begin{aligned}
& y'' = 4y \\
& y(0) = 1 \\
& y(1) = 3
\end{aligned}\right.
$$
End of explanation
"""
A = np.array([-9/4, 1, 0, 1, -9/4, 1, 0, 1, -9/4]).reshape(3, 3)
b = np.array([-1, 0, -3])
x = np.linalg.solve(A, b)
print(x)
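# Optional check (my addition, not in the original notebook): compare with the exact
# solution y(t) = A*exp(2t) + B*exp(-2t) of y'' = 4y, y(0) = 1, y(1) = 3, evaluated
# at the interior grid points t = 0.25, 0.5, 0.75.
A_exact = (3 - np.exp(-2)) / (np.exp(2) - np.exp(-2))
B_exact = 1 - A_exact
t_nodes = np.array([0.25, 0.5, 0.75])
print(A_exact * np.exp(2 * t_nodes) + B_exact * np.exp(-2 * t_nodes))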
"""
Explanation: 7.2 Finite Difference Methods
$$
\begin{align}
y'(t) &= \frac{y(t+h) - y(t-h)}{2h} - \frac{h^2}{6}y'''(c) \\
y''(t) &= \frac{y(t+h) - 2y(t) + y(t-h)}{h^2} - \frac{h^2}{12}y''''(c)
\end{align}
$$
Example
Solve the BVP
$$
\left\{\begin{aligned}
y'' &= 4y \\
y(0) &= 1 \\
y(1) &= 3
\end{aligned}\right.
$$
using finite differences
$$
\begin{align}
& \frac{w_{i+1} - 2w_{i} + w_{i-1}}{h^2} - 4w_i = 0 \\
& \rightarrow w_{i-1} + (-4h^2 - 2)w_i + w_{i+1} = 0 \\
& \Rightarrow \\
& 1 + (-4h^2 - 2)w_1 + w_2 = 0 \\
& w_1 + (-4h^2 - 2)w_2 + w_3 = 0 \\
& w_2 + (-4h^2 - 2)w_3 + 3 = 0
\end{align}
$$
$$
\begin{bmatrix}
-\frac{9}{4} & 1 & 0 \\
1 & -\frac{9}{4} & 1 \\
0 & 1 & -\frac{9}{4}
\end{bmatrix}
\begin{bmatrix}
w_1 \\
w_2 \\
w_3
\end{bmatrix}
=
\begin{bmatrix}
-1 \\
0 \\
-3
\end{bmatrix}
$$
End of explanation
"""
def finite_element_method_sp( a, b, ya, yb, n):
h = (b - a) / (n + 1)
alpha = (8 / 3) * h + 2 / h
beta = (2 / 3) * h - 1 / h
A = np.zeros(n * n).reshape(n, n)
np.fill_diagonal(A, alpha)
dia_range = np.arange(n - 1)
A[dia_range, dia_range + 1] = beta
A[dia_range + 1, dia_range] = beta
b = np.zeros(n)
b[0] = -ya * beta
b[-1] = -yb * beta
x = np.linalg.solve(A, b)
return x
finite_element_method_sp(0, 1, 1, 3, 3)
"""
Explanation: 7.3 Collocation and the Finite Element Method
Example
Apply the Finite Element Method to the BVP
$$
\left\{\begin{aligned}
y'' &= 4y \\
y(0) &= 1 \\
y(1) &= 3
\end{aligned}\right.
$$
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session02/Day5/PracticalMachLearnWorkflowSolutions.ipynb
|
mit
|
import numpy as np
from astropy.table import Table
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: A Practical Guide to the Machine Learning Workflow:
Separating Stars and Galaxies from SDSS
Version 0.1
By AA Miller 2017 Jan 22
We will now follow the steps from the machine learning workflow lecture to develop an end-to-end machine learning model using actual astronomical data. As a reminder the workflow is as follows:
Data Preparation
Model Building
Model Evaluation
Model Optimization
Model Predictions
Some of these steps will be streamlined to allow us to fully build a model within the alloted time.
Science background: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring galaxy clustering without knowing which sources are galaxies and which are stars.
During this exercise, we will utilize supervised machine-learning methods to separate extended (galaxies) and point sources (stars, QSOs) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space.
End of explanation
"""
from astroquery.sdss import SDSS # enables direct queries to the SDSS database
"""
Explanation: Problem 1) Obtain and Examine Training Data
As a reminder, for supervised-learning problems we use a training set, sources with known labels, i.e. they have been confirmed as normal stars, QSOs, or galaxies, to build a model to classify new observations where we do not know the source label.
The training set for this exercise uses Sloan Digital Sky Survey (SDSS) data. For features, we will start with each $r$-band magnitude measurement made by SDSS. This yields 8 features (twice that of the Iris data set, but significantly fewer than the 454 properties measured for each source in SDSS).
Step 1 in the ML workflow is data preparation - we must curate the training set. As a reminder:
A machine-learning model is only as good as its training set.
This point cannot be emphasized enough. Machine-learning models are data-driven, they do not capture any physical theory, and thus it is essential that the training set satisfy several criteria.
Two of the most important criteria for a good training set are:
the training set should be unbiased [this is actually really hard to achieve in astronomy since most surveys are magnitude limited]
the training set should be representative of the (unobserved or field) population of sources [a training set with no stars will yield a model incapable of discovering point sources]
So, step 1 (this is a must), we are going to examine the training set to see if anything suspicious is going on. We will use astroquery to directly access the SDSS database, and store the results in an astropy Table.
Note The SDSS API for astroquery is not standard for the package, which leads to a warning. This is not, however, a problem for our purposes.
End of explanation
"""
sdss_query = """SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
"""
sdss_set = SDSS.query_sql(sdss_query)
sdss_set
"""
Explanation: While it is possible to look up each of the names of the $r$-band magnitudes in the SDSS PhotoObjAll schema, the schema list is long, and thus difficult to parse by eye. Fortunately, we can identify the desired columns using the database itself:
select COLUMN_NAME
from INFORMATION_SCHEMA.Columns
where table_name = 'PhotoObjAll' AND
COLUMN_NAME like '%Mag/_r' escape '/'
which returns the following list of columns: psfMag_r, fiberMag_r, fiber2Mag_r, petroMag_r, deVMag_r, expMag_r, modelMag_r, cModelMag_r.
We now select these magnitude measurements for 10000 stars and galaxies from SDSS. Additionally, we join these photometric measurements with the SpecObjAll table to obtain their spectroscopic classifications, which will serve as labels for the machine-learning model.
Note - the SDSS database contains duplicate observations, flagged observations, and non-detections, which we condition the query to exclude (as explained further below). We also exclude quasars, as the precise photometric classification of these objects is ambiguous: low-$z$ AGN have resolvable host galaxies, while high-$z$ QSOs are point-sources. Query conditions:
p.mode = 1 select only the primary photometric detection of a source
s.sciencePrimary = 1 select only the primary spectroscopic detection of a source (together with above, prevents duplicates)
p.clean = 1 the SDSS clean flag excludes flagged observations and sources with non-detections
s.class != 'QSO' removes potentially ambiguous QSOs from the training set
End of explanation
"""
# complete
import seaborn as sns
sns.pairplot(sdss_set.to_pandas(), hue = 'class', diag_kind = 'kde')
"""
Explanation: To reiterate a point from above: data-driven models are only as good as the training set. Now that we have a potential training set, it is essential to inspect the data for any peculiarities.
Problem 1a
Can you easily identify any important properties of the data from the above table?
If not - is there a better way to examine the data?
Hint - emphasis on easy.
Solution 1a
This is the first instance where domain knowledge really helps us to tackle this problem. In this case the domain knowledge is the following: PSF measurements of galaxy brightness are terrible. Thus, psfmag_r is very different from the other mag measurements for galaxies, but similar for stars. Of course - this is readily identifiable, even to those without domain knowledge, if we visualize the data.
Problem 1b
Visualize the 8 dimensional feature set [this is intentionally open-ended...]
Does this visualization reveal anything that is not obvious from the table?
Can you identify any biases in the training set?
Remember - always worry about the data
Hint astropy Tables can be converted to pandas DataFrames with the .to_pandas() operator.
End of explanation
"""
from sklearn.model_selection import train_test_split
rs = 2 # we are in second biggest metropolitan area in the US
# complete
X = np.array( # complete
y = np.array( # complete
train_X, test_X, train_y, test_y = train_test_split( X, y, # complete
from sklearn.model_selection import train_test_split
rs = 2 # we are in second biggest metropolitan area in the US
feats = list(sdss_set.columns)
feats.remove('class')
X = np.array(sdss_set[feats].to_pandas())
y = np.array(sdss_set['class'])
train_X, test_X, train_y, test_y = train_test_split( X, y, test_size = 0.3, random_state = rs)
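# Quick sanity check (my addition): confirm that both splits contain a similar
# mix of GALAXY and STAR labels.
print(np.unique(train_y, return_counts=True))
print(np.unique(test_y, return_counts=True))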
"""
Explanation: Solution 1b
The visualization confirms our domain knowledge assertion: galaxy PSF measurements differ significantly from the other magnitude measurements.
The visualization also reveals the magnitude distribution of the training set, as well as a potential bias: the dip in the distribution at $r' \approx 19$ mag. There is no reason nature should produce fewer $r' \approx 19$ mag stars than $r' \approx 18$ mag stars, and, indeed, this is a bias due to the SDSS spectroscopic targeting algorithm. We will proceed, but we should be wary of this moving forward.
Finally, to finish off our preparation of the data - we need to create an independent test set that will be used to evaluate the accuracy/generalization properties of the model after everything has been tuned. Often, independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$. For this problem we will adopt 0.3.
sklearn.model_selection has a handy function train_test_split, which will simplify this process.
Problem 1c Split the 10k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: train_X, train_y, test_X, test_y, respectively. Use rs for the random_state in train_test_split.
Hint - recall that sklearn utilizes X, a 2D np.array(), and y as the features and labels arrays, respectively.
End of explanation
"""
bright_query = """SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
AND p.cModelMag_r < 18.5
ORDER BY p.objid ASC
"""
bright_set = SDSS.query_sql(bright_query)
bright_set
faint_query = """SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
AND p.cModelMag_r > 19.5
ORDER BY p.objid ASC
"""
faint_set = SDSS.query_sql(faint_query)
faint_set
"""
Explanation: Problem 2) An Aside on the Importance of Feature Engineering
It has been said that all machine learning is an exercise in feature engineering.
Feature engineering - the process of creating new features, combining features, removing features, collecting new data to supplement existing features, etc. is essential in the machine learning workflow. As part of the data preparation stage, it is useful to apply domain knowledge to engineer features prior to model construction. [Though it is important to know that feature engineering may be needed at any point in the ML workflow if the model does not provide desired results.]
Due to a peculiarity of our SDSS training set, we need to briefly craft a separate problem to demonstrate the importance of feature engineering.
For this aside, we will train the model on bright ($r' < 18.5$ mag) sources and test the model on faint ($r' > 19.5$ mag) sources. As you might guess the model will not perform well. Following some clever feature engineering, we will be able to improve this.
aside-to-the-aside
This exact situation happens in astronomy all the time, and it is known as sample selection bias. In brief, any time a larger aperture telescope is built, or instrumentation is greatly improved, a large swath of sources that were previously undetectable can now be observed. These fainter sources, however, may contain entirely different populations than their brighter counterparts, and thus any models trained on the bright sources will be biased when making predictions on the faint sources.
We train and test the model with 10000 sources using an identical query to the one employed above, with the added condition restricting the training set to bright sources and the test set to faint sources.
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
feats = # complete
bright_X = # complete
bright_y = # complete
KNNclf = # complete
from sklearn.neighbors import KNeighborsClassifier
feats = list(bright_set.columns)
feats.remove('class')
bright_X = np.array(bright_set[feats].to_pandas())
bright_y = np.array(bright_set['class'])
KNNclf = KNeighborsClassifier(n_neighbors = 11)
KNNclf.fit(bright_X, bright_y)
"""
Explanation: Problem 2a
Train a $k$ Nearest Neighbors model with $k = 11$ neighbors on the 10k source training set. Note - for this particular problem, the number of neighbors does not matter much.
End of explanation
"""
from sklearn.metrics import accuracy_score
faint_X = # complete
faint_y = # complete
faint_preds = # complete
print("The raw features produce a KNN model with accuracy ~{:.4f}".format( # complete
from sklearn.metrics import accuracy_score
faint_X = np.array(faint_set[feats].to_pandas())
faint_y = np.array(faint_set['class'])
faint_preds = KNNclf.predict(faint_X)
print("The raw features produce a KNN model with accuracy ~{:.4f}".format(accuracy_score(faint_y, faint_preds)))
"""
Explanation: Problem 2b
Evaluate the accuracy of the model when applied to the sources in the faint test set.
Does the model perform well?
Hint - you may find sklearn.metrics.accuracy_score useful for this exercise.
End of explanation
"""
bright_Xnorm = # complete
KNNclf = # complete
faint_predsNorm = # complete
print("The normalized features produce an accuracy ~{:.4f}".format( # complete
bright_Xnorm = bright_X[:,0][:,np.newaxis] - bright_X[:,1:]
faint_Xnorm = faint_X[:,0][:,np.newaxis] - faint_X[:,1:]
KNNclf = KNeighborsClassifier(n_neighbors = 11)
KNNclf.fit(bright_Xnorm, bright_y)
faint_predsNorm = KNNclf.predict(faint_Xnorm)
print("The normalized features produce an accuracy ~{:.4f}".format(accuracy_score(faint_y, faint_predsNorm)))
"""
Explanation: Solution 2b
Based on the pair plots generated above, stars and galaxies appear highly distinct in their SDSS r'-band measurements; however, much of that separation depends on apparent brightness, so a model trained only on bright sources likely exhibits poor performance on the faint test set. [we will see if we can confirm this]
Leveraging the same domain knowledge discussed above, namely that galaxies cannot be modeled with a PSF, we can "normalize" the magnitude measurements by taking their difference relative to psfMag_r. This normalization has the added advantage of removing any knowledge of the apparent brightness of the sources, which should help when comparing independent bright and faint sets.
Problem 2c
Normalize the feature vector relative to psfMag_r, and refit the $k$NN model to the 7 newly engineered features.
Does the accuracy improve when predicting the class of sources in the faint test set?
Hint - be sure you apply the exact same normalization to both the training and test sets
End of explanation
"""
import # complete
rs = 626 # area code for Pasadena
train_Xnorm = # complete
RFclf = # complete
from sklearn.ensemble import RandomForestClassifier
rs = 626 # area code for Pasadena
train_Xnorm = train_X[:,0][:,np.newaxis] - train_X[:,1:]
RFclf = RandomForestClassifier(n_estimators = 25, random_state = rs)
RFclf.fit(train_Xnorm, train_y)
"""
Explanation: Solution 2c
Wow! Normalizing the features produces a huge ($\sim{35}\%$) increase in accuracy. Clearly, we should be using normalized magnitude features moving forward.
In addition to demonstrating the importance of feature engineering, this exercise teaches another important lesson: contextual features can be dangerous.
Contextual astronomical features can provide very strong priors: stars are more likely to be found close to the galactic plane, supernovae occur next to/on top of galaxies, bluer stars have lower metallicity, etc. Thus, including contextual information may improve overall model performance.
However, all astronomical training sets are heavily biased. Thus, the strong priors associated with contextual features can lead to severely biased model predictions.
Generally, I (AAM) remove all contextual features from my ML models for this reason. If you are building ML models, consider contextual information as it may help overall performance, but... be wary.
Worry about the data
Problem 3) Model Building
After the data have been properly curated, the next important choice in the ML workflow is the selection of ML algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try three (or four, or five) different models and choose whichever works the best.
For the star-galaxy problem, we will use the Random Forest (RF) algorithm (Breiman 2001) as implemented by scikit-learn.
RandomForestClassifier is part of the sklearn.ensemble module.
RF has a number of nice properties for working with astronomical data:
relative insensitivity to noisy or useless features
invariant response to highly non-gaussian feature distributions
fast, flexible and scales well to large data sets
which is why we will adopt it here.
Problem 3a
Build a RF model using the normalized features from the training set.
Include 25 trees in the forest using the n_estimators parameter in RandomForestClassifier.
End of explanation
"""
# complete
print("The relative importance of the features is: \n{:s}".format( # complete
print(RFclf.feature_importances_) # print the importances
indices = np.argsort(RFclf.feature_importances_)[::-1] # sort the features most imp. --> least imp.
# recall that all features are normalized relative to psfMag_r
featStr = ", \n".join(['psfMag_r - {:s}'.format(x) for x in list(np.array(feats)[1:][indices])])
print("The relative importance of the features is: \n{:s}".format(featStr))
"""
Explanation: scikit-learn really makes it easy to build ML models.
Another nice property of RF is that it naturally provides an estimate of the most important features in the model.
[Once again - feature engineering comes into play, as it may be necessary to remove correlated features or unimportant features during the model construction in order to reduce run time or allow the model to fit in the available memory.]
In this case we don't need to remove any features [RF is relatively immune to correlated or unimportant features], but for completeness we measure the importance of each feature in the model.
RF feature importance was originally defined (Breiman 2001) by randomly shuffling the values of a particular feature and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() class (note that scikit-learn computes this attribute as an impurity-based importance rather than via permutation). The higher the value, the more important the feature.
Problem 3b
Calculate the relative importance of each feature.
Which feature is most important? Can you make sense of the feature ordering?
Hint - do not dwell too long on the final ordering of the features.
End of explanation
"""
# complete
print("The SDSS phot model produces an accuracy ~{:.4f}".format( # complete
phot_y = train_Xnorm[:,6] > 0.145
phot_class = np.empty(len(phot_y), dtype = '|S6')
phot_class[phot_y] = 'GALAXY'
phot_class[phot_y == False] = 'STAR'
print("The SDSS phot model produces an accuracy ~{:.4f}".format(accuracy_score(train_y, phot_class)))
"""
Explanation: Solution 3b
psfMag_r - deVMag_r is the most important feature. This makes sense based on the separation of stars and galaxies in the psfMag_r-deVMag_r plane (see the visualization results above).
Note - the precise ordering of the features can change due to their strong correlation with each other, though the fiberMag features are always the least important.
Problem 4) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. This is, in essence, the "engineering" step of machine learning [and why I (AAM) often caution against ML for scientific measurements and advocate for engineering-like problems instead].
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag} - \mathtt{cmodelMag} > 0.145.$$
Sources that satisfy this criterion are considered galaxies.
Problem 4a
Determine the baseline for the ML model by measuring the accuracy of the SDSS photometric classifier on the training set.
Hint - you may need to play around with array values to get accuracy_score to work.
End of explanation
"""
from sklearn.model_selection import # complete
RFpreds = # complete
print("The CV accuracy for the training set is {:.4f}".format( # complete
from sklearn.model_selection import cross_val_predict
RFpreds = cross_val_predict(RFclf, train_Xnorm, train_y, cv = 10)
print("The CV accuracy for the training set is {:.4f}".format(accuracy_score(train_y, RFpreds)))
"""
Explanation: The simple SDSS model sets a high standard! A $\sim{96}\%$ accuracy following a single hard cut is a phenomenal performance.
Problem 4b Using 10-fold cross validation, estimate the accuracy of the RF model.
End of explanation
"""
rs = 1936 # year JPL was founded
CVpreds1 = # complete
# complete
# complete
print("The CV accuracy for 1, 10, 100 trees is {:.4f}, {:.4f}, {:.4f}".format( # complete
rs = 1936 # year JPL was founded
CVpreds1 = cross_val_predict(RandomForestClassifier(n_estimators = 1, random_state=rs),
train_Xnorm, train_y, cv = 5)
CVpreds10 = cross_val_predict(RandomForestClassifier(n_estimators = 10, random_state=rs),
train_Xnorm, train_y, cv = 5)
CVpreds100 = cross_val_predict(RandomForestClassifier(n_estimators = 100, random_state=rs),
train_Xnorm, train_y, cv = 5)
print("The CV accuracy for 1, 10, 100 trees is {:.4f}, {:.4f}, {:.4f}".format(accuracy_score(train_y, CVpreds1),
accuracy_score(train_y, CVpreds10),
accuracy_score(train_y, CVpreds100)))
"""
Explanation: Phew! Our hard work to build a machine learning model has been rewarded, by creating an improved model: $\sim{96.9}\%$ accuracy vs. $\sim{96.4}\%$.
[But - was our effort worth only a $0.5\%$ improvement in the model?]
Problem 5) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimentional-feature space. Whether the model is smooth or coarse is application dependent -- be weary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
$N_\mathrm{tree}$ - the number of trees in the forest n_estimators (default: 10) in sklearn
$m_\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node max_features (default: sqrt(n_features)) in sklearn
Pruning criteria - defined stopping criteria for ending continued growth of the tree; there are many choices for this in sklearn (my preference is min_samples_leaf (default: 1), which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree). These tuning parameters map directly onto RandomForestClassifier keyword arguments, as sketched in the short example below.
Just as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we "finalize" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be "snooping."
On Tuesday we were introduced to GridSearchCV, which is an excellent tool for optimizing model parameters.
Before we get to that, let's try to develop some intuition for how the tuning parameters affect the final model predictions.
Problem 5a
Determine the 5-fold CV accuracy for models with $N_\mathrm{tree}$ = 1, 10, 100.
How do you expect changing the number of trees to affect the results?
End of explanation
"""
rs = 64 # average temperature in Los Angeles
from sklearn.model_selection import GridSearchCV
grid_results = # complete
print("The optimal parameters are:")
for key, item in grid_results.best_params_.items(): # warning - slightly different meanings in Py2 & Py3
print("{}: {}".format(key, item))
rs = 64 # average temperature in Los Angeles
from sklearn.model_selection import GridSearchCV
grid_results = GridSearchCV(RandomForestClassifier(random_state = rs),
{'n_estimators': [30, 100, 300], 'max_features': [1, 3, 7], 'min_samples_leaf': [1,10]},
cv = 3)
grid_results.fit(train_Xnorm, train_y)
print("The optimal parameters are:")
for key, item in grid_results.best_params_.items(): # warning - slightly different meanings in Py2 & Py3
print("{}: {}".format(key, item))
"""
Explanation: Solution 5a
Using a single tree will produce high variance results, as the features selected at the top of the tree greatly influence the final classifications. Thus, we expect it to have the lowest accuracy.
While (in this case) the effect is small, it is clear that $N_\mathrm{tree}$ affects the model output.
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters?
Brute force.
This data set and the number of tuning parameters are small, so brute force is appropriate (alternatives exist when this isn't the case). We can optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point (a quick programmatic check for this is sketched below), and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case, the model is likely overfit.
Problem 5b
Use GridSearchCV to perform a 3-fold CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb.
What are the optimal tuning parameters for the model?
Hint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.
Hint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid
End of explanation
"""
RFopt_clf = # complete
test_preds = # complete
print('The optimized model produces a generalization error of {:.4f}'.format( # complete
RFopt_clf = RandomForestClassifier(n_estimators=30, max_features=3, min_samples_leaf=10)
RFopt_clf.fit(train_Xnorm, train_y)
test_Xnorm = test_X[:,0][:,np.newaxis] - test_X[:,1:]
test_preds = RFopt_clf.predict(test_Xnorm)
print('The optimized model produces a generalization error of {:.4f}'.format(1 - accuracy_score(test_y, test_preds)))
"""
Explanation: Now that the model is fully optimized - we are ready for the moment of truth!
Problem 5c
Using the optimized model parameters, train a RF model and estimate the model's generalization error using the test set.
How does this compare to the baseline model?
End of explanation
"""
from sklearn.metrics import # complete
# complete
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(test_y, test_preds)
print(cm)
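# Sketch (aside, not part of the original notebook): sklearn's confusion_matrix
# orders the classes by sorted label, so with labels 'GALAXY' and 'STAR' row 0
# corresponds to GALAXY. Taking GALAXY as the positive class, the TPR and TNR
# quoted in the explanation below can be read off the matrix as follows.
tpr_gal = cm[0, 0] / float(cm[0, :].sum())  # fraction of true galaxies classified as galaxies
tnr_gal = cm[1, 1] / float(cm[1, :].sum())  # fraction of true stars classified as stars
print("TPR = {:.4f}, TNR = {:.4f}".format(tpr_gal, tnr_gal))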
"""
Explanation: Solution 5c
The optimized model provides a $\sim{0.6}\%$ improvement over the baseline model.
We will now examine the performance of the model using some alternative metrics.
Note - if these metrics are essential for judging the model performance, then they should be incorporated into the workflow in the evaluation stage, prior to examination of the test set.
Problem 5d
Calculate the confusion matrix for the model, as determined by the test set.
Is there symmetry to the misclassifications?
End of explanation
"""
from sklearn.metrics import roc_curve
test_preds_proba = # complete
# complete
fpr, tpr, thresholds = roc_curve( # complete
plt.plot( # complete
plt.legend()
from sklearn.metrics import roc_curve, roc_auc_score
test_preds_proba = RFopt_clf.predict_proba(test_Xnorm)
test_y_stars = np.zeros(len(test_y), dtype = int)
test_y_stars[np.where(test_y == "STAR")] = 1
test_y_galaxies = test_y_stars*-1. + 1
fpr, tpr, thresholds = roc_curve(test_y_stars, test_preds_proba[:,1])
plt.plot(fpr, tpr, label = r'$\mathrm{STAR}$', color = "MediumAquaMarine")
fpr, tpr, thresholds = roc_curve(test_y_galaxies, test_preds_proba[:,0])
plt.plot(fpr, tpr, label = r'$\mathrm{GALAXY}$', color = "Tomato")
plt.legend()
"""
Explanation: Solution 5d
Adopting galaxies as the positive class, the TPR = 96.7%, while the TNR = 97.1%. Thus, yes, there is ~symmetry to the misclassifications.
Problem 5e
Calculate and plot the ROC curves for both stars and galaxies.
Hint - you'll need probabilities in order to calculate the ROC curve.
End of explanation
"""
# complete
fpr01_idx = (np.abs(fpr-0.01)).argmin()
tpr01 = tpr[fpr01_idx]
threshold01 = thresholds[fpr01_idx]
print("To achieve FPR = 0.01, a decision threshold = {:.4f} must be adopted".format(threshold01))
print("This threshold will miss {:.4f} of galaxies".format(1 - tpr01))
"""
Explanation: Problem 5f
Suppose you want a model that only misclassifies 1% of stars as galaxies.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model miss?
Can you think of a reason to adopt such a threshold?
End of explanation
"""
QSO_query = """SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class = 'QSO'
ORDER BY s.specobjid ASC
"""
QSO_set = SDSS.query_sql(QSO_query)
"""
Explanation: Solution 5f
When building galaxy 2-point correlation functions it is very important to avoid including stars in the statistics as they will bias the final measurement.
Finally - always remember:
worry about the data
Challenge Problem) Taking the Plunge
Applying the model to field data
QSOs are unresolved sources that look like stars in optical imaging data. We will now download photometric measurements for 10k QSOs from SDSS and see how accurate the RF model performs for these sources.
End of explanation
"""
qso_X = np.array(QSO_set[feats].to_pandas())
qso_y = np.empty(len(QSO_set),dtype='|S4') # we are defining QSOs as stars for this exercise
qso_y[:] = 'STAR'
qso_Xnorm = qso_X[:,0][:,np.newaxis] - qso_X[:,1:]
qso_preds = RFclf.predict(qso_Xnorm)
print("The RF model correctly classifies ~{:.4f} of the QSOs".format(accuracy_score(qso_y, qso_preds)))
"""
Explanation: Challenge 1
Calculate the accuracy with which the model classifies QSOs based on the 10k QSOs selected with the above command. How does that accuracy compare to that estimated by the test set?
End of explanation
"""
# As discussed above, low-z AGN have resolved host galaxies, which will confuse the classifier;
# this can be mitigated by only selecting high-z QSOs (z > 1.5)
QSO_query = """SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class = 'QSO'
AND s.z > 1.5
ORDER BY s.specobjid ASC
"""
QSO_set = SDSS.query_sql(QSO_query)
qso_X = np.array(QSO_set[feats].to_pandas())
qso_y = np.empty(len(QSO_set),dtype='|S4') # we are defining QSOs as stars for this exercise
qso_y[:] = 'STAR'
qso_Xnorm = qso_X[:,0][:,np.newaxis] - qso_X[:,1:]
qso_preds = RFclf.predict(qso_Xnorm)
print("The RF model correctly classifies ~{:.4f} of the QSOs".format(accuracy_score(qso_y, qso_preds)))
"""
Explanation: Challenge 2
Can you think of any reasons why the performance would be so much worse for the QSOs than it is for the stars?
Can you obtain a ~.97 accuracy when classifying QSOs?
End of explanation
"""
# complete
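# Sketch only (not the notebook's solution): one possible approach to the
# challenge described below. Assumptions: the photometric classification lives
# in the PhotoObjAll "type" column (6 = STAR, 3 = GALAXY), cModelMag_r is used
# for the r < 21 mag cut, and a truly random sample would require additional
# care; here we simply take the first 10000 matching rows.
field_query = """SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
p.type
FROM PhotoObjAll AS p
WHERE p.mode = 1 AND p.clean = 1 AND p.cModelMag_r < 21
AND (p.type = 3 OR p.type = 6)
"""
field_set = SDSS.query_sql(field_query)
field_X = np.array(field_set[feats].to_pandas())
field_Xnorm = field_X[:,0][:,np.newaxis] - field_X[:,1:]
field_y = np.empty(len(field_set), dtype='|S6')
field_y[np.array(field_set['type']) == 6] = 'STAR'
field_y[np.array(field_set['type']) == 3] = 'GALAXY'
field_preds = RFopt_clf.predict(field_Xnorm)
print("Adopting SDSS phot classifications as truth, the RF accuracy is ~{:.4f}".format(accuracy_score(field_y, field_preds)))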
"""
Explanation: Challenge 3
Perform an actual test of the model using "field" sources. The SDSS photometric classifier is nearly perfect for sources brighter than $r = 21$ mag. Download a random sample of $r < 21$ mag photometric sources, and classify them using the optimized RF model. Adopting the photometric classifications as ground truth, what is the accuracy of the RF model?
Hint - you'll need to look up the parameter describing photometric classification in SDSS
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/sandbox-2/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-2', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
sbu-python-summer/python-tutorial
|
day-1/python-day1-exercises1.ipynb
|
bsd-3-clause
|
import random
random_number = random.randint(0,9)
"""
Explanation: Exercises
Q 1
When talking about floating point, we discussed machine epsilon, $\epsilon$—this is the smallest number that when added to 1 is still different from 1.
We'll compute $\epsilon$ here:
Pick an initial guess for $\epsilon$ of eps = 1.
Create a loop that checks whether 1 + eps is different from 1
Each loop iteration, cut the value of eps in half
What value of $\epsilon$ do you find?
Q 2
To iterate over tuples where the i-th tuple contains the i-th elements of several sequences, we can use the zip(*sequences) function.
We will iterate over two lists, names and age, and print out the resulting tuples.
Start by initializing lists names = ["Mary", "John", "Sarah"] and age = [21, 56, 98].
Iterate over the tuples containing a name and an age; the zip(list1, list2) function might be useful here.
Print out formatted strings of the type "NAME is AGE years old".
Q 3
The function enumerate(sequence) returns tuples containing the indices of objects in the sequence, and the objects themselves.
The random module provides tools for working with random objects. In particular, random.randint(start, end) generates a random number not smaller than start, and not bigger than end.
Generate a list of 10 random numbers from 0 to 9.
Using the enumerate(random_list) function, iterate over the tuples of random numbers and their indices, and print out "Match: NUMBER and INDEX" if the random number and its index in the list match.
End of explanation
"""
titles = ["don quixote",
"in search of lost time",
"ulysses",
"the odyssey",
"war and piece",
"moby dick",
"the divine comedy",
"hamlet",
"the adventures of huckleberry finn",
"the great gatsby"]
"""
Explanation: Q 4
The Fibonacci sequence is a numerical sequence where each number is the sum of the 2 preceding numbers, e.g., 1, 1, 2, 3, 5, 8, 13, ...
Create a list where the elements are the terms in the Fibonacci sequence:
Start with the list fib = [1, 1]
Loop 25 times, compute the next term as the sum of the previous 2 terms and append to the list
After the loop is complete, print out the terms
You may find it useful to use fib[-1] and fib[-2] to access the last two items in the list
Q 5
We can use the input() function to ask for input from the prompt (note: in python 2 the function was called raw_input()).
Create an empty list and use a while loop to ask the user for input and append their input to the list. Keep looping until 10 items are added to the list
Q 6
Here is a list of book titles (from http://thegreatestbooks.org). Loop through the list and capitalize each word in each title. You might find the .capitalize() method that works on strings useful.
End of explanation
"""
gettysburg_address = """
Four score and seven years ago our fathers brought forth on this continent,
a new nation, conceived in Liberty, and dedicated to the proposition that
all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or
any nation so conceived and so dedicated, can long endure. We are met on
a great battle-field of that war. We have come to dedicate a portion of
that field, as a final resting place for those who here gave their lives
that that nation might live. It is altogether fitting and proper that we
should do this.
But, in a larger sense, we can not dedicate -- we can not consecrate -- we
can not hallow -- this ground. The brave men, living and dead, who struggled
here, have consecrated it, far above our poor power to add or detract. The
world will little note, nor long remember what we say here, but it can never
forget what they did here. It is for us the living, rather, to be dedicated
here to the unfinished work which they who fought here have thus far so nobly
advanced. It is rather for us to be here dedicated to the great task remaining
before us -- that from these honored dead we take increased devotion to that
cause for which they gave the last full measure of devotion -- that we here
highly resolve that these dead shall not have died in vain -- that this
nation, under God, shall have a new birth of freedom -- and that government
of the people, by the people, for the people, shall not perish from the earth.
"""
"""
Explanation: <span class="fa fa-star"></span> Q 7
Here's some text (the Gettysburg Address). Our goal is to count how many times each word repeats. We'll do a brute force method first, and then we'll look at ways to do it more efficiently (and compactly).
End of explanation
"""
ga = gettysburg_address.split()
ga
"""
Explanation: We've already seen that the .split() method will, by default, split on spaces, so it will split this text into words, producing a list:
End of explanation
"""
a = "end.,"
b = a.replace(".", "").replace(",", "")
b
"""
Explanation: Now, the next problem is that some of these still have punctuation. In particular, we see ".", ",", and "--".
When considering a word, we can get rid of these by using the replace() method:
End of explanation
"""
a = "But"
b = "but"
a == b
a.lower() == b.lower()
"""
Explanation: Another problem is case: we want to count "but" and "But" as the same word. Strings have a lower() method that can be used to convert a string to lowercase:
End of explanation
"""
# your code here
"""
Explanation: Recall that strings are immutable, so replace() produces a new string on output.
your task
Create a dictionary that uses the unique words as keys and has as a value the number of times that word appears.
Write a loop over the words in the string (using our split version) and do the following:
* remove any punctuation
* convert to lowercase
* test if the word is already a key in the dictionary (using the in operator)
- if the key exists, increment the word count for that key
- otherwise, add it to the dictionary with the appropriate count of 1.
At the end, print out the words and a count of how many times they appear
End of explanation
"""
words = [q.lower().replace(".", "").replace(",", "") for q in ga]
"""
Explanation: More compact way
We can actually do this a lot more compactly by using another list comprehensions and another python datatype called a set. A set is a group of items, where each item is unique (e.g., no repetitions).
Here's a list comprehension that removes all the punctuation and converts to lower case:
End of explanation
"""
unique_words = set(words)
"""
Explanation: and by using the set() function, we turn the list into a set, removing any duplicates:
End of explanation
"""
count = {}
for uw in unique_words:
count[uw] = words.count(uw)
count
"""
Explanation: Now we can loop over the unique words and use the count() method of a list to find how many times each one appears
End of explanation
"""
c = {uw: count[uw] for uw in unique_words}
c
"""
Explanation: Even shorter -- we can use a dictionary comprehension, which works just like a list comprehension
End of explanation
"""
|
deeplook/notebooks
|
mapping/geodesic_polylines.ipynb
|
mit
|
%matplotlib inline
import math
import folium
la = 34.05351, -118.24531
nyc = 40.71453, -74.00713
berlin = 52.516071, 13.37698
potsdam = 52.39962, 13.04784
singapore = 1.29017, 103.852
sydney = -33.86971, 151.20711
"""
Explanation: Using truly geodesic polylines with Folium
This notebook shows that long straight lines are misleading on map projections like the very popular Mercator projection, which is also used by Folium, and it implements a more detailed version with intermediate points in order to get a more realistic visualization of long lines on such map projections. These geodesic lines might look unusual (except to pilots), but they are much closer to the truth.
You will need to pip-install folium and geographiclib for this notebook to work as expected. If you look at this inside a GitHub repo you won't see the maps; in that case try the notebook viewer, where you will.
End of explanation
"""
map = folium.Map()
folium.Marker(location=la, popup="Los Angeles").add_to(map)
folium.Marker(location=nyc, popup="New York").add_to(map)
folium.Marker(location=berlin, popup="Berlin").add_to(map)
folium.Marker(location=singapore, popup="Singapore").add_to(map)
folium.Marker(location=sydney, popup="Sydney").add_to(map)
folium.PolyLine([la, nyc, berlin, singapore, sydney],
weight=2,
color="red"
).add_to(map)
map
"""
Explanation: Straight lines
End of explanation
"""
from geographiclib.geodesic import Geodesic
def intermediatePoints(start, end, min_length_km=2000, segment_length_km=1000):
    "Yield points along the geodesic from start to end, subdividing long lines."
    geod = Geodesic.WGS84
    # short lines (below min_length_km) are kept as simple two-point segments
    if geod.Inverse(*(start + end))["s12"] / 1000.0 < min_length_km:
        yield start
        yield end
    else:
        # sample the geodesic every segment_length_km, including both endpoints
        inv_line = geod.InverseLine(*(start + end))
        segment_length_m = 1000 * segment_length_km
        n = int(math.ceil(inv_line.s13 / segment_length_m))
        for i in range(n + 1):
            s = min(segment_length_m * i, inv_line.s13)
            g = inv_line.Position(s, Geodesic.STANDARD | Geodesic.LONG_UNROLL)
            yield g["lat2"], g["lon2"]
for lat, lon in intermediatePoints(berlin, sydney):
print(lat, lon)
for lat, lon in intermediatePoints(berlin, potsdam):
print(lat, lon)
"""
Explanation: Geodesic lines
End of explanation
"""
from itertools import tee
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
map = folium.Map()
folium.Marker(location=la, popup="Los Angeles").add_to(map)
folium.Marker(location=nyc, popup="New York").add_to(map)
folium.Marker(location=berlin, popup="Berlin").add_to(map)
folium.Marker(location=singapore, popup="Singapore").add_to(map)
folium.Marker(location=sydney, popup="Sydney").add_to(map)
for start, end in pairwise([la, nyc, berlin, singapore, sydney]):
folium.PolyLine(list(intermediatePoints(start, end)), weight=2).add_to(map)
for start, end in pairwise([la, nyc, berlin, singapore, sydney]):
folium.PolyLine([start, end], weight=2, color="red").add_to(map)
map
"""
Explanation: Mapping geodesic lines
End of explanation
"""
# not used anymore, but still nice ;-)
def extract_dict(aDict, keys):
"Return a sub-dict of ``aDict`` by keys and remove those from ``aDict``."
return {k: aDict.pop(k) for k in keys}
class GeodesicPolyLine(folium.PolyLine):
"""
A geodesic version of a PolyLine inserting intermediate points when needed.
This will calculate intermediate points with some segment length whenever
the geodesic length between two adjacent locations is above some maximal
value.
"""
def __init__(self, locations, min_length_km=2000, segment_length_km=1000, **kwargs):
kwargs1 = dict(min_length_km=min_length_km, segment_length_km=segment_length_km)
geodesic_locs = [intermediatePoints(start, end, **kwargs1) for start, end in pairwise(locations)]
super().__init__(geodesic_locs, **kwargs)
map = folium.Map()
folium.Marker(location=la, popup="Los Angeles").add_to(map)
folium.Marker(location=berlin, popup="Berlin").add_to(map)
folium.Marker(location=sydney, popup="Sydney").add_to(map)
GeodesicPolyLine([la, berlin, sydney]).add_to(map)
map
"""
Explanation: Building a geodesic PolyLine subclass
End of explanation
"""
map = folium.Map()
folium.Marker(location=la, popup="Los Angeles").add_to(map)
folium.Marker(location=berlin, popup="Berlin").add_to(map)
folium.Marker(location=sydney, popup="Sydney").add_to(map)
GeodesicPolyLine([la, berlin, sydney], segment_length_km=2000).add_to(map)
map
"""
Explanation: Test using longer segment lengths:
End of explanation
"""
map = folium.Map()
folium.Marker(location=la, popup="Los Angeles").add_to(map)
folium.Marker(location=berlin, popup="Berlin").add_to(map)
folium.Marker(location=sydney, popup="Sydney").add_to(map)
GeodesicPolyLine([la, berlin, sydney], min_length_km=10000).add_to(map)
map
"""
Explanation: Test using longer minimum length:
End of explanation
"""
Geodesic.WGS84.Inverse(*(la + berlin))["s12"] / 1000
Geodesic.WGS84.Inverse(*(berlin + sydney))["s12"] / 1000
"""
Explanation: Let's check the distances (in km):
End of explanation
"""
|
w4zir/ml17s
|
lectures/lec03-gradient-descent.ipynb
|
mit
|
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
# read data in pandas frame
dataframe = pd.read_csv('datasets/house_dataset1.csv')
# assign x and y
X = np.array(dataframe[['Size']])
y = np.array(dataframe[['Price']])
m = y.size # number of training examples
#Insert the usual column of 1's into the "X" matrix
X = np.insert(X,0,1,axis=1)
# check data by printing first few rows
dataframe.head()
"""
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 3: Gradient Descent
Overview
What is Machine Learning?
The three different types of machine learning
Supervised Learning
Machine Learning pipeline
Goal of Machine Learning algorithm
# Linear Regression with one variable
Model Representation
Cost Function
Gradient Descent
Gradient Descent Equation
Derivative Part
Learning Rate $\alpha$
Convergence of Gradient Descent
Gradient Descent for Linear Regression
Deriving $\frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
Price of the house yet again?
Read data
Plot data
Initialize Hyper Parameters
Model/Hypothesis Function
Cost Function
Gradient Descent
Run Gradient Descent
Plot Convergence
Predict output using trained model
Plot Results
Resources
Credits
<br>
<br>
What is Machine Learning? <a name="what-is-ml"></a>
Machine Learning is making computers/machines learn from data
Learning improves over time with more data
<br>
<br>
The three different types of machine learning
<img style="float: left;" src="images/01_01.png", width=500>
<br>
<br>
Supervised Learning
<img style="float: left;" src="images/01_02.png", width=500>
<br>
<br>
Machine Learning pipeline
<img style="float: left;" src="images/model.png">
x is called input variables or input features.
y is called output or target variable. Also sometimes known as label.
h is called hypothesis or model.
pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
dataset of all training examples is called training set.
m is the number of samples in a dataset.
n is the number of features in a dataset excluding label.
<img style="float: left;" src="images/02_02.png", width=400>
<br>
<br>
Goal of Machine Learning algorithm
How well the algorithm will perform on unseen data.
Also called generalization.
<br>
<br>
Linear Regression with one variable
Model Representation
Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
<img style="float: left;" src="images/02_01.png">
$\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
We need to find $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximize the performance of the model.
<br>
<br>
<br>
Cost Function
Let $\hat{y}$ = h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
Error in a single sample (x,y) = $\hat{y}$ - y = h(x) - y
Cumulative squared error over all m samples = $\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Finally, the mean squared error (halved), also called the cost function, is J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
<img style="float: left;" src="images/03_01.png", width=300> <img style="float: right;" src="images/03_02.png", width=300>
<br>
<br>
Gradient Descent
<img style="float: left;" src="images/03_05.png" width = 500>
<img style="float: left;" src="images/03_04.gif" width = 700>
<br>
Gradient Descent Equation
<img src="images/03_06.png" width = 500>
$\alpha$ is called the learning rate; it controls the step size.
<br>
Derivative Part
<img style="float: left;" src="images/03_09.png" width = 500>
<br>
Learning Rate $\alpha$
<img style="float: left;" src="images/03_10.png" width = 500>
<br>
Convergence of Gradient Descent
<img style="float: left;" src="images/03_11.png" width = 500>
<br>
<br>
Gradient Descent for Linear Regression
Cost function:
J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Gradient descent equation:
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
<br>
Replacing J($\theta$) for each j
\begin{align} \text{repeat until convergence: } \lbrace & \newline \theta_0 := & \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}(h_\theta(x_{i}) - y_{i}) \newline \theta_1 := & \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}\left((h_\theta(x_{i}) - y_{i}) x_{i}\right) \newline \rbrace& \end{align}
Deriving $\frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
<img src="images/03_12.png" width = 500>
<br>
<br>
Price of the house yet again?
Read data
End of explanation
"""
#visualize results
plt.scatter(X[:,1], y)
plt.show()
"""
Explanation: Plot data
End of explanation
"""
iterations = 1500
alpha = 0.000000001
"""
Explanation: Initialize Hyper Parameters
End of explanation
"""
def h(theta,X): #Linear hypothesis function
return np.dot(X,theta)
"""
Explanation: Model/Hypothesis Function
End of explanation
"""
def computeCost(mytheta,X,y): #Cost function
"""
theta_start is an n- dimensional vector of initial theta guess
X is matrix with n- columns and m- rows
y is a matrix with m- rows and 1 column
"""
#note to self: *.shape is (rows, columns)
return float((1./(2*m)) * np.dot((h(mytheta,X)-y).T,(h(mytheta,X)-y)))
#Test that running computeCost with 0's as theta returns 65591548106.45744:
initial_theta = np.zeros((X.shape[1],1)) #(theta is a vector with n rows and 1 columns (if X has n features) )
print (computeCost(initial_theta,X,y))
"""
Explanation: Cost Function
End of explanation
"""
#Actual gradient descent minimizing routine
def descendGradient(X, theta_start = np.zeros(2)):
"""
theta_start is an n- dimensional vector of initial theta guess
X is matrix with n- columns and m- rows
"""
theta = theta_start
jvec = [] #Used to plot cost as function of iteration
thetahistory = [] #Used to visualize the minimization path later on
for meaninglessvariable in range(iterations):
tmptheta = theta
# append for plotting
jvec.append(computeCost(theta,X,y))
thetahistory.append(list(theta[:,0]))
#Simultaneously updating theta values
for j in range(len(tmptheta)):
tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(theta,X) - y)*np.array(X[:,j]).reshape(m,1))
theta = tmptheta
return theta, thetahistory, jvec
"""
Explanation: Gradient Descent Function
End of explanation
"""
#Actually run gradient descent to get the best-fit theta values
initial_theta = np.zeros((X.shape[1],1))
theta, thetahistory, jvec = descendGradient(X,initial_theta)
thetahistory
"""
Explanation: Run Gradient Descent
End of explanation
"""
plt.plot(jvec)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
"""
Explanation: Plot Convergence
End of explanation
"""
hx = h(theta, X)
"""
Explanation: Predict output using trained model
End of explanation
"""
plt.scatter(X[:,1], y)
plt.plot(X[:,1], hx)
plt.show()
"""
Explanation: Plot Results
End of explanation
"""
|
SKA-ScienceDataProcessor/algorithm-reference-library
|
workflows/notebooks/imaging-fits_arlexecute.ipynb
|
apache-2.0
|
%matplotlib inline
import os
import sys
sys.path.append(os.path.join('..', '..'))
from data_models.parameters import arl_path
results_dir = arl_path('test_results')
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = (10.0, 10.0)
pylab.rcParams['image.cmap'] = 'rainbow'
from matplotlib import pyplot as plt
import numpy
from astropy.coordinates import SkyCoord
from astropy.time import Time
from astropy import units as u
from astropy.wcs.utils import pixel_to_skycoord
from data_models.polarisation import PolarisationFrame
from wrappers.serial.image.iterators import image_raster_iter
from wrappers.serial.visibility.base import create_visibility
from wrappers.serial.visibility.operations import sum_visibility
from wrappers.serial.visibility.iterators import vis_timeslices, vis_wslices
from wrappers.serial.simulation.configurations import create_named_configuration
from wrappers.serial.skycomponent.operations import create_skycomponent, find_skycomponents, \
find_nearest_skycomponent, insert_skycomponent
from wrappers.serial.image.operations import show_image, export_image_to_fits, qa_image, smooth_image
from wrappers.serial.imaging.base import advise_wide_field, create_image_from_visibility, \
predict_skycomponent_visibility
from wrappers.arlexecute.griddata.kernels import create_awterm_convolutionfunction
from wrappers.arlexecute.griddata.convolution_functions import apply_bounding_box_convolutionfunction
# Use workflows for imaging
from wrappers.arlexecute.execution_support.arlexecute import arlexecute
from workflows.shared.imaging.imaging_shared import imaging_contexts
from workflows.arlexecute.imaging.imaging_arlexecute import predict_list_arlexecute_workflow, \
invert_list_arlexecute_workflow
import logging
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler(sys.stdout))
pylab.rcParams['figure.figsize'] = (12.0, 12.0)
pylab.rcParams['image.cmap'] = 'rainbow'
"""
Explanation: Demonstrate full circle wide field imaging
This includes prediction of components, inversion, and point-source fitting. We will compare the output images with the input models, looking for closeness in flux and position.
End of explanation
"""
lowcore = create_named_configuration('LOWBD2-CORE')
"""
Explanation: Construct the SKA1-LOW core configuration
End of explanation
"""
arlexecute.set_client(use_dask=True)
"""
Explanation: Use Dask
End of explanation
"""
times = numpy.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]) * (numpy.pi / 12.0)
frequency = numpy.array([1e8])
channel_bandwidth = numpy.array([1e6])
reffrequency = numpy.max(frequency)
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000')
vt = create_visibility(lowcore, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre,
polarisation_frame=PolarisationFrame('stokesI'))
advice = advise_wide_field(vt, wprojection_planes=1)
"""
Explanation: We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
End of explanation
"""
vt.data['vis'] *= 0.0
npixel=256
model = create_image_from_visibility(vt, npixel=npixel, cellsize=0.001, nchan=1,
polarisation_frame=PolarisationFrame('stokesI'))
centre = model.wcs.wcs.crpix-1
spacing_pixels = npixel // 8
log.info('Spacing in pixels = %s' % spacing_pixels)
spacing = model.wcs.wcs.cdelt * spacing_pixels
locations = [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]
original_comps = []
# We calculate the source positions in pixels and then calculate the
# world coordinates to put in the skycomponent description
for iy in locations:
for ix in locations:
if ix >= iy:
p = int(round(centre[0] + ix * spacing_pixels * numpy.sign(model.wcs.wcs.cdelt[0]))), \
int(round(centre[1] + iy * spacing_pixels * numpy.sign(model.wcs.wcs.cdelt[1])))
sc = pixel_to_skycoord(p[0], p[1], model.wcs)
log.info("Component at (%f, %f) [0-rel] %s" % (p[0], p[1], str(sc)))
flux = numpy.array([[100.0 + 2.0 * ix + iy * 20.0]])
comp = create_skycomponent(flux=flux, frequency=frequency, direction=sc,
polarisation_frame=PolarisationFrame('stokesI'))
original_comps.append(comp)
insert_skycomponent(model, comp)
predict_skycomponent_visibility(vt, original_comps)
cmodel = smooth_image(model)
show_image(cmodel)
plt.title("Smoothed model image")
plt.show()
"""
Explanation: Fill in the visibility with exact calculation of a number of point sources
End of explanation
"""
comps = find_skycomponents(cmodel, fwhm=1.0, threshold=10.0, npixels=5)
plt.clf()
for i in range(len(comps)):
ocomp, sep = find_nearest_skycomponent(comps[i].direction, original_comps)
plt.plot((comps[i].direction.ra.value - ocomp.direction.ra.value)/cmodel.wcs.wcs.cdelt[0],
(comps[i].direction.dec.value - ocomp.direction.dec.value)/cmodel.wcs.wcs.cdelt[1],
'.', color='r')
plt.xlabel('delta RA (pixels)')
plt.ylabel('delta DEC (pixels)')
plt.title("Recovered - Original position offsets")
plt.show()
"""
Explanation: Check that the sky coordinate and image coordinate systems are consistent by finding the point sources.
End of explanation
"""
wstep = 8.0
nw = int(1.1 * 800/wstep)
gcfcf = create_awterm_convolutionfunction(model, nw=nw, wstep=wstep, oversampling=8,
support=60,
use_aaf=True)
cf=gcfcf[1]
print(cf.data.shape)
plt.clf()
plt.imshow(numpy.real(cf.data[0,0,0,0,0,:,:]))
plt.title(str(numpy.max(numpy.abs(cf.data[0,0,0,0,0,:,:]))))
plt.show()
cf_clipped = apply_bounding_box_convolutionfunction(cf, fractional_level=1e-3)
print(cf_clipped.data.shape)
gcfcf_clipped=(gcfcf[0], cf_clipped)
plt.clf()
plt.imshow(numpy.real(cf_clipped.data[0,0,0,0,0,:,:]))
plt.title(str(numpy.max(numpy.abs(cf_clipped.data[0,0,0,0,0,:,:]))))
plt.show()
"""
Explanation: Make the convolution function
End of explanation
"""
contexts = imaging_contexts().keys()
print(contexts)
print(gcfcf_clipped[1])
contexts = ['2d', 'facets', 'timeslice', 'wstack', 'wprojection']
for context in contexts:
print('Processing context %s' % context)
vtpredict_list =[create_visibility(lowcore, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame('stokesI'))]
model_list = [model]
vtpredict_list = arlexecute.compute(vtpredict_list, sync=True)
vtpredict_list = arlexecute.scatter(vtpredict_list)
if context == 'wprojection':
future = predict_list_arlexecute_workflow(vtpredict_list, model_list, context='2d', gcfcf=[gcfcf_clipped])
elif context == 'facets':
future = predict_list_arlexecute_workflow(vtpredict_list, model_list, context=context, facets=8)
elif context == 'timeslice':
future = predict_list_arlexecute_workflow(vtpredict_list, model_list, context=context, vis_slices=vis_timeslices(
vtpredict, 'auto'))
elif context == 'wstack':
future = predict_list_arlexecute_workflow(vtpredict_list, model_list, context=context, vis_slices=31)
else:
future = predict_list_arlexecute_workflow(vtpredict_list, model_list, context=context)
vtpredict_list = arlexecute.compute(future, sync=True)
vtpredict = vtpredict_list[0]
uvdist = numpy.sqrt(vt.data['uvw'][:, 0] ** 2 + vt.data['uvw'][:, 1] ** 2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis'][:]), '.', color='r', label="DFT")
plt.plot(uvdist, numpy.abs(vtpredict.data['vis'][:]), '.', color='b', label=context)
plt.plot(uvdist, numpy.abs(vtpredict.data['vis'][:] - vt.data['vis'][:]), '.', color='g', label="Residual")
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.legend()
plt.show()
"""
Explanation: Predict the visibility using the different approaches.
End of explanation
"""
contexts = ['2d', 'facets', 'timeslice', 'wstack', 'wprojection']
for context in contexts:
targetimage_list = [create_image_from_visibility(vt, npixel=npixel, cellsize=0.001, nchan=1,
polarisation_frame=PolarisationFrame('stokesI'))]
vt_list = [vt]
print('Processing context %s' % context)
if context == 'wprojection':
future = invert_list_arlexecute_workflow(vt_list, targetimage_list, context='2d', gcfcf=[gcfcf_clipped])
elif context == 'facets':
future = invert_list_arlexecute_workflow(vt_list, targetimage_list, context=context, facets=8)
elif context == 'timeslice':
future = invert_list_arlexecute_workflow(vt_list, targetimage_list, context=context, vis_slices=vis_timeslices(vt, 'auto'))
elif context == 'wstack':
future = invert_list_arlexecute_workflow(vt_list, targetimage_list, context=context, vis_slices=31)
else:
future = invert_list_arlexecute_workflow(vt_list, targetimage_list, context=context)
result = arlexecute.compute(future, sync=True)
targetimage = result[0][0]
show_image(targetimage)
plt.title(context)
plt.show()
print("Dirty Image %s" % qa_image(targetimage, context="imaging-fits notebook, using processor %s" % context))
export_image_to_fits(targetimage, '%s/imaging-fits_dirty_%s.fits' % (results_dir, context))
comps = find_skycomponents(targetimage, fwhm=1.0, threshold=10.0, npixels=5)
plt.clf()
for comp in comps:
distance = comp.direction.separation(model.phasecentre)
dft_flux = sum_visibility(vt, comp.direction)[0]
err = (comp.flux[0, 0] - dft_flux) / dft_flux
plt.plot(distance, err, '.', color='r')
plt.ylabel('Fractional error of image vs DFT')
plt.xlabel('Distance from phasecentre (deg)')
plt.title(
"Fractional error in %s recovered flux vs distance from phasecentre" %
context)
plt.show()
checkpositions = True
if checkpositions:
plt.clf()
for i in range(len(comps)):
ocomp, sep = find_nearest_skycomponent(comps[i].direction, original_comps)
plt.plot(
(comps[i].direction.ra.value - ocomp.direction.ra.value) /
targetimage.wcs.wcs.cdelt[0],
(comps[i].direction.dec.value - ocomp.direction.dec.value) /
targetimage.wcs.wcs.cdelt[1],
'.',
color='r')
plt.xlabel('delta RA (pixels)')
plt.ylabel('delta DEC (pixels)')
plt.title("%s: Position offsets" % context)
plt.show()
"""
Explanation: Make the image using the different approaches. We will evaluate the results using a number of plots:
- The error in fitted flux versus the radius. The ideal result is a straight line: fitted flux = DFT flux.
- The offset in RA versus the offset in DEC. The ideal result is a cluster around 0 pixels.
The sampling in w is set to provide 2% decorrelation at the half power point of the primary beam.
End of explanation
"""
|
GoogleCloudPlatform/nvidia-merlin-on-vertex-ai
|
02-model-training-hugectr.ipynb
|
apache-2.0
|
import json
import os
import time
from google.cloud import aiplatform as vertex_ai
from google.cloud.aiplatform import hyperparameter_tuning as hpt
"""
Explanation: Training Large Recommender Models with NVIDIA Merlin HugeCTR and Vertex AI
This notebook demonstrates how to use Vertex AI Training to operationalize training and hyperparameter tuning of large-scale deep learning models developed with the NVIDIA Merlin HugeCTR framework. The notebook compiles prescriptive guidance for the following tasks:
- Building a custom Vertex training container derived from NVIDIA NGC Merlin Training image
- Configuring, submitting and monitoring a Vertex custom training job
- Configuring, submitting and monitoring a Vertex hyperparameter tuning job
- Retrieving and analyzing results of a hyperparameter tuning job
The deep learning model used in this sample is DeepFM - a Factorization-Machine based Neural Network for CTR Prediction. The HugeCTR implementation of this model used in this notebook has been configured for the Criteo dataset.
1. NVIDIA Merlin HugeCTR Overview
Merlin HugeCTR is NVIDIA's GPU-accelerated, highly scalable recommender framework. We highly encourage reviewing the HugeCTR User Guide before proceeding with this notebook.
Merlin HugeCTR facilitates highly scalable implementations of leading deep learning recommender models including Google's Wide and Deep, Facebook's DLRM, and the DeepFM model used in this notebook.
A unique feature of HugeCTR is support for model-parallel embedding tables. Applying model-parallelism for embedding tables enables massive scalability. Industrial grade deep learning recommendation systems most often employ very large embedding tables. User and item embedding tables - cornerstones of any recommender - can easily exceed tens of millions of rows. Without model-parallelism it would be impossible to fit embedding tables this large in the memory of a single device - especially when using large embedding vectors. In HugeCTR, embedding tables can span device memory across multiple GPUs on a single node or even across GPUs in a large distributed cluster.
HugeCTR supports multiple model-parallelism configurations for embedding tables. Refer to the HugeCTR API reference for detailed descriptions. In the DeepFM implementation used in this notebook, we utilize the LocalizedSlotSparseEmbeddingHash embedding type. In this embedding type, an embedding table is segmented into multiple slots, or feature fields. Each slot stores the embeddings for a single categorical feature. A given slot is allocated to a single GPU and does not span multiple GPUs. However, in a multi-GPU environment different slots can be allocated to different GPUs.
The following diagram demonstrates an example configuration on a single node with multiple GPU - a hardware topology used by Vertex jobs in this notebook.
<img src="./images/deepfm.png" alt="Model parallel embeddings" style="width: 70%;"/>
The Criteo dataset has 26 categorical features, so there are 26 slots in the embedding table. Cardinalities of the categorical variables range from the low teens to tens of millions, so the dimensions of the slots vary accordingly. Each slot in the embedding table uses an embedding vector of the same size. Note that the distribution of slots across GPUs is handled by HugeCTR; you don't have to explicitly pin a slot to a GPU.
Dense layers of the DeepFM models are replicated on all GPUs using a canonical data-parallel pattern.
The choice of optimizer is critical when training large deep learning recommender systems. Different optimizers may result in significantly different convergence rates, impacting both the time (and cost) to train and the final model performance. Since large recommender systems are often retrained on a frequent basis, minimizing time to train is one of the key design objectives for a training workflow. In this notebook we use the Adam optimizer, which has proven to work well with many deep learning recommender system architectures.
You can find the code that implements the DeepFM model in the src/training/hugectr/trainer/model.py file.
2. Model Training Overview
The training workflow has been optimized for Vertex AI Training.
- Google Cloud Storage (GCS) and Vertex Training GCS Fuse are used for accessing training and validation data
- A single node, multiple GPUs worker pool is used for Vertex Training jobs.
- Training code has been instrumented to support hyperparameter tuning using Vertex Training Hyperparameter Tuning Job.
You can find the code that implements the training workflow in the src/training/hugectr/trainer/task.py file.
Training data access
Large deep learning recommender systems are trained on massive datasets often hundreds of terabytes in size. Maintaining high-throughput when streaming training data to GPU workers is of critical importance. HugeCTR features a highly efficient multi-threaded data reader that parallelizes data reading and model computations. The reader accesses training data through a file system interface. The reader cannot directly access object storage systems like Google Cloud Storage, which is a canonical storage system for large scale training and validation datasets in Vertex AI Training. To expose Google Cloud Storage through a file system interface, the notebook uses an integrated feature of Vertex AI - Google Cloud Storage FUSE. Vertex AI GCS FUSE provides a high performance file system interface layer to GCS that is self-tuning and requires minimal configuration. The following diagram depicts the training data access configuration:
<img src="images/gcsfuse.png" alt="GCS Fuse" style="width:70%"/>
Vertex AI Training worker pool configuration
HugeCTR supports both single node, multiple GPU configurations and multiple node, multiple GPU distributed cluster topologies. In this sample, we use a single node, multiple GPU configuration. Due to the computational complexity of modern deep learning recommender models we recommend using Vertex Training A2 series machines for large models implemented with HugeCTR. The A2 machines can be configured with up to 16 A100 GPUs, 96 vCPUs, and 1,360GBs RAM. Each A100 GPU has 40GB of device memory. These are powerful configurations that can handle complex models with large embeddings.
In this sample we use the a2-highgpu-1g machine type.
Both the custom training and hyperparameter tuning Vertex AI jobs demonstrated in this notebook are configured to use a custom training container. The container is a derivative of the NVIDIA NGC Merlin training container. The definition of the container image is found in the Dockerfile.hugectr file.
HugeCTR hyperparameter tuning with Vertex AI
The training module has been instrumented to support hyperparameter tuning with Vertex AI. The custom container includes the cloudml-hypertune package, which is used to report the results of model evaluations to the Vertex AI hypertuning service. The following diagram depicts the training flow implemented by the training module.
<img src="images/hugectrtrainer.png" alt="Training regime" style="width:40%"/>
Note that as of HugeCTR v3.2 release, the hugectr.inference.InferenceSession.evaluate method used in the trainer module only supports the AUC evaluation metric.
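The reporting call itself is small. The following is a hedged sketch of what such instrumentation typically looks like with the cloudml-hypertune package; the metric tag and values are assumptions for illustration, not an excerpt from the trainer module:

```python
import hypertune

# Report the final evaluation metric to the Vertex AI hyperparameter tuning service.
# When the container runs as a plain custom training job, this report is simply ignored.
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='AUC',  # must match the metric name in the study's metric_spec
    metric_value=0.80,                # placeholder; the trainer reports the computed AUC
    global_step=25000)
```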
3. Executing Model Training on Vertex AI
This notebook assumes that the Criteo dataset has been preprocessed using the preprocessing workflow detailed in the 01-dataset-preprocessing.ipynb notebook and the resulting Parquet training and validation splits, and the processed data schema have been stored in Google Cloud Storage.
As you walk through the notebook you will execute the following steps:
- Configure notebook environment settings, including GCP project, compute region, and the GCS locations of training and validation data splits.
- Build a custom Vertex training container based on NVIDIA NGC Merlin Training container
- Configure and submit a Vertex custom training job
- Configure and submit a Vertex hyperparameter tuning job
- Retrieve the results of the hyperparameter tuning job
Setup
In this section of the notebook you configure your environment settings, including a GCP project, a GCP compute region, a Vertex AI service account and a Vertex AI staging bucket. You also set the locations of training and validation splits, and their schema as created in the 01-dataset-preprocessing.ipynb notebook.
Make sure to update the below cells with the values reflecting your environment.
First import all the necessary python packages.
End of explanation
"""
# Project definitions
PROJECT_ID = '<YOUR PROJECT ID>' # Change to your project id.
REGION = '<LOCATION OF RESOURCES>' # Change to your region.
# Service Account address
VERTEX_SA = f'vertex-sa@{PROJECT_ID}.iam.gserviceaccount.com' # Change to your service account with Vertex AI Admin permissions.
# Bucket definitions
BUCKET = '<YOUR BUCKET NAME>' # Change to your bucket.
# Dataset information for training / validation / schema. The path must point to the file _gcs_file_list.txt
# The path MUST start with `/gcs/<bucket_name>/...` to be used as a GCSFuse path.
# The following example is a path to a bucket created by the previous notebook (preprocessing with NVTabular)
# Please change to your path.
TRAIN_DATA = '/gcs/merlin-on-gcp/nvt-preprocessing-v01-2205/nvt-csv-pipeline/375468928805/nvt-csv-pipeline-20220603050219/transform-dataset-op_-2488396574040784896/transformed_dataset/_gcs_file_list.txt'
VALID_DATA = '/gcs/merlin-on-gcp/nvt-preprocessing-v01-2205/nvt-csv-pipeline/375468928805/nvt-csv-pipeline-20220603050219/transform-dataset-op-2_4429132453600296960/transformed_dataset/_gcs_file_list.txt'
# Schema used by the training pipeline
# The path must point to the file schema.pbtxt
SCHEMA_PATH = '/gcs/merlin-on-gcp/nvt-preprocessing-v01-2205/nvt-csv-pipeline/375468928805/nvt-csv-pipeline-20220603050219/transform-dataset-op_-2488396574040784896/transformed_dataset/schema.pbtxt'
"""
Explanation: Change the following variables according to your definitions.
End of explanation
"""
# Bucket definitions
VERSION = 'v01'
MODEL_NAME = 'deepfm'
MODEL_DISPLAY_NAME = f'hugectr-{MODEL_NAME}-{VERSION}'
WORKSPACE = f'gs://{BUCKET}/{MODEL_DISPLAY_NAME}'
# Docker definitions for training
IMAGE_NAME = 'hugectr-training'
IMAGE_URI = f'gcr.io/{PROJECT_ID}/{IMAGE_NAME}'
DOCKERNAME = 'hugectr'
"""
Explanation: Change the following variables ONLY if necessary.
You can leave the default variables.
End of explanation
"""
vertex_ai.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=os.path.join(WORKSPACE, 'stg')
)
"""
Explanation: Initialize Vertex AI SDK
End of explanation
"""
FILE_LOCATION = './src'
! gcloud builds submit --config src/cloudbuild.yaml --substitutions _DOCKERNAME=$DOCKERNAME,_IMAGE_URI=$IMAGE_URI,_FILE_LOCATION=$FILE_LOCATION --timeout=2h --machine-type=e2-highcpu-8
"""
Explanation: Submit a Vertex custom training job
In this section of the notebook you define, submit and monitor a Vertex custom training job. As noted in the introduction, the job uses a custom training container that is a derivative of the NVIDIA NGC Merlin training container image. The custom container image packages the training module, which includes a DeepFM model definition - src/training/hugectr/model.py - and a training and evaluation workflow - src/training/hugectr/task.py. The custom container image also installs the cloudml-hypertune package for integration with Vertex AI hypertuning.
Build a custom training container
End of explanation
"""
# Training parameters
NUM_EPOCHS = 0
MAX_ITERATIONS = 25000
EVAL_INTERVAL = 1000
EVAL_BATCHES = 500
EVAL_BATCHES_FINAL = 2500
DISPLAY_INTERVAL = 200
SNAPSHOT_INTERVAL = 0
PER_GPU_BATCH_SIZE = 2048
LR = 0.001
DROPOUT_RATE = 0.5
NUM_WORKERS = 12
"""
Explanation: Configure a custom training job
The training module accepts a set of parameters that allow you to fine-tune the DeepFM model implementation and configure the training workflow. Most of the parameters exposed by the training module map directly to the settings used in the HugeCTR Python Interface.
NUM_EPOCHS: The training workflow can run in either an epoch mode or a non-epoch mode. When the constant NUM_EPOCHS is set to a value greater than zero the model will be trained on the NUM_EPOCHS number of full epochs, where an epoch is defined as a single pass through all examples in the training data.
MAX_ITERATIONS: If NUM_EPOCHS is set to zero, you must set MAX_ITERATIONS to a value greater than zero. MAX_ITERATIONS defines the number of batches to train the model on. When NUM_EPOCHS is greater than zero MAX_ITERATIONS is ignored.
EVAL_INTERVAL and EVAL_BATCHES: The model will be evaluated every EVAL_INTERVAL training batches using the EVAL_BATCHES validation batches during the main training loop. In the current implementation the evaluation metric is AUC.
EVAL_BATCHES_FINAL: After the main training loop completes, a final evaluation will be run using the EVAL_BATCHES_FINAL. The AUC value returned is reported to Vertex AI hypertuner.
DISPLAY_INTERVAL: Training progress will be reported every DISPLAY_INTERVAL batches.
SNAPSHOT_INTERVAL: When set to a value greater than zero, a snapshot will be saved every SNAPSHOT_INTERVAL batches.
PER_GPU_BATCH_SIZE: Per GPU batch size. This value should be set through experimentation and depends on model architecture, training features, and GPU type. It is highly dependent on device memory available in a particular GPU. In our scenario - DeepFM, Criteo datasets, and A100 GPU - a batch size of 2048 works well.
LR: The base learning rate for the HugeCTR solver.
DROPOUT_RATE: The base dropout rate used in DeepFM dense layers.
NUM_WORKERS: The number of HugeCTR data reader workers that concurrently load data. This is a per-GPU value and should be estimated through experimentation; the default of 12 works well on A100 GPUs. For optimal performance, NUM_WORKERS should be aligned with the number of files (shards) in the training data.
SCHEMA: The path to the schema.pbtxt file generated during the transformation phase. It is required to extract the cardinalities of the categorical features.
Set HugeCTR model and trainer configuration
End of explanation
"""
cardinalities = [
9999999,
39061,
17296,
7425,
20266,
4,
7123,
1544,
64,
9999999,
3067956,
405283,
11,
2209,
11939,
155,
4,
977,
15,
9999999,
9999999,
9999999,
590152,
12974,
109,
37
]
cardinalities = ' '.join([str(c) for c in cardinalities])
"""
Explanation: To train the model with HugeCTR, you must provide the cardinalities of the categorical features.
This information can be retrieved from the fitted workflow produced by the previous notebook (the preprocessing pipeline).
To demonstrate how to use HugeCTR, we have provided this information in the next cell.
In the last notebook in this series (the end-to-end pipeline), this information is generated automatically by the pipeline.
End of explanation
"""
MACHINE_TYPE = 'a2-highgpu-1g'
ACCELERATOR_TYPE = 'NVIDIA_TESLA_A100'
ACCELERATOR_NUM = 1
"""
Explanation: Set training node configuration
As described in the overview, we use a single node, multiple GPU worker pool configuration. For a complex deep learning model like DeepFM, we recommend using A2 machines that are equipped with A100 GPUs.
In this sample we use an a2-highgpu-1g machine. For production systems, you may consider even more powerful configurations - a2-highgpu-8g with 8 A100 GPUs or a2-megagpu-16g with 16 A100 GPUs.
End of explanation
"""
gpus = json.dumps([list(range(ACCELERATOR_NUM))]).replace(' ','')
worker_pool_specs = [
{
"machine_spec": {
"machine_type": MACHINE_TYPE,
"accelerator_type": ACCELERATOR_TYPE,
"accelerator_count": ACCELERATOR_NUM,
},
"replica_count": 1,
"container_spec": {
"image_uri": IMAGE_URI,
"command": ["python", "-m", "task"],
"args": [
f'--per_gpu_batch_size={PER_GPU_BATCH_SIZE}',
f'--model_name={MODEL_NAME}',
f'--train_data={TRAIN_DATA}',
f'--valid_data={VALID_DATA}',
f'--schema={SCHEMA_PATH}',
f'--slot_size_array={cardinalities}',
f'--max_iter={MAX_ITERATIONS}',
f'--max_eval_batches={EVAL_BATCHES}',
f'--eval_batches={EVAL_BATCHES_FINAL}',
f'--dropout_rate={DROPOUT_RATE}',
f'--lr={LR}',
f'--num_workers={NUM_WORKERS}',
f'--num_epochs={NUM_EPOCHS}',
f'--eval_interval={EVAL_INTERVAL}',
f'--snapshot={SNAPSHOT_INTERVAL}',
f'--display_interval={DISPLAY_INTERVAL}',
f'--gpus={gpus}',
],
},
}
]
"""
Explanation: Configure worker pool specifications
In this cell we configure a worker pool specification for a Vertex Custom Training job. Refer to Vertex AI Training documentation for more details.
End of explanation
"""
job_name = 'hugectr_{}'.format(time.strftime("%Y%m%d_%H%M%S"))
base_output_dir = os.path.join(WORKSPACE, job_name)
job = vertex_ai.CustomJob(
display_name=job_name,
worker_pool_specs=worker_pool_specs,
base_output_dir=base_output_dir
)
job.run(
sync=True,
service_account=VERTEX_SA,
restart_job_on_worker_restart=False
)
"""
Explanation: Submit and monitor the job
When submitting a training job using the aiplatform.CustomJob API you can configure the job.run function to block until the job completes or to return control to the notebook immediately after the job is submitted. You control this with the sync argument.
End of explanation
"""
metric_spec = {'AUC': 'maximize'}
parameter_spec = {
'lr': hpt.DoubleParameterSpec(min=0.001, max=0.01, scale='log'),
'dropout_rate': hpt.DiscreteParameterSpec(values=[0.4, 0.5, 0.6], scale=None),
}
"""
Explanation: Submit and monitor a Vertex hyperparameter tuning job
In this section of the notebook, you configure, submit and monitor a Vertex AI Training hyperparameter tuning job. We will demonstrate how to use Vertex Training hyperparameter tuning to find optimal values for the base learning rate and the dropout ratio. This example can be easily extended to other parameters - e.g. the batch size or even the optimizer type.
As noted in the overview, the training module has been instrumented to integrate with Vertex AI Training hypertuning. After the final evaluation is completed, the AUC value calculated on the EVAL_BATCHES_FINAL number of batches from the validation dataset is reported to Vertex AI Training using the report_hyperparameter_tuning_metric. When the training module is executed in the context of a Vertex Custom Job this code path has no effect. When used with a Vertex AI Training hyperparameter job, the job is configured to use the AUC as a metric to optimize.
Configure a hyperparameter job
To prepare a Vertex Training hyperparameter tuning job you need to configure a worker pool specification and a hyperparameter study configuration. Configuring a worker pool is virtually the same as for a Custom Job. The only difference is that you don't need to explicitly pass the values of the hyperparameters being tuned to the training container. They will be provided by the hyperparameter tuning service.
To configure a hyperparameter study you need to define a metric to optimize, an optimization goal, and a set of configurations for hyperparameters to tune.
In our case the metric is AUC, the optimization goal is to maximize AUC, and the hyperparameters are lr, and dropout_rate. Notice that you have to match the name of the metric with the name used to report the metric in the training module. You also have to match the names of the hyperparameters with the respective names of command line parameters in your training container.
For each hyperparameter you specify a strategy to apply for sampling values from the hyperparameter's domain. For the lr hyperparameter we configure the tuning service to sample values from a continuous range between 0.001 and 0.01 using a logarithmic scale. For the dropout_rate we provide a list of discrete values to choose from.
For more information about configuring a hyperparameter study refer to Vertex AI Hyperparameter job configuration.
End of explanation
"""
job_name = 'HUGECTR_HTUNING_{}'.format(time.strftime("%Y%m%d_%H%M%S"))
base_output_dir = os.path.join(WORKSPACE, "model_training", job_name)
custom_job = vertex_ai.CustomJob(
display_name=job_name,
worker_pool_specs=worker_pool_specs,
base_output_dir=base_output_dir
)
hp_job = vertex_ai.HyperparameterTuningJob(
display_name=job_name,
custom_job=custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=4,
parallel_trial_count=2,
search_algorithm=None)
hp_job.run(
sync=True,
service_account=VERTEX_SA,
restart_job_on_worker_restart=False
)
"""
Explanation: Submit and monitor the job
We can now submit a hyperparameter tuning job. When submitting the job you specify a maximum number of trials to attempt and how many trials to run in parallel.
End of explanation
"""
hp_job.trials
"""
Explanation: Retrieve trial results
After a hyperparameter tuning job completes you can retrieve the trial results from the job object. The results are returned as a list of trial records. To retrieve the trial with the best value of a metric - AUC - you need to scan through all trial records.
End of explanation
"""
best_trial = sorted(hp_job.trials,
key=lambda trial: trial.final_measurement.metrics[0].value,
reverse=True)[0]
print("Best trial ID:", best_trial.id)
print(" AUC:", best_trial.final_measurement.metrics[0].value)
print(" LR:", best_trial.parameters[1].value)
print(" Dropout rate:", best_trial.parameters[0].value)
"""
Explanation: Find the best trial
End of explanation
"""
|
arcyfelix/Courses
|
18-11-22-Deep-Learning-with-PyTorch/05-Recurrent Neural Networks/02 - Character_Level_RNN.ipynb
|
apache-2.0
|
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
"""
Explanation: Character-Level LSTM in PyTorch
In this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. This model will be able to generate new text based on the text from the book!
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Below is the general architecture of the character-wise RNN.
<img src="images/charseq.jpeg" width="500">
First let's load in our required resources for data loading and model creation.
End of explanation
"""
# Open text file and read in data as `text`
with open('data/anna.txt', 'r') as f:
text = f.read()
"""
Explanation: Load in Data
Then, we'll load the Anna Karenina text file and convert it into integers for our network to use.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
# Encode the text and map each character to an integer and vice versa
# We create two dictionaries:
# 1. int2char, which maps integers to characters
# 2. char2int, which maps characters to unique integers
chars = tuple(set(text))
int2char = dict(enumerate(chars))
char2int = {ch: ii for ii, ch in int2char.items()}
# Encode the text
encoded = np.array([char2int[ch] for ch in text])
"""
Explanation: Tokenization
In the cells below, I'm creating a couple of dictionaries to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network.
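As a quick sanity check (not part of the original notebook), the two dictionaries should round-trip the text exactly:

```python
# Decode the first 100 integers back to characters; they should match the raw text
assert ''.join(int2char[ii] for ii in encoded[:100]) == text[:100]
```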
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see those same characters from above, encoded as integers.
End of explanation
"""
def one_hot_encode(arr, n_labels):
    # Initialize the encoded array
one_hot = np.zeros((np.multiply(*arr.shape), n_labels),
dtype=np.float32)
# Fill the appropriate elements with ones
one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.
# Finally reshape it to get back to the original array
one_hot = one_hot.reshape((*arr.shape, n_labels))
return one_hot
# Check that the function works as expected
test_seq = np.array([[3, 5, 1]])
one_hot = one_hot_encode(test_seq, 8)
print(one_hot)
"""
Explanation: Pre-processing the data
As you can see in our char-RNN image above, our LSTM expects an input that is one-hot encoded, meaning that each character is converted into an integer (via our created dictionary) and then converted into a column vector where only its corresponding integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that!
End of explanation
"""
def get_batches(arr, batch_size, seq_length):
'''Create a generator that returns batches of size
batch_size x seq_length from arr.
Arguments
---------
arr: Array you want to make batches from
batch_size: Batch size, the number of sequences per batch
seq_length: Number of encoded chars in a sequence
'''
## TODO: Get the number of batches we can make
n_batches = len(arr) // (batch_size * seq_length)
## TODO: Keep only enough characters to make full batches
arr = arr[:(n_batches * batch_size * seq_length)]
## TODO: Reshape into batch_size rows
arr = arr.reshape((batch_size, -1))
## TODO: Iterate over the batches using a window of size seq_length
for n in range(0, arr.shape[1], seq_length):
# The features
x = arr[:, n:(n + seq_length)]
# The targets, shifted by one
y = np.zeros_like(x)
try:
y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n + seq_length]
except IndexError:
y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
To train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="images/sequence_batching@1x.png" width=500px>
<br>
In this example, we'll take the encoded characters (passed in as the arr parameter) and split them into multiple sequences, given by batch_size. Each of our sequences will be seq_length long.
Creating Batches
1. The first thing we need to do is discard some of the text so we only have completely full mini-batches.
Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is the seq_length or number of time steps in a sequence. Then, to get the total number of batches, $K$, that we can make from the array arr, you divide the length of arr by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from arr, $N * M * K$.
2. After that, we need to split arr into $N$ batches.
You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$.
3. Now that we have this array, we can iterate through it to get our mini-batches.
The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by seq_length. We also want to create both the input and target arrays. Remember that the targets are just the inputs shifted over by one character. The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of tokens in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is seq_length wide.
TODO: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
"""
batches = get_batches(encoded, 8, 50)
x, y = next(batches)
# printing out the first 10 items in a sequence
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Test Your Implementation
Now I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps.
End of explanation
"""
# check if GPU is available
train_on_gpu = torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU!')
else:
print('No GPU available, training on CPU; consider making n_epochs very small.')
class CharRNN(nn.Module):
def __init__(self,
tokens,
n_hidden=256,
n_layers=2,
drop_prob=0.5,
lr=0.001):
super().__init__()
self.drop_prob = drop_prob
self.n_layers = n_layers
self.n_hidden = n_hidden
self.lr = lr
# creating character dictionaries
self.chars = tokens
self.int2char = dict(enumerate(self.chars))
self.char2int = {ch: ii for ii, ch in self.int2char.items()}
## TODO: define the layers of the model
self.lstm = nn.LSTM(input_size=len(self.chars),
hidden_size=n_hidden,
num_layers=n_layers,
dropout=drop_prob,
batch_first=True)
## Define dropout
self.dropout = nn.Dropout(drop_prob)
## Define the final fully-connected layer
self.fc_out = nn.Linear(in_features=n_hidden,
out_features=len(self.chars))
def forward(self, x, hidden):
''' Forward pass through the network.
These inputs are x, and the hidden/cell state `hidden`. '''
## TODO: Get the outputs and the new hidden state from the lstm
lstm_out, hidden = self.lstm(x, hidden)
after_dropout = self.dropout(lstm_out)
# Reshaping the data
reshaped = after_dropout.contiguous().view(-1, self.n_hidden)
# Return the final output and the hidden state
out = self.fc_out(reshaped)
return out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x n_hidden,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_())
return hidden
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[25 8 60 11 45 27 28 73 1 2]
[17 7 20 73 45 8 60 45 73 60]
[27 20 80 73 7 28 73 60 73 65]
[17 73 45 8 27 73 66 8 46 27]
[73 17 60 12 73 8 27 28 73 45]
[66 64 17 17 46 7 20 73 60 20]
[73 76 20 20 60 73 8 60 80 73]
[47 35 43 7 20 17 24 50 37 73]]
y
[[ 8 60 11 45 27 28 73 1 2 2]
[ 7 20 73 45 8 60 45 73 60 45]
[20 80 73 7 28 73 60 73 65 7]
[73 45 8 27 73 66 8 46 27 65]
[17 60 12 73 8 27 28 73 45 27]
[64 17 17 46 7 20 73 60 20 80]
[76 20 20 60 73 8 60 80 73 17]
[35 43 7 20 17 24 50 37 73 36]]
```
although the exact numbers may be different. Check to make sure the data is shifted over one step for `y`.
Defining the network with PyTorch
Below is where you'll define the network.
<img src="images/charRNN.png" width=500px>
Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters.
Model Structure
In __init__ the suggested structure is as follows:
* Create and store the necessary dictionaries (this has been done for you)
* Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size n_hidden, a number of layers n_layers, a dropout probability drop_prob, and a batch_first boolean (True, since we are batching)
* Define a dropout layer with dropout_prob
* Define a fully-connected layer with params: input size n_hidden and output size (the number of characters)
* Finally, initialize the weights (again, this has been given)
Note that some parameters have been named and given in the __init__ function, and we use them and store them by doing something like self.drop_prob = drop_prob.
LSTM Inputs/Outputs
You can create a basic LSTM layer as follows
```python
self.lstm = nn.LSTM(input_size, n_hidden, n_layers,
                    dropout=drop_prob, batch_first=True)
```
where input_size is the number of characters this cell expects to see as sequential input, and n_hidden is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the forward function, we can stack up the LSTM cells into layers using .view. With this, you pass in a list of cells and it will send the output of one cell into the next cell.
We also need to create an initial hidden state of all zeros. This is done like so
```python
self.init_hidden()
```
End of explanation
"""
def train(net, data, epochs=10, batch_size=10, seq_length=50, lr=0.001, clip=5, val_frac=0.1, print_every=10):
''' Training a network
Arguments
---------
net: CharRNN network
data: text data to train the network
epochs: Number of epochs to train
batch_size: Number of mini-sequences per mini-batch, aka batch size
seq_length: Number of character steps per mini-batch
lr: learning rate
clip: gradient clipping
val_frac: Fraction of data to hold out for validation
print_every: Number of steps for printing training and validation loss
'''
net.train()
opt = torch.optim.Adam(params=net.parameters(),
lr=lr)
criterion = nn.CrossEntropyLoss()
# Create training and validation data
val_idx = int(len(data)*(1-val_frac))
data, val_data = data[:val_idx], data[val_idx:]
if(train_on_gpu):
net.cuda()
counter = 0
n_chars = len(net.chars)
for e in range(epochs):
# Initialize hidden state
h = net.init_hidden(batch_size)
for x, y in get_batches(data, batch_size, seq_length):
counter += 1
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
inputs, targets = torch.from_numpy(x), torch.from_numpy(y).type(torch.LongTensor)
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# Zero accumulated gradients
net.zero_grad()
# Get the output from the model
output, h = net(inputs, h)
# Calculate the loss and perform backprop
loss = criterion(output, targets.view(batch_size * seq_length))
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
opt.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for x, y in get_batches(val_data, batch_size, seq_length):
# One-hot encode our data and make them Torch tensors
x = one_hot_encode(x, n_chars)
x, y = torch.from_numpy(x), torch.from_numpy(y).type(torch.LongTensor)
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
inputs, targets = x, y
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output, targets.view(batch_size * seq_length))
val_losses.append(val_loss.item())
                # Reset to train mode after iterating through validation data
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.4f}...".format(loss.item()),
"Val Loss: {:.4f}".format(np.mean(val_losses)))
"""
Explanation: Time to train
The train function gives us the ability to set the number of epochs, the learning rate, and other parameters.
Below we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual!
A couple of details about training:
Within the batch loop, we detach the hidden state from its history; this time setting it equal to a new tuple variable because an LSTM has a hidden state that is a tuple of the hidden and cell states.
We use clip_grad_norm_ to help prevent exploding gradients.
End of explanation
"""
## TODO: set you model hyperparameters
# Define and print the net
n_hidden = 256
n_layers = 2
net = CharRNN(chars, n_hidden, n_layers)
print(net)
"""
Explanation: Instantiating the model
Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batches sizes, and start training!
End of explanation
"""
batch_size = 128
seq_length = 100
# Start small if you are just testing initial behavior
n_epochs = 20
# Train the model
train(net, encoded, epochs=n_epochs,
batch_size=batch_size, seq_length=seq_length,
lr=0.001, print_every=10)
"""
Explanation: Set your training hyperparameters!
End of explanation
"""
# Change the name, for saving multiple files
model_name = './models/rnn_x_epoch.net'
checkpoint = {'n_hidden': net.n_hidden,
'n_layers': net.n_layers,
'state_dict': net.state_dict(),
'tokens': net.chars}
with open(model_name, 'wb') as f:
torch.save(checkpoint, f)
"""
Explanation: Getting the best model
To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network.
Hyperparameters
Here are the hyperparameters for the network.
In defining the model:
* n_hidden - The number of units in the hidden layers.
* n_layers - Number of hidden LSTM layers to use.
We assume that the dropout probability and learning rate will be kept at their defaults in this example.
And in training:
* batch_size - Number of sequences running through the network in one pass.
* seq_length - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
* lr - Learning rate for training
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are n_hidden and n_layers. I would advise that you always use n_layers of either 2/3. The n_hidden can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make n_hidden larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
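If you want to check the parameter count mentioned above for the net defined earlier, a small helper (not part of the original notebook) will print it:

```python
# Count the trainable parameters of the CharRNN instance defined above
n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(f'{n_params:,} trainable parameters')
```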
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename; low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
Checkpoint
After training, we'll save the model so we can load it again later if we need to. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters.
End of explanation
"""
def predict(net, char, h=None, top_k=None):
''' Given a character, predict the next character.
Returns the predicted character and the hidden state.
'''
# Tensor inputs
x = np.array([[net.char2int[char]]])
x = one_hot_encode(x, len(net.chars))
inputs = torch.from_numpy(x)
if(train_on_gpu):
inputs = inputs.cuda()
# Detach hidden state from history
h = tuple([each.data for each in h])
# Get the output of the model
out, h = net(inputs, h)
# Get the character probabilities
p = F.softmax(out, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# Get top characters
if top_k is None:
top_ch = np.arange(len(net.chars))
else:
p, top_ch = p.topk(top_k)
top_ch = top_ch.numpy().squeeze()
# Select the likely next character with some element of randomness
p = p.numpy().squeeze()
char = np.random.choice(top_ch, p=p/p.sum())
# Return the encoded value of the predicted char and the hidden state
return net.int2char[char], h
"""
Explanation: Making Predictions
Now that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!
A note on the predict function
The output of our RNN is from a fully-connected layer and it outputs a distribution of next-character scores.
To actually get the next character, we apply a softmax function, which gives us a probability distribution that we can then sample to predict the next character.
Top K sampling
Our predictions come from a categorical probability distribution over all the possible characters. We can make the sampled text more reasonable to handle (and less erratic) by only considering the $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text. Read more about topk, here.
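As a standalone illustration of the idea (the probability values below are made up), top-$K$ sampling keeps only the $K$ largest probabilities, renormalizes them, and samples from that reduced set:

```python
import torch
import numpy as np

probs = torch.tensor([0.05, 0.50, 0.10, 0.30, 0.05])  # toy next-character distribution
top_p, top_idx = probs.topk(3)                        # keep the 3 most likely characters
top_p = top_p.numpy()
next_char_idx = np.random.choice(top_idx.numpy(), p=top_p / top_p.sum())
print(next_char_idx)
```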
End of explanation
"""
def sample(net, size, prime='The', top_k=None):
if(train_on_gpu):
net.cuda()
else:
net.cpu()
# Eval mode
net.eval()
# First off, run through the prime characters
chars = [ch for ch in prime]
h = net.init_hidden(1)
for ch in prime:
char, h = predict(net, ch, h, top_k=top_k)
chars.append(char)
# Now pass in the previous character and get a new one
for ii in range(size):
char, h = predict(net, chars[-1], h, top_k=top_k)
chars.append(char)
return ''.join(chars)
print(sample(net, 1000, prime='Anna', top_k=5))
"""
Explanation: Priming and generating text
Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.
End of explanation
"""
# Here we load in a saved checkpoint, e.g. the `rnn_x_epoch.net` model saved above
with open('./models/rnn_x_epoch.net', 'rb') as f:
checkpoint = torch.load(f)
loaded = CharRNN(checkpoint['tokens'],
n_hidden=checkpoint['n_hidden'],
n_layers=checkpoint['n_layers'])
loaded.load_state_dict(checkpoint['state_dict'])
# Sample using a loaded model
print(sample(loaded, 2000,
top_k=5,
prime="And Levin said"))
"""
Explanation: Loading a checkpoint
End of explanation
"""
|
Ecotrust/growth-yield-batch
|
notebooks/QAQC on Ridge Property.ipynb
|
bsd-3-clause
|
%matplotlib inline
from matplotlib.pylab import plt
import pandas as pd
from sqlalchemy import create_engine
from matplotlib import cm
import seaborn as sns
"""
Explanation: This notebook will explore the Ridge property data as modeled by FVS and the Ecotrust Growth-Yield-Batch system. It also serves as a demonstration of pandas and associated Python libraries.
First, we import the necessary libraries
End of explanation
"""
engine = create_engine('sqlite:///data.db')
df = pd.read_sql_table('trees_fvsaggregate', engine)
"""
Explanation: Create a connection ("engine") to the sqlite database produced by GYB and read the entire table into a pandas DataFrame.
End of explanation
"""
df.head(6)
"""
Explanation: IPython is not the cleanest interface with which to browse large datasets. Luckily, you can just take the top few rows
End of explanation
"""
df.shape # (rows, columns)
"""
Explanation: or write it to Excel with df.to_excel('file.xlsx'). Keep in mind the size of the dataset, though; this is not recommended for the full dataset due to row limits in Excel.
End of explanation
"""
df.describe()
"""
Explanation: Basic descriptive statistics
End of explanation
"""
import numpy as np
pt_year = pd.pivot_table(df,
index=['cond', 'rx', 'offset'],
columns=['year'],
values=['removed_merch_bdft'],
aggfunc=[np.sum],
margins=True)
pt_year.to_excel("harvest_by_year.xls")
pt_year.head()
"""
Explanation: The best feature of both pandas and excel: pivot tables. And note that we can write the resulting DataFrame out to an excel file for easier viewing.
End of explanation
"""
startdf = df.query("year == 2013 and rx == 1")
# same result with alternate syntax using .loc
startdf = df.loc[(df.year == 2013) & (df.rx == 1)]
"""
Explanation: In testing in the Forest Planner, we noticed that yields were very low initially and that stands were starting the simulation at single-digit trees per acre (TPA). Let's confirm that we've resolved that issue.
First subset the query for the grow-only rx in 2013.
End of explanation
"""
startdf.start_tpa.hist()
conds = df.cond.unique()
conds.sort()
conds
plt.rcParams['figure.figsize'] = (10.0, 8.0)
sns.tsplot(df.loc[(df.offset == 0)],
"year", unit="cond", condition="rx", value="after_merch_ft3")
from pandas.tools.plotting import scatter_matrix
scatter_matrix(df[['after_qmd', 'after_tpa']],
alpha=0.2, figsize=(6, 6), diagonal='kde')
"""
Explanation: Examining the distribution of starting TPA, we see reasonable TPAs
End of explanation
"""
|
anugrah-saxena/pycroscopy
|
docs/auto_examples/microdata_example.ipynb
|
mit
|
# Code source: Chris Smith -- cq6@ornl.gov
# License: MIT
import numpy as np
import pycroscopy as px
"""
Explanation: Writing to hdf5 using the Microdata objects
End of explanation
"""
# First create some data
data1 = np.random.rand(5, 7)
"""
Explanation: Create some MicroDatasets and MicroDataGroups that will be written to the file.
With h5py, groups and datasets must be created from the top down,
but the Microdata objects allow us to build them in any order and link them later.
End of explanation
"""
ds_main = px.MicroDataset('Main_Data', data=data1, parent='/')
"""
Explanation: Now use the array to build the dataset. This dataset will live
directly under the root of the file. The MicroDataset class also implements the
compression and chunking parameters from h5py.Dataset.
End of explanation
"""
ds_empty = px.MicroDataset('Empty_Data', data=[], dtype=np.float32, maxshape=[7, 5, 3])
"""
Explanation: We can also create an empty dataset and write the values in later
With this method, it is necessary to specify the dtype and maxshape keyword arguments.
End of explanation
"""
data_group = px.MicroDataGroup('Data_Group', parent='/')
root_group = px.MicroDataGroup('/')
# After creating the group, we then add an existing object as its child.
data_group.addChildren([ds_empty])
root_group.addChildren([ds_main, data_group])
"""
Explanation: We can also create groups and add other MicroData objects as children.
If the group's parent is not given, it will be set to root.
End of explanation
"""
root_group.showTree()
"""
Explanation: The showTree method allows us to view the data structure before the hdf5 file is
created.
End of explanation
"""
# First we specify the path to the file
h5_path = 'microdata_test.h5'
# Then we use the ioHDF5 class to build the file from our objects.
hdf = px.ioHDF5(h5_path)
"""
Explanation: Now that we have created the objects, we can write them to an hdf5 file
End of explanation
"""
h5_refs = hdf.writeData(root_group, print_log=True)
# We can use these references to get the h5py dataset and group objects
h5_main = px.io.hdf_utils.getH5DsetRefs(['Main_Data'], h5_refs)[0]
h5_empty = px.io.hdf_utils.getH5DsetRefs(['Empty_Data'], h5_refs)[0]
"""
Explanation: The writeData method builds the hdf5 file using the structure defined by the
MicroData objects. It returns a list of references to all h5py objects in the
new file.
End of explanation
"""
print(np.allclose(h5_main[()], data1))
"""
Explanation: Compare the data in our dataset to the original
End of explanation
"""
data2 = np.random.rand(*h5_empty.shape)
h5_empty[:] = data2[:]
"""
Explanation: As mentioned above, we can now write to the Empty_Data object
End of explanation
"""
h5_file = hdf.file
h5_file.flush()
"""
Explanation: Now that we are using h5py objects, we must use flush to write the data to file
after it has been altered.
We need the file object to do this. It can be accessed as an attribute of the
hdf object.
End of explanation
"""
h5_file.close()
"""
Explanation: Now that we are done, we should close the file so that it can be accessed elsewhere.
End of explanation
"""
|
sangheestyle/ml2015project
|
howto/make_data_a_serialized_object.ipynb
|
mit
|
import csv
import gzip
import cPickle as pickle
from collections import defaultdict
import yaml
question_reader = csv.reader(open("../data/questions.csv"))
question_header = ["answer", "group", "category", "question", "pos_token"]
questions = defaultdict(dict)
for row in question_reader:
question = {}
row[-1] = yaml.load(row[-1].replace(": u'", ": '"))
qid = int(row.pop(0))
for index, item in enumerate(row):
question[question_header[index]] = item
questions[qid] = question
"""
Explanation: Storing and loading questions as a serialized object.
As questions.csv is not easy to use by itself, it might be helpful to turn the csv file into a serialized object. In this case, we can use pickle, Python's object serialization module.
https://docs.python.org/2/library/pickle.html
Loading csv and make it dictionary
First of all, we need to load the questions.csv file and convert it into a dictionary. If the key of the dictionary is the question id (qid), we can look up questions by key.
End of explanation
"""
print len(questions)
"""
Explanation: Let's check how many items are in the dictionary.
End of explanation
"""
print sorted(questions.keys())[:10]
print sorted(questions.keys())[-10:]
"""
Explanation: Yes, 7949 items is right. What about the question numbers? Are they continuous or not? We might want to check the first and last 10 items.
End of explanation
"""
questions[1]
"""
Explanation: So the qids are not continuous, but that's OK. What about viewing a single question? Just look at qid 1.
End of explanation
"""
questions[1].keys()
questions[1]['answer']
questions[1]['pos_token']
questions[1]['pos_token'].keys()
questions[1]['pos_token'].values()
questions[1]['pos_token'].items()
"""
Explanation: Yes, it's a dictionary, so you can use the usual dictionary functions. Check this out.
End of explanation
"""
max(questions[1]['pos_token'].keys())
"""
Explanation: How can we figure out a question's length without tokenizing the question itself?
End of explanation
"""
with gzip.open("questions.pklz", "wb") as output:
pickle.dump(questions, output)
"""
Explanation: Make questions pickled data
As you know, reading the csv and converting it into a dictionary takes more than a minute. Once we have converted it into a dictionary, we can save it as pickled data and load it whenever we need it. It is really simple and fast. Look at that!
Wait! We will use gzip.open instead of open because the pickled file is too big, so we will use compression. It's easy, and the compressed file is only about 1/10 the size of the original one. Of course, it will take a few more seconds than the plain one.
original: 1 sec in my PC
compressed: 5 sec in my PC
Also, "wb" means writing as binary mode, and "rb" means reading file as binary mode.
End of explanation
"""
with gzip.open("questions.pklz", "rb") as fp:
questions_new = pickle.load(fp)
print len(questions_new)
"""
Explanation: Yes, now we can load the pickled data into a variable.
End of explanation
"""
print questions == questions
print questions == questions_new
questions_new[0] = 1
print questions == questions_new
"""
Explanation: Yes, it took only a few seconds. I will save it, commit it, and push it to GitHub, so you can use the pickled data instead of converting questions.csv yourself.
End of explanation
"""
|
eneskemalergin/Data_Structures_and_Algorithms
|
Chapter4/4-Algorithm_Analysis.ipynb
|
gpl-3.0
|
def ex1(n):
total = 0
for i in range(n):
total += i
return total
print ex1(10)
"""
Explanation: Algorithm Analysis
We can solve a problem with different solutions, but which one is the better (or best) solution? We can answer this question by measuring the execution time, measuring the memory usage, and so on.
Complexity Analysis
To determine the efficiency of an algorithm, we can examine the solution itself and measure those aspects of the algorithm that most critically affect its execution time.
Computing the sum of each row of an n x n matrix and the overall sum of the entire matrix:
```Python
Version 1
totalSum = 0
for i in range(n):
rowSum[i] = 0
for j in range(n):
rowSum[i] += matrix[i,j]
totalSum += matrix[i, j]
```
Two nested loops with 2 addition operations take 2*(n**2) steps to complete.
```Python
Version 2
totalSum = 0
for i in range(n):
rowSum[i] = 0
for j in range(n):
rowSum[i] += matrix[i,j]
totalSum += rowSum[i]
```
This time we moved one of the inner loop's addition operations to the outer loop, which gives n*(n+1) steps.
Version 2 will be slightly faster; however, the difference in the execution times will not be significant.
Big-O Notation
In Big-O notation we don't include constants or scalar multiples; in the example above, both version 1 and version 2 have O(n**2) complexity.
Evaluating Python Code
Linear Time O(n) Examples
The following example function runs in O(n) time.
End of explanation
"""
def ex2(n):
count = 0
for i in range(n):
count += 1
for j in range(n):
count += 1
return count
print ex2(10)
"""
Explanation: Function ex2 runs in O(n), since O(n + n) = O(n)
End of explanation
"""
def ex3(n):
count = 0
for i in range(n):
for j in range(n):
count += 1
return count
"""
Explanation: Quadratic Time Examples
End of explanation
"""
def ex6(n): ## O(log n)
count = 0
i = n
while i >= 1:
count += 1
i = i // 2
return count
def ex7(n): ## O(n*log n)
count = 0
for i in range(n):
count += ex6(n)
return count
"""
Explanation: Logarithmic Time Examples
The next example contains a single loop, but notice the change to the modification step. Instead of incrementing (or decrementing) by one, it cuts the loop variable in half each time through the loop, so the loop runs O(log n) times.
End of explanation
"""
def findNeq(intList):
n = len(intList)
for i in range(n):
if intList[i] < 0:
return i
return None
"""
Explanation: Different Cases
Some algorithms behave differently depending on their input and can be evaluated for their best, worst, and average cases.
The example below traverses a list containing integer values to find the position of the first negative value; a demonstration of its best and worst cases follows the code. Note that for this problem, the input is the collection of n values contained in the list.
End of explanation
"""
|
NeuroDataDesign/pan-synapse
|
pipeline_3/background/GabaExploration.ipynb
|
apache-2.0
|
def otsuVox(argVox):
probVox = np.nan_to_num(argVox)
bianVox = np.zeros_like(probVox)
for zIndex, curSlice in enumerate(probVox):
#if the array contains all the same values
if np.max(curSlice) == np.min(curSlice):
#otsu thresh will fail here, leave bianVox as all 0's
continue
thresh = threshold_otsu(curSlice)
bianVox[zIndex] = curSlice > thresh
return bianVox
def precision_recall_f1(labels, predictions):
if len(predictions) == 0:
print 'ERROR: prediction list is empty'
return 0., 0., 0.
labelFound = np.zeros(len(labels))
truePositives = 0
falsePositives = 0
for prediction in predictions:
#casting to set is ok here since members are unique
predictedMembers = set([tuple(elem) for elem in prediction.getMembers()])
detectionCutoff = 1
found = False
for idx, label in enumerate(labels):
labelMembers = set([tuple(elem) for elem in label.getMembers()])
#if the predictedOverlap is over the detectionCutoff ratio
if len(predictedMembers & labelMembers) >= detectionCutoff:
truePositives +=1
found=True
labelFound[idx] = 1
if not found:
falsePositives +=1
precision = truePositives/float(truePositives + falsePositives)
recall = np.count_nonzero(labelFound)/float(len(labels))
f1 = 0
try:
f1 = 2 * (precision*recall)/(precision + recall)
except ZeroDivisionError:
f1 = 0
return precision, recall, f1
"""
Explanation: Since the annotation channel of this data is somewhat suspect, I decided to load some alternate annotation files to compare.
Interestingly, the labels span exactly 3 times the z extent of the p1 data. Could this mean they are just stacked p1, p2, p3?
OK, so the annotations are legit. The next thing I want to be sure about is that we are checking precision, recall, and f1 correctly for this data.
End of explanation
"""
gaba = procData[5][1]
otsuOut = otsuVox(gaba)
"""
Explanation: I think I may see where the issue was arising. I was checking to see if the overlap between the label and the prediction was greater than the overlap ratio times the volume of the prediction. Since, in this data, our predictions are massive compared to the labels, this would not work super well.
I made a change such that the label and the prediction need only be nondisjoint (i.e., overlap by at least one voxel).
Revised Pipeline
End of explanation
"""
clusterList = cLib.clusterThresh(otsuOut, 500, 1000000)
"""
Explanation: This looks more like it. So far so good
End of explanation
"""
aveList = []
maxList = []
for cluster in clusterList:
curClusterDist = []
for member in cluster.members:
curClusterDist.append(gaba[member[0]][member[1]][member[2]])
aveList.append(np.mean(curClusterDist))
maxList.append(np.max(curClusterDist))
plt.figure()
plt.hist(aveList, bins=40)
plt.title('Averages Of Clusters')
plt.show()
plt.figure()
plt.hist(maxList, bins=30)
plt.title('Max Of Clusters')
plt.show()
plt.figure()
plt.scatter(range(len(aveList)), aveList, c='b')
plt.scatter(range(len(maxList)), maxList, c='r')
plt.show()
"""
Explanation: Now let's take a look at the max and average intensity in each of the clusters. I have a feeling that there will be some level of bimodality
End of explanation
"""
#thresh = threshold_otsu(np.array(maxList))
thresh = 23
finalClusters = []
for i in range(len(maxList)): #this is bad and i should feel bad
if aveList[i] > thresh:
finalClusters.append(clusterList[i])
outVol = np.zeros_like(gaba)
for cluster in finalClusters:
for member in cluster.members:
outVol[member[0]][member[1]][member[2]] = 1
labelClusters = cLib.clusterThresh(procData[0][1], 0, 10000000)
print precision_recall_f1(labelClusters, finalClusters)
"""
Explanation: That's what I like to see. The cluster maxima exhibit clear bimodality. Time to get the Otsu threshold for the optimum class breakdown here, and disregard any clusters whose max is under the threshold.
End of explanation
"""
|
Bowenislandsong/Distributivecom
|
Archive/Actors.ipynb
|
gpl-3.0
|
import ray
ray.init(num_gpus=2)
"""
Explanation: Remote functions in Ray should be thought of as functional and side-effect free. Restricting ourselves only to remote functions gives us distributed functional programming, which is great for many use cases, but in practice is a bit limited.
Ray extends the dataflow model with actors. An actor is essentially a stateful worker (or a service). When a new actor is instantiated, a new worker is created, and methods of the actor are scheduled on that specific worker and can access and mutate the state of that worker.
Suppose we've already started Ray.
End of explanation
"""
@ray.remote
class Counter(object):
def __init__(self):
self.value = 0
def increment(self):
self.value += 1
return self.value
"""
Explanation: Defining and creating an actor
Consider the following simple example. The ray.remote decorator indicates that instances of the Counter class will be actors.
End of explanation
"""
a1 = Counter.remote()
a2 = Counter.remote()
"""
Explanation: To actually create an actor, we can instantiate this class by calling Counter.remote().
End of explanation
"""
a1.increment.remote() # ray.get returns 1
a2.increment.remote() # ray.get returns 1
"""
Explanation: When an actor is instantiated, the following events happen.
A node in the cluster is chosen and a worker process is created on that node (by the local scheduler on that node) for the purpose of running methods called on the actor.
A Counter object is created on that worker and the Counter constructor is run.
Using an actor
We can schedule tasks on the actor by calling its methods.
End of explanation
"""
# Create ten Counter actors.
counters = [Counter.remote() for _ in range(10)]
# Increment each Counter once and get the results. These tasks all happen
# in PARALLEL.
results = ray.get([c.increment.remote() for c in counters])
print(results) # prints [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
# Increment the first Counter five times. These tasks are executed SERIALLY
# and share state.
results = ray.get([counters[0].increment.remote() for _ in range(5)])
print(results) # prints [2, 3, 4, 5, 6]
"""
Explanation: When a1.increment.remote() is called, the following events happen.
A task is created.
The task is assigned directly to the local scheduler responsible for the actor by the driver's local scheduler. Thus, this scheduling procedure bypasses the global scheduler.
An object ID is returned.
We can then call ray.get on the object ID to retrieve the actual value.
Similarly, the call to a2.increment.remote() generates a task that is scheduled on the second Counter actor. Since these two tasks run on different actors, they can be executed in parallel (note that only actor methods will be scheduled on actor workers, regular remote functions will not be).
On the other hand, methods called on the same Counter actor are executed serially in the order that they are called. They can thus share state with one another, as shown below.
End of explanation
"""
import gym
@ray.remote
class GymEnvironment(object):
def __init__(self, name):
self.env = gym.make(name)
self.env.reset()
def step(self, action):
return self.env.step(action)
def reset(self):
self.env.reset()
"""
Explanation: A More Interesting Actor Example
A common pattern is to use actors to encapsulate the mutable state managed by an external library or service.
Gym provides an interface to a number of simulated environments for testing and training reinforcement learning agents. These simulators are stateful, and tasks that use these simulators must mutate their state. We can use actors to encapsulate the state of these simulators.
End of explanation
"""
pong = GymEnvironment.remote("Pong-v0")
pong.step.remote(0) # Take action 0 in the simulator.
"""
Explanation: We can then instantiate an actor and schedule a task on that actor as follows.
End of explanation
"""
import tensorflow as tf
def construct_network():
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y),
reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return x, y_, train_step, accuracy
"""
Explanation: Using GPUs on actors
A common use case is for an actor to contain a neural network. For example, suppose we have imported Tensorflow and have created a method for constructing a neural net.
End of explanation
"""
import os
# Define an actor that runs on GPUs. If there are no GPUs, then simply use
# ray.remote without any arguments and no parentheses.
@ray.remote(num_gpus=1)
class NeuralNetOnGPU(object):
def __init__(self):
# Set an environment variable to tell TensorFlow which GPUs to use.
# Note that this must be done before the call to tf.Session.
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join([str(i) for i in
ray.get_gpu_ids()])
with tf.Graph().as_default():
with tf.device("/gpu:0"):
self.x, self.y_, self.train_step, self.accuracy = construct_network()
# Allow this to run on CPUs if there aren't any GPUs.
config = tf.ConfigProto(allow_soft_placement=True)
self.sess = tf.Session(config=config)
# Initialize the network.
init = tf.global_variables_initializer()
self.sess.run(init)
"""
Explanation: We can then define an actor for this network as follows.
End of explanation
"""
# Restart the kernel before running this to initialize Ray with specified
# number of GPUs.
import os
import ray
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
ray.init(num_gpus=8)
def construct_network():
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return x, y_, train_step, accuracy
@ray.remote(num_gpus=1)
class NeuralNetOnGPU(object):
def __init__(self, mnist_data):
self.mnist = mnist_data
# Set an environment variable to tell TensorFlow which GPUs to use. Note
# that this must be done before the call to tf.Session.
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join([str(i) for i in
ray.get_gpu_ids()])
with tf.Graph().as_default():
with tf.device("/gpu:0"):
# Allow this to run on CPUs if there aren't any GPUs.
self.x, self.y_, self.train_step, self.accuracy = construct_network()
config = tf.ConfigProto(allow_soft_placement=True)
self.sess = tf.Session(config=config)
# Initialize the network.
init = tf.global_variables_initializer()
self.sess.run(init)
def train(self, num_steps):
for _ in range(num_steps):
batch_xs, batch_ys = self.mnist.train.next_batch(100)
self.sess.run(self.train_step, feed_dict={self.x: batch_xs, self.y_: batch_ys})
def get_accuracy(self):
return self.sess.run(self.accuracy, feed_dict={self.x: self.mnist.test.images,
self.y_: self.mnist.test.labels})
# Load the MNIST dataset and tell Ray how to serialize the custom classes.
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
# Create the actor.
nn = NeuralNetOnGPU.remote(mnist)
# Run a few steps of training and print the accuracy.
nn.train.remote(100)
accuracy = ray.get(nn.get_accuracy.remote())
print("Accuracy is {}.".format(accuracy))
"""
Explanation: To indicate that an actor requires one GPU, we pass in num_gpus=1 to ray.remote. Note that in order for this to work, Ray must have been started with some GPUs, e.g., via ray.init(num_gpus=2). Otherwise, when you try to instantiate the GPU version with NeuralNetOnGPU.remote(), an exception will be thrown saying that there aren’t enough GPUs in the system.
When the actor is created, it will have access to a list of the IDs of the GPUs that it is allowed to use via ray.get_gpu_ids(). This is a list of integers, like [], or [1], or [2, 5, 6]. Since we passed in ray.remote(num_gpus=1), this list will have length one.
We can put this all together as follows.
End of explanation
"""
|
mathLab/RBniCS
|
tutorials/05_gaussian/tutorial_gaussian_exact.ipynb
|
lgpl-3.0
|
from dolfin import *
from rbnics import *
"""
Explanation: TUTORIAL 05 - Exact Parametrized Functions for non-affine elliptic problems
Keywords: exact parametrized functions
1. Introduction
In this Tutorial, we consider steady heat conduction in a two-dimensional square domain $\Omega = (-1, 1)^2$.
The boundary $\partial\Omega$ is kept at a reference temperature (say, zero). The conductivity coefficient is fixed to 1, while the heat source is characterized by the following expression
$$
g(\boldsymbol{x}; \boldsymbol{\mu}) = \exp{ -2 (x_0-\mu_0)^2 - 2 (x_1 - \mu_1)^2} \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega.
$$
The parameter vector $\boldsymbol{\mu}$, given by
$$
\boldsymbol{\mu} = (\mu_0,\mu_1)
$$
affects the center of the Gaussian source $g(\boldsymbol{x}; \boldsymbol{\mu})$, which could be located at any point of $\Omega$. Thus, the parameter domain is
$$
\mathbb{P}=[-1,1]^2.
$$
In order to be able to compare the interpolation methods (EIM and DEIM) used to solve this problem, we propose to use an exact solution of the problem.
2. Parametrized formulation
Let $u(\boldsymbol{\mu})$ be the temperature in the domain $\Omega$.
We will directly provide a weak formulation for this problem
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \left\{ v \in H^1(\Omega(\mu_0)): v|_{\partial\Omega} = 0\right\}
$$
Note that, as in the previous tutorial, the function space is parameter dependent due to the shape variation.
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(u,v;\boldsymbol{\mu}) = \int_{\Omega} \nabla u \cdot \nabla v \ d\boldsymbol{x}$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(v;\boldsymbol{\mu}) = \int_\Omega g(\boldsymbol{\mu}) v \ d\boldsymbol{x}.$$
End of explanation
"""
@ExactParametrizedFunctions()
class Gaussian(EllipticCoerciveProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticCoerciveProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.u = TrialFunction(V)
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=subdomains)
self.f = ParametrizedExpression(
self, "exp(- 2 * pow(x[0] - mu[0], 2) - 2 * pow(x[1] - mu[1], 2))", mu=(0., 0.),
element=V.ufl_element())
# note that we cannot use self.mu in the initialization of self.f, because self.mu has not been initialized yet
# Return custom problem name
def name(self):
return "GaussianExact"
# Return the alpha_lower bound.
def get_stability_factor_lower_bound(self):
return 1.
# Return theta multiplicative terms of the affine expansion of the problem.
def compute_theta(self, term):
if term == "a":
return (1.,)
elif term == "f":
return (1.,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
u = self.u
a0 = inner(grad(u), grad(v)) * dx
return (a0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),
DirichletBC(self.V, Constant(0.0), self.boundaries, 2),
DirichletBC(self.V, Constant(0.0), self.boundaries, 3)]
return (bc0,)
elif term == "inner_product":
u = self.u
x0 = inner(grad(u), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
"""
Explanation: 3. Affine decomposition
The parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu})$ is trivially affine.
The exact solution will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$ to obtain an efficient (exact affine) expansion of $f(\cdot; \boldsymbol{\mu})$.
End of explanation
"""
mesh = Mesh("data/gaussian.xml")
subdomains = MeshFunction("size_t", mesh, "data/gaussian_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/gaussian_facet_region.xml")
"""
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
"""
V = FunctionSpace(mesh, "Lagrange", 1)
"""
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
"""
problem = Gaussian(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(-1.0, 1.0), (-1.0, 1.0)]
problem.set_mu_range(mu_range)
"""
Explanation: 4.3. Allocate an object of the Gaussian class
End of explanation
"""
reduction_method = ReducedBasis(problem)
reduction_method.set_Nmax(20)
reduction_method.set_tolerance(1e-4)
"""
Explanation: 4.4. Prepare reduction with a reduced basis method
End of explanation
"""
reduction_method.initialize_training_set(50)
reduced_problem = reduction_method.offline()
"""
Explanation: 4.5. Perform the offline phase
End of explanation
"""
online_mu = (0.3, -1.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
"""
Explanation: 4.6. Perform an online solve
End of explanation
"""
reduction_method.initialize_testing_set(50)
reduction_method.error_analysis(filename="error_analysis")
"""
Explanation: 4.7. Perform an error analysis
End of explanation
"""
|
nicolasfauchereau/ICU
|
indices/plot_real_time_indices.ipynb
|
bsd-3-clause
|
%matplotlib inline
import os, sys
import pandas as pd
from datetime import datetime, timedelta
from cStringIO import StringIO
import requests
import matplotlib as mpl
from matplotlib import pyplot as plt
from IPython.display import Image
"""
Explanation: Plots the NINO Sea Surface Temperature indices (data from the Bureau of Meteorology) and the real-time Southern Oscillation Index (SOI) from LongPaddock
Nicolas Fauchereau
End of explanation
"""
proxies = {}
# proxies['http'] = 'url:port'
"""
Explanation: set up proxies here if needed
End of explanation
"""
dpath = os.path.join(os.environ['HOME'], 'operational/ICU/indices/figures')
today = datetime.utcnow() - timedelta(15)
"""
Explanation: path where the figures will be saved
End of explanation
"""
url = 'http://www.longpaddock.qld.gov.au/seasonalclimateoutlook/southernoscillationindex/soidatafiles/DailySOI1933-1992Base.txt'
r = requests.get(url, proxies=proxies)
soi = pd.read_table(StringIO(r.content), sep='\s*', engine='python')
index = [datetime(year,1,1) + timedelta(day-1) for year, day in soi.loc[:,['Year','Day']].values]
soi.index = index
soi = soi.loc[:,['SOI']]
soi.head()
"""
Explanation: Get the SOI, set the datetime index
End of explanation
"""
soi['soirm1'] = pd.rolling_mean(soi.SOI, 30)
soi['soirm3'] = pd.rolling_mean(soi.SOI, 90)
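# Note: pd.rolling_mean was deprecated and later removed; on a modern pandas the
# equivalent (an untested sketch for reference) would be:
#   soi['soirm1'] = soi['SOI'].rolling(window=30).mean()
#   soi['soirm3'] = soi['SOI'].rolling(window=90).mean()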
soi = soi.ix['2013':]
soi.tail()
"""
Explanation: calculates 30-day and 90-day rolling averages
End of explanation
"""
from matplotlib.dates import YearLocator, MonthLocator, DateFormatter
years = YearLocator()
months = MonthLocator()
mFMT = DateFormatter('%b')
yFMT = DateFormatter('\n\n%Y')
mpl.rcParams['xtick.labelsize'] = 12
mpl.rcParams['ytick.labelsize'] = 12
mpl.rcParams['axes.titlesize'] = 14
mpl.rcParams['xtick.direction'] = 'out'
mpl.rcParams['ytick.direction'] = 'out'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['xtick.minor.size'] = 2
"""
Explanation: set up the matplotlib parameters for plotting
End of explanation
"""
f, ax = plt.subplots(figsize=(10,8))
ax.fill_between(soi.index, soi.soirm1, 0, (soi.soirm1 >= 0), color='b', alpha=0.7, interpolate=True)
ax.fill_between(soi.index, soi.soirm1, 0, (soi.soirm1 < 0), color='r', alpha=0.7, interpolate=True)
ax.plot(soi.index, soi.soirm1, c='k')
ax.plot(soi.index, soi.soirm3, c='w', lw=2.5)
ax.plot(soi.index, soi.soirm3, c='0.5', lw=2, label='90 days running mean')
ax.legend()
ax.axhline(0, color='k')
ax.set_ylim(-30,30)
ax.grid(linestyle='--')
ax.xaxis.set_minor_locator(months)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_minor_formatter(mFMT)
ax.xaxis.set_major_formatter(yFMT)
[label.set_fontsize(13) for label in ax.get_xminorticklabels()]
[label.set_rotation(90) for label in ax.get_xminorticklabels()]
[label.set_fontsize(18) for label in ax.get_xmajorticklabels()]
[label.set_fontsize(18) for label in ax.get_ymajorticklabels()]
ax.set_title("Southern Oscillation Index (30 days running mean)\
\n ending {0:}: latest 30 days (90 days) value = {1:<4.2f} ({2:<4.2f})".\
format(soi.index[-1].strftime("%Y-%m-%d"), soi.iloc[-1,1], soi.iloc[-1,2]))
f.savefig(os.path.join(dpath, 'SOI_LP_realtime_plot.png'), dpi=200)
"""
Explanation: plots the Southern Oscillation Index
End of explanation
"""
Image(url='http://www1.ncdc.noaa.gov/pub/data/cmb/teleconnections/nino-regions.gif')
for nino in ["3.4", "3", "4"]:
print("processing NINO{}".format(nino))
url = "http://www.bom.gov.au/climate/enso/nino_%s.txt" % (nino)
r = requests.get(url, proxies=proxies)
data = pd.read_table(StringIO(r.content), sep=',', header=None, index_col=1, \
parse_dates=True, names=['iDate','SST'])
data = data.ix['2013':]
lastmonth = data.loc[today.strftime("%Y-%m"),'SST'].mean()
f, ax = plt.subplots(figsize=(10, 8))
ax.fill_between(data.index, data.SST, 0, (data.SST >= 0), color='r', alpha=0.7, interpolate=True)
ax.fill_between(data.index, data.SST, 0, (data.SST < 0), color='b', alpha=0.7, interpolate=True)
ax.plot(data.index, data.SST, c='k')
ax.axhline(0, color='k')
ax.set_ylim(-2,2)
ax.grid(linestyle='--')
ax.xaxis.set_minor_locator(months)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_minor_formatter(mFMT)
ax.xaxis.set_major_formatter(yFMT)
[label.set_fontsize(13) for label in ax.get_xminorticklabels()]
[label.set_rotation(90) for label in ax.get_xminorticklabels()]
[label.set_fontsize(18) for label in ax.get_xmajorticklabels()]
[label.set_fontsize(18) for label in ax.get_ymajorticklabels()]
ax.set_title("NINO {} (weekly) ending {}\nlatest weekly / monthly values: {:<4.2f} / {:<4.2f}"\
.format(nino, data.index[-1].strftime("%Y-%B-%d"), data.iloc[-1,-1], lastmonth))
f.savefig(os.path.join(dpath, 'NINO{}_realtime_plot.png'.format(nino)))
"""
Explanation: Plots the NINO SST Indices
End of explanation
"""
|
w4zir/ml17s
|
lectures/lec02-regression-single-variable.ipynb
|
mit
|
import pandas as pd
from sklearn import linear_model
import matplotlib.pyplot as plt
# read data in pandas frame
dataframe = pd.read_csv('datasets/house_dataset1.csv')
# assign x and y
x_feature = dataframe[['Size']]
y_labels = dataframe[['Price']]
# check data by printing first few rows
dataframe.head()
"""
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 2: Linear Regression
Overview
What is Machine Learning?
Definition
The three different types of machine learning
Learning from labled data with supervised learning
Regression for predicting continuous outcomes
Classification for predicting class labels
Unsupervised Learning
Reinforcement Learning
Machine Learning pipeline
Goal of Machine Learning algorithm
# Linear Regression with one variable
Model Representation
Cost Function
Simple case when $\theta_0$ = 0
When both $\theta_0$ and $\theta_1$ can vary
So what is the price of the house?
Read data
Plot data
Train Model
Predict output using trained model
Plot results
Resources
Credits
<br>
<br>
What is Machine Learning? <a name="what-is-ml"></a>
Machine Learning is making computers/machines learn from data
Learning improves over time with more data
Definition
Mitchell (1997) defines Machine Learning as “A computer
program is said to learn from experience E with respect to some class of tasks T
and performance measure P, if its performance at tasks in T, as measured by P,
improves with experience E.”
Example: playing checkers.
T = the task of playing checkers.
E = the experience of playing many games of checkers
P = the probability that the program will win the next game.
<br>
<br>
The three different types of machine learning
<img style="float: left;" src="images/01_01.png", width=500>
<br>
<br>
Supervised Learning
<img style="float: left;" src="images/01_02.png", width=500>
<br>
<br>
Regression for predicting continuous outcomes
<img style="float: left;" src="images/01_04.png", width=300> <img style="float: right;" src="images/01_11.png", width=500>
Classification for predicting class labels
<img style="float: left;" src="images/01_03.png", width=300> <img style="float: right;" src="images/01_12.png", width=500>
<br>
<br>
Unsupervised Learning
<img style="float: left;" src="images/01_06.png" width=300>
<br>
<br>
Reinforcement Learning
<img style="float: left;" src="images/01_05.png", width=300>
<br>
<br>
Machine Learning pipeline
<img style="float: left;" src="images/model.png">
x is called input variables or input features.
y is called output or target variable. Also sometimes known as label.
h is called hypothesis or model.
pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
dataset of all training examples is called training set.
m is the number of samples in a dataset.
n is the number of features in a dataset excluding label.
<img style="float: left;" src="images/02_02.png", width=400>
<img style="float: right;" src="images/02_03.png", width=400>
Question ?
What are x<sup>(2)</sup> and y<sup>(2)</sup>?
<br>
<br>
Goal of Machine Learning algorithm
How well the algorithm will perform on unseen data.
Also called generalization.
<br>
<br>
Linear Regression with one variable
Model Representation
Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
<img style="float: left;" src="images/02_01.png">
<img style="float: left;" src="images/02_05.png">
$\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
We need to find the $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximize the performance of the model.
Question
<img style="float: left;" src="images/02_15.jpg", width=600>
<br>
<br>
<br>
Cost Function
<img style="float: left;" src="images/02_14.png", width=700>
Let $\hat{y}$ = h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
Error in single sample (x,y) = $\hat{y}$ - y = h(x) - y
Cummulative error of all m samples = $\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Finally mean error or cost function = J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
<br>
<br>
Simple case when $\theta_0$ = 0
<img style="float: center;" src="images/02_06.png", width=700>
<br>
Question
<img style="float: center;" src="images/02_15.png", width=700>
<br>
<img style="float: center;" src="images/02_07.png", width=700>
<img style="float: center;" src="images/02_08.png", width=700>
<img style="float: center;" src="images/02_09.png", width=700>
<br>
When both $\theta_0$ and $\theta_1$ can vary
<br>
<img style="float: center;" src="images/02_10.png", width=700>
<img style="float: center;" src="images/02_11.png", width=700>
<img style="float: center;" src="images/02_12.png", width=700>
<img style="float: center;" src="images/02_13.png", width=700>
<br>
<br>
So what is the price of the house?
Read data
End of explanation
"""
#visualize results
plt.scatter(x_feature, y_labels)
plt.show()
y_labels.shape
"""
Explanation: Plot data
End of explanation
"""
#train model on data
body_reg = linear_model.LinearRegression()
body_reg.fit(x_feature, y_labels)
print ('theta0 = ',body_reg.intercept_)
print ('theta1 = ',body_reg.coef_)
"""
Explanation: Train model
End of explanation
"""
hx = body_reg.predict(x_feature)
"""
Explanation: Predict output using trained model
End of explanation
"""
plt.scatter(x_feature, y_labels)
plt.plot(x_feature, hx)
plt.show()
"""
Explanation: Plot results
End of explanation
"""
theta0 = 0
theta1 = 0
inc = 1.0
#loop over all values of theta1 from 0 to 1000 with an increment of inc and find cost.
# The one with minimum cost is the answer.
m = x_feature.shape[0]
n = x_feature.shape[1]
# optimal values to be determined
minCost = 100000000000000
optimal_theta = 0
while theta1 < 1000:
cost = 0;
for indx in range(m):
hx = theta1*x_feature.values[indx,0] + theta0
cost += pow((hx - y_labels.values[indx,0]),2)
cost = cost/(2*m)
# print(theta1)
# print(cost)
if cost < minCost:
minCost = cost
optimal_theta = theta1
theta1 += inc
print ('theta0 = ', theta0)
print ('theta1 = ',optimal_theta)
"""
Explanation: Do it yourself
End of explanation
"""
hx = optimal_theta*x_feature
plt.scatter(x_feature, y_labels)
plt.plot(x_feature, hx)
plt.show()
"""
Explanation: Predict labels using the model and plot the results
End of explanation
"""
|
mattilyra/gensim
|
docs/notebooks/annoytutorial.ipynb
|
lgpl-2.1
|
# pip install watermark
%reload_ext watermark
%watermark -v -m -p gensim,numpy,scipy,psutil,matplotlib
"""
Explanation: Similarity Queries using Annoy Tutorial
This tutorial is about using the (Annoy Approximate Nearest Neighbors Oh Yeah) library for similarity queries with a Word2Vec model built with gensim.
Why use Annoy?
The current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is overkill in many applications: approximate results retrieved in sub-linear time may be enough. Annoy can find approximate nearest neighbors much faster.
Prerequisites
Additional libraries needed for this tutorial:
- annoy
- psutil
- matplotlib
Outline
Download Text8 Corpus
Build Word2Vec Model
Construct AnnoyIndex with model & make a similarity query
Verify & Evaluate performance
Evaluate relationship of num_trees to initialization time and accuracy
Work with Google's word2vec C formats
End of explanation
"""
import os.path
if not os.path.isfile('text8'):
!wget -c http://mattmahoney.net/dc/text8.zip
!unzip text8.zip
"""
Explanation: 1. Download Text8 Corpus
End of explanation
"""
LOGS = False
if LOGS:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Import & Set up Logging
I'm not going to set up logging due to the verbose output it produces in notebooks, but if you want it, set LOGS = True in the cell below.
End of explanation
"""
from gensim.models import Word2Vec, KeyedVectors
from gensim.models.word2vec import Text8Corpus
# Using params from Word2Vec_FastText_Comparison
params = {
'alpha': 0.05,
'size': 100,
'window': 5,
'iter': 5,
'min_count': 5,
'sample': 1e-4,
'sg': 1,
'hs': 0,
'negative': 5
}
model = Word2Vec(Text8Corpus('text8'), **params)
print(model)
"""
Explanation: 2. Build Word2Vec Model
End of explanation
"""
# Set up the model and vector that we are using in the comparison
from gensim.similarities.index import AnnoyIndexer
model.init_sims()
annoy_index = AnnoyIndexer(model, 100)
# Dry run to make sure both indices are fully in RAM
vector = model.wv.syn0norm[0]
model.most_similar([vector], topn=5, indexer=annoy_index)
model.most_similar([vector], topn=5)
import time
import numpy as np
def avg_query_time(annoy_index=None, queries=1000):
"""
Average query time of a most_similar method over 1000 random queries,
uses annoy if given an indexer
"""
total_time = 0
for _ in range(queries):
rand_vec = model.wv.syn0norm[np.random.randint(0, len(model.wv.vocab))]
start_time = time.clock()
model.most_similar([rand_vec], topn=5, indexer=annoy_index)
total_time += time.clock() - start_time
return total_time / queries
queries = 10000
gensim_time = avg_query_time(queries=queries)
annoy_time = avg_query_time(annoy_index, queries=queries)
print("Gensim (s/query):\t{0:.5f}".format(gensim_time))
print("Annoy (s/query):\t{0:.5f}".format(annoy_time))
speed_improvement = gensim_time / annoy_time
print ("\nAnnoy is {0:.2f} times faster on average on this particular run".format(speed_improvement))
"""
Explanation: See the Word2Vec tutorial for how to initialize and save this model.
Comparing the traditional implementation and the Annoy approximation
End of explanation
"""
# 100 trees are being used in this example
annoy_index = AnnoyIndexer(model, 100)
# Derive the vector for the word "science" in our model
vector = model["science"]
# The instance of AnnoyIndexer we just created is passed
approximate_neighbors = model.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = model.most_similar([vector], topn=11)
print("\nNormal (not Annoy-indexed) Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
"""
Explanation: This speedup factor is by no means constant and will vary greatly from run to run and is particular to this data set, BLAS setup, Annoy parameters(as tree size increases speedup factor decreases), machine specifications, among other factors.
Note: Initialization time for the annoy indexer was not included in the times. The optimal knn algorithm for you to use will depend on how many queries you need to make and the size of the corpus. If you are making very few similarity queries, the time taken to initialize the annoy indexer will be longer than the time it would take the brute force method to retrieve results. If you are making many queries however, the time it takes to initialize the annoy indexer will be made up for by the incredibly fast retrieval times for queries once the indexer has been initialized
Note: Gensim's most_similar method uses numpy operations in the form of a dot product, whereas Annoy's method does not. If numpy on your machine is using one of the BLAS libraries like ATLAS or LAPACK, it'll run on multiple cores (only if your machine has multicore support). Check the SciPy Cookbook for more details.
3. Construct AnnoyIndex with model & make a similarity query
Creating an indexer
An instance of AnnoyIndexer needs to be created in order to use Annoy in gensim. The AnnoyIndexer class is located in gensim.similarities.index
AnnoyIndexer() takes two parameters:
model: A Word2Vec or Doc2Vec model
num_trees: A positive integer. num_trees affects the build time and the index size. A larger value will give more accurate results, but larger indexes. More information on what trees in Annoy do can be found here. The relationship between num_trees, build time, and accuracy will be investigated later in the tutorial.
Now that we are ready to make a query, lets find the top 5 most similar words to "science" in the Text8 corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. The only supported indexer in gensim as of now is Annoy.
End of explanation
"""
fname = '/tmp/mymodel.index'
# Persist index to disk
annoy_index.save(fname)
# Load index back
if os.path.exists(fname):
annoy_index2 = AnnoyIndexer()
annoy_index2.load(fname)
annoy_index2.model = model
# Results should be identical to above
vector = model["science"]
approximate_neighbors2 = model.most_similar([vector], topn=11, indexer=annoy_index2)
for neighbor in approximate_neighbors2:
print(neighbor)
assert approximate_neighbors == approximate_neighbors2
"""
Explanation: Analyzing the results
The closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for "science". There are some differences in the ranking of similar words and the set of words included within the 10 most similar words.
4. Verify & Evaluate performance
Persisting Indexes
You can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, fname and fname.d. Both files are needed to correctly restore all attributes. Before loading an index, you will have to create an empty AnnoyIndexer object.
End of explanation
"""
# Remove verbosity from code below (if logging active)
if LOGS:
logging.disable(logging.CRITICAL)
from multiprocessing import Process
import os
import psutil
"""
Explanation: Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors.
Save memory by memory-mapping indices saved to disk
The Annoy library has a useful feature: indices can be memory-mapped from disk. This saves memory when the same index is used by several processes.
Below are two snippets of code. The first one has a separate index for each process. The second snippet shares the index between two processes via memory-mapping. The second example uses less total RAM because the index is shared.
End of explanation
"""
%%time
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model["science"]
annoy_index = AnnoyIndexer(new_model,100)
approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Creating and running two parallel process to share the same index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
"""
Explanation: Bad example: two processes load the Word2vec model from disk and create their own Annoy indices from that model.
End of explanation
"""
%%time
model.save('/tmp/mymodel.pkl')
def f(process_id):
print('Process Id: {}'.format(os.getpid()))
process = psutil.Process(os.getpid())
new_model = Word2Vec.load('/tmp/mymodel.pkl')
vector = new_model["science"]
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = new_model
approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)
print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info()))
# Creating and running two parallel process to share the same index file.
p1 = Process(target=f, args=('1',))
p1.start()
p1.join()
p2 = Process(target=f, args=('2',))
p2.start()
p2.join()
"""
Explanation: Good example: two processes load both the Word2vec model and the index from disk and memory-map the index.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: 5. Evaluate relationship of num_trees to initialization time and accuracy
End of explanation
"""
exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)]
x_values = []
y_values_init = []
y_values_accuracy = []
for x in range(1, 300, 10):
x_values.append(x)
start_time = time.time()
annoy_index = AnnoyIndexer(model, x)
y_values_init.append(time.time() - start_time)
approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=annoy_index)
top_words = [result[0] for result in approximate_results]
y_values_accuracy.append(len(set(top_words).intersection(exact_results)))
"""
Explanation: Build dataset of Initialization times and accuracy measures
End of explanation
"""
plt.figure(1, figsize=(12, 6))
plt.subplot(121)
plt.plot(x_values, y_values_init)
plt.title("num_trees vs initalization time")
plt.ylabel("Initialization time (s)")
plt.xlabel("num_trees")
plt.subplot(122)
plt.plot(x_values, y_values_accuracy)
plt.title("num_trees vs accuracy")
plt.ylabel("% accuracy")
plt.xlabel("num_trees")
plt.tight_layout()
plt.show()
"""
Explanation: Plot results
End of explanation
"""
# To export our model as text
model.wv.save_word2vec_format('/tmp/vectors.txt', binary=False)
from smart_open import smart_open
# View the first 3 lines of the exported file
# The first line has the total number of entries and the vector dimension count.
# The next lines have a key (a string) followed by its vector.
with smart_open('/tmp/vectors.txt') as myfile:
for i in range(3):
print(myfile.readline().strip())
# To import a word2vec text model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# To export our model as binary
model.wv.save_word2vec_format('/tmp/vectors.bin', binary=True)
# To import a word2vec binary model
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
# To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees)
annoy_index = AnnoyIndexer(wv, 100)
annoy_index.save('/tmp/mymodel.index')
# Load and test the saved word vectors and saved annoy index
wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)
annoy_index = AnnoyIndexer()
annoy_index.load('/tmp/mymodel.index')
annoy_index.model = wv
vector = wv["cat"]
approximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
print("Approximate Neighbors")
for neighbor in approximate_neighbors:
print(neighbor)
normal_neighbors = wv.most_similar([vector], topn=11)
print("\nNormal (not Annoy-indexed) Neighbors")
for neighbor in normal_neighbors:
print(neighbor)
"""
Explanation: Initialization:
Initialization time of the annoy indexer increases in a linear fashion with num_trees. Initialization time will vary from corpus to corpus; in the graph above, the Text8 corpus was used.
Accuracy:
In this dataset, the accuracy seems logarithmically related to the number of trees. We see an improvement in accuracy with more trees, but the relationship is nonlinear.
6. Work with Google word2vec files
Our model can be exported to a word2vec C format. There is a binary and a plain text word2vec format. Both can be read with a variety of other software, or imported back into gensim as a KeyedVectors object.
End of explanation
"""
|
esa-as/2016-ml-contest
|
dagrha/KNN_submission_1_dagrha.ipynb
|
apache-2.0
|
import pandas as pd
import numpy as np
from sklearn import neighbors
from sklearn import preprocessing
from sklearn.model_selection import LeaveOneGroupOut
import inversion
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: Facies classification using KNearestNeighbors
<a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License BY-SA" align="left" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png">
</a>
<br>
Dan Hallau
A lot of sophisticated models have been submitted for the contest so far (deep neural nets, random forests, etc.) so I thought I'd try submitting a simpler model to see how it stacks up. In that spirit here's a KNearestNeighbors classifier.
I spend a few cells back-calculating some more standard logging curves (RHOB, NPHI, etc) then create a log-based lithology model from a Umaa-Rhomaa plot. After training, I finish it up with a LeaveOneGroupOut test.
End of explanation
"""
df = pd.read_csv('../facies_vectors.csv')
df.dropna(inplace=True)
"""
Explanation: Load training data
End of explanation
"""
def estimate_dphi(df):
return ((4*(df['PHIND']**2) - (df['DeltaPHI']**2))**0.5 - df['DeltaPHI']) / 2
def estimate_rhob(df):
return (2.71 - (df['DPHI_EST']/100) * 1.71)
def estimate_nphi(df):
return df['DPHI_EST'] + df['DeltaPHI']
def compute_rhomaa(df):
return (df['RHOB_EST'] - (df['PHIND'] / 100)) / (1 - df['PHIND'] / 100)
def compute_umaa(df):
return ((df['PE'] * df['RHOB_EST']) - (df['PHIND']/100 * 0.398)) / (1 - df['PHIND'] / 100)
"""
Explanation: Build features
In the real world it would be unusual to have neutron-density cross-plot porosity (i.e. PHIND) without the corresponding raw input curves, namely bulk density and neutron porosity, as we have in this contest dataset. So as part of the feature engineering process, I back-calculate estimates of those raw curves from the provided DeltaPHI and PHIND curves. One issue with this approach though is that cross-plot porosity differs between vendors, toolstrings, and software packages, and it is not known exactly how the PHIND in this dataset was computed. So I make the assumption here that PHIND ≈ sum of squares porosity, which is usually an adequate approximation of neutron-density crossplot porosity. That equation looks like this:
$$PHIND ≈ \sqrt{\frac{NPHI^2 + DPHI^2}{2}}$$
and it is assumed here that DeltaPHI is:
$$DeltaPHI = NPHI - DPHI$$
The functions below use the relationships from the above equations (...two equations, two unknowns...) to estimate NPHI and DPHI (and consequently RHOB).
Once we have RHOB, we can use it combined with PE to estimate apparent grain density (RHOMAA) and apparent photoelectric capture cross-section (UMAA), which are useful in lithology estimations from well logs.
End of explanation
"""
df['DPHI_EST'] = df.apply(lambda x: estimate_dphi(x), axis=1).astype(float)
df['RHOB_EST'] = df.apply(lambda x: estimate_rhob(x), axis=1)
df['NPHI_EST'] = df.apply(lambda x: estimate_nphi(x), axis=1)
df['RHOMAA_EST'] = df.apply(lambda x: compute_rhomaa(x), axis=1)
df['UMAA_EST'] = df.apply(lambda x: compute_umaa(x), axis=1)
"""
Explanation: Because solving the sum of squares equation involved the quadratic formula, in some cases imaginary numbers result due to porosities being negative, which is what the warning below is about.
End of explanation
"""
df[df.GR < 125].plot(kind='scatter', x='UMAA_EST', y='RHOMAA_EST', c='GR', figsize=(8,6))
plt.ylim(3.1, 2.2)
plt.xlim(0.0, 17.0)
plt.plot([4.8, 9.0, 13.8, 4.8], [2.65, 2.87, 2.71, 2.65], c='r')
plt.plot([4.8, 11.9, 13.8, 4.8], [2.65, 3.06, 2.71, 2.65], c='g')
plt.scatter([4.8], [2.65], s=50, c='r')
plt.scatter([9.0], [2.87], s=50, c='r')
plt.scatter([13.8], [2.71], s=50, c='r')
plt.scatter([11.9], [3.06], s=50, c='g')
plt.text(2.8, 2.65, 'Quartz', backgroundcolor='w')
plt.text(14.4, 2.71, 'Calcite', backgroundcolor='w')
plt.text(9.6, 2.87, 'Dolomite', backgroundcolor='w')
plt.text(12.5, 3.06, 'Illite', backgroundcolor='w')
plt.text(7.0, 2.55, "gas effect", ha="center", va="center", rotation=-55,
size=8, bbox=dict(boxstyle="larrow,pad=0.3", fc="pink", ec="red", lw=2))
plt.text(15.0, 2.78, "heavies?", ha="center", va="center", rotation=0,
size=8, bbox=dict(boxstyle="rarrow,pad=0.3", fc="yellow", ec="orange", lw=2))
"""
Explanation: Just for fun, below is a basic Umaa-Rhomaa plot to view relative abundances of quartz, calcite, dolomite, and clay. The red triangle represents a ternary solution for QTZ, CAL, and DOL, while the green triangle represents a solution for QTZ, CAL, and CLAY (illite).
End of explanation
"""
# QTZ-CAL-CLAY: repoint this solver's dolomite endpoint at illite (Umaa 11.9, Rhomaa 3.06)
ur = inversion.UmaaRhomaa()
ur.set_dol_uma(11.9)
ur.set_dol_rhoma(3.06)
# QTZ-CAL-DOL
ur2 = inversion.UmaaRhomaa()
df['UR_QTZ'] = np.nan
df['UR_CLY'] = np.nan
df['UR_CAL'] = np.nan
df['UR_DOL'] = np.nan
# Above the 40 API GR cutoff: QTZ-CAL-CLAY solution (ur's "dol" endpoint now represents illite)
df.loc[df.GR >= 40, 'UR_QTZ'] = df.loc[df.GR >= 40].apply(lambda x: ur.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.loc[df.GR >= 40, 'UR_CLY'] = df.loc[df.GR >= 40].apply(lambda x: ur.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.loc[df.GR >= 40, 'UR_CAL'] = df.loc[df.GR >= 40].apply(lambda x: ur.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.loc[df.GR >= 40, 'UR_DOL'] = 0
# Below the cutoff: QTZ-CAL-DOL solution
df.loc[df.GR < 40, 'UR_QTZ'] = df.loc[df.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.loc[df.GR < 40, 'UR_DOL'] = df[df.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.loc[df.GR < 40, 'UR_CAL'] = df[df.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.loc[df.GR < 40, 'UR_CLY'] = 0
"""
Explanation: Here I use matrix inversion to "solve" the ternary plot for each lithologic component. Essentially each datapoint is a mix of the three components defined by the ternary diagram, with abundances of each defined by the relative distances from each endpoint. I use a GR cutoff of 40 API to determine when to use either the QTZ-CAL-DOL or QTZ-CAL-CLAY ternary solutions. In other words, it is assumed that below 40 API, there is 0% clay, and above 40 API there is 0% dolomite, and also that these four lithologic components are the only components in these rocks. Admittedly it's not a great assumption, especially since the ternary plot indicates other stuff is going on. For example the high Umaa datapoints near the Calcite endpoint may indicate some heavy minerals (e.g., pyrite) or even barite-weighted mud. The "pull" of datapoints to the northwest quadrant probably reflects some gas effect, so my lithologies in those gassy zones will be skewed.
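For reference, the underlying linear system (a standard Umaa-Rhomaa formulation; the actual implementation lives in the external inversion.UmaaRhomaa class, so the details here are assumed) is:
$$\begin{bmatrix} U_{qtz} & U_{cal} & U_{dol} \\ \rho_{qtz} & \rho_{cal} & \rho_{dol} \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} V_{qtz} \\ V_{cal} \\ V_{dol} \end{bmatrix} = \begin{bmatrix} UMAA \\ RHOMAA \\ 1 \end{bmatrix}$$
where the $V$ terms are endpoint volume fractions, the last row forces them to sum to one, and the dolomite column is swapped for illite when the QTZ-CAL-CLAY triangle applies.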
End of explanation
"""
df1 = df.dropna()
df1 = df1[(df1.PHIND <= 40) & (df1['Well Name'] != 'CROSS H CATTLE')]
facies = df1['Facies'].values
wells = df1['Well Name'].values
drop_list = ['Formation', 'Well Name', 'Facies', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI',
'RHOMAA_EST', 'UMAA_EST', 'UR_QTZ', 'UR_DOL']
fv = df1.drop(drop_list, axis=1).values
clf = neighbors.KNeighborsClassifier(n_neighbors=56, weights='distance')
X = preprocessing.StandardScaler().fit(fv).transform(fv)
y = facies
logo = LeaveOneGroupOut()
f1knn = []
for train, test in logo.split(X, y, groups=wells):
    well_name = wells[test[0]]
    # Train on every well except the held-out one, then score on the held-out well
    clf.fit(X[train], y[train])
    predicted_labels = clf.predict(X[test])
    score = clf.score(X[test], y[test])
    print("{:>20s} {:.3f}".format(well_name, score))
    f1knn.append(score)
print("-Average leave-one-well-out F1 Score: %6f" % (np.mean(f1knn)))
f1knn.pop(4)
print("-Average leave-one-well-out F1 Score, no Recruit F1: %6f" % (np.mean(f1knn)))
"""
Explanation: Fit KNearestNeighbors model and apply LeaveOneGroupOut test
There is some bad log data in this dataset, which I'd guess is due to rugose hole. PHIND gets as high as 80%, which is certainly spurious. For the purposes of this exercise, I'll just remove records where neutron-density crossplot porosity is super high.
Also the CROSS H CATTLE well consistently returns anomalously low F1 scores, so I'm going to omit it from the training set.
End of explanation
"""
vd = pd.read_csv('../validation_data_nofacies.csv', index_col=0)
vd.dropna(inplace=True)
vd['DPHI_EST'] = vd.apply(lambda x: estimate_dphi(x), axis=1).astype(float)
vd['RHOB_EST'] = vd.apply(lambda x: estimate_rhob(x), axis=1)
vd['NPHI_EST'] = vd.apply(lambda x: estimate_nphi(x), axis=1)
vd['RHOMAA_EST'] = vd.apply(lambda x: compute_rhomaa(x), axis=1)
vd['UMAA_EST'] = vd.apply(lambda x: compute_umaa(x), axis=1)
vd['UR_QTZ'] = np.nan
vd['UR_CLY'] = np.nan
vd['UR_CAL'] = np.nan
vd['UR_DOL'] = np.nan
vd.loc[vd.GR >= 40, 'UR_QTZ'] = vd.loc[vd.GR >= 40].apply(lambda x: ur.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.loc[vd.GR >= 40, 'UR_CLY'] = vd.loc[vd.GR >= 40].apply(lambda x: ur.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.loc[vd.GR >= 40, 'UR_CAL'] = vd.loc[vd.GR >= 40].apply(lambda x: ur.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.loc[vd.GR >= 40, 'UR_DOL'] = 0
vd.loc[vd.GR < 40, 'UR_QTZ'] = vd.loc[vd.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.loc[vd.GR < 40, 'UR_DOL'] = vd[vd.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.loc[vd.GR < 40, 'UR_CAL'] = vd[vd.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.loc[vd.GR < 40, 'UR_CLY'] = 0
vd1 = vd.dropna()
vd1 = vd1[(vd1.PHIND <= 40)]
drop_list1 = ['Well Name', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI',
'RHOMAA_EST', 'UMAA_EST', 'UR_QTZ', 'UR_DOL']
fv1 = vd1.drop(drop_list1, axis=1).values
X1 = preprocessing.StandardScaler().fit(fv1).transform(fv1)
vd_predicted_facies = clf.predict(X1)
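# A possible final step (not part of the original submission): attach the predicted facies to the
# validation frame and export them. The output filename below is hypothetical.
vd1 = vd1.assign(Facies=vd_predicted_facies)
vd1.to_csv('knn_predicted_facies.csv')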
"""
Explanation: Apply model to validation dataset
Load validation data (vd), build features, and use the classifier from above to predict facies.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cas/cmip6/models/sandbox-1/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
konstantinstadler/country_converter
|
doc/country_converter_examples.ipynb
|
gpl-3.0
|
import country_converter as coco
converter = coco.CountryConverter()
"""
Explanation: Country Converter
The country converter (coco) is a Python package to convert country names into different classifications and between different naming versions. Internally it uses regular expressions to match country names.
Installation
The package is available on PyPI, use
pip install country_converter --upgrade
from the command line or use your preferred Python package installer.
The source code is available on github: https://github.com/konstantinstadler/country_converter
Conversion
The country converter provides one main class which is used for the conversion:
End of explanation
"""
iso3_codes = ['USA', 'VUT', 'TKL', 'AUT', 'AFG', 'ALB']
"""
Explanation: Given a list of countries in a certain classification:
End of explanation
"""
converter.convert(names = iso3_codes, src = 'ISO3', to = 'name_official')
"""
Explanation: This can be converted to any classification provided by:
End of explanation
"""
converter.convert(names = iso3_codes, src = 'ISO3', to = 'continent')
"""
Explanation: or
End of explanation
"""
converter.valid_class
"""
Explanation: The parameter "src" specifies the input-, "to" the output format. Possible values for both parameter can be found by:
End of explanation
"""
converter.convert(names = iso3_codes, src = 'ISO3', to = 'ISO2')
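# The same conversion is also available as a module level function, without creating a
# CountryConverter instance first (a quick sketch):
coco.convert(names=iso3_codes, src='ISO3', to='ISO2')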
"""
Explanation: Internally, these names are the column headers of the underlying pandas dataframe (see below).
The convert function can also be accessed without instantiating the CountryConverter. This can be useful for one-time usage. For multiple conversions, instantiating the CountryConverter avoids re-reading the file that provides the matching data for every conversion.
End of explanation
"""
converter.EU27
converter.OECDas('ISO2')
"""
Explanation: Some of the classifications can be accessed by some shortcuts. For example:
End of explanation
"""
iso3_codes_missing = ['ABC', 'AUT', 'XXX']
converter.convert(iso3_codes_missing, src='ISO3')
"""
Explanation: Handling missing data
The return value for non-found entries is by default set to 'not found':
End of explanation
"""
converter.convert(iso3_codes_missing, src='ISO3', not_found='missing')
"""
Explanation: but can also be reset to something else:
End of explanation
"""
converter.convert(iso3_codes_missing, src='ISO3', not_found=None)
"""
Explanation: Alternatively, the non-found entries can be passed through by setting not_found to None:
End of explanation
"""
import pandas as pd
add_data = pd.DataFrame.from_dict({
'name_short' : ['xxx country', 'abc country'],
'name_official' : ['The XXX country', 'The ABC country'],
'regex' : ['xxx country', 'abc country'],
'ISO3': ['xxx', 'abc']}
)
add_data
extended_converter = coco.CountryConverter(additional_data=add_data)
extended_converter.convert(iso3_codes_missing, src='ISO3', to='name_short')
"""
Explanation: To extend the underlying dataset, an additional dataframe (or file) can be passed.
End of explanation
"""
# extended_converter = coco.CountryConverter(additional_data=path/to/datafile)
"""
Explanation: Alternatively to an ad hoc dataframe, additional data files can be passed. These must have the same format as the basic data set.
An example can be found here:
https://github.com/konstantinstadler/country_converter/tree/master/tests/custom_data_example.txt
The custom data example contains the ISO3 code mapping for Romania before 2002 and switches the regex matching for Congo between DR Congo and Congo Republic.
To use it, pass the path to the additional country file:
End of explanation
"""
switched_converter = coco.CountryConverter(additional_data=pd.DataFrame.from_dict({
'name_short' : ['India', 'Indonesia'],
'name_official' : ['India', 'Indonesia'],
'regex' : ['india', 'indonesia'],
'ISO2': ['ID', 'IN']}))
converter.convert('IN', src='ISO2', to='name_short')
switched_converter.convert('ID', src='ISO2', to='name_short')
"""
Explanation: The passed data (file or dataframe) must at least contain the headers 'name_official', 'name_short' and 'regex'. Of course, if the additional data shall be used for a conversion to any other field, that field must also be included.
Additionally passed data always overwrites the existing entries.
This can be used to adjust coco for datasets with wrong country names.
For example, assuming a dataset erroneously switched the ISO2 codes for India (IN) and Indonesia (ID) (therefore using 'ID' for India and 'IN' for Indonesia), one can accommodate that by:
End of explanation
"""
some_names = ['United Rep. of Tanzania', 'Cape Verde', 'Burma', 'Iran (Islamic Republic of)', 'Korea, Republic of', "Dem. People's Rep. of Korea"]
coco.convert(names = some_names, src = "regex", to = "name_short")
"""
Explanation: Regular expression matching
The input parameter "src" can be set to "regex" to use regular expression matching for a given country list. For example:
End of explanation
"""
match_these = ['norway', 'united_states', 'china', 'taiwan']
master_list = ['USA', 'The Swedish Kingdom', 'Norway is a Kingdom too', 'Peoples Republic of China', 'Republic of China' ]
coco.match(match_these, master_list)
"""
Explanation: The regular expressions can also be used to match any list of countries to any other. For example:
End of explanation
"""
match_these = ['norway', 'united_states', 'china', 'taiwan']
master_list = ['USA', 'The Swedish Kingdom', 'Norway is a Kingdom too', 'Peoples Republic of China', 'Taiwan, province of china', 'Republic of China' ]
coco.match(match_these, master_list)
"""
Explanation: If the regular expression matches several times, all results are given as a list and a warning is generated:
End of explanation
"""
coco.match(match_these, master_list, enforce_sublist = True)
"""
Explanation: The parameter "enforce_sublist" can be set to ensure consistent output:
End of explanation
"""
match_these = ['norway', 'united_states', 'china', 'taiwan', 'some other country']
master_list = ['USA', 'The Swedish Kingdom', 'Norway is a Kingdom too', 'Peoples Republic of China', 'Republic of China' ]
coco.match(match_these, master_list)
"""
Explanation: You get a warning if one of the names couldn't be found:
End of explanation
"""
coco.match(match_these, master_list, not_found = 'its not there')
"""
Explanation: And the value for non-found countries can be specified:
End of explanation
"""
coco.match(match_these, master_list, not_found = None)
"""
Explanation: This can also be used to pass the not found country to the new classification:
End of explanation
"""
converter.data.head()
"""
Explanation: Internals
Within the new instance, the raw data for the conversion is saved within a pandas dataframe.
This dataframe can be accessed directly with:
End of explanation
"""
some_countries = ['Australia', 'Belgium', 'Brazil', 'Bulgaria', 'Cyprus', 'Czech Republic', 'Denmark', 'Estonia', 'Finland', 'France', 'Germany', 'Greece', 'Hungary', 'India', 'Indonesia', 'Ireland', 'Italy', 'Japan', 'Latvia', 'Lithuania', 'Luxembourg', 'Malta', 'Romania', 'Russia', 'Turkey', 'United Kingdom', 'United States']
converter.data[(converter.data.OECD >= 1995) & converter.data.name_short.isin(some_countries)].name_short
"""
Explanation: This dataframe can be extended in both directions. The only requirement is to provide unique values for name_short, name_official and regex.
Internally, the data is saved in country_data.txt as tab-separated values (utf-8 encoded).
Of course, all pandas indexing and matching methods can be used. For example, to get new OECD members since 1995 present in a list:
End of explanation
"""
|
a301-teaching/a301_code
|
notebooks/ground_track.ipynb
|
mit
|
from a301utils.a301_readfile import download
from a301lib.cloudsat import get_geo
import glob
import os
from pathlib import Path
import sys
import json
import numpy as np
import h5py
from matplotlib import pyplot as plt
from mpl_toolkits.basemap import Basemap
rad_file='MYD021KM.A2006303.2220.006.2012078143305.h5'
geom_file='MYD03.A2006303.2220.006.2012078135515.h5'
lidar_file='2006303212128_02702_CS_2B-GEOPROF-LIDAR_GRANULE_P2_R04_E02.h5'
download(rad_file)
download(geom_file)
download(lidar_file)
"""
Explanation: Plotting the cloudsat groundtrack on a modis raster
This notebook is my solution to Assignment 16, satellite groundtrack assigned on Day 26
Environment requires:
h5py, matplotlib, pyresample, requests, basemap
End of explanation
"""
lats,lons,date_times,prof_times,dem_elevation=get_geo(lidar_file)
"""
Explanation: 1. Read in the groundtrack data
End of explanation
"""
from a301utils.modismeta_read import parseMeta
metadict=parseMeta(rad_file)
corner_keys = ['min_lon','max_lon','min_lat','max_lat']
min_lon,max_lon,min_lat,max_lat=[metadict[key] for key in corner_keys]
"""
Explanation: 2. Use the modis corner lats and lons to clip the cloudsat lats and lons to the same region
End of explanation
"""
lon_hit=np.logical_and(lons>min_lon,lons<max_lon)
lat_hit = np.logical_and(lats>min_lat,lats< max_lat)
in_box=np.logical_and(lon_hit,lat_hit)
print("ground track has {} points, we've selected {}".format(len(lon_hit),np.sum(in_box)) )
box_lons,box_lats=lons[in_box],lats[in_box]
"""
Explanation: Find all the cloudsat points that are between the min/max by constructing a logical
True/False vector. As with matlab, this vector can be used as an index to pick
out those points at the indices where it evaluates to True. Also as in matlab,
if a logical vector is passed to a numpy function like sum, the True values are
cast to 1 and the False values are cast to 0, so summing a logical vector tells
you the number of True values.
End of explanation
"""
from a301lib.modis_reproject import make_projectname
reproject_name=make_projectname(rad_file)
reproject_path = Path(reproject_name)
if reproject_path.exists():
print('using reprojected h5 file {}'.format(reproject_name))
else: #need to create reproject.h5 for channel 1
channels='-c 1 4 3 31'
template='python -m a301utils.modis_to_h5 {} {} {}'
command=template.format(rad_file,geom_file,channels)
if 'win' in sys.platform[:3]:
print('platform is {}, need to run modis_to_h5.py in new environment'
.format(sys.platform))
print('open an msys terminal and run \n{}\n'.format(command))
    else: #osx, so pyresample is available
print('running \n{}\n'.format(command))
out=os.system(command)
the_size=reproject_path.stat().st_size
print('generated reproject file for 4 channels, size is {} bytes'.format(the_size))
"""
Explanation: 3. Reproject MYD021KM channel 1 to a lambert azimuthal projection
If we are on OSX we can run the a301utils.modis_to_h5 script to turn
the h5 level 1b files into a pyresample projected file for channel 1
by running python using the os.system command.
If we are on Windows, a301utils.modis_to_h5 needs to be run in
the pyre environment in a separate shell.
End of explanation
"""
with h5py.File(reproject_name,'r') as h5_file:
basemap_args=json.loads(h5_file.attrs['basemap_args'])
chan1=h5_file['channels']['1'][...]
geo_string = h5_file.attrs['geotiff_args']
geotiff_args = json.loads(geo_string)
print('basemap_args: \n{}\n'.format(basemap_args))
print('geotiff_args: \n{}\n'.format(geotiff_args))
%matplotlib inline
from matplotlib import cm
from matplotlib.colors import Normalize
cmap=cm.autumn #see http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
cmap.set_over('w')
cmap.set_under('b',alpha=0.2)
cmap.set_bad('0.75') #75% grey
plt.close('all')
fig,ax = plt.subplots(1,1,figsize=(14,14))
#
# set up the Basemap object
#
basemap_args['ax']=ax
basemap_args['resolution']='c'
bmap = Basemap(**basemap_args)
#
# transform the ground track lons/lats to x/y
#
cloudsatx,cloudsaty=bmap(box_lons,box_lats)
#
# plot as blue circles
#
bmap.plot(cloudsatx,cloudsaty,'bo')
#
# now plot channel 1
#
num_meridians=180
num_parallels = 90
col = bmap.imshow(chan1, origin='upper',cmap=cmap, vmin=0, vmax=0.4)
lon_sep, lat_sep = 5,5
parallels = np.arange(-90, 90, lat_sep)
meridians = np.arange(0, 360, lon_sep)
bmap.drawparallels(parallels, labels=[1, 0, 0, 0],
fontsize=10, latmax=90)
bmap.drawmeridians(meridians, labels=[0, 0, 0, 1],
fontsize=10, latmax=90)
bmap.drawcoastlines()
colorbar=fig.colorbar(col, shrink=0.5, pad=0.05,extend='both')
colorbar.set_label('channel1 reflectivity',rotation=-90,verticalalignment='bottom')
_=ax.set(title='vancouver')
"""
Explanation: Read in chan1 and the basemap argument string, then turn the string into a dictionary of basemap arguments using json.loads
End of explanation
"""
groundtrack_name = reproject_name.replace('reproject','groundtrack')
print('writing groundtrack to {}'.format(groundtrack_name))
box_times=date_times[in_box]
#
# h5 files can't store dates, but they can store floating point
# seconds since 1970, which is called POSIX timestamp
#
timestamps = [item.timestamp() for item in box_times]
timestamps= np.array(timestamps)
with h5py.File(groundtrack_name,'w') as groundfile:
groundfile.attrs['cloudsat_filename']=lidar_file
groundfile.attrs['modis_filename']=rad_file
groundfile.attrs['reproject_filename']=reproject_name
dset=groundfile.create_dataset('cloudsat_lons',box_lons.shape,box_lons.dtype)
dset[...] = box_lons[...]
dset.attrs['long_name']='cloudsat longitude'
dset.attrs['units']='degrees East'
dset=groundfile.create_dataset('cloudsat_lats',box_lats.shape,box_lats.dtype)
dset[...] = box_lats[...]
dset.attrs['long_name']='cloudsat latitude'
dset.attrs['units']='degrees North'
dset= groundfile.create_dataset('cloudsat_times',timestamps.shape,timestamps.dtype)
dset[...] = timestamps[...]
dset.attrs['long_name']='cloudsat UTC datetime timestamp'
dset.attrs['units']='seconds since Jan. 1, 1970'
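#
# quick check (not part of the original assignment): the stored POSIX timestamps
# can be converted back into python datetimes with datetime.datetime.fromtimestamp
#
import datetime
round_trip = [datetime.datetime.fromtimestamp(the_time) for the_time in timestamps]
print('groundtrack time range: {} -- {}'.format(round_trip[0], round_trip[-1]))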
"""
Explanation: write the groundtrack out for future use
End of explanation
"""
|
laisee/bitfinex
|
examples/Backtest.ipynb
|
mit
|
import sys
sys.path.append('..')
from bitfinex.backtest import data
%pylab inline
"""
Explanation: Backtesting example
This notebook assumes you have the bitfinex library installed
End of explanation
"""
with open('quandl.key', 'r') as f:
key = f.read().strip()
data.Quandl.search('bitfinex')
"""
Explanation: Fetching data from Quandl
For better access to Quandl data it is nice to have an API key. Let's see what kind of data Quandl keeps for bitfinex.
End of explanation
"""
history = data.CSVDataSource('bitfinexUSD.csv.gz',fields=['unix_time', 'price', 'amount'])
history.parse_timestamp_column('unix_time',unit='s')
#history = data.pd.read_csv('bitfinexUSD.csv.gz',names=['unix_time', 'price', 'amount'])
#history['time'] = data.pd.to_datetime(history['unix_time'],unit='s')
#history = history.set_index('time')
"""
Explanation: Apparently Quandl is offering us only daily prices. For backtesting it is nice to have higher frequency data. For that, another possible source is bitcoincharts, which provides some nice CSVs (their API note: "Trade data is delayed by approx. 15 minutes. It will return the 2000 most recent trades."). The CSV to be loaded below is about 50MB in size: http://api.bitcoincharts.com/v1/csv/bitfinexUSD.csv.gz
End of explanation
"""
history.data.info()
history.data[-10:]
history.data[-500:].plot(y='price',figsize=(10,6), style='-o', grid=True);
history.data[-50:].plot(y='amount', kind='bar',figsize=(10,6), grid=True);
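# For backtesting it is often handy to resample the raw trades into bars; a sketch,
# assuming the parsed time column is used as the DataFrame index (as in the
# commented-out pandas equivalent above):
# ohlc = history.data['price'].resample('1H').ohlc()
# ohlc.tail()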
"""
Explanation: Let's check how many points we have.
End of explanation
"""
|
pydata/xarray
|
doc/examples/monthly-means.ipynb
|
apache-2.0
|
%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
"""
Explanation: Calculating Seasonal Averages from Time Series of Monthly Means
Author: Joe Hamman
The data used for this example can be found in the xarray-data repository. You may need to change the path to rasm.nc below.
Suppose we have a netCDF or xarray.Dataset of monthly mean data and we want to calculate the seasonal average. To do this properly, we need to calculate the weighted average considering that each month has a different number of days.
End of explanation
"""
ds = xr.tutorial.open_dataset("rasm").load()
ds
"""
Explanation: Open the Dataset
End of explanation
"""
month_length = ds.time.dt.days_in_month
month_length
# Calculate the weights by grouping by 'time.season'.
weights = (
month_length.groupby("time.season") / month_length.groupby("time.season").sum()
)
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby("time.season").sum().values, np.ones(4))
# Calculate the weighted average
ds_weighted = (ds * weights).groupby("time.season").sum(dim="time")
ds_weighted
# only used for comparisons
ds_unweighted = ds.groupby("time.season").mean("time")
ds_diff = ds_weighted - ds_unweighted
# Quick plot to show the results
notnull = pd.notnull(ds_unweighted["Tair"][0])
fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(14, 12))
for i, season in enumerate(("DJF", "MAM", "JJA", "SON")):
ds_weighted["Tair"].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 0],
vmin=-30,
vmax=30,
cmap="Spectral_r",
add_colorbar=True,
extend="both",
)
ds_unweighted["Tair"].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 1],
vmin=-30,
vmax=30,
cmap="Spectral_r",
add_colorbar=True,
extend="both",
)
ds_diff["Tair"].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 2],
vmin=-0.1,
vmax=0.1,
cmap="RdBu_r",
add_colorbar=True,
extend="both",
)
axes[i, 0].set_ylabel(season)
axes[i, 1].set_ylabel("")
axes[i, 2].set_ylabel("")
for ax in axes.flat:
ax.axes.get_xaxis().set_ticklabels([])
ax.axes.get_yaxis().set_ticklabels([])
ax.axes.axis("tight")
ax.set_xlabel("")
axes[0, 0].set_title("Weighted by DPM")
axes[0, 1].set_title("Equal Weighting")
axes[0, 2].set_title("Difference")
plt.tight_layout()
fig.suptitle("Seasonal Surface Air Temperature", fontsize=16, y=1.02)
# Wrap it into a simple function
def season_mean(ds, calendar="standard"):
# Make a DataArray with the number of days in each month, size = len(time)
month_length = ds.time.dt.days_in_month
# Calculate the weights by grouping by 'time.season'
weights = (
month_length.groupby("time.season") / month_length.groupby("time.season").sum()
)
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby("time.season").sum().values, np.ones(4))
# Calculate the weighted average
return (ds * weights).groupby("time.season").sum(dim="time")
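# A quick usage check of the helper on the tutorial dataset (it reproduces the
# weighted result computed above):
season_mean(ds)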
"""
Explanation: Now for the heavy lifting:
We first have to come up with the weights,
- calculate the month length for each monthly data record
- calculate weights using groupby('time.season')
Finally, we just need to multiply our weights by the Dataset and sum along the time dimension. Creating a DataArray for the month length is as easy as using the days_in_month accessor on the time coordinate. The calendar type, in this case 'noleap', is automatically considered in this operation.
End of explanation
"""
|
edwardd1/phys202-2015-work
|
midterm/AlgorithmsEx03.ipynb
|
mit
|
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
"""
Explanation: Algorithms Exercise 3
Imports
End of explanation
"""
def char_probs(s):
"""Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
"""
dictionary = {}
for n in s:
dictionary[n]= (s.count(n))/len(s)
return dictionary
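# An equivalent, slightly more idiomatic formulation with collections.Counter (a sketch):
from collections import Counter
def char_probs_counter(s):
    counts = Counter(s)
    return {c: n/len(s) for c, n in counts.items()}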
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
"""
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character count by the total number of characters to compute the normalized probabilities.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
"""
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    H = 0
    # accumulate P_i * log2(P_i) over the (character, probability) pairs,
    # sorted by probability in descending order
    pairs = sorted(d.items(), key=lambda x: x[1], reverse=True)
    for name, p in pairs:
        H = H + p*np.log2(p)
    return -H
entropy({'a': 0.5, 'b': 0.5})
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
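# A loop-free alternative using a numpy array of the probabilities, as the exercise
# text suggests (a sketch):
def entropy_vectorized(d):
    """Compute the entropy of a dict of probabilities without explicit loops."""
    probs = np.array(list(d.values()))
    return -np.sum(probs*np.log2(probs))
assert np.allclose(entropy_vectorized({'a': 0.5, 'b': 0.5}), 1.0)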
"""
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \Sigma_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
"""
def z(x):
print(entropy(char_probs(x)))
return entropy(char_probs(x))
interact(z, x='string');
assert True # use this for grading the pi digits histogram
"""
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation
"""
|
Piezoid/pyGATB
|
samples/notebook.ipynb
|
agpl-3.0
|
from gatb import Graph
graph = Graph('-in ../../DiscoSnp/test/large_test/discoRes_k_31_c_auto.h5') # chr1 with simulated variants
graph
help(graph)
"""
Explanation: pyGATB: presentation and usage
bash
git clone --recursive https://github.com/GATB/pyGATB
cd pyGATB
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j8
python3 setup.py install --user
End of explanation
"""
for i, node in enumerate(graph):
print('{}: {!r}'.format(i, node))
if i > 10: break
"""
Explanation: Iterate over branching nodes:
End of explanation
"""
kmer = b'AACGAGCACCAAAGACTTAGCATGAAAACCC'
node = graph[kmer] # Either a real graph node or one of their neighbors
node
help(node)
bytes(node) # Conversion to bytestring encoded kmer
assert node.reversed == node
node.reversed
"""
Explanation: Graph is a factory for Nodes:
End of explanation
"""
print(node.succs, node.out_degree)
print(node.preds, node.in_degree)
"""
Explanation: Query neighbors and degrees:
End of explanation
"""
node_kmer = bytes(node)
for ext in b'ATGC':# NB: iterating over bytes produces character codes
ext = bytes((ext,)) # So this line reconstruct a single character bytes object
ext_kmer = node_kmer[1:] + ext
ext_node = graph[ext_kmer] # Construct the Node from the bytes encoded kmer
if ext_node in graph: # Checks if the node belong to the graph
print(ext_node)
"""
Explanation: Query neighbors by manually doing the extension:
End of explanation
"""
node.paths
"""
Explanation: Simple paths from neighbors are obtained as a list of (path, end node, end reason) tuples:
End of explanation
"""
def check_paths(origin_node):
origin_node_kmer = bytes(origin_node)
for path, end_node, end_reason in origin_node.paths:
assert (origin_node_kmer + path).endswith(bytes(end_node))
if end_reason == 2: # In-branching end reason prioritized over out-branching
assert end_node.in_degree > 1
elif end_reason == 1:
assert end_node.out_degree > 1 # Out-branching
elif end_reason == 3: # Dead-end
assert end_node.out_degree == 0
for origin_node in graph:
check_paths(origin_node)
origin_node.reverse() # in-place reverse
check_paths(origin_node)
"""
Explanation: Both paths end at the same k-mer: this is a bubble induced by A/T on the first nucleotide in the path.
This code tests the forward path results for all branching nodes and their RCs:
End of explanation
"""
from plot_graph import bfs_igraph
bfs_igraph(node, max_depth=20)
"""
Explanation: Example: forward simple paths graph
A graph (branching node, simple path) is constructed by a bounded breadth first search over forward simple paths.
The start node is the running example (AACGAGCACCAAAGACTTAGCATGAAAACCC) of a node preceding a bubble.
The edges (paths) visited by the BFS are then displayed with igraph:
* Red node: root node (start point of the BFS)
* Yellow node: node with in-branching (predecessors are not fetched by the forward search),
* Blue node: node with out-branching,
* Green node: in- and out-branching,
* Red circle: nodes with unexplored descendants.
The numbers on the edges are the simple path lengths.
End of explanation
"""
|
inncretech/datascience
|
projects/data_clean/notebook/blog_data-cleaning.ipynb
|
mit
|
import pandas as pd
"""
Explanation: <html>
<body>
<img src="logo.png">
<B><p style="text-align:center; color: blue; font-size: 30px "> Inncretech Project
<br><br>
<I style= "text-align:center;color:black; font-size: 20px"><B style = "color:red">Inn</B>ovation <B style = "color:red">Cre</B>ativity <B style = "color:red">Tech</B>nology: Engineering Imaginations</I>
</B></p>
<p>-----------------------------------------------------------------------------------------------------------------------------------------------</p>
</body>
</html>
<!--<b><font size="5" color="brown">Data Science </font> </b> <p>
Data Science has been a growing field of study, research and disciple that supports extensive thinking on various scientific methods, processes and systems to extract knowledge, predict insights and contribute results for making various kind of business, scientific, computational or linguistic results. </p>
<b><font size="5" color="brown"> Data Cleaning </font> </b> <p>
Data Cleaning is an integral part of data science where data scientists have to spend almost 80% of their time. Data cleaning helps remove abnormality in data and makes it consistent for data sciencetists to understand, run algorithms and make decisions. Data Cleaning passes through several process of structuring, restructuring, modifications and cleaning.
</p>
In this blog, we will discuss basic data cleaning process using <b>[Pandas](https://pandas.pydata.org/pandas-docs/stable/) </b>library and learn about basic things a data science engineers do. -->
Step 1: Import the required libraries
In this blog, we mainly use pandas for the basic work on the data.
End of explanation
"""
import numpy as np
"""
Explanation: We also use a few basic numpy functions for data processing.
End of explanation
"""
df = pd.DataFrame()
"""
Explanation: Step 2: Create a data frame
End of explanation
"""
df = pd.read_csv('file.csv') # you may import other file format instead of csv
# df = pd.read_csv('/path/file.csv') # you may mention file path before file name based on its location
# df = pd.read_csv('/path/file.csv', encoding = 'utf-8') # you can replace encoding type as per the need
"""
Explanation: Step 3: Import the data file
Pandas supports several data file types such as csv, json, xls, html, fwf, hdf and pickle, and is often used to convert from one format to another. This flexibility also helps when working with other Python libraries, by converting file types they do not support into ones they do. For example, if you are using the Graphlab library and want to work with an xls file that it may not support, you can convert the xls file to a supported file type such as csv to get started.
End of explanation
"""
df.head() # shows the first five rows
# df.head(1) # Equivalent to df.head(n=1)
# df.tail() # shows the last five rows
#df.head(n=0) # gives you just columns but not any rows, n=0 means the number of rows to be displayed is zero
"""
Explanation: Please note that you may have to specify the file path and encoding type as needed.
Step 4: Have a first look at the data
Depending on the type of data, we look at the head and tail of the dataframe to get a basic sense of what kind of data is present in the file. At this point, you may also use other Python libraries to produce some basic visualisations.
End of explanation
"""
df.column_name.describe()
#df.Business_Name.describe()
# df.Address_1.describe() # gives detailed information about the column
"""
Explanation: Step 5: Describe individual columns to see more details
Each column can contain dirty or duplicated data. By dirty data, I mean missing, unstructured or incomplete data. Any individual column can hold many inconsistent values that create problems during analysis, and in order to get rid of them, we first need to figure out what kind of inconsistency the data has.
End of explanation
"""
df.isnull().values.any()
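# A per-column count of missing values can also help locate where the gaps are (a sketch;
# the column names in this blog are placeholders):
df.isnull().sum()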
"""
Explanation: <p>
The describe function summarises the column; a typical output looks like:
<br>
count: 100000
<br>
unique: 84505
<br>
top: ---- / ? ...
<br>
freq: 1607
<br>
Name: Column_name, dtype: object
</p>
Step 6: Finding missing values
Most datasets have some missing values, which appear as NaN, null, not available or simply missing. NaN appears in numeric arrays, None/NaN in object arrays. We first check whether we have missing values and then decide on a replacement strategy before going further.
End of explanation
"""
df.column_name = df.column_name.fillna('')
# df.column_name = df.column_name.fillna('Missing') # This will put 'Missing' at spot of all missing data.
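# Different default values per column can be given in a single fillna call (a sketch
# with placeholder column names):
# df = df.fillna({'numeric_column': 0, 'text_column': 'missing'})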
"""
Explanation: This will give you boolean result, True or False and based on the result, you may decide further operations on data.
Step 7: Swapping Missing values with default values
Missing values can create several problems in later phase of data engineering if not handled effeciently. So, we fill all the missing values with default values. Dropping rows with missing values at this phase can be dangerous and can be done after this step.
End of explanation
"""
df['column_name'] = df['column_name'].astype(str)
#df['column_1'] = df['column_1'].astype(str)
#df['column_2'] = df['column_2'].astype(int)
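# If a numeric column contains stray non-numeric entries, astype(int) will raise an error;
# pd.to_numeric with errors='coerce' turns such entries into NaN instead (a sketch,
# 'column_2' is a placeholder name):
# df['column_2'] = pd.to_numeric(df['column_2'], errors='coerce')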
"""
Explanation: Now, repeat step 6 and check whether you still have missing values; hopefully you get False.
Step 8: Check the data type of each column and change it if necessary
In step 5, we can observe the dtype at the end of the description returned by the describe() function. If the data type of a column matches the needed data type we keep it, otherwise we change it. In practice, int/float values are sometimes stored as str/object columns, in which case we need to convert them to the required type.
End of explanation
"""
df.dtypes
"""
Explanation: Another way to get the data types of all columns at once is as follows:
End of explanation
"""
del df['column_name']
"""
Explanation: This will give you a list of all columns and their corresponding data types. Please note that in Python 3 the str data type is reported as object. Repeat Step 5 to see whether the values changed, and keep comparing the dataframe's previous and new states.
Step 9: Delete Blank Columns
For large datasets, columns whose entries are all blank or null can be deleted. First check whether a particular column is useful at all, then decide on this step.
End of explanation
"""
df = df.dropna(axis=1, how='all')
#df = df.dropna(axis=1, how='any')
"""
Explanation: Another way of doing this is with the dropna() function, where we can also define a threshold (discussed in the next step). Here we drop the columns whose values are all null.
End of explanation
"""
df.dropna() # Drops all rows with null values
#df.dropna(thresh=10) # Drops all rows with less than 10 values
"""
Explanation: Now, we get rid of all such columns having almost no data and proceed further.
Step 10: Delete Blank Rows
Depending on the required threshold, we may delete rows with fewer values than expected. For example, thresh=5 means a row should have at least 5 non-null values to be kept. By default, the dropna() function deletes all rows having at least one null value.
End of explanation
"""
df['String_column_name'] = df['String_column_name'].str.strip()
"""
Explanation: Step 11: Working on Strings
Columns with string data types can undergo several kinds of processing. The possible methods go beyond the scope of this blog, but we will briefly explain how things work with strings.
a) Removing white spaces
Whitespace can cause major disturbances when processing, cleaning and inspecting data. In string columns we need to strip the whitespace on the edges, i.e. leading and trailing whitespace.
End of explanation
"""
df['String_column_name'] = df['String_column_name'].str.lower() # converts all data in the column in lower case
"""
Explanation: b) Making string cases consistent
Keeping data in one case, either upper or lower, has many benefits. We usually keep all column values in one case to perform efficient string matching.
End of explanation
"""
df['String_column_name'] = df['String_column_name'].str.replace(' ', '_')
"""
Explanation: c) Introducing binding in strings
We replace the white spaces between words with an underscore _ so that all parts of the string are bound together with no empty space. We may also use some other separator to bind the data together.
End of explanation
"""
df = df.drop_duplicates(subset=['Dup_column1', 'Dup_column2'], keep=False) # drop all duplicates
#df = df.drop_duplicates(subset=['Dup_column1', 'Dup_column2'], keep='first') # keeps the first occurrence of duplicates
#df = df.drop_duplicates(subset=['Dup_column1', 'Dup_column2'], keep='last') # keeps the last occurrence of duplicates
"""
Explanation: d) String Matching
String matching can be performed to deduplicate data. One value in a string column can appear in many different ways; for example, a US state location can be represented in a dataset as both <b>New York</b> and <b>New York, NY</b>. Several techniques can address this, such as fuzzy matching or similarity scores between column elements, followed by the corresponding data processing. A full treatment goes beyond the scope of this blog, but a minimal sketch using Python's standard library follows below.
e) Drop Duplicates
In most datasets there is a chance of duplication, so after all the required processing we can drop all duplicates and keep only unique, non-repeating rows.
End of explanation
"""
df['String_column_name'] = df['String_column_name'].str.replace('1','')
df['String_column_name'] = df['String_column_name'].str.replace('2','')
df['String_column_name'] = df['String_column_name'].str.replace('3','')
df['String_column_name'] = df['String_column_name'].str.replace('4','')
df['String_column_name'] = df['String_column_name'].str.replace('0','')
#...
df['String_column_name'] = df['String_column_name'].str.replace('-','')
df['String_column_name'] = df['String_column_name'].str.replace('"','')
df['String_column_name'] = df['String_column_name'].str.replace(' ','')
df['String_column_name'] = df['String_column_name'].str.replace('(','')
df['String_column_name'] = df['String_column_name'].str.replace('.','')
df['String_column_name'] = df['String_column_name'].str.replace('/','')
#...
#...
#...
"""
Explanation: f) Remove Special Characters
In a string column we may have garbage values, text in other scripts, special characters and other abnormal text that make the data hard to inspect. Stray digits inside string columns can also trouble data scientists during observation. So it is always good practice to check whether string columns contain numeric values or special characters; we can replace all such values with defaults and, if needed, go back to the previous steps and rerun the cleaning process.
End of explanation
"""
df.to_csv('cleaned_data.csv', encoding = 'utf-8') # you may change file type depending on the requirements.
"""
Explanation: The above process can be done initially or in later stages of data cleaning, as needed. String processing can be tedious as well as conceptual when it comes to larger datasets, and many algorithms can play a role in cleaning strings within a dataset.
Step 12: Working on integers
Missing integer values can often be replaced with the help of predictions. For example, if we have a carbon dataset with a 40-year trend of emissions per year, we can replace a missing value with one that is consistent with that trend, computed by some algorithm. The same idea applies to other numeric values; a minimal sketch follows below.
Step 13: Output and save the cleaned dataframe to the required file format.
This step can be done at several stages of the project depending on the size of the data. The steps above can take a long time, so it is always a good idea to keep saving the data to a file in case processing takes longer than usual and terminates in the middle (unlikely, but it happens).
End of explanation
"""
|
scidash/sciunit
|
docs/chapter4.ipynb
|
mit
|
!pip install -q sciunit
"""
Explanation: <a href="https://colab.research.google.com/github/scidash/sciunit/blob/master/docs/chapter4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Chapter 4. Example of RunnableModel and Backend
(or back to Chapter 3)
If you are using this file in Google Colab, this block of code can help you install sciunit from PyPI in Colab environment.
End of explanation
"""
import sciunit, random
from sciunit import Test
from sciunit.capabilities import Runnable
from sciunit.scores import BooleanScore
from sciunit.models import RunnableModel
from sciunit.models.backends import register_backends, Backend
"""
Explanation: Besides the usual models in the previous sections, let's create a model that runs a Backend instance to simulate and obtain results.
First, import the necessary components from the SciUnit package.
End of explanation
"""
class RandomNumBackend(Backend):
'''generate a random integer between min and max'''
def set_run_params(self, **run_params):
# get min from run_params, if not exist, then 0.
self.min = run_params.get('min', 0)
# get max from run_params, if not exist, then self.min + 100.
self.max = run_params.get('max', self.min + 100)
def _backend_run(self):
# generate and return random integer between min and max.
return random.randint(self.min, self.max)
class RandomNumModel(RunnableModel):
"""A model that always produces a constant number as output."""
def run(self):
self.results = self._backend.backend_run()
class RangeTest(Test):
"""Tests if the model predicts the same number as the observation."""
# Default Runnable Capability for RunnableModel
required_capabilities = (Runnable,)
# This test's 'judge' method will return a BooleanScore.
score_type = BooleanScore
def generate_prediction(self, model):
model.run()
return model.results
def compute_score(self, observation, prediction):
score = BooleanScore(
observation['min'] <= prediction and observation['max'] >= prediction
)
return score
"""
Explanation: Let's define subclasses of the SciUnit Backend, Test, and Model classes.
Note that:
1. A SciUnit Backend subclass should implement the _backend_run method.
2. A SciUnit RunnableModel subclass should implement the run method.
End of explanation
"""
model = RandomNumModel("model 1")
"""
Explanation: Let’s define the model instance named model 1.
End of explanation
"""
register_backends({"Random Number": RandomNumBackend})
model.set_backend("Random Number")
model.set_run_params(min=1, max=10)
"""
Explanation: We must register a backend class before it can be used by model instances.
The set_backend and set_run_params methods let us select the backend and set the run parameters on the model and its backend.
End of explanation
"""
observation = {'min': 1, 'max': 10}
oneToTenTest = RangeTest(observation, "test 1")
score = oneToTenTest.judge(model)
"""
Explanation: Next, create an observation that requires the generated random integer to fall between 1 and 10, and a test instance that uses this observation to judge the model.
Judging the model against the observation returns a score:
End of explanation
"""
print(score)
"""
Explanation: Print the score to see the result.
End of explanation
"""
|
Chiroptera/QCThesis
|
notebooks/cuda sorting test.ipynb
|
mit
|
4e7*4/1024/1024
a=np.random.randint(0,1e4,1e6)
dA = cuda.to_device(a)
del a
@cuda.reduce
def argmax_gpu(a,b):
if a >= b:
return a
else:
return b
%time a.max()
%time argmax_gpu(dA)
sorter = RadixSort(maxcount=dA.size, dtype=dA.dtype)
dRes = sorter.argsort(dA)
res = dRes.copy_to_host()
a
res
"""
Explanation: RadixSort.argsort() sorts the input array and returns an array containing the indices of the sort. Note that the input array itself also ends up sorted.
End of explanation
"""
weight = dA
MAX_TPB = 256
myStream = 0
"""
Explanation: sl mst lifetime gpu
End of explanation
"""
# get MST
mst, n_edges = boruvka_minho_gpu(dest, weight, firstEdge, outDegree, MAX_TPB=MAX_TPB)
"""
Explanation: Compute the MST for the input graph.
End of explanation
"""
h_n_edges = n_edges.getitem(0)
mst_weights = cuda.device_array(h_n_edges, dtype = weight.dtype)
"""
Explanation: Allocate array for the mst weights.
End of explanation
"""
mstGrid = compute_cuda_grid_dim(h_n_edges, MAX_TPB)
getWeightsOfEdges_gpu[mstGrid, MAX_TPB, myStream](mst, n_edges, weight, mst_weights)
"""
Explanation: Get array with only the considered weights in the MST.
End of explanation
"""
RadixSort.sort
sorter = RadixSort(maxcount = mst_weights.size, dtype = mst_weights.dtype)
sortedWeightArgs = sorter.argsort(mst_weights)
"""
Explanation: Sort the MST weights. There are no repeated edges at this point, since the output MST behaves like a directed graph.
End of explanation
"""
lifetimes = cuda.device_array(mst_weights.size - 1, dtype = nweight.dtype)
compute_lifetimes_CUDA[lifetimeGrid, MAX_TPB, myStream](nweight, lifetimes)
"""
Explanation: Allocate array for the lifetimes.
End of explanation
"""
# get contracted graph from MST
ndest, nweight, nfe, nod = getGraphFromEdges_gpu(dest, weight, fe, od, edges = mst, n_edges = n_edges,\
MAX_TPB = MAX_TPB, stream = myStream)
nweight = dA
lifetimeGrid = compute_cuda_grid_dim(lifetimes.size, MAX_TPB)
@cuda.jit
def getWeightsOfEdges_gpu(edges, n_edges, weights, nweights):
"""
This function will take a list of edges (edges), the number of edges to consider (n_edges,
the weights of all the possible edges (weights) and the array for the weights of the list
of edges and put the weight of each edge in the list of edges in the nweights, in the same
position.
"""
n_edges_sm = cuda.shared.array(1, dtype = np.int32)
edge = cuda.grid(1)
if edge == 0:
n_edges_sm[0] = n_edges[0]
if edge >= edges.size:
return
cuda.syncthreads()
nweights[edge] = weights[edges[edge]]
@cuda.jit
def compute_lifetimes_CUDA(nweight, lifetimes):
edge = cuda.grid(1)
if edge >= lifetimes.size:
return
lifetimes[edge] = nweight[edge + 1] - nweight[edge]
res = sortedWeightArgs.copy_to_host()
res
"""
Explanation: Between the above cell and the next, I have to get the argmax of the lifetimes. I also have to sort the MST using the argsort of the weights, update n_edges accordingly, and send those arrays on to build the resulting graph; a minimal host-side sketch of these steps follows below.
End of explanation
"""
|
xdze2/thermique_appart
|
drafts/Model03_old.ipynb
|
mit
|
filename = './results/model02results.csv'
Ttuile = pd.read_csv( filename, index_col=0, parse_dates=True )
Ttuile.plot(figsize=(14, 5) ); plt.ylabel('T_tuile °C');
"""
Explanation: Model 03 -old-
Uses Model02 to predict the indoor temperature of the apartment
<img src="images/sch_model03.jpg" width="500px" alt='schema mod03' />
Get data from Model02
End of explanation
"""
Ttuile.columns
Ttuile['bastille'].mean()
surface_toiture = 37 # m2
ep_isolation = 0.2 # m
k_isolation = 0.035 # conductivité, laine de verre( ? ) 0.03 W/m/K - 0.04 W/m/K
Sh_pan = surface_toiture*k_isolation/ep_isolation
print( Sh_pan )
flux = Sh_pan * 20  # heat flux through one roof pane for a 20 K temperature difference
print( flux )
k_isolation/ep_isolation * 20
"""
Explanation: Physical quantities
End of explanation
"""
# walls def. :
T_nodes = Ttuile.copy()
Sh_nodes = {'bastille': Sh_pan, 'vercors' : Sh_pan}
T_nodes = T_nodes.resample('10min').interpolate(method='quadratic')
T_nodes.plot(figsize=(14, 5) ); plt.ylabel('T_tuile °C');
M = 1750000
righthandside = np.zeros( len( T_nodes ) )
somme_Sh = 0
for name in Sh_nodes.keys():
righthandside += T_nodes[ name ] * Sh_nodes[ name ]
somme_Sh += Sh_nodes[ name ]
# real FFT
righthandside_TF = np.fft.rfft( righthandside )
freq = np.fft.rfftfreq( righthandside.size, d=10*60 )
# model
T_theo_TF = righthandside_TF / ( 2j*np.pi*freq*M + somme_Sh )
# inverse FFT
T_theo = np.fft.irfft( T_theo_TF, n=len( T_nodes ) )
T_nodes['T_theo']= T_theo
T_nodes['T_int']= df['T_int']
T_nodes.plot( figsize=(14, 5) ); plt.ylabel('T_tuile °C'); #plt.ylim([22, 35])
"""
Explanation: Model
End of explanation
"""
import emoncmsfeed as getfeeds
dataframefreq = '10min'
feeds = { 'T_int':3 } # 'T_ext':2,
df = getfeeds.builddataframe( feeds, dataframefreq ) # startdate=pd.to_datetime('22/06/2017')
df.plot();
df['T_theo'] = T_nodes['T_theo']
df.plot( figsize=(14, 5) )
T_nodes['T_int']= df['T_int']
T_nodes.plot( figsize=(14, 5) )
"""
Explanation: Experimental measurements
End of explanation
"""
|
metpy/MetPy
|
dev/_downloads/87fd6ee8be4ea1587fa2ad7f4206407a/Combined_plotting.ipynb
|
bsd-3-clause
|
import xarray as xr
from metpy.cbook import get_test_data
from metpy.plots import ContourPlot, ImagePlot, MapPanel, PanelContainer
from metpy.units import units
# Use sample NARR data for plotting
narr = xr.open_dataset(get_test_data('narr_example.nc', as_file_obj=False))
"""
Explanation: Combined Plotting
Demonstrate the use of MetPy's simplified plotting interface combining multiple plots.
Also shows how to control the maps that are plotted. Plots sample NARR data.
End of explanation
"""
contour = ContourPlot()
contour.data = narr
contour.field = 'Temperature'
contour.level = 850 * units.hPa
contour.linecolor = 'red'
contour.contours = 15
"""
Explanation: Create a contour plot of temperature
End of explanation
"""
img = ImagePlot()
img.data = narr
img.field = 'Geopotential_height'
img.level = 850 * units.hPa
"""
Explanation: Create an image plot of Geopotential height
End of explanation
"""
panel = MapPanel()
panel.area = 'us'
panel.layers = ['coastline', 'borders', 'states', 'rivers', 'ocean', 'land']
panel.title = 'NARR Example'
panel.plots = [contour, img]
pc = PanelContainer()
pc.size = (10, 8)
pc.panels = [panel]
pc.show()
"""
Explanation: Plot the data on a map
End of explanation
"""
|
lyoung13/deep-learning-nanodegree
|
p4-language-translation/dlnd_language_translation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int[ '<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return input, targets, learning_rate, keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
    Preprocess target data for decoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
"""
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)
RNN_output, RNN_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype = tf.float32)
return RNN_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
# Create RNN cell for decoding using rnn_size and num_layers
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)
    # Create the output function using a lambda to transform its input, logits, to class logits
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
    # Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
    # sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
with tf.variable_scope('decoding') as decoding_scope:
training_logits = decoding_layer_train(encoder_state, cell, dec_embed_input, sequence_length,
decoding_scope, output_fn, keep_prob)
# Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
# maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer(encoder_state, cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, sequence_length, vocab_size, decoding_scope,
output_fn, keep_prob)
return training_logits, inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
# Apply embedding to the input data for the encoder.
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Apply embedding to the target data for the decoder.
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size,
# sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
training_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logits, inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = 8
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
word_list = []
lower_words = sentence.lower().split()
for word in lower_words:
if(word in vocab_to_int):
word_list.append(vocab_to_int[word])
else:
word_list.append(vocab_to_int['<UNK>'])
return word_list
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
enbanuel/phys202-2015-work
|
assignments/midterm/InteractEx06.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
"""
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
Image('fermidist.png')
"""
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
"""
def fermidist(energy, mu, kT):
"""Compute the Fermi distribution at energy, mu and kT."""
# YOUR CODE HERE
g = energy - mu
c = g/(kT)
ex = 1/(np.exp(c)+1)
return ex
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
"""
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
YOUR ANSWER HERE: $\Large{
F(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT}+1}
}$
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
"""
def plot_fermidist(mu, kT):
# YOUR CODE HERE
    energy = np.linspace(0.0, 10.0, 100)
    f = plt.figure(figsize=(12,6))
    plt.plot(energy, fermidist(energy, mu, kT), 'g-')
plt.ylabel('Distribution')
plt.xlabel('Quantum state energy')
plt.title('Fermi-Dirac Distribution')
plt.grid(True)
plt.ylim(0, 1.0)
plt.xlim(0, 10.0)
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
"""
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
"""
# YOUR CODE HERE
interact(plot_fermidist, mu=(0.0, 5.0), kT=(0.1, 10.0))
"""
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
For kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
"""
|
JJINDAHOUSE/deep-learning
|
transfer-learning/Transfer_Learning_Solution.ipynb
|
mit
|
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
CompPhysics/MachineLearning
|
doc/pub/week35/ipynb/.ipynb_checkpoints/week35-checkpoint.ipynb
|
cc0-1.0
|
%matplotlib inline
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
# Then nice printout using pandas
DesignMatrix = pd.DataFrame(X)
DesignMatrix.index = A
DesignMatrix.columns = ['1', 'A', 'A^(2/3)', 'A^(-1/3)', '1/A']
display(DesignMatrix)
"""
Explanation: <!-- dom:TITLE: Week 35: Linear Regression and Review of Statistical Analysis and Probability Theory -->
Week 35: Linear Regression and Review of Statistical Analysis and Probability Theory
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: Sep 16, 2020
Copyright 1999-2020, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
Plans for week 35, August 24-28
Thursday: Introduction to ordinary Least Squares and derivation of basic equation
Friday: Linear regression and statistical analysis and probability theory
Thursday August 27
Video of Lecture.
Why Linear Regression (aka Ordinary Least Squares and family)
Fitting a continuous function with linear parameterization in terms of the parameters $\boldsymbol{\beta}$.
* Method of choice for fitting a continuous function!
Gives an excellent introduction to central Machine Learning features with understandable pedagogical links to other methods like Neural Networks, Support Vector Machines etc
Analytical expression for the fitting parameters $\boldsymbol{\beta}$
Analytical expressions for statistical propertiers like mean values, variances, confidence intervals and more
Analytical relation with probabilistic interpretations
Easy to introduce basic concepts like bias-variance tradeoff, cross-validation, resampling and regularization techniques and many other ML topics
Easy to code! And links well with classification problems and logistic regression and neural networks
Allows for easy hands-on understanding of gradient descent methods
and many more features
For more discussions of Ridge and Lasso regression, Wessel van Wieringen's article is highly recommended.
Similarly, Mehta et al's article is also recommended.
Regression analysis, overarching aims
Regression modeling deals with the description of the sampling distribution of a given random variable $y$ and how it varies as function of another variable or a set of such variables $\boldsymbol{x} =[x_0, x_1,\dots, x_{n-1}]^T$.
The first variable is called the dependent, the outcome or the response variable while the set of variables $\boldsymbol{x}$ is called the independent variable, or the predictor variable or the explanatory variable.
A regression model aims at finding a likelihood function $p(\boldsymbol{y}\vert \boldsymbol{x})$, that is the conditional distribution for $\boldsymbol{y}$ with a given $\boldsymbol{x}$. The estimation of $p(\boldsymbol{y}\vert \boldsymbol{x})$ is made using a data set with
* $n$ cases $i = 0, 1, 2, \dots, n-1$
Response (target, dependent or outcome) variable $y_i$ with $i = 0, 1, 2, \dots, n-1$
$p$ so-called explanatory (independent or predictor) variables $\boldsymbol{x}_i=[x_{i0}, x_{i1}, \dots, x_{ip-1}]$ with $i = 0, 1, 2, \dots, n-1$ and explanatory variables running from $0$ to $p-1$. See below for more explicit examples.
The goal of the regression analysis is to extract/exploit the relationship between $\boldsymbol{y}$ and $\boldsymbol{x}$ in order to infer causal dependencies, approximate the likelihood function, establish functional relationships, make predictions and fits, and many other things.
Regression analysis, overarching aims II
Consider an experiment in which $p$ characteristics of $n$ samples are
measured. The data from this experiment, for various explanatory variables $p$ are normally represented by a matrix
$\mathbf{X}$.
The matrix $\mathbf{X}$ is called the design
matrix. Additional information of the samples is available in the
form of $\boldsymbol{y}$ (also as above). The variable $\boldsymbol{y}$ is
generally referred to as the response variable. The aim of
regression analysis is to explain $\boldsymbol{y}$ in terms of
$\boldsymbol{X}$ through a functional relationship like $y_i =
f(\mathbf{X}_{i,\ast})$. When no prior knowledge on the form of
$f(\cdot)$ is available, it is common to assume a linear relationship
between $\boldsymbol{X}$ and $\boldsymbol{y}$. This assumption gives rise to
the linear regression model where $\boldsymbol{\beta} = [\beta_0, \ldots,
\beta_{p-1}]^{T}$ are the regression parameters.
Linear regression gives us a set of analytical equations for the parameters $\beta_j$.
Examples
In order to understand the relation among the predictors $p$, the set of data $n$ and the target (outcome, output etc) $\boldsymbol{y}$,
consider the model we discussed for describing nuclear binding energies.
There we assumed that we could parametrize the data using a polynomial approximation based on the liquid drop model.
Assuming
$$
BE(A) = a_0+a_1A+a_2A^{2/3}+a_3A^{-1/3}+a_4A^{-1},
$$
we have five predictors, that is the intercept, the $A$ dependent term, the $A^{2/3}$ term and the $A^{-1/3}$ and $A^{-1}$ terms.
This gives $p=0,1,2,3,4$. Furthermore we have $n$ entries for each predictor. It means that our design matrix is a
$p\times n$ matrix $\boldsymbol{X}$.
Here the predictors are based on a model we have made. A popular data set which is widely encountered in ML applications is the
so-called credit card default data from Taiwan. The data set contains data on $n=30000$ credit card holders with predictors like gender, marital status, age, profession, education, etc. In total there are $24$ such predictors or attributes leading to a design matrix of dimensionality $24 \times 30000$. This is however a classification problem and we will come back to it when we discuss Logistic Regression.
General linear models
Before we proceed let us study a case from linear algebra where we aim at fitting a set of data $\boldsymbol{y}=[y_0,y_1,\dots,y_{n-1}]$. We could think of these data as a result of an experiment or a complicated numerical experiment. These data are functions of a series of variables $\boldsymbol{x}=[x_0,x_1,\dots,x_{n-1}]$, that is $y_i = y(x_i)$ with $i=0,1,2,\dots,n-1$. The variables $x_i$ could represent physical quantities like time, temperature, position etc. We assume that $y(x)$ is a smooth function.
Since obtaining these data points may not be trivial, we want to use these data to fit a function which can allow us to make predictions for values of $y$ which are not in the present set. Perhaps the simplest approach is to assume we can parametrize our function in terms of a polynomial of degree $n-1$ with $n$ points, that is
$$
y=y(x) \rightarrow y(x_i)=\tilde{y}_i+\epsilon_i=\sum_{j=0}^{n-1} \beta_j x_i^j+\epsilon_i,
$$
where $\epsilon_i$ is the error in our approximation.
Rewriting the fitting procedure as a linear algebra problem
For every set of values $y_i,x_i$ we have thus the corresponding set of equations
$$
\begin{align}
y_0&=\beta_0+\beta_1x_0^1+\beta_2x_0^2+\dots+\beta_{n-1}x_0^{n-1}+\epsilon_0\\
y_1&=\beta_0+\beta_1x_1^1+\beta_2x_1^2+\dots+\beta_{n-1}x_1^{n-1}+\epsilon_1\\
y_2&=\beta_0+\beta_1x_2^1+\beta_2x_2^2+\dots+\beta_{n-1}x_2^{n-1}+\epsilon_2\\
\dots & \dots \\
y_{n-1}&=\beta_0+\beta_1x_{n-1}^1+\beta_2x_{n-1}^2+\dots+\beta_{n-1}x_{n-1}^{n-1}+\epsilon_{n-1}.
\end{align}
$$
Rewriting the fitting procedure as a linear algebra problem, more details
Defining the vectors
$$
\boldsymbol{y} = [y_0,y_1, y_2,\dots, y_{n-1}]^T,
$$
and
$$
\boldsymbol{\beta} = [\beta_0,\beta_1, \beta_2,\dots, \beta_{n-1}]^T,
$$
and
$$
\boldsymbol{\epsilon} = [\epsilon_0,\epsilon_1, \epsilon_2,\dots, \epsilon_{n-1}]^T,
$$
and the design matrix
$$
\boldsymbol{X}=
\begin{bmatrix}
1& x_{0}^1 &x_{0}^2& \dots & \dots &x_{0}^{n-1}\\
1& x_{1}^1 &x_{1}^2& \dots & \dots &x_{1}^{n-1}\\
1& x_{2}^1 &x_{2}^2& \dots & \dots &x_{2}^{n-1}\\
\dots& \dots &\dots& \dots & \dots &\dots\\
1& x_{n-1}^1 &x_{n-1}^2& \dots & \dots &x_{n-1}^{n-1}\\
\end{bmatrix}
$$
we can rewrite our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The above design matrix is called a Vandermonde matrix.
Generalizing the fitting procedure as a linear algebra problem
We are obviously not limited to the above polynomial expansions. We
could replace the various powers of $x$ with elements of Fourier
series or instead of $x_i^j$ we could have $\cos{(j x_i)}$ or $\sin{(j
x_i)}$, or time series or other orthogonal functions. For every set
of values $y_i,x_i$ we can then generalize the equations to
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
\dots & \dots \\
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
\dots & \dots \\
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.
\end{align}
$$
Note that we have $p=n$ here. The matrix is symmetric. This is generally not the case!
Generalizing the fitting procedure as a linear algebra problem
We redefine in turn the matrix $\boldsymbol{X}$ as
$$
\boldsymbol{X}=
\begin{bmatrix}
x_{00}& x_{01} &x_{02}& \dots & \dots &x_{0,n-1}\\
x_{10}& x_{11} &x_{12}& \dots & \dots &x_{1,n-1}\\
x_{20}& x_{21} &x_{22}& \dots & \dots &x_{2,n-1}\\
\dots& \dots &\dots& \dots & \dots &\dots\\
x_{n-1,0}& x_{n-1,1} &x_{n-1,2}& \dots & \dots &x_{n-1,n-1}\\
\end{bmatrix}
$$
and without loss of generality we rewrite again our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The left-hand side of this equation is known. Our error vector $\boldsymbol{\epsilon}$ and the parameter vector $\boldsymbol{\beta}$ are our unknown quantities. How can we obtain the optimal set of $\beta_i$ values?
Optimizing our parameters
We have defined the matrix $\boldsymbol{X}$ via the equations
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
\dots & \dots \\
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
\dots & \dots \\
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.
\end{align}
$$
As we noted above, we stayed with a system with the design matrix
$\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$, that is we have $p=n$. For reasons to come later (algorithmic arguments) we will hereafter define
our matrix as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors referring to the columns and the $n$ data points to the rows.
Our model for the nuclear binding energies
In our introductory notes we looked at the so-called liquid drop model. Let us remind ourselves about what we did by looking at the code.
We restate the parts of the code we are most interested in.
End of explanation
"""
# matrix inversion to find beta
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(Energies)
# and then make the prediction
ytilde = X @ beta
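# Optional sanity check (not in the original notebook): the OLS residuals should be
# orthogonal to the columns of X, i.e. X^T (y - X beta) should vanish up to round-off
residuals = np.asarray(Energies) - ytilde
print("Max |X^T residuals|:", np.abs(X.T @ residuals).max())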
"""
Explanation: With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
throughout these lectures.
Optimizing our parameters, more details
With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely
$$
C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
This function is one possible way to define the so-called cost function.
It is also common to define
the function $C$ as
$$
C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2,
$$
since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out.
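As a small aside (this sketch is not part of the original notes), the cost function amounts to only a few lines of NumPy; here X and y stand for a generic design matrix and data vector:
def cost_ols(beta, X, y):
    # mean squared error form of the cost function C(beta)
    residual = y - X @ beta
    return (residual @ residual) / len(y)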
Interpretations and optimizing our parameters
The function
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\},
$$
can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value.
When linking (see the discussion below) with the maximum likelihood approach below, we will indeed interpret $y_i$ as a mean value
$$
y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i,
$$
where $\langle y_i \rangle$ is the mean value. Keep in mind also that
till now we have treated $y_i$ as the exact value. Normally, the
response (dependent or outcome) variable $y_i$ is the outcome of a
numerical experiment or another type of experiment and is thus only an
approximation to the true value. It is then always accompanied by an
error estimate, often limited to a statistical error estimate given by
the standard deviation discussed earlier. In the discussion here we
will treat $y_i$ as our exact value for the response variable.
In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
In practical terms it means we will require
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right).
$$
Interpretations and optimizing our parameters
We can rewrite
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
$$
We note also that since our design matrix is defined as $\boldsymbol{X}\in
{\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in
{\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$,
in our case $p=5$ meaning that we end up with inverting a small
$5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional
matrices to invert. The methods discussed here and for many other
supervised learning algorithms like classification with logistic
regression or support vector machines, exhibit dimensionalities which
allow for the usage of direct linear algebra methods such as LU decomposition or Singular Value Decomposition (SVD) for finding the inverse of the matrix
$\boldsymbol{X}^T\boldsymbol{X}$.
Small question: Do you think the example we have at hand here (the nuclear binding energies) can lead to problems in inverting the matrix $\boldsymbol{X}^T\boldsymbol{X}$? What kind of problems can we expect?
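As a side remark (this sketch is not part of the original code), one way around a singular or nearly singular $\boldsymbol{X}^T\boldsymbol{X}$ is to replace the explicit inverse with an SVD-based pseudoinverse:
# illustrative only: pseudoinverse (computed via the SVD) instead of an explicit inverse
beta_pinv = np.linalg.pinv(X.T @ X) @ X.T @ Energies
# or, acting directly on the design matrix
beta_pinv = np.linalg.pinv(X) @ Energies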
Some useful matrix and vector expressions
The following matrix and vector relation will be useful here and for the rest of the course. Vectors are always written as boldfaced lower case letters and
matrices as upper case boldfaced letters.
$$
\frac{\partial (\boldsymbol{b}^T\boldsymbol{a})}{\partial \boldsymbol{a}} = \boldsymbol{b},
$$
$$
\frac{\partial (\boldsymbol{a}^T\boldsymbol{A}\boldsymbol{a})}{\partial \boldsymbol{a}} = \left(\boldsymbol{A}+\boldsymbol{A}^T\right)\boldsymbol{a},
$$
$$
\frac{\partial \mathrm{tr}(\boldsymbol{B}\boldsymbol{A})}{\partial \boldsymbol{A}} = \boldsymbol{B}^T,
$$
$$
\frac{\partial \log{\vert\boldsymbol{A}\vert}}{\partial \boldsymbol{A}} = (\boldsymbol{A}^{-1})^T.
$$
Interpretations and optimizing our parameters
The residuals $\boldsymbol{\epsilon}$ are in turn given by
$$
\boldsymbol{\epsilon} = \boldsymbol{y}-\boldsymbol{\tilde{y}} = \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta},
$$
and with
$$
\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
we have
$$
\boldsymbol{X}^T\boldsymbol{\epsilon}=\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
meaning that the solution for $\boldsymbol{\beta}$ is the one which minimizes the residuals. Later we will link this with the maximum likelihood approach.
Let us now return to our nuclear binding energies and simply code the above equations.
Own code for Ordinary Least Squares
It is rather straightforward to implement the matrix inversion and obtain the parameters $\boldsymbol{\beta}$. After having defined the matrix $\boldsymbol{X}$ we simply need to
write
End of explanation
"""
fit = np.linalg.lstsq(X, Energies, rcond =None)[0]
ytildenp = np.dot(fit,X.T)
"""
Explanation: Alternatively, you can use the least squares functionality in Numpy as
End of explanation
"""
Masses['Eapprox'] = ytilde
# Generate a plot comparing the experimental with the fitted values.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016OLS")
plt.show()
"""
Explanation: And finally we plot our fit and compare with the data
End of explanation
"""
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
"""
Explanation: Adding error analysis and training set up
We can easily test our fit by computing the $R^2$ score that we discussed in connection with the functionality of Scikit-Learn in the introductory slides.
Since we are not using Scikit-Learn here we can define our own $R^2$ function as
End of explanation
"""
print(R2(Energies,ytilde))
"""
Explanation: and we would be using it as
End of explanation
"""
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
print(MSE(Energies,ytilde))
"""
Explanation: We can easily add our MSE score as
End of explanation
"""
def RelativeError(y_data,y_model):
return abs((y_data-y_model)/y_data)
print(RelativeError(Energies, ytilde))
"""
Explanation: and finally the relative error as
End of explanation
"""
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),4))
X[:,3] = Density**(4.0/3.0)
X[:,2] = Density
X[:,1] = Density**(2.0/3.0)
X[:,0] = 1
# We use now Scikit-Learn's linear regressor and ridge regressor
# OLS part
clf = skl.LinearRegression().fit(X, Energies)
ytilde = clf.predict(X)
EoS['Eols'] = ytilde
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, ytilde))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, ytilde))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, ytilde))
print(clf.coef_, clf.intercept_)
# The Ridge regression with a hyperparameter lambda = 0.1
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X, Energies)
yridge = clf_ridge.predict(X)
EoS['Eridge'] = yridge
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, yridge))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, yridge))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, yridge))
print(clf_ridge.coef_, clf_ridge.intercept_)
fig, ax = plt.subplots()
ax.set_xlabel(r'$\rho[\mathrm{fm}^{-3}]$')
ax.set_ylabel(r'Energy per particle')
ax.plot(EoS['Density'], EoS['Energy'], alpha=0.7, lw=2,
label='Theoretical data')
ax.plot(EoS['Density'], EoS['Eols'], alpha=0.7, lw=2, c='m',
label='OLS')
ax.plot(EoS['Density'], EoS['Eridge'], alpha=0.7, lw=2, c='g',
label='Ridge $\lambda = 0.1$')
ax.legend()
save_fig("EoSfitting")
plt.show()
"""
Explanation: The $\chi^2$ function
Normally, the response (dependent or outcome) variable $y_i$ is the
outcome of a numerical experiment or another type of experiment and is
thus only an approximation to the true value. It is then always
accompanied by an error estimate, often limited to a statistical error
estimate given by the standard deviation discussed earlier. In the
discussion here we will treat $y_i$ as our exact value for the
response variable.
Introducing the standard deviation $\sigma_i$ for each measurement
$y_i$, we define now the $\chi^2$ function (omitting the $1/n$ term)
as
$$
\chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements.
The $\chi^2$ function
In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right).
$$
where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$.
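A minimal NumPy sketch of this weighted least-squares solution (not part of the original notes; X, y and sigma are assumed arrays holding the design matrix, the data and the per-point uncertainties):
# illustrative only: weighted least squares via the rescaled quantities A and b
A = X / sigma[:, None]
b = y / sigma
beta_wls = np.linalg.solve(A.T @ A, A.T @ b)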
The $\chi^2$ function
We can rewrite
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}.
$$
The $\chi^2$ function
If we then introduce the matrix
$$
\boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1},
$$
we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$)
$$
\beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik}
$$
We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise)
$$
\sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2,
$$
resulting in (since $\partial \beta_j/\partial y_i = \sum_{k=0}^{p-1}h_{jk}a_{ik}/\sigma_i$)
$$
\sigma^2(\beta_j) = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}h_{jl}\sum_{i=0}^{n-1}a_{ik}a_{il} = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}\left(\boldsymbol{A}^T\boldsymbol{A}\right)_{kl}h_{lj} = h_{jj}!
$$
The $\chi^2$ function
The first step here is to approximate the function $y$ with a first-order polynomial, that is we write
$$
y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i.
$$
By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0,
$$
and
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0.
$$
The $\chi^2$ function
For a linear fit (a first-order polynomial) we don't need to invert a matrix!!
Defining
$$
\gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},
$$
$$
\gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2},
$$
$$
\gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right),
$$
$$
\gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2},
$$
$$
\gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2},
$$
we obtain
$$
\beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},
$$
$$
\beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
$$
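The closed-form expressions above translate directly into a few lines of code; a minimal sketch (not part of the original notes), where x, y and sigma are assumed NumPy arrays with the data points and their uncertainties:
# illustrative weighted straight-line fit using the gamma sums defined above
g   = np.sum(1.0 / sigma**2)
gx  = np.sum(x / sigma**2)
gy  = np.sum(y / sigma**2)
gxx = np.sum(x * x / sigma**2)
gxy = np.sum(x * y / sigma**2)
denom = g * gxx - gx**2
beta0 = (gxx * gy - gx * gxy) / denom
beta1 = (g * gxy - gx * gy) / denom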
This approach (for both linear and non-linear regression) often suffers from the problem being either underdetermined or overdetermined in the unknown coefficients $\beta_i$. A better approach is to use the Singular Value Decomposition (SVD) method discussed below, or Lasso and Ridge regression. See below.
Fitting an Equation of State for Dense Nuclear Matter
Before we continue, let us introduce yet another example. We are going to fit the
nuclear equation of state using results from many-body calculations.
The equation of state we have made available here, as function of
density, has been derived using modern nucleon-nucleon potentials with
the addition of three-body
forces. This
time the file is presented as a standard csv file.
The beginning of the Python code here is similar to what you have seen
before, with the same initializations and declarations. We use also
pandas again, rather extensively in order to organize our data.
The difference now is that we use Scikit-Learn's regression tools
instead of our own matrix inversion implementation. Furthermore, we
sneak in Ridge regression (to be discussed below) which includes a
hyperparameter $\lambda$, also to be explained below.
The code
End of explanation
"""
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as a csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),5))
X[:,0] = 1
X[:,1] = Density**(2.0/3.0)
X[:,2] = Density
X[:,3] = Density**(4.0/3.0)
X[:,4] = Density**(5.0/3.0)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2)
# matrix inversion to find beta
beta = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(y_train)
# and then make the prediction
ytilde = X_train @ beta
print("Training R2")
print(R2(y_train,ytilde))
print("Training MSE")
print(MSE(y_train,ytilde))
ypredict = X_test @ beta
print("Test R2")
print(R2(y_test,ypredict))
print("Test MSE")
print(MSE(y_test,ypredict))
"""
Explanation: The above simple polynomial in density $\rho$ gives an excellent fit
to the data.
We note also that there is a small deviation between the
standard OLS and the Ridge regression at higher densities. We discuss this in more detail
below.
Splitting our Data in Training and Test data
It is normal in essentially all Machine Learning studies to split the
data in a training set and a test set (sometimes also an additional
validation set). Scikit-Learn has its own function for this. There
is no explicit recipe for how much data should be included as training
data and how much as test data. An accepted rule of thumb is to use
approximately $2/3$ to $4/5$ of the data as training data. We will
postpone a discussion of this splitting to the end of these notes and
our discussion of the so-called bias-variance tradeoff. Here we
limit ourselves to repeat the above equation of state fitting example
but now splitting the data into a training set and a test set.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
"""
Explanation: <!-- !split -->
The Boston housing data example
The Boston housing
data set was originally a part of UCI Machine Learning Repository
and has been removed now. The data set is now included in Scikit-Learn's
library. There are 506 samples and 13 feature (predictor) variables
in this data set. The objective is to predict the median value of the
houses (MEDV) using the features (predictors) listed here.
The features/predictors are
1. CRIM: Per capita crime rate by town
ZN: Proportion of residential land zoned for lots over 25000 square feet
INDUS: Proportion of non-retail business acres per town
CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
NOX: Nitric oxide concentration (parts per 10 million)
RM: Average number of rooms per dwelling
AGE: Proportion of owner-occupied units built prior to 1940
DIS: Weighted distances to five Boston employment centers
RAD: Index of accessibility to radial highways
TAX: Full-value property tax rate per USD10000
B: $1000(Bk - 0.63)^2$, where $Bk$ is the proportion of [people of African American descent] by town
LSTAT: Percentage of lower status of the population
MEDV: Median value of owner-occupied homes in USD 1000s
Housing data, the code
We start by importing the libraries
End of explanation
"""
from sklearn.datasets import load_boston
boston_dataset = load_boston()
# boston_dataset is a dictionary
# let's check what it contains
boston_dataset.keys()
"""
Explanation: and load the Boston Housing DataSet from Scikit-Learn
End of explanation
"""
boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
boston.head()
boston['MEDV'] = boston_dataset.target
"""
Explanation: Then we invoke Pandas
End of explanation
"""
# check for missing values in all the columns
boston.isnull().sum()
"""
Explanation: and preprocess the data
End of explanation
"""
# set the size of the figure
sns.set(rc={'figure.figsize':(11.7,8.27)})
# plot a histogram showing the distribution of the target values
sns.distplot(boston['MEDV'], bins=30)
plt.show()
"""
Explanation: We can then visualize the data
End of explanation
"""
# compute the pair wise correlation for all columns
correlation_matrix = boston.corr().round(2)
# use the heatmap function from seaborn to plot the correlation matrix
# annot = True to print the values inside the square
sns.heatmap(data=correlation_matrix, annot=True)
"""
Explanation: It is now useful to look at the correlation matrix
End of explanation
"""
plt.figure(figsize=(20, 5))
features = ['LSTAT', 'RM']
target = boston['MEDV']
for i, col in enumerate(features):
plt.subplot(1, len(features) , i+1)
x = boston[col]
y = target
plt.scatter(x, y, marker='o')
plt.title(col)
plt.xlabel(col)
plt.ylabel('MEDV')
"""
Explanation: From the above correlation plot we can see that MEDV is strongly correlated with LSTAT and RM. We also see that RAD and TAX are strongly correlated with each other, so we do not include both of them among our features, in order to avoid multicollinearity.
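As an aside (not in the original notebook), one can also flag strongly correlated feature pairs programmatically from the correlation matrix computed above; the 0.7 cut-off is an arbitrary illustrative choice:
# illustrative: list feature pairs whose absolute correlation exceeds a chosen threshold
threshold = 0.7
strong_pairs = [(a, b, correlation_matrix.loc[a, b])
                for i, a in enumerate(correlation_matrix.columns)
                for b in correlation_matrix.columns[i+1:]
                if abs(correlation_matrix.loc[a, b]) >= threshold]
print(strong_pairs)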
End of explanation
"""
X = pd.DataFrame(np.c_[boston['LSTAT'], boston['RM']], columns = ['LSTAT','RM'])
Y = boston['MEDV']
"""
Explanation: Now we start training our model
End of explanation
"""
from sklearn.model_selection import train_test_split
# splits the training and test data set in 80% : 20%
# assign random_state to any value. This ensures consistency.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state=5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
"""
Explanation: We split the data into training and test sets
End of explanation
"""
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
lin_model = LinearRegression()
lin_model.fit(X_train, Y_train)
# model evaluation for training set
y_train_predict = lin_model.predict(X_train)
rmse = (np.sqrt(mean_squared_error(Y_train, y_train_predict)))
r2 = r2_score(Y_train, y_train_predict)
print("The model performance for training set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
print("\n")
# model evaluation for testing set
y_test_predict = lin_model.predict(X_test)
# root mean square error of the model
rmse = (np.sqrt(mean_squared_error(Y_test, y_test_predict)))
# r-squared score of the model
r2 = r2_score(Y_test, y_test_predict)
print("The model performance for testing set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
# plotting the y_test vs y_pred
# ideally should have been a straight line
plt.scatter(Y_test, y_test_predict)
plt.show()
"""
Explanation: Then we use the linear regression functionality from Scikit-Learn
End of explanation
"""
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def FrankeFunction(x,y):
term1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2))
term2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1))
term3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2))
term4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2)
return term1 + term2 + term3 + term4
def create_X(x, y, n ):
if len(x.shape) > 1:
x = np.ravel(x)
y = np.ravel(y)
N = len(x)
l = int((n+1)*(n+2)/2) # Number of elements in beta
X = np.ones((N,l))
for i in range(1,n+1):
q = int((i)*(i+1)/2)
for k in range(i+1):
X[:,q+k] = (x**(i-k))*(y**k)
return X
# Making meshgrid of datapoints and compute Franke's function
n = 5
N = 1000
x = np.sort(np.random.uniform(0, 1, N))
y = np.sort(np.random.uniform(0, 1, N))
z = FrankeFunction(x, y)
X = create_X(x, y, n=n)
# split in training and test data
X_train, X_test, y_train, y_test = train_test_split(X,z,test_size=0.2)
clf = skl.LinearRegression().fit(X_train, y_train)
# The mean squared error and R2 score
print("MSE before scaling: {:.2f}".format(mean_squared_error(clf.predict(X_test), y_test)))
print("R2 score before scaling {:.2f}".format(clf.score(X_test,y_test)))
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print("Feature min values before scaling:\n {}".format(X_train.min(axis=0)))
print("Feature max values before scaling:\n {}".format(X_train.max(axis=0)))
print("Feature min values after scaling:\n {}".format(X_train_scaled.min(axis=0)))
print("Feature max values after scaling:\n {}".format(X_train_scaled.max(axis=0)))
clf = skl.LinearRegression().fit(X_train_scaled, y_train)
print("MSE after scaling: {:.2f}".format(mean_squared_error(clf.predict(X_test_scaled), y_test)))
print("R2 score for scaled data: {:.2f}".format(clf.score(X_test_scaled,y_test)))
"""
Explanation: Reducing the number of degrees of freedom, overarching view
Many Machine Learning problems involve thousands or even millions of
features for each training instance. Not only does this make training
extremely slow, it can also make it much harder to find a good
solution, as we will see. This problem is often referred to as the
curse of dimensionality. Fortunately, in real-world problems, it is
often possible to reduce the number of features considerably, turning
an intractable problem into a tractable one.
Later we will discuss some of the most popular dimensionality reduction
techniques: the principal component analysis (PCA), Kernel PCA, and
Locally Linear Embedding (LLE).
Principal component analysis and its various variants deal with the
problem of fitting a low-dimensional affine
subspace to a set of of
data points in a high-dimensional space. With its family of methods it
is one of the most used tools in data modeling, compression and
visualization.
Preprocessing our data
Before we proceed however, we will discuss how to preprocess our
data. Till now and in connection with our previous examples we have
not met so many cases where we are too sensitive to the scaling of our
data. Normally the data may need a rescaling and/or may be sensitive
to extreme values. Scaling the data renders our inputs much more
suitable for the algorithms we want to employ.
Scikit-Learn has several functions which allow us to rescale the
data, normally resulting in much better results in terms of various
accuracy scores. The StandardScaler function in Scikit-Learn
ensures that for each feature/predictor we study the mean value is
zero and the variance is one (every column in the design/feature
matrix). This scaling has the drawback that it does not ensure that
we have a particular maximum or minimum in our data set. Another
function included in Scikit-Learn is the MinMaxScaler which
ensures that all features are exactly between $0$ and $1$.
More preprocessing
The Normalizer scales each data
point such that the feature vector has a euclidean length of one. In other words, it
projects a data point on the circle (or sphere in the case of higher dimensions) with a
radius of 1. This means every data point is scaled by a different number (by the
inverse of its length).
This normalization is often used when only the direction (or angle) of the data matters,
not the length of the feature vector.
The RobustScaler works similarly to the StandardScaler in that it
ensures statistical properties for each feature that guarantee that
they are on the same scale. However, the RobustScaler uses the median
and quartiles, instead of mean and variance. This makes the
RobustScaler ignore data points that are very different from the rest
(like measurement errors). These odd data points are also called
outliers, and might often lead to trouble for other scaling
techniques.
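To make the behaviour of these scalers concrete, here is a small illustrative comparison on a toy array with one outlier (not part of the original notes):
# illustrative comparison of the scalers discussed above on a tiny toy data set
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler, Normalizer
toy = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0], [4.0, 10000.0]])  # last row acts as an outlier
for scaler in (StandardScaler(), MinMaxScaler(), RobustScaler()):
    print(type(scaler).__name__)
    print(scaler.fit_transform(toy))
# the Normalizer instead rescales each row (sample) to unit Euclidean length
print("Normalizer")
print(Normalizer().fit_transform(toy))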
Simple preprocessing examples, Franke function and regression
End of explanation
"""
|
marshal789/Lectures-On-Machine-Learning
|
Support Vector Machines/SVM.ipynb
|
mit
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: Support Vector Machines
Import Libraries
End of explanation
"""
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
"""
Explanation: Get the Data
Using the built-in breast-cancer dataset of sklearn
End of explanation
"""
cancer.keys()
"""
Explanation: The data set is presented in a dictionary form:
End of explanation
"""
print(cancer['DESCR'])
cancer['feature_names']
"""
Explanation: We can grab information and arrays out of this dictionary to set up our data frame and understanding of the features:
End of explanation
"""
df_feat = pd.DataFrame(cancer['data'],columns=cancer['feature_names'])
df_feat.info()
cancer['target']
df_target = pd.DataFrame(cancer['target'],columns=['Cancer'])
df_target.head()
"""
Explanation: Set up DataFrame
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_feat, np.ravel(df_target), test_size=0.30, random_state=101)
"""
Explanation: Train Test Split
End of explanation
"""
from sklearn.svm import SVC
model = SVC()
model.fit(X_train,y_train)
"""
Explanation: Train the Support Vector Classifier
End of explanation
"""
predictions = model.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(confusion_matrix(y_test,predictions))
print(classification_report(y_test,predictions))
"""
Explanation: Predictions and Evaluations
End of explanation
"""
param_grid = {'C': [0.1,1, 10, 100, 1000], 'gamma': [1,0.1,0.01,0.001,0.0001], 'kernel': ['rbf']}
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(SVC(),param_grid,refit=True,verbose=3)
"""
Explanation: Notice that we are classifying everything into a single class! This means our model needs to have its parameters adjusted (it may also help to normalize the data).
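As a side note (this sketch is not part of the original notebook), normalizing the data could be done by scaling the features before the SVC, for example with a pipeline along these lines:
# illustrative sketch: scale the features before fitting the SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
scaled_model = make_pipeline(StandardScaler(), SVC())
scaled_model.fit(X_train, y_train)
print(classification_report(y_test, scaled_model.predict(X_test)))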
Gridsearch
Finding the right parameters (like what C or gamma values to use) is a tricky task! This idea of creating a 'grid' of parameters and just trying out all the possible combinations is called a Gridsearch. This method is common enough that Scikit-learn has the functionality built in with GridSearchCV, where the CV stands for cross-validation. GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train. The grid of parameters is defined as a dictionary, where the keys are the parameters and the values are the settings to be tested.
End of explanation
"""
# May take awhile!
grid.fit(X_train,y_train)
"""
Explanation: What fit does is a bit more involved than usual. First, it runs the same loop with cross-validation, to find the best parameter combination. Once it has the best combination, it runs fit again on all data passed to fit (without cross-validation), to build a single new model using the best parameter setting.
End of explanation
"""
grid.best_params_
grid.best_estimator_
"""
Explanation: You can inspect the best parameters found by GridSearchCV in the best_params_ attribute, and the best estimator in the best_estimator_ attribute:
End of explanation
"""
grid_predictions = grid.predict(X_test)
print(confusion_matrix(y_test,grid_predictions))
print(classification_report(y_test,grid_predictions))
"""
Explanation: Then you can re-run predictions on this grid object just like you would with a normal model.
End of explanation
"""
|
hongguangguo/shogun
|
doc/ipython-notebooks/clustering/GMM.ipynb
|
gpl-3.0
|
%pylab inline
%matplotlib inline
# import all Shogun classes
from modshogun import *
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
"""
Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance
"""
# compute eigenvalues (ordered)
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
"""
Explanation: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - <a href="mailto:heiko.strathmann@gmail.com">heiko.strathmann@gmail.com</a> - <a href="github.com/karlnapf">github.com/karlnapf</a> - <a href="herrstrathmann.de">herrstrathmann.de</a>. Based on the GMM framework of the <a href="https://www.google-melange.com/gsoc/project/google/gsoc2011/alesis_novik/11001">Google summer of code 2011 project</a> of Alesis Novik - <a href="https://github.com/alesis">https://github.com/alesis</a>
This notebook is about learning and using Gaussian <a href="https://en.wikipedia.org/wiki/Mixture_model">Mixture Models</a> (GMM) in Shogun. Below, we demonstrate how to use them for sampling, for density estimation via <a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a>, and for <a href="https://en.wikipedia.org/wiki/Data_clustering">clustering</a>.
Note that Shogun's interfaces for mixture models are deprecated and are soon to be replaced by more intuitive and efficient ones. This notebook contains some python magic at some places to compensate for this. However, all computations are done within Shogun itself.
Finite Mixture Models (skip if you just want code examples)
We begin by giving some intuition about mixture models. Consider an unobserved (or latent) discrete random variable taking $k$ states $s$ with probabilities $\text{Pr}(s=i)=\pi_i$ for $1\leq i \leq k$, and $k$ random variables $x_i|s_i$ with arbitrary densities or distributions, which are conditionally independent of each other given the state of $s$. In the finite mixture model, we model the probability or density for a single point $x$ being generated by the weighted mixture of the $x_i|s_i$
$$
p(x)=\sum_{i=1}^k\text{Pr}(s=i)p(x|s=i)=\sum_{i=1}^k \pi_i p(x|s=i)
$$
which is simply the marginalisation over the latent variable $s$. Note that $\sum_{i=1}^k\pi_i=1$.
For example, for the Gaussian mixture model (GMM), we get (adding a collection of parameters $\theta:=\{\boldsymbol{\mu}_i, \Sigma_i\}_{i=1}^k$ that contains $k$ mean and covariance parameters of single Gaussian distributions)
$$
p(x|\theta)=\sum_{i=1}^k \pi_i \mathcal{N}(\boldsymbol{\mu}_i,\Sigma_i)
$$
Note that any set of probability distributions on the same domain can be combined to such a mixture model. Note again that $s$ is an unobserved discrete random variable, i.e. we model data being generated from some weighted combination of baseline distributions. Interesting problems now are
Learning the weights $\text{Pr}(s=i)=\pi_i$ from data
Learning the parameters $\theta$ from data for a fixed family of $x_i|s_i$, for example for the GMM
Using the learned model (which is a density estimate) for clustering or classification
All of these problems are in the context of unsupervised learning since the algorithm only sees the plain data and no information on its structure.
Expectation Maximisation
<a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a> is a powerful method to learn any form of latent models and can be applied to the Gaussian mixture model case. Standard methods such as Maximum Likelihood are not straightforward for latent models in general, while EM can almost always be applied. However, it might converge to local optima and does not guarantee globally optimal solutions (this can be dealt with with some tricks as we will see later). While the general idea in EM stays the same for all models it can be used on, the individual steps depend on the particular model that is being used.
The basic idea in EM is to maximise a lower bound, typically called the free energy, on the log-likelihood of the model. It does so by repeatedly performing two steps
The E-step optimises the free energy with respect to the latent variables $s_i$, holding the parameters $\theta$ fixed. This is done via setting the distribution over $s$ to the posterior given the used observations.
The M-step optimises the free energy with respect to the parameters $\theta$, holding the distribution over the $s_i$ fixed. This is done via maximum likelihood.
It can be shown that this procedure never decreases the likelihood and that stationary points (i.e. neither E-step nor M-step produce changes) of it corresponds to local maxima in the model's likelihood. See references for more details on the procedure, and how to obtain a lower bound on the log-likelihood. There exist many different flavours of EM, including variants where only subsets of the model are iterated over at a time. There is no learning rate such as step size or similar, which is good and bad since convergence can be slow.
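Purely as an illustration of these two steps (this sketch is not from the notebook and does not use Shogun's API), a bare-bones EM loop for a one-dimensional GMM on an assumed data array x could look like this:
# illustrative, minimal EM for a 1-D Gaussian mixture (no numerical safeguards)
def em_gmm_1d(x, k, n_iter=100):
    n = len(x)
    pi = np.full(k, 1.0 / k)                    # mixture weights Pr(s=i)
    mu = np.random.choice(x, k, replace=False)  # crude initialisation of the means
    var = np.full(k, np.var(x))                 # one variance per component
    for _ in range(n_iter):
        # E-step: posterior over the latent component ("responsibilities")
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood updates with the responsibilities held fixed
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var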
Mixtures of Gaussians in Shogun
The main class for GMM in Shogun is <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html">CGMM</a>, which contains an interface for setting up a model and sampling from it, but also to learn the model (the $\pi_i$ and parameters $\theta$) via EM. It inherits from the base class for distributions in Shogun, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a>, and combines multiple single distribution instances to a mixture.
We start by creating a GMM instance, sampling from it, and computing the log-likelihood of the model for some points, and the log-likelihood of each individual component for some points. All these things are done in two dimensions to be able to plot them, but they generalise to higher (or lower) dimensions easily.
Let's sample, and illustrate the difference of knowing the latent variable indicating the component or not.
End of explanation
"""
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=zeros((num_components, dimension, dimension))
covs[0]=array([[2, 1.3],[.6, 3]])
covs[1]=array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=array([0.5, 0.3, 0.2])
gmm.set_coef(weights)
"""
Explanation: Set up the model in Shogun
End of explanation
"""
# now sample from each component separately first, then from the joint model
hold(True)
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=zeros(num_components)
w[i]=1.
gmm.set_coef(w)
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_samples)])
plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% elipsoid for current component
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
hold(False)
_=title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# since we used a hack to sample from each component
gmm.set_coef(weights)
"""
Explanation: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> class in Shogun allows to sample from it (if implemented)
End of explanation
"""
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=linspace(-10,10, resolution)
Ys=linspace(-8,6, resolution)
pairs=asarray([(x,y) for x in Xs for y in Ys])
D=asarray([gmm.cluster(pairs[i])[3] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,2,1)
pcolor(Xs,Ys,D)
xlim([-10,10])
ylim([-8,6])
title("Log-Likelihood of GMM")
subplot(1,2,2)
pcolor(Xs,Ys,exp(D))
xlim([-10,10])
ylim([-8,6])
_=title("Likelihood of GMM")
"""
Explanation: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> interface, including the mixture.
End of explanation
"""
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_max_samples)])
plot(X[:,0], X[:,1], "o")
_=title("Samples from GMM")
"""
Explanation: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice: Someone gives you a bunch of data with no labels attached to it at all. Our job is now to find structure in the data, which we will do with a GMM.
End of explanation
"""
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
features=RealFeatures(X.T)
gmm_est=GMM(num_components)
gmm_est.set_features(features)
# learn GMM
gmm_est.train_em()
return gmm_est
"""
Explanation: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
End of explanation
"""
component_numbers=[2,3]
# plot true likelihood
D_true=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,len(component_numbers)+1,1)
pcolor(Xs,Ys,exp(D_true))
xlim([-10,10])
ylim([-8,6])
title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
subplot(1,len(component_numbers)+1,n+2)
pcolor(Xs,Ys,exp(D_est))
xlim([-10,10])
ylim([-8,6])
_=title("Estimated likelihood for EM with %d components"%component_numbers[n])
"""
Explanation: So far so good, now let's plot the density of this GMM using the code from above
End of explanation
"""
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=Gaussian.obtain_from_generic(gmm.get_component(i))
gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
figure(figsize=(18,5))
subplot(1, len(component_numbers)+1, 1)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color="blue")
title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
subplot(1, len(component_numbers)+1, i+2)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
"""
Explanation: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three components give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen that the upper two Gaussians are grouped together (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html">KMeans</a> with a random cluster initialisation is used to initialise the cluster centres if they are not specified by hand) - re-run a couple of times to see this. This illustrates how EM might get stuck in a local minimum. We do this below, where it might well happen that all runs produce the same or different results - no guarantees.
Note that it is easily possible to initialise EM by specifying the parameters of the mixture components, as was done to create the original model above.
One way to decide which of multiple converged EM instances to use is to simply compute many of them (with different initialisations) and then choose the one with the largest likelihood (a small sketch of this follows below). WARNING: Do not select the number of components like this, as the model will overfit.
End of explanation
"""
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
clusters=asarray([argmax(gmm_est.cluster(x)[:gmm.get_num_components()]) for x in X])
# visualise points by cluster
hold(True)
for i in range(gmm.get_num_components()):
indices=clusters==i
plot(X[indices,0],X[indices,1], 'o', color=colors[i])
hold(False)
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
figure(figsize=(18,5))
subplot(121)
cluster_and_visualise(gmm)
title("Clustering under true GMM")
subplot(122)
cluster_and_visualise(gmm_est)
_=title("Clustering under estimated GMM")
"""
Explanation: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
End of explanation
"""
figure(figsize=(18,5))
for comp_idx in range(num_components):
subplot(1,num_components,comp_idx+1)
# evaluated likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=get_cmap("jet")
hold(True)
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plot(X[j,0], X[j,1] ,"o", color=color)
hold(False)
title("Data coloured by likelihood for component %d" % comp_idx)
"""
Explanation: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here: even the model under which the data was generated will not cluster the data correctly if the data is overlapping. This is due to the fact that the cluster with the largest probability is chosen, which doesn't allow for any ambiguity. If you are interested in cases where data overlaps, you should always look at the log-likelihood of the point for each cluster and consider taking into account "draws" in the decision, i.e. cases where the probabilities for two different clusters are (almost) equally large (a small sketch of detecting such draws follows below).
Below we plot all points, coloured by their likelihood under each component.
End of explanation
"""
# compute cluster index for every point in space
D_est=asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
hold(True)
pcolor(Xs,Ys,D_est)
hold(False)
"""
Explanation: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering.
End of explanation
"""
|
drericstrong/Blog
|
20170106_DQ0TransformInPython.ipynb
|
agpl-3.0
|
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# User configurable
freq = 1/60
end_time = 180
v_peak = 220
step_size = 0.01
# Find the three-phase voltages
v1 = []
v2 = []
v3 = []
thetas = 2 * np.pi * freq * np.arange(0,end_time,step_size)
for ii, t in enumerate(thetas):
v1.append(v_peak * np.sin(t))
v2.append(v_peak * np.sin(t - (2/3)*np.pi))
v3.append(v_peak * np.sin(t - (4/3)*np.pi))
v1, v2, v3 = np.array(v1), np.array(v2), np.array(v3)
# Plot the results
plt.plot(v1, label="V1")
plt.plot(v2, label="V2")
plt.plot(v3, label="V3")
plt.xlabel('Time')
plt.ylabel('Voltage')
plt.legend(ncol=3);
"""
Explanation: The DQ0 (or DQZ) Transform is used in electrical engineering for modeling three-phase voltage and current. Since three-dimensional plots are difficult to visualize and analyze, the DQ0 Transform translates the three-dimensional electrical parameters into a two-dimensional vector space. Although the DQ0 Transform is most commonly used in engineering, it may also be applicable to any situation involving multidimensional harmonic behavior.
First, let's generate a "perfect" three-phase signal:
End of explanation
"""
def dq0_transform(v_a, v_b, v_c):
d=(np.sqrt(2/3)*v_a-(1/(np.sqrt(6)))*v_b-(1/(np.sqrt(6)))*v_c)
q=((1/(np.sqrt(2)))*v_b-(1/(np.sqrt(2)))*v_c)
return d, q
# Calculate and plot the results
d, q = dq0_transform(v1, v2, v3)
plt.figure()
plt.plot(d, q)
plt.xlabel('D Phase')
plt.ylabel('Q Phase');
"""
Explanation: The DQ0 Transform will translate the three variables above into two variables, called the "Direct" phase (or "D") and the "Quadrature" phase (or "Q"). Hence, the resulting variables can be plotted on an X-Y graph for simple visualization. The following code implements a DQ0 Transform in Python, which takes all three phases of voltage as an input.
End of explanation
"""
|
jellis18/enterprise
|
tests/data.ipynb
|
mit
|
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from enterprise.pulsar import Pulsar
import enterprise.signals.parameter as parameter
from enterprise.signals import utils
from enterprise.signals import signal_base
from enterprise.signals import selections
from enterprise.signals.selections import Selection
#from tests.enterprise_test_data import datadir
from enterprise_test_data import datadir
"""
Explanation: enterprise Data Structures
This guide will give an introduction to the unique data structures used in enterprise. These are all designed with the goal of making this code as user-friendly as possible, both for the end user and the developer.
End of explanation
"""
def A(farg1, farg2):
class A(object):
def __init__(self, iarg):
self.iarg = iarg
def print_info(self):
print('Object instance {}\nInstance argument: {}\nFunction args: {} {}\n'.format(
self, self.iarg, farg1, farg2))
return A
# define class A with arguments that can be seen within the class
a = A('arg1', 'arg2')
# instantiate 2 instances of class A with different arguments
a1 = a('iarg1')
a2 = a('iarg2')
# call print_info method
a1.print_info()
a2.print_info()
"""
Explanation: Class Factories
The enterprise code makes heavy use of so-called class factories. Class factories are functions that return classes (not objects, i.e. not class instances). A simple example is as follows:
End of explanation
"""
psr = Pulsar(datadir+'/B1855+09_NANOGrav_9yv1.gls.par', datadir+'/B1855+09_NANOGrav_9yv1.tim')
"""
Explanation: In the example above we see that the arguments arg1 and arg2 are seen by both instances a1 and a2; however, these instances were instantiated with different input arguments iarg1 and iarg2. So we see that class factories are great when we want to give "global" parameters to a class without having to pass them on initialization. This also allows us to mix and match classes, as we will do in enterprise, before we instantiate them.
The Pulsar class
The Pulsar class is a simple data structure that stores all of the important information about a pulsar that is obtained from a timing package such as the TOAs, residuals, error-bars, flags, design matrix, etc.
This class is instantiated with a par and a tim file. Full documentation on this class can be found here.
End of explanation
"""
# lets define an efac parameter with a uniform prior from [0.5, 5]
efac = parameter.Uniform(0.5, 5)
print(efac)
"""
Explanation: This Pulsar object is then passed to other enterprise data structures in a loosely coupled way in order to interact with the pulsar data.
The Parameter class
In enterprise signal parameters are set by specifying a prior distribution (i.e., Uniform, Normal, etc.). These Parameters are how enterprise builds signals. Below we will give an example of this functionality.
End of explanation
"""
# initialize efac parameter with name "efac_1"
efac1 = efac('efac_1')
print(efac1)
# return parameter name
print(efac1.name)
# get pdf at a point (the log pdf is accessed via get_logpdf)
print(efac1.get_pdf(1.3), efac1.get_logpdf(1.3))
# return 5 samples from this prior distribution
print(efac1.sample(n=5))
"""
Explanation: Uniform is a class factory that returns a class. The parameter is then initialized via a name. This way, a single parameter class can be initialized for multiple signal parameters with different names (e.g. one EFAC per observing backend, etc.). Once the parameter is initialized, you then have access to many useful methods.
End of explanation
"""
@signal_base.function
def sine_wave(toas, log10_A=-7, log10_f=-8):
return 10**log10_A * np.sin(2*np.pi*toas*10**log10_f)
"""
Explanation: The Function structure
In enterprise we have defined a special data structure called Function. This data structure provides the user with a way to use and combine several different enterprise components in a user-friendly way. More explicitly, it converts a standard function into an enterprise Function, which can extract information from the Pulsar object and can also interact with enterprise Parameters.
[put reference to docstring here]
For example, consider the function:
End of explanation
"""
# treat it just as a standard function with a vector input
sw = sine_wave(np.array([1,2,3]), log10_A=-8, log10_f=-7.5)
print(sw)
"""
Explanation: Notice that the first positional argument of the function is toas, which happens to be the name of an attribute of the Pulsar class, and the keyword arguments specify the default parameters for this function.
The decorator converts this standard function to a Function which can be used in two ways: the first way is to treat it like any other function.
End of explanation
"""
# or use it as an enterprise function
sw_function = sine_wave(log10_A=parameter.Uniform(-10,-5), log10_f=parameter.Uniform(-9, -7))
print(sw_function)
"""
Explanation: the second way is to use it as a Function:
End of explanation
"""
sw2 = sw_function('sine_wave', psr=psr)
print(sw2)
"""
Explanation: Here we see that Function is actually a class factory, that is, when initialized with enterprise Parameters it returns a class that is initialized with a name and a Pulsar object as follows:
End of explanation
"""
print(sw2.params)
"""
Explanation: Now this Function object carries around instances of the Parameter classes given above for this particular function and Pulsar.
End of explanation
"""
print(sw2())
"""
Explanation: Most importantly, it can be called in three different ways:
If called without parameters, it will fall back on the defaults given in the original function definition.
End of explanation
"""
print(sw2(log10_A=-8, log10_f=-6.5))
"""
Explanation: or we can give it new fixed parameters
End of explanation
"""
params = {'sine_wave_log10_A':-8, 'sine_wave_log10_f':-6.5}
print(sw2(params=params))
"""
Explanation: or, most importantly, we can give it a parameter dictionary with the Parameter names as keys. This is how Functions are used internally inside enterprise.
End of explanation
"""
def sine_wave(toas, log10_A=-7, log10_f=-8):
return 10**log10_A * np.sin(2*np.pi*toas*10**log10_f)
sw3 = signal_base.Function(sine_wave, log10_A=parameter.Uniform(-10,-5),
log10_f=parameter.Uniform(-9, -7))
print(sw3)
"""
Explanation: Notice that the last two methods give the same answer, since we gave it the same values, just in different ways. So you may be thinking: "Why did we pass the Pulsar object on initialization?" or "Wait, how does it know about the toas?!". Well, the first question answers the second: by passing the Pulsar object, the Function grabs the toas attribute internally. This feature, combined with the ability to recognize Parameters and the ability to call the original function as we always would, are the main strengths of Function, which is used heavily in enterprise.
Note that if we define a function without the decorator, we can still obtain a Function via:
End of explanation
"""
def cut_half(toas):
midpoint = (toas.max() + toas.min()) / 2
return dict(zip(['t1', 't2'], [toas <= midpoint, toas > midpoint]))
"""
Explanation: Make your own Function
To define your own Function, all you have to do is define a function with these rules in mind.
If you want to use Pulsar attributes, define them as positional arguments with the same name as used in the Pulsar class (see here for more information).
Any arguments that you may use as Parameters must be keyword arguments (although you can have others that aren't Parameters).
Add the @function decorator.
And that's it! You can now define your own Functions with minimal overhead and use them in enterprise, for tests and simulations, or whatever you want (a hypothetical sketch follows after this explanation).
The Selection structure
In the course of our analysis it is useful to split different signals into pieces. The most common flavor of this is to split the white noise parameters (i.e., EFAC, EQUAD, and ECORR) by observing backend system. The Selection structure is here to make this as smooth and versatile as possible.
The Selection structure is also a class-factory that returns a specific selection dictionary with keys and Boolean arrays as values.
This will become more clear with an example. Lets say that you want to split our parameters between the first and second half of the dataset, then we can define the following function:
End of explanation
"""
toas = np.array([1,2,3,4])
print(cut_half(toas))
"""
Explanation: This function will return a dictionary with keys (i.e. the names of the different subsections) t1 and t2 and boolean arrays corresponding to the first and second halves of the data span, respectively. So for a simple input we have:
End of explanation
"""
ch = Selection(cut_half)
print(ch)
"""
Explanation: To pass this to enterprise we turn it into a Selection via:
End of explanation
"""
ch1 = ch(psr)
print(ch1)
print(ch1.masks)
"""
Explanation: As we have stated, this is a class factory that will be initialized inside enterprise signals with a Pulsar object, in a very similar way to Functions.
End of explanation
"""
# make efac class factory
efac = parameter.Uniform(0.1, 5.0)
# now give it to selection
params, masks = ch1('efac', efac)
# named parameters
print(params)
# named masks
print(masks)
"""
Explanation: The Selection object has a method masks that uses the Pulsar object to evaluate the arguments of cut_half (these can be any number of Pulsar attributes, not just toas). The Selection object can also be called to return initialized Parameters with the split names as follows:
End of explanation
"""
|
KirtoXX/Security_Camera
|
ssd_mobilenet/object_detection/object_detection_tutorial.ipynb
|
apache-2.0
|
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
"""
Explanation: Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
Imports
End of explanation
"""
# This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
"""
Explanation: Env setup
End of explanation
"""
from utils import label_map_util
from utils import visualization_utils as vis_util
"""
Explanation: Object detection imports
Here are the imports from the object detection module.
End of explanation
"""
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that are used to add the correct label to each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
"""
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
End of explanation
"""
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
"""
Explanation: Download Model
End of explanation
"""
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
"""
Explanation: Load a (frozen) Tensorflow model into memory.
End of explanation
"""
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
"""
Explanation: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine (a minimal hand-built example follows below).
End of explanation
"""
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
"""
Explanation: Helper code
End of explanation
"""
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
# Definite input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
"""
Explanation: Detection
End of explanation
"""
|
feststelltaste/software-analytics
|
demos/20181213_EuregJUG_Aachen/No Go Areas.ipynb
|
gpl-3.0
|
import pandas as pd
log = pd.read_csv("../../../software-data/projects/linux/linux_blame_log.csv.gz")
log.head()
log.info()
top10 = log['author'].value_counts().head(10)
top10
%matplotlib inline
top10.plot.pie();
"""
Explanation: Version control systems are an incredible source of information for analyzing software systems and their evolution. Even simple information about which author changed what and when reveals some very interesting things.
In this article we look at what we can find out with a so-called git blame log. A git blame log contains information about which author made which line change in which file. If we take only the most recent change of each line per source code file, we can potentially estimate who might still know something about that line. From that, we can compute who might still know something about a complete file as well as a component (whatever that may be).
As a basis for all of this, however, some preparatory work is necessary: we need a git blame for every file of our software system (a hypothetical sketch of how such a log could be collected follows below).
End of explanation
"""
log['timestamp'] = pd.to_datetime(log['timestamp'])
log.head()
log['age'] = pd.Timestamp("today") - log['timestamp']
log.head()
log['component'] = log['path'].str.split("/").str[:2].str.join(":")
log.head()
age_per_component = log.groupby('component')['age'].min().sort_values()
age_per_component.head()
age_per_component.plot.bar(
title="Alter pro Komponente (in Jahren)",
figsize=[15,5]);
"""
Explanation: No-Go Areas
End of explanation
"""
knowledge = log.groupby(
['path', 'author']).agg(
{'timestamp':'min', 'line':'count'}
)
knowledge.head()
"""
Explanation: Knowledge islands
Assessing the existing knowledge
...based on the most recently changed source code lines
Grouping with minimum timestamp and line count
=> Most recent change and number of changed lines per file and author
End of explanation
"""
knowledge['all'] = knowledge.groupby('path')['line'].transform('sum')
knowledge['knowing'] = knowledge['line'] / knowledge['all']
knowledge.head()
"""
Explanation: Computing knowledge shares
=> Percentage share of the most recently changed lines per file and author
End of explanation
"""
max_knowledge_per_file = knowledge.groupby(['path'])['knowing'].transform(max)
knowledge_carriers = knowledge[knowledge['knowing'] == max_knowledge_per_file]
knowledge_carriers = knowledge_carriers.reset_index(level=1)
knowledge_carriers.head()
"""
Explanation: Identifying the maximum knowledge per file
=> Main author per file
End of explanation
"""
from ausi import d3
d3.create_json_for_zoomable_circle_packing(
knowledge_carriers.reset_index(),
'author',
'author',
'path',
'/',
'all',
'knowing',
'linux_circle_packing'
)
"""
Explanation: Creating the visualization
=> Export to the D3 visualization "Zoomable Circle Packing"
End of explanation
"""
|
sbenthall/bigbang
|
examples/experimental_notebooks/Analyze Senders.ipynb
|
agpl-3.0
|
%matplotlib inline
"""
Explanation: This notebook shows how BigBang can help you analyze the senders in a particular mailing list archive.
First, use this IPython magic to tell the notebook to display matplotlib graphics inline. This is a nice way to display results.
End of explanation
"""
import bigbang.mailman as mailman
import bigbang.graph as graph
import bigbang.process as process
from bigbang.parse import get_date
from bigbang.archive import Archive
reload(process)
"""
Explanation: Import the BigBang modules as needed. These should be in your Python environment if you've installed BigBang correctly.
End of explanation
"""
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import numpy as np
import math
import pytz
import pickle
import os
pd.options.display.mpl_style = 'default' # pandas has a set of preferred graph formatting options
"""
Explanation: Also, let's import a number of other dependencies we'll use later.
End of explanation
"""
urls = ["http://www.ietf.org/mail-archive/text/ietf-privacy/",
"http://lists.w3.org/Archives/Public/public-privacy/"]
mlists = [mailman.open_list_archives(url,"../archives") for url in urls]
activities = [Archive.get_activity(Archive(ml)) for ml in mlists]
"""
Explanation: Now let's load the data for analysis.
End of explanation
"""
a = activities[1] # for the first mailing list
ta = a.sum(0) # sum along the first axis
ta.sort()
ta[-10:].plot(kind='barh', width=1)
"""
Explanation: This variable is for the range of days used in computing rolling averages.
Now, let's see: who are the authors of the most messages to one particular list?
End of explanation
"""
levdf = process.sorted_matrix(a) # creates a slightly more nuanced edit distance matrix
# and sorts by rows/columns that have the best candidates
levdf_corner = levdf.iloc[:25,:25] # just take the top 25
fig = plt.figure(figsize=(15, 12))
plt.pcolor(levdf_corner)
plt.yticks(np.arange(0.5, len(levdf_corner.index), 1), levdf_corner.index)
plt.xticks(np.arange(0.5, len(levdf_corner.columns), 1), levdf_corner.columns, rotation='vertical')
plt.colorbar()
plt.show()
"""
Explanation: This might be useful for seeing the distribution (does the top message sender dominate?) or for identifying key participants to talk to.
Many mailing lists will have some duplicate senders: individuals who use multiple email addresses or are recorded as different senders when using the same email address. We want to identify those potential duplicates in order to get a more accurate representation of the distribution of senders.
To begin with, let's calculate the similarity of the From strings, based on the Levenshtein distance.
End of explanation
"""
consolidates = []
# gather pairs of names which have a distance of less than 10
for col in levdf.columns:
for index, value in levdf.loc[levdf[col] < 10, col].iteritems():
if index != col: # the name shouldn't be a pair for itself
consolidates.append((col, index))
print str(len(consolidates)) + ' candidates for consolidation.'
c = process.consolidate_senders_activity(a, consolidates)
print 'We removed: ' + str(len(a.columns) - len(c.columns)) + ' columns.'
"""
Explanation: For this still-naive measure (edit distance on a normalized string), it appears that there are many duplicates in the <10 range, but above that threshold the edit distance between short email addresses at common domain names can take over.
End of explanation
"""
lev_c = process.sorted_matrix(c)
levc_corner = lev_c.iloc[:25,:25]
fig = plt.figure(figsize=(15, 12))
plt.pcolor(levc_corner)
plt.yticks(np.arange(0.5, len(levc_corner.index), 1), levc_corner.index)
plt.xticks(np.arange(0.5, len(levc_corner.columns), 1), levc_corner.columns, rotation='vertical')
plt.colorbar()
plt.show()
"""
Explanation: We can create the same color plot with the consolidated dataframe to see how the distribution has changed.
End of explanation
"""
fig, axes = plt.subplots(nrows=2, figsize=(15, 12))
ta = a.sum(0) # sum along the first axis
ta.sort()
ta[-20:].plot(kind='barh',ax=axes[0], width=1, title='Before consolidation')
tc = c.sum(0)
tc.sort()
tc[-20:].plot(kind='barh',ax=axes[1], width=1, title='After consolidation')
plt.show()
"""
Explanation: Of course, there are still some duplicates, mostly people who are using the same name, but with a different email address at an unrelated domain name.
How does our consolidation affect the graph of distribution of senders?
End of explanation
"""
grouped = tc.groupby(process.domain_name_from_email)
domain_groups = grouped.size()
domain_groups.sort(ascending=True)
domain_groups[-20:].plot(kind='barh', width=1, title="Number of participants at domain")
"""
Explanation: Okay, not dramatically different, but the consolidation makes the head heavier. There are more people close to the high end: a stronger core group, rather than a distribution that tapers smoothly down from one or two dominant senders.
We could also use sender email addresses as a naive inference for affiliation, especially for mailing lists where corporate/organizational email addresses are typically used.
Pandas lets us group by the results of a keying function, which we can use to group participants sending from email addresses with the same domain.
End of explanation
"""
domain_messages_sum = grouped.sum()
domain_messages_sum.sort(ascending=True)
domain_messages_sum[-20:].plot(kind='barh', width=1, title="Number of messages from domain")
"""
Explanation: We can also aggregate the number of messages that come from addresses at each domain.
End of explanation
"""
|
ndanielsen/dc_parking_violations_data
|
notebooks/Top 15 Violations by Revenue And Total for MD.ipynb
|
mit
|
dc_df = df[(df.rp_plate_state.isin(['MD']))]
dc_fines = dc_df.groupby(['violation_code']).fine.sum().reset_index('violation_code')
fine_codes_15 = dc_fines.sort_values(by='fine', ascending=False)[:15]
top_codes = dc_df[dc_df.violation_code.isin(fine_codes_15.violation_code)]
top_violation_by_state = top_codes.groupby(['violation_description']).fine.sum()
ax = top_violation_by_state.plot.barh()
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f'))
plt.draw()
top_violation_by_state = top_codes.groupby(['violation_description']).counter.sum()
ax = top_violation_by_state.plot.barh()
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f'))
plt.draw()
"""
Explanation: MD Top 15 violations by total revenue (revenue and total)
End of explanation
"""
dc_df = df[(df.rp_plate_state.isin(['MD']))]
dc_fines = dc_df.groupby(['violation_code']).counter.sum().reset_index('violation_code')
fine_codes_15 = dc_fines.sort_values(by='counter', ascending=False)[:15]
top_codes = dc_df[dc_df.violation_code.isin(fine_codes_15.violation_code)]
top_violation_by_state = top_codes.groupby(['violation_description']).fine.sum()
ax = top_violation_by_state.plot.barh()
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f'))
plt.draw()
top_violation_by_state = top_codes.groupby(['violation_description']).counter.sum()
ax = top_violation_by_state.plot.barh()
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f'))
plt.draw()
"""
Explanation: Areas to explore
Failure to secure DC tags --- a huge revenue maker
Residential Parking beyond permit period
Park at Expired Meter
MD Top 15 violations by total tickets (revenue and total)
End of explanation
"""
|
wbinventor/openmc
|
examples/jupyter/mgxs-part-ii.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
"""
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features:
Creation of multi-group cross sections on a heterogeneous geometry
Calculation of cross sections on a nuclide-by-nuclide basis
The use of tally precision triggers with multi-group cross sections
Built-in features for energy condensation in downstream data processing
The use of the openmc.data module to plot continuous-energy vs. multi-group cross sections
Validation of multi-group cross sections with OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system in order to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
"""
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
"""
Explanation: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
End of explanation
"""
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
"""
Explanation: With our materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.region = +min_x & -max_x & +min_y & -max_y
root_cell.fill = pin_cell_universe
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
"""
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 10,000 particles.
End of explanation
"""
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
"""
Explanation: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
End of explanation
"""
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells().values()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)
xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
"""
Explanation: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
End of explanation
"""
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1E-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
"""
Explanation: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
End of explanation
"""
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain to the cell
xs_library[cell.id][rxn_type].domain = cell
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
End of explanation
"""
# Run OpenMC
openmc.run()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.082.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
"""
Explanation: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
"""
Explanation: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Extracting and Storing MGXS Data
Let's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
"""
Explanation: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
End of explanation
"""
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
"""
Explanation: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame .
End of explanation
"""
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
"""
Explanation: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
End of explanation
"""
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
"""
Explanation: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
End of explanation
"""
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
"""
Explanation: Verification with OpenMOC
Now, let's verify our cross sections using OpenMOC. First, we construct an equivalent OpenMOC geometry.
End of explanation
"""
# Get all OpenMOC cells in the geometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
"""
Explanation: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
End of explanation
"""
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
"""
Explanation: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
End of explanation
"""
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
"""
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
End of explanation
"""
# Create a figure of the U-235 continuous-energy fission cross section
fig = openmc.plot_xs('U235', ['fission'])
# Get the axis to use for plotting the MGXS
ax = fig.gca()
# Extract energy group bounds and MGXS values to plot
fission = xs_library[fuel_cell.id]['fission']
energy_groups = fission.energy_groups
x = energy_groups.group_edges
y = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound
x[0] = 1.e-5
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
ax.plot(x, y, drawstyle='steps', color='r', linewidth=3)
ax.set_title('U-235 Fission Cross Section')
ax.legend(['Continuous', 'Multi-Group'])
ax.set_xlim((x.min(), x.max()))
"""
Explanation: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Visualizing MGXS Data
It is often insightful to generate visual depictions of multi-group cross sections. There are many different types of plots which may be useful for multi-group cross section visualization, only a few of which will be shown here for enrichment and inspiration.
One particularly useful visualization is a comparison of the continuous-energy and multi-group cross sections for a particular nuclide and reaction type. We illustrate one option for generating such plots with the use of the openmc.plotter module to plot continuous-energy cross sections from the openly available cross section library distributed by NNDC.
The MGXS data can also be plotted using the openmc.plot_xs command; however, we will do this manually here to show how the openmc.Mgxs.get_xs method can be used to obtain data.
End of explanation
"""
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.values
o16 = o16.values
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
"""
Explanation: Another useful type of illustration is the sparsity structure of the scattering matrices. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
End of explanation
"""
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Show the plot on screen
plt.show()
"""
Explanation: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
End of explanation
"""
|
brandoncgay/deep-learning
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
mit
|
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='inputs_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
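For example, a tiny helper built on that idea might look like this (a sketch; the exercises in this notebook simply inline the same tf.maximum call instead of defining a function):
python
def leaky_relu(x, alpha=0.01):
    # alpha * x for negative inputs, x unchanged for positive inputs
    return tf.maximum(alpha * x, x)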
Tanh Output
The generator has been found to perform best with a $tanh$ activation on its output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
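Concretely, the rescaling is a one-liner; a standalone sketch with random stand-in data (the training loop later applies the same line to real MNIST batches):
python
import numpy as np
pixels = np.random.rand(16, 784)   # stand-in for a batch of MNIST images with values in [0, 1]
pixels = pixels * 2 - 1            # now in [-1, 1], matching the range of the tanh output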
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits: the sigmoid output and the raw logits of the discriminator
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates that network's variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
diging/tethne-notebooks
|
5. Co-citation analysis.ipynb
|
gpl-3.0
|
%pylab inline
import matplotlib.pyplot as plt
"""
Explanation: Introduction to Tethne: Co-citation analysis
In this workbook we will conduct a co-citation analysis using the approach outlined in Chen (2009). If you have used the Java-based desktop application CiteSpace II, this should be familiar: this is the same methodology that is implemented in that application.
Before you start
Download the practice dataset from here, and store it in a place where you can find it. You'll need the full path to your dataset.
Complete the tutorial "Time-variant networks"
End of explanation
"""
from tethne.readers import wos
datadirpath = '/Users/erickpeirson/Projects/tethne-notebooks/data/wos'
MyCorpus = wos.read(datadirpath)
"""
Explanation: Background
Co-citation analysis gained popularity in the 1970s as a technique for “mapping” scientific literatures, and for finding latent semantic relationships among technical publications.
Two papers are co-cited if they are both cited by the same, third, paper. The standard approach to co-citation analysis is to generate a sample of bibliographic records from a particular field by using certain keywords or journal names, and then build a co-citation graph describing relationships among their cited references. Thus the majority of papers that are represented as nodes in the co-citation graph are not papers that responded to the selection criteria used to build the dataset.
Our objective in this tutorial is to identify papers that bridge the gap between otherwise disparate areas of knowledge in the scientific literature. In this tutorial, we rely on the theoretical framework described in Chen (2006) and Chen et al. (2009).
According to Chen, we can detect potentially transformative changes in scientific knowledge by looking for cited references that both (a) rapidly accrue citations, and (b) have high betweenness-centrality in a co-citation network. It helps if we think of each scientific paper as representing a “concept” (its core knowledge claim, perhaps), and a co-citation event as representing a proposition connecting two concepts in the knowledge-base of a scientific field. If a new paper emerges that is highly co-cited with two otherwise-distinct clusters of concepts, then that might mean that the field is adopting new concepts and propositions in a way that is structurally radical for their conceptual framework.
Chen (2009) introduces sigma ($\Sigma$) as a metric for potentially transformative cited references:
$$
\Sigma(v) = (g(v) + 1)^{burstness(v)}
$$
...where the betweenness centrality of each node v is:
$$
g(v) = \sum\limits_{i\neq j\neq v} \frac{\sigma_{ij} (v)}{\sigma_{ij}}
$$
...where $\sigma_{ij}$ is the number of shortest paths from node i to node j and $\sigma_{ij} (v)$ is the number of those paths that pass through v. Burstness (normalized to 0-1) is estimated using Kleinberg's (2002) automaton model, and is designed to detect rate-spikes around features in a stream of documents.
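As a toy illustration of these two ingredients (hypothetical numbers and a toy graph, unrelated to the WoS dataset; assumes networkx is installed, which Tethne depends on):
python
import networkx as nx
G = nx.path_graph(5)                    # 0-1-2-3-4; node 2 lies on the most shortest paths
print(nx.betweenness_centrality(G))     # node 2 gets the highest betweenness value
g, burst = 0.5, 0.8                     # hypothetical centrality and normalized burstness
print((g + 1) ** burst)                 # sigma ~ 1.38; larger values flag candidate transformative references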
Loading data
We'll use the same WoS dataset as in previous workbooks...
End of explanation
"""
years, values = MyCorpus.distribution(window_size=2)
plt.plot(years, values)
plt.show()
"""
Explanation: Time-slicing
Our first decision is about the time-resolution for our analysis. In this tutorial, we'll slice our Corpus into two-year sequential time periods.
Note: in previous versions of Tethne, slice() created new slice indices, which could then be accessed by other methods. As of v0.7, slice() returns a generator that yields subcorpora.
End of explanation
"""
from tethne import GraphCollection
CoCitation = GraphCollection()
CoCitation.build(MyCorpus, 'cocitation', method_kwargs={'min_weight': 3})
"""
Explanation: Co-citation graph
We will use the GraphCollection.build method to generate a cocitation GraphCollection.
The method_kwargs parameter lets us set keyword arguments for the networks.papers.cocitation graph builder. min_weight sets the minimum number of cocitations for an edge to be included in the graph.
End of explanation
"""
from tethne.analyze.corpus import burstness
B = burstness(MyCorpus, 'citations', k=5, topn=5, perslice=True)
B.items()[1]
"""
Explanation: Burstness
Kleinberg's (2002) burstness model is a popular approach for detecting "bursts" of interest or activity in streams of data (e.g. identifying trending terms in Twitter feeds). Chen (2009) suggests that we apply this model to citations. The idea is that the (observed) frequency with which a reference is cited is a product of an (unobserved) level or state of interest surrounding that citation. Kleinberg uses a hidden Markov model to infer the most likely sequence of "burstness" states for an event (a cited reference, in our case) over time. His algorithm is implemented in tethne.analyze.corpus.burstness(), and can be used for any feature in our Corpus.
Since citations are features in our Corpus, we can use the burstness function in tethne.analyze.corpus to get the burstness profiles for the top-cited reference in our dataset.
End of explanation
"""
from tethne.plot import plot_burstness
plot_burstness(MyCorpus, B)
"""
Explanation: We can visualize the results of the burstness algorithm using the plot_burstness() function in tethne.plot.
End of explanation
"""
from tethne.analyze.corpus import sigma
S = sigma(CoCitation, MyCorpus, 'citations')
S.items()[0:10]
"""
Explanation: Burstness values are normalized with respect to the highest possible burstness state. In other words, a burstness of 1.0 corresponds to the highest possible state.
Years prior to the first occurrence of each feature are grayed out. Periods in which the feature was bursty are depicted by colored blocks, the opacity of which indicates burstness intensity.
Sigma, $\Sigma$
Chen (2009) proposed sigma ($\Sigma$) as a metric for potentially transformative cited references:
$$
\Sigma(v) = (g(v) + 1)^{burstness(v)}
$$
The module analyze.corpus provides methods for calculating $\Sigma$ from a cocitation GraphCollection and a Corpus in one step.
End of explanation
"""
from tethne.plot import plot_sigma
fig = plot_sigma(MyCorpus, S, topn=5, perslice=True) # The top 5 citations per slice.
"""
Explanation: The method plot_sigma generates a figure that shows $\Sigma$ values for the top nodes in the corpus.
End of explanation
"""
CoCitation.values()[-1].nodes(data=True)[0] # Attributes for a node in the GraphCollection
"""
Explanation: The nodes in our CoCitation GraphCollection were updated with a new 'sigma' node attribute.
End of explanation
"""
from tethne.writers import collection
outpath = '/Users/erickpeirson/Projects/tethne-notebooks/output/my_cocitation.xgmml'
collection.to_dxgmml(CoCitation, outpath)
"""
Explanation: Export and visualize
We can export our CoCitation GraphCollection using tethne.writers.collection.to_dxgmml.
End of explanation
"""
|
AndreySheka/dl_ekb
|
hw6/Seminar 6 - segmentation.ipynb
|
mit
|
! wget https://www.dropbox.com/s/o8loqc5ih8lp2m9/weights.pkl?dl=0
! wget https://www.dropbox.com/s/jy34yowcf85ydba/data.zip?dl=0
! unzip -q data.zip
import scipy as sp
import scipy.misc
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
"""
Explanation: Seminar 6 - Neural networks for segmentation
End of explanation
"""
# Human HT29 colon-cancer cells
plt.figure(figsize=(10,8))
plt.subplot(1,2,1)
im = sp.misc.imread('BBBC018_v1_images-fixed/train/00735-actin.DIB.bmp')
plt.imshow(im)
plt.subplot(1,2,2)
mask = sp.misc.imread('BBBC018_v1_outlines/train/00735-cells.png')
plt.imshow(mask, 'gray')
"""
Explanation: The task for this week: train a network to detect cell edges.
End of explanation
"""
def get_valid_patches(img_shape, patch_size, central_points):
start = central_points - patch_size/2
end = start + patch_size
mask = np.logical_and(start >= 0, end < np.array(img_shape))
mask = np.all(mask, axis=-1)
return mask
def extract_patches(img, mask, n_pos=64, n_neg=64, patch_size=100):
res = []
labels = []
pos = np.argwhere(mask > 0)
accepted_patches_mask = get_valid_patches(np.array(img.shape[:2]), patch_size, pos)
pos = pos[accepted_patches_mask]
np.random.shuffle(pos)
for i in range(n_pos):
start = pos[i] - patch_size/2
end = start + patch_size
res.append(img[start[0]:end[0], start[1]:end[1]])
labels.append(1)
neg = np.argwhere(mask == 0)
accepted_patches_mask = get_valid_patches(np.array(img.shape[:2]), patch_size, neg)
neg = neg[accepted_patches_mask]
np.random.shuffle(neg)
for i in range(n_neg):
start = neg[i] - patch_size/2
end = start + patch_size
res.append(img[start[0]:end[0], start[1]:end[1]])
labels.append(0)
return np.array(res), np.array(labels)
patches, labels = extract_patches(im, mask, 32,32)
plt.imshow(patches[0])
from lasagne.layers import InputLayer
from lasagne.layers import DenseLayer
from lasagne.layers import NonlinearityLayer
from lasagne.layers import Pool2DLayer as PoolLayer
from lasagne.layers import Conv2DLayer as ConvLayer
from lasagne.layers import BatchNormLayer, batch_norm
from lasagne.nonlinearities import softmax
import theano.tensor as T
import pickle
import lasagne.layers
import theano
with open('weights.pkl') as f:
weights = pickle.load(f)
def build_network(weights):
net = {}
net['input'] = InputLayer((None, 3, 100, 100))
net['conv1_1'] = batch_norm(ConvLayer(net['input'], num_filters=64, filter_size=3, pad=0, flip_filters=False,
W=weights['conv1_1_w'], b=weights['conv1_1_b']),
beta=weights['conv1_1_bn_beta'], gamma=weights['conv1_1_bn_gamma'], epsilon=1e-6)
net['conv1_2'] = batch_norm(ConvLayer(net['conv1_1'], num_filters=64, filter_size=3, pad=0, flip_filters=False,
W=weights['conv1_2_w'], b=weights['conv1_2_b']),
beta=weights['conv1_2_bn_beta'], gamma=weights['conv1_2_bn_gamma'], epsilon=1e-6)
net['pool1'] = PoolLayer(net['conv1_2'], pool_size=2)
net['conv2_1'] = batch_norm(ConvLayer(net['pool1'], num_filters=128, filter_size=3, pad=0, flip_filters=False,
W=weights['conv2_1_w'], b=weights['conv2_1_b']),
beta=weights['conv2_1_bn_beta'], gamma=weights['conv2_1_bn_gamma'], epsilon=1e-6)
net['conv2_2'] = batch_norm(ConvLayer(net['conv2_1'], num_filters=128, filter_size=3, pad=0, flip_filters=False,
W=weights['conv2_2_w'], b=weights['conv2_2_b']),
beta=weights['conv2_2_bn_beta'], gamma=weights['conv2_2_bn_gamma'], epsilon=1e-6)
net['pool2'] = PoolLayer(net['conv2_2'], pool_size=2)
net['conv3_1'] = batch_norm(ConvLayer(net['pool2'], num_filters=256, filter_size=3, pad=0, flip_filters=False,
W=weights['conv3_1_w'], b=weights['conv3_1_b']),
beta=weights['conv3_1_bn_beta'], gamma=weights['conv3_1_bn_gamma'], epsilon=1e-6)
net['conv3_2'] = batch_norm(ConvLayer(net['conv3_1'], num_filters=256, filter_size=3, pad=0, flip_filters=False,
W=weights['conv3_2_w'], b=weights['conv3_2_b']),
beta=weights['conv3_2_bn_beta'], gamma=weights['conv3_2_bn_gamma'], epsilon=1e-6)
net['pool3'] = PoolLayer(net['conv3_2'], pool_size=2)
net['fc1'] = batch_norm(DenseLayer(net['pool3'], num_units=512,
W=weights['fc1_w'],
b=weights['fc1_b']),
beta=weights['fc1_bn_beta'], gamma=weights['fc1_bn_gamma'], epsilon=1e-6)
net['fc2'] = DenseLayer(net['fc1'], num_units=2, W=weights['fc2_w'], b=weights['fc2_b'])
net['prob'] = NonlinearityLayer(net['fc2'], softmax)
return net
net = build_network(weights)
input_image = T.tensor4('input')
prob = lasagne.layers.get_output(net['prob'], input_image, batch_norm_use_averages=False)
get_probs = theano.function([input_image], prob)
def preproces(patches):
patches = patches.astype(np.float32)
patches = patches / 255 - 0.5
patches = patches.transpose(0,3,1,2)
return patches
predictions = get_probs(preproces(patches)).argmax(axis=-1)
print predictions
print (predictions == labels).mean()
np.mean(predictions[:32] == 1), np.mean(predictions[32:] == 0)
"""
Explanation: The most natural way (though not the most efficient one) is to reduce the segmentation task to classifying individual image patches. The obvious advantage of this reduction is that humanity has already come up with plenty of good architectures for classification networks (thanks, ImageNet), whereas for segmentation networks the picture is not nearly as settled yet.
End of explanation
"""
from lasagne.layers import DilatedConv2DLayer as DilatedConvLayer
def dilated_pool2x2(incoming, dilation_rate):
d,input_h,input_w = incoming.output_shape[-3:]
#print "dilated pool", input_h, input_w
# 1. padding
h_remainer = input_h % dilation_rate
w_remainer = input_w % dilation_rate
h_pad = 0 if h_remainer == 0 else dilation_rate - h_remainer
w_pad = 0 if w_remainer == 0 else dilation_rate - w_remainer
#print h_pad, w_pad
incoming_padded = lasagne.layers.PadLayer(incoming, width=[(0, h_pad), (0, w_pad)], batch_ndim=2)
h,w = incoming_padded.output_shape[-2:]
assert h % dilation_rate == 0, "{} {}".format(h, dilation_rate)
assert w % dilation_rate == 0, "{} {}".format(w, dilation_rate)
# 2. reshape and transpose
incoming_reshaped = lasagne.layers.ReshapeLayer(
incoming_padded, ([0], [1], h/dilation_rate, dilation_rate, w/dilation_rate, dilation_rate))
incoming_transposed = lasagne.layers.DimshuffleLayer(incoming_reshaped,
(0, 1,3,5,2,4))
incoming_reshaped = lasagne.layers.ReshapeLayer(incoming_transposed, ([0], -1, [4], [5]))
# 3. max pool
incoming_pooled = PoolLayer(incoming_reshaped, pool_size=2, stride=1)
# 4. reshape
pooled_reshaped = lasagne.layers.ReshapeLayer(incoming_pooled, ([0], d, dilation_rate, dilation_rate, [2], [3]))
pooled_transposed = lasagne.layers.DimshuffleLayer(pooled_reshaped, (0, 1, 4, 2, 5, 3))
pooled_reshaped = lasagne.layers.ReshapeLayer(pooled_transposed, ([0], [1], h - dilation_rate, w - dilation_rate))
# 5. crop
result = lasagne.layers.SliceLayer(pooled_reshaped, indices=slice(0, input_h - dilation_rate), axis=2)
result = lasagne.layers.SliceLayer(result, indices=slice(0, input_w - dilation_rate), axis=3)
return result
"""
Explanation: Question: does that mean that if we want to segment an image, we have to extract a patch for every single pixel and push each one through the network independently?
Answer: no, we can modify the original network so that it accepts an image of arbitrary size and returns class probabilities for every pixel. And that is the task for today's seminar!
What we will need:
- get rid of the fully-connected layers by turning them into equivalent convolutional layers (see the sketch after this list);
- get rid of the strides in pooling, which shrink the image;
- switch from ordinary convolutions and poolings to dilated convolutions and dilated poolings.
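As a rough sketch of the first point only (an illustration under assumed shapes, not the assignment solution, and ignoring the batch-norm wrapper): in the network above, pool3 outputs 256x9x9 feature maps, so the 512-unit DenseLayer is equivalent to a Conv2DLayer with 512 filters of size 9x9, and its weight matrix can simply be transposed and reshaped:
python
# assumes weights['fc1_w'] is a numpy array of shape (256*9*9, 512)
W_fc = weights['fc1_w']
W_conv = W_fc.T.reshape(512, 256, 9, 9)   # (num_filters, channels, rows, cols)
fc1_as_conv = ConvLayer(net['pool3'], num_filters=512, filter_size=9,
                        W=W_conv, b=weights['fc1_b'],
                        flip_filters=False, nonlinearity=None)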
End of explanation
"""
def build_network2(weights):
net = {}
dilation = 1
net['input'] = InputLayer((None, 3, 200, 200))
# TODO
# you may copy-paste original function and fix it
#net['conv1_1'] =
# ...
#net['fc2'] =
#net['prob'] = NonlinearityLayer(net['fc2'], softmax)
print "output_shape", net['fc2'].output_shape
return net
net2 = build_network2(weights)
input_image = T.tensor4('input')
fc2 = lasagne.layers.get_output(net2['fc2'], input_image, batch_norm_use_averages=False)
get_fc2 = theano.function([input_image], fc2)
"""
Explanation: Watch out for the pitfall hidden in Lasagne's implementation of dilated convolution. Here is the description of the W parameter from the documentation:
W : Theano shared variable, expression, numpy array or callable
Initial value, expression or initializer for the weights. These should be a 4D tensor with shape (num_input_channels, num_filters, filter_rows, filter_columns). Note that the first two dimensions are swapped compared to a non-dilated convolution.
End of explanation
"""
%time predictions = get_fc2(preproces(im[None,:200, :200])).transpose(0,2,3,1)
predictions.shape
plt.figure(figsize=(12,8))
plt.subplot(1,3,1)
plt.imshow(predictions[0].argmax(axis=-1), plt.cm.gray)
plt.title('predicted')
plt.subplot(1,3,2)
plt.imshow(im[49:200-50,49:200-50])
plt.title('input')
plt.subplot(1,3,3)
plt.imshow(mask[49:200-50,49:200-50], 'gray')
plt.title('gt')
"""
Explanation: Let's take a look at what we got
End of explanation
"""
|
corochann/deep-learning-tutorial-with-chainer
|
src/04_cifar_cnn/image_processing_basic.ipynb
|
mit
|
import os
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
def readRGBImage(imagepath):
image = cv2.imread(imagepath) # Height, Width, Channel
(major, minor, _) = cv2.__version__.split(".")
if major == '3':
# version 3 is used, need to convert
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
else:
# Version 2 is used, not necessary to convert
pass
return image
"""
Explanation: Basic Image Processing tutorial
This notebook introduces basic image data manipulation using the OpenCV library.
The sample image is obtained from PEXELS.
OpenCV is an image processing library which supports
- loading images in numpy.ndarray format and saving images
- converting image color formats (RGB, YUV, gray scale etc.)
- resizing
and other useful image processing functionality.
To install opencv, execute
$conda install -c https://conda.binstar.org/menpo -y opencv3
End of explanation
"""
# Read image from file, save image with matplotlib using `imshow` function
basedir = './src/cnn/images'
imagepath = os.path.join(basedir, 'sample.jpeg')
#image = cv2.imread(imagepath, cv2.IMREAD_GRAYSCALE)
image = readRGBImage(imagepath)
# Width and Height shows pixel size of this image
# Channel=3 indicates the RGB channel
print('image.shape (Height, Width, Channel) = ', image.shape)
# Save image with openCV
# This may be blue image because the color format RGB is opposite.
cv2.imwrite('./src/cnn/images/out.jpg', image)
# bgr_image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
# cv2.imwrite('./src/cnn/images/out.jpg', bgr_image)
# Plotting
plt.imshow(image)
plt.savefig('./src/cnn/images/out_plt.jpg')
"""
Explanation: Loading and saving images
cv2.imread is used for loading an image.
cv2.imwrite is used for saving an image.
plt.imshow is used for plotting, and plt.savefig for saving the plotted figure.
An OpenCV image is usually a 3-dimensional array (or 2-dimensional if the image is gray scale).
The 1st dimension is height,
the 2nd dimension is width,
the 3rd dimension is the channel (RGB, YUV etc.).
To convert the color format, cv2.cvtColor can be used.
Details are written in the next section.
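As a preview of that section, the conversion itself is a single call on the image loaded above:
python
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR (OpenCV) -> RGB (matplotlib)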
End of explanation
"""
gray_image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# Gray scale image is 2 dimension, No channel dimension.
print('gray_image.shape (Height, Width) = ', gray_image.shape)
cv2.imwrite('./src/cnn/images/out_gray.jpg', gray_image)
"""
Explanation: Change color format
cv2.cvtColor is used for converting the color format.
Note that OpenCV version 3 reads the image color in the order B, G, R.
However, matplotlib deals with the image color in the order R, G, B.
So you need to convert the color order; refer to the readRGBImage function.
If the image is gray scale, the image is a 2-dimensional array:
the 1st dimension is height,
the 2nd dimension is width.
End of explanation
"""
%matplotlib inline
print('image.shape (Height, Width, Channel) = ', image.shape)
# Resize image to half size
height, width = image.shape[:2]
half_image = cv2.resize(image, (width//2, height//2)) # size must be int
print('half_image.shape (Height, Width, Channel) = ', half_image.shape)
plt.imshow(half_image)
plt.savefig('./src/cnn/images/out_half.jpg')
# Resize image by specifying longer side size
def resize_longedge(image, pixel):
"""Resize the input image
Longer edge size will be `pixel`, and aspect ratio doesn't change
"""
height, width = image.shape[:2]
longer_side = max(height, width)
ratio = float(pixel) / longer_side
return cv2.resize(image, None, fx=ratio, fy=ratio) # size must be int
resized128_image = resize_longedge(image, 128)
print('resized128_image.shape (Height, Width, Channel) = ', resized128_image.shape)
plt.imshow(resized128_image)
plt.savefig('./src/cnn/images/out_resized128.jpg')
"""
Explanation: Resize
cv2.resize is used for resizing.
Note that the size should be specified in the order (width, height).
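For example, with the image loaded earlier (output size values chosen arbitrarily):
python
resized = cv2.resize(image, (320, 240))  # note: (width, height), not (height, width)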
End of explanation
"""
%matplotlib inline
# Crop center of half_image
height, width = half_image.shape[:2]
crop_length = min(height, width)
height_start = (height - crop_length) // 2
width_start = (width - crop_length) // 2
cropped_image = half_image[
height_start:height_start+crop_length,
width_start:width_start+crop_length,
:]
print('cropped_image.shape (Height, Width, Channel) = ', cropped_image.shape)
plt.imshow(cropped_image)
plt.savefig('./src/cnn/images/out_cropped.jpg')
"""
Explanation: Crop
numpy slicing can be used for cropping the image.
End of explanation
"""
%matplotlib inline
# Show RGB channel separately in gray scale
fig, axes = plt.subplots(1, 3)
# image[:, :, 0] is R channel.
axes[0].set_title('R channel')
axes[0].imshow(image[:, :, 0], cmap='gray')
# image[:, :, 1] is G channel.
axes[1].set_title('G channel')
axes[1].imshow(image[:, :, 1], cmap='gray')
# image[:, :, 2] is B channel.
axes[2].set_title('B channel')
axes[2].imshow(image[:, :, 2], cmap='gray')
plt.savefig(os.path.join(basedir, 'RGB_gray.jpg'))
# Show RGB channel separately in color
fig, axes = plt.subplots(1, 3)
# image[:, :, 0] is R channel, replace the rest by 0.
imageR = image.copy()
imageR[:, :, 1:3] = 0
axes[0].set_title('R channel')
axes[0].imshow(imageR)
# image[:, :, 1] is G channel, replace the rest by 0.
imageG = image.copy()
imageG[:, :, [0, 2]] = 0
axes[1].set_title('G channel')
axes[1].imshow(imageG)
# image[:, :, 2] is B channel, replace the rest by 0.
imageB = image.copy()
imageB[:, :, 0:2] = 0
axes[2].set_title('B channel')
axes[2].imshow(imageB)
plt.savefig(os.path.join(basedir, 'RGB_color.jpg'))
"""
Explanation: Image processing with channels
RGB channel manipulation.
Understanding the meaning of "channel" is important in deep learning.
The code below provides some insight into what each channel represents.
End of explanation
"""
|
ling7334/tensorflow-get-started
|
mnist/Getting_Started_With_TensorFlow.ipynb
|
apache-2.0
|
import tensorflow as tf
"""
Explanation: Getting Started with TensorFlow
This tutorial gets you started programming with TensorFlow. Before starting, make sure you have TensorFlow installed. To use TensorFlow you need to understand:
* How to program in Python.
* At least the concept of arrays.
* Ideally, something about machine learning. But even if you don't, this tutorial is still a good place to start.
TensorFlow provides multiple APIs. The lowest-level API -- TensorFlow Core -- provides complete programming control. We recommend TensorFlow Core for machine learning researchers and others who require fine control over their models. The higher-level APIs are built on top of TensorFlow Core. These higher-level APIs are typically easier to learn and use than TensorFlow Core, and they make repetitive tasks easier and more consistent between different users. A high-level API like tf.contrib.learn helps you manage data sets, estimators, training and inference. Note that some of the high-level TensorFlow APIs -- those whose method names contain contrib -- are still under development. It is possible that some contrib methods will change or become obsolete in subsequent releases.
This tutorial begins with TensorFlow Core, and afterwards we will show how to use some of the models in tf.contrib.learn. Knowing the TensorFlow Core principles will help you understand how TensorFlow works internally.
Tensors
The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions. Here are a few examples of tensors:
python
3 # a rank 0 tensor: a scalar
[1. ,2., 3.] # a rank 1 tensor: a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor: a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
TensorFlow Core tutorial
Importing TensorFlow
The canonical import statement for TensorFlow programs is as follows:
End of explanation
"""
node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)
"""
Explanation: This gives Python access to all of TensorFlow's classes, methods, and symbols. Most of the documentation assumes you have already done this.
The computational graph
A TensorFlow Core program typically consists of two distinct phases:
1. Building the computational graph
2. Running the computational graph
A computational graph is a series of TensorFlow operations arranged into a graph of nodes. Let's build a simple computational graph. Each node takes zero or more tensors as inputs and produces a tensor as an output. A constant is one kind of node. Like all TensorFlow constants, a constant takes no inputs, and it outputs a value it stores internally. We create two floating-point tensors node1 and node2:
End of explanation
"""
sess = tf.Session()
print(sess.run([node1, node2]))
"""
Explanation: Notice that printing the nodes does not output the values 3.0 and 4.0. They only produce 3.0 and 4.0 when evaluated. To actually evaluate the nodes, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime.
The following code creates a session and then invokes its run method to run the computational graph and evaluate node1 and node2:
End of explanation
"""
node3 = tf.add(node1, node2)
print("node3: ", node3)
print("sess.run(node3): ",sess.run(node3))
"""
Explanation: We see the expected values of 3.0 and 4.0.
We can build more complicated computations by combining nodes with operations (operations are also nodes). For example, we can add our two constant nodes to produce a new graph:
End of explanation
"""
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
"""
Explanation: TensorFlow provides a visualization utility called TensorBoard that can display the computational graph. Here is how TensorBoard visualizes this graph:
This graph is not especially interesting because it always produces a constant result. A graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later, when the graph is evaluated.
End of explanation
"""
print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))
"""
Explanation: The preceding three lines are a bit like a function or a lambda in which we define two input parameters (a and b) and then an operation on them. We can use the feed_dict argument to supply tensors with concrete values for the placeholders when evaluating the graph:
End of explanation
"""
add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b:4.5}))
"""
Explanation: In TensorBoard, the computational graph looks like this:
We can make the computational graph more complex by adding other operations. For example,
End of explanation
"""
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
"""
Explanation: The preceding computational graph appears in TensorBoard as follows:
In machine learning we typically want a model that can take external inputs, such as the one above. To make the model trainable, we need to be able to modify the graph so that the same input can produce new outputs. Variables allow us to add trainable parameters to a graph. They are constructed with a type and an initial value:
End of explanation
"""
init = tf.global_variables_initializer()
sess.run(init)
"""
Explanation: Constants are initialized when you call tf.constant, and their values can never change. Variables, by contrast, are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation:
End of explanation
"""
print(sess.run(linear_model, {x:[1,2,3,4]}))
"""
Explanation: init is a handle to the TensorFlow sub-graph that initializes all the variables. Until we call sess.run, the variables remain uninitialized.
Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously:
End of explanation
"""
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
"""
Explanation: We've created a model, but we don't know yet how good it is. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function.
A loss function measures how far the current model is from the provided data. We'll use a standard loss model for linear regression: the sum of the squares of the deltas between the current model and the provided data.
linear_model - y creates a vector where each element is the corresponding example's error delta. We call tf.square to square those errors. Then we use tf.reduce_sum to sum all the squared errors and create a single scalar that represents the error of all the examples:
End of explanation
"""
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
"""
Explanation: We could improve this manually by reassigning the values of W and b to the perfect values of -1 and 1. A variable is initialized to the value given when tf.Variable is called, but it can be changed using an operation like tf.assign. For example, W=-1 and b=1 are the optimal parameters for this model. We can change them accordingly:
End of explanation
"""
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]})
print(sess.run([W, b]))
"""
Explanation: Here we guessed the "perfect" values of W and b, but the whole point of machine learning is to have the machine correct the model parameters automatically. We will show how to accomplish this in the next section.
tf.train API
A complete discussion of machine learning is beyond the scope of this tutorial. However, TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function. The simplest optimizer is gradient descent. It modifies each variable according to the magnitude of the derivative of the loss with respect to that variable. In general, computing symbolic derivatives by hand is tedious and error-prone. Consequently, given only a description of the model, TensorFlow can automatically produce derivatives using tf.gradients. For simplicity, optimizers typically do this for you. For example,
End of explanation
"""
import numpy as np
import tensorflow as tf
# Model parameters
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x:x_train, y:y_train})
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x:x_train, y:y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
"""
Explanation: Now we have done actual machine learning! Although this simple linear regression doesn't require much TensorFlow Core code, more complicated models and methods of feeding data into your model require more code. Thus TensorFlow provides higher-level abstractions for common patterns, structures, and functionality. We will learn how to use some of these abstraction layers in the next section.
Complete program
The complete trainable linear regression model is shown below:
End of explanation
"""
import tensorflow as tf
# NumPy is often used to load, manipulate and preprocess data.
import numpy as np
# Declare list of features. We only have one real-valued feature. There are many
# other types of columns that are more complicated and useful.
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]
# An estimator is the front end to invoke training (fitting) and evaluation
# (inference). There are many predefined types like linear regression,
# logistic regression, linear classification, logistic classification, and
# many neural network classifiers and regressors. The following code
# provides an estimator that does linear regression.
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)
# TensorFlow provides many helper methods to read and set up data sets.
# Here we use `numpy_input_fn`. We have to tell the function how many batches
# of data (num_epochs) we want and how big each batch should be.
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,
num_epochs=1000)
# We can invoke 1000 training steps by invoking the `fit` method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did. In a real example, we would want
# to use a separate validation and testing data set to avoid overfitting.
print(estimator.evaluate(input_fn=input_fn))
"""
Explanation: This more complicated program can still be visualized in TensorBoard.
tf.contrib.learn
tf.contrib.learn is a high-level TensorFlow library that simplifies the mechanics of machine learning, including the following:
* running training loops
* running evaluation loops
* managing data sets
* managing feeding
tf.contrib.learn defines many common models.
Basic usage
Notice how much simpler the linear regression program becomes with tf.contrib.learn:
End of explanation
"""
import numpy as np
import tensorflow as tf
# Declare list of features, we only have one real-valued feature
def model(features, labels, mode):
# Build a linear model and predict values
W = tf.get_variable("W", [1], dtype=tf.float64)
b = tf.get_variable("b", [1], dtype=tf.float64)
y = W*features['x'] + b
# Loss sub-graph
loss = tf.reduce_sum(tf.square(y - labels))
# Training sub-graph
global_step = tf.train.get_global_step()
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = tf.group(optimizer.minimize(loss),
tf.assign_add(global_step, 1))
# ModelFnOps connects subgraphs we built to the
# appropriate functionality.
return tf.contrib.learn.ModelFnOps(
mode=mode, predictions=y,
loss=loss,
train_op=train)
estimator = tf.contrib.learn.Estimator(model_fn=model)
# define our data set
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, 4, num_epochs=1000)
# train
estimator.fit(input_fn=input_fn, steps=1000)
# evaluate our model
print(estimator.evaluate(input_fn=input_fn, steps=10))
"""
Explanation: A custom model
tf.contrib.learn does not lock you into its predefined models. Suppose we want to create a custom model that is not built into TensorFlow. We can still retain the high-level abstractions of data sets, feeding, training, and so on from tf.contrib.learn. To illustrate, we will show how to implement our own equivalent of LinearRegressor using our knowledge of the lower-level API.
To define a custom model that works with tf.contrib.learn, we need to use tf.contrib.learn.Estimator. tf.contrib.learn.LinearRegressor is actually a subclass of tf.contrib.learn.Estimator. Instead of sub-classing Estimator, we simply provide Estimator with a function model_fn that tells tf.contrib.learn how to evaluate predictions, training steps, and loss. The code is as follows:
End of explanation
"""
|
y2ee201/Deep-Learning-Nanodegree
|
my-experiments/reinforcement learning/Frozen Lake.ipynb
|
mit
|
import gym
import tensorflow as tf
from collections import deque
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from gym import wrappers
import shutil
shutil.rmtree('./monitor')
env = gym.make('FrozenLake-v0')
env = wrappers.Monitor(env,'./monitor')
print(env.observation_space.n)
print(env.action_space.n)
"""
Explanation: OpenAI Frozen Lake v0
We are attempting to solve the openai frozen lake environment using reinforcement learning. Openai provides many environments for simulation.
Let us understand what reinforcement learning is and compare it with other available methodologies. The frozen lake problem involves a grid maze with a starting point and a goal. Along the way there are a few fatal pitfalls. Solving the frozen lake requires us to build an agent/bot who can traverse the maze and reach the goal without falling into the pitfalls. The duration of the travel is not a constraint but it is better if the bot can reach it quicker.
SFFF (S: starting point, safe)<br>
FHFH (F: frozen surface, safe)<br>
FFFH (H: hole, fall to your doom)<br>
HFFG (G: goal, where the frisbee is located)<br>
The frozen lake can be solved in a multitude of ways, from standard graph-search algorithms to optimization methods. There is one caveat though: the environment adds a small complexity to the mix. If our bot wants to move right, there is a slight chance it moves in a random direction instead. Apparently a gust of wind pushes us away. This randomness tends to break algorithms like graph search, which cannot handle random transition probabilities. One worthy method for solving this stochastic environment is Reinforcement Learning.
Framework
We will use the OpenAI gym module to train our bot. Our bot will be a neural network, and we will use Keras and TensorFlow for the implementation. We will use Q-learning, which is a technique of RL (Reinforcement Learning).
End of explanation
"""
state = env.reset()
for i in range(10):
env.render()
action = env.action_space.sample()
next_state, reward, done, prob = env.step(action)
print('state:{} action:{} next_state:{} reward:{} done:{}'.format(state, action, next_state, reward, done))
state = next_state
if done:
break
"""
Explanation: Sample Simulation
The frozen lake environment has the following important entities:
* State: A state denotes the current nature of the environment. For frozen lake there are 16 states, each corresponding to a single cell of the grid.
* Action: An action denotes a decision that can be taken by an agent in any state. For frozen lake there are 4 possible actions that can be taken in any state: up, down, left and right, where possible.
* Reward: A reward signifies the result of an action at a particular state. In frozen lake the reward is 1 if we reach the goal and 0 otherwise. The environment does not provide any reward for individual steps or for falling into a hole.
* Done: Done signifies the end state of a simulation. For frozen lake, if a bot reaches the goal or falls into a hole, Done returns a value of True, else False.
Below is an example of a single simulation episode.
End of explanation
"""
learning_rate = 0.001
memory_size = 2000
episodes = 5000
steps = 100
# changed from 0.95
gamma = 0.99
epsilon = 1
epsilon_stop = 0.01
epsilon_decay = 0.995
batch_size = 32
"""
Explanation: Q Learning
Q-learning is one of the techniques used to implement RL. Q-learning relies on the Bellman equation.
$$
Q(s, a) = r + \gamma \max_{a'}{Q(s', a')}
$$
The Bellman equation provides a way to incorporate the future reward of an action into the value of the current state, and it is the core component of Q-learning. In a nutshell, Q-learning means learning the function Q = f(state, action), which can then be used to derive the action to take in any given state while traversing the environment.
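To make the update target concrete, here is a tiny numeric sketch (all numbers hypothetical):
python
import numpy as np
gamma = 0.99
reward = -0.02                                  # hypothetical step reward
next_q = np.array([0.10, 0.40, -0.20, 0.00])    # hypothetical Q(s', a') for the 4 actions
done = False
target = reward if done else reward + gamma * next_q.max()
print(target)                                   # -0.02 + 0.99 * 0.40 = 0.376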
The pseudo code for Q-learning is below
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
End of explanation
"""
bot = Sequential()
bot.add(Dense(10, input_dim=16, activation='relu'))
bot.add(Dense(4, activation='linear'))
bot.compile(loss='mse', optimizer = Adam(learning_rate))
memory = deque(maxlen=memory_size)
state_lkup = np.eye(env.observation_space.n)
action_lkup = np.eye(env.action_space.n)
reward_list=[]
state = env.reset()
for episode in range(episodes):
total_reward = 0
state = env.reset()
for step in range(steps):
if epsilon > np.random.rand():
action = env.action_space.sample()
else:
action = np.argmax(bot.predict(np.reshape(state_lkup[state], (-1, env.observation_space.n))))
next_state, reward, done, prob = env.step(action)
if done and reward==0:
reward = -1
if not done and reward==0:
reward = -0.02
memory.append((state, action, reward, next_state, done))
state = next_state
total_reward = total_reward + reward
if done:
print('episode:{} steps:{} total reward:{} epsilon:{}'.format(episode + 1, step + 1, total_reward, epsilon))
reward_list.append(total_reward)
break
minibatch = [memory[ii] for ii in np.random.choice(range(len(memory)), batch_size)]
states = [each[0] for each in minibatch]
actions = [each[1] for each in minibatch]
rewards = [each[2] for each in minibatch]
next_states = [each[3] for each in minibatch]
dones = [each[4] for each in minibatch]
next_states = np.reshape([state_lkup[state] for state in next_states], [-1,env.observation_space.n])
states = np.reshape([state_lkup[state] for state in states], [-1,env.observation_space.n])
actions = np.reshape(actions, (batch_size, 1))
targetQs = bot.predict(next_states)
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
targetQs[episode_ends] = (0, 0, 0, 0)
targets = rewards + gamma * np.max(targetQs, axis=1)
targets_f = bot.predict(states)
for pos in range(len(actions)):
targets_f[pos,actions[pos]] = targets[pos]
bot.fit(states, targets_f, epochs=1, verbose=0)
if epsilon > epsilon_stop:
epsilon = epsilon * epsilon_decay
"""
Explanation: The Learner
Our agent is a simple neural network with a [rectified linear](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) hidden layer and a linear output layer, trained with the Adam optimizer. The network keeps a memory of past experiences and trains on random minibatches drawn from it (experience replay), while the training targets are the temporal-difference estimates from the Bellman equation above.
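A replay memory is just a bounded buffer of transitions that we sample minibatches from; a minimal standalone sketch of the idea (toy transitions, not the notebook's actual data):
python
from collections import deque
import random
replay = deque(maxlen=2000)              # old transitions fall off the front automatically
for t in range(100):                     # toy (state, action, reward, next_state, done) tuples
    replay.append((t % 16, t % 4, 0.0, (t + 1) % 16, False))
minibatch = random.sample(replay, 32)    # train on a random batch rather than only the latest step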
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
smoothed_rews = running_mean(reward_list, 10)
plt.plot(reward_list, color='grey', alpha=0.3)
plt.plot(smoothed_rews)
"""
Explanation: Performance
After running the simulation for around 5000 episodes and plotting the reward obtained in each episode, we can see that the rewards increase over time.
End of explanation
"""
env.close()
gym.upload('./monitor', api_key='')
"""
Explanation: Submission to OpenAI
This submission solves the environment at around episode 2,300.
End of explanation
"""
|
yy/dviz-course
|
m06-data/m06-lab.ipynb
|
mit
|
import pandas as pd
pew_df = pd.read_csv('https://raw.githubusercontent.com/tidyverse/tidyr/4c0a8d0fdb9372302fcc57ad995d57a43d9e4337/vignettes/pew.csv')
pew_df
"""
Explanation: Module 6: Data types and tidy data
Tidy data
Let's do a tidy-data exercise first. This is one of the non-tidy datasets assembled by Hadley Wickham (check out here for more datasets, explanation, and R code).
Let's take a look at this small dataset: https://raw.githubusercontent.com/tidyverse/tidyr/4c0a8d0fdb9372302fcc57ad995d57a43d9e4337/vignettes/pew.csv
End of explanation
"""
# TODO: Replace the dummy value of pew_tidy_df and put your code here.
pew_tidy_df = pd.DataFrame({"religion": ["ABCD" for i in range(15)],
"income": ["1k" for i in range(15)],
"frequency": [i for i in range(15)]})
"""
Explanation: This dataset is about the relationship between income and religion, assembled from research by the Pew Research Center. You can read more details here. Is this dataset tidy or not? Why?
It is not tidy: many of the columns are values, not variable names. How should we fix it?
Pandas provides a convenient function called melt. You specify the id_vars that are variable columns, and value_vars that are value columns, and provide the name for the variable as well as the name for the values.
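For instance, on a tiny made-up frame (unrelated to the pew data), melt works like this:
python
import pandas as pd
toy = pd.DataFrame({'name': ['a', 'b'], '2019': [1, 2], '2020': [3, 4]})
tidy = pd.melt(toy, id_vars=['name'], value_vars=['2019', '2020'],
               var_name='year', value_name='count')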
Q: so please go ahead and tidy it up! I'd suggest to use the variable name "income" and value name "frequency"
End of explanation
"""
pew_tidy_df.sample(10)
"""
Explanation: If you were successful, you'll have something like this:
End of explanation
"""
pew_tidy_df.income.value_counts()
"""
Explanation: Data types
Let's talk about data types briefly. Understanding data types is not only important for choosing the right visualizations, but also important for efficient computing and storage of data. You may not have thought about how pandas represent data in memory. A Pandas Dataframe is essentially a bunch of Series, and those Series are essentially numpy arrays. An array may contain a fixed-length items such as integers or variable length items such as strings. Putting some efforts to think about the correct data type can potentially save a lot of memory as well as time.
A nice example would be the categorical data type. If you have a variable that only has several possible values, it's essentially a categorical data. Take a look at the income variable.
End of explanation
"""
pew_tidy_df.income.dtype
"""
Explanation: These were the column names in the original non-tidy data. The value can only take one of these income ranges, and thus it is categorical data. What data type does pandas use to store this column?
End of explanation
"""
pew_tidy_df.memory_usage()
pew_tidy_df.memory_usage(deep=True)
"""
Explanation: The O means that it is an object data type, which does not have a fixed size like integer or float. The series contains a sort of pointer to the actual text objects. You can actually inspect the amount of memory used by the dataset.
End of explanation
"""
pew_tidy_df.frequency.dtype
"""
Explanation: What's going on with the deep=True option? When you don't specify deep=True, the memory_usage method just tells you the amount of memory used by the numpy arrays in the pandas dataframe. When you pass deep=True, it tells you the total amount of memory by including the memory used by all the text objects. So, the religion and income columns occupy almost ten times more memory than the frequency column, which is simply an array of integers.
End of explanation
"""
income_categorical_series = pew_tidy_df.income.astype('category')
# you can do pew_tidy_df.income = pew_tidy_df.income.astype('category')
"""
Explanation: Is there any way to save some of that memory? Note that there are only 10 categories in the income variable. That means we just need 10 numbers to represent the categories! Of course we need to store the names of each category, but that's just a one-time cost. The simplest way to convert a column is using the astype method.
End of explanation
"""
income_categorical_series.dtype
"""
Explanation: Now, this series has the CategoricalDtype dtype.
End of explanation
"""
income_categorical_series.memory_usage(deep=True)
pew_tidy_df.income.memory_usage(deep=True)
"""
Explanation: How much memory do we use?
End of explanation
"""
from pandas.api.types import CategoricalDtype
income_type = CategoricalDtype(categories=["Don't know/refused", '<$10k', '$10-20k', '$20-30k', '$30-40k',
'$40-50k', '$50-75k', '$75-100k', '$100-150k', '>150k'], ordered=True)
income_type
pew_tidy_df.income.astype(income_type).dtype
"""
Explanation: We have reduced the memory usage almost 10-fold! Not only that: because the values are now just numbers, it will be much faster to match, filter, and manipulate them. If your dataset is huge, this can save a lot of space and time.
If the categories have ordering, you can specify the ordering too.
End of explanation
"""
# TODO: put your code here
"""
Explanation: This data type now allows you to compare and sort based on the ordering.
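For example, using the ordered income_type defined earlier, comparisons against a category work directly (a sketch, not the answer to the exercise below):
python
income_ordered = pew_tidy_df.income.astype(income_type)
at_least_50k = pew_tidy_df[income_ordered >= '$50-75k']  # rows at or above the $50-75k bracket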
Q: ok, now convert both the religion and income columns of pew_tidy_df to the categorical dtype (in place) and show that pew_tidy_df now uses much less memory
End of explanation
"""
|
cdt15/lingam
|
examples/RESIT.ipynb
|
mit
|
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
import warnings
warnings.filterwarnings('ignore')
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
"""
Explanation: RESIT
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
End of explanation
"""
X = pd.read_csv('nonlinear_data.csv')
m = np.array([
[0, 0, 0, 0, 0],
[1, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 0, 1, 0]])
dot = make_dot(m)
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
dot
"""
Explanation: Test data
First, we load a test dataset with 5 variables, x0 through x4, and define the true causal structure, which we draw below.
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
reg = RandomForestRegressor(max_depth=4, random_state=0)
model = lingam.RESIT(regressor=reg)
model.fit(X)
"""
Explanation: Causal Discovery
To run causal discovery, we create a RESIT object and call the fit method.
End of explanation
"""
model.causal_order_
"""
Explanation: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery. x2 and x3, which have latent confounders as parents, are stored in a list without causal ordering.
End of explanation
"""
model.adjacency_matrix_
"""
Explanation: Also, using the adjacency_matrix_ property, we can see the adjacency matrix estimated by the causal discovery.
End of explanation
"""
make_dot(model.adjacency_matrix_)
"""
Explanation: We can draw a causal graph with the make_dot utility function.
End of explanation
"""
import warnings
warnings.filterwarnings('ignore', category=UserWarning)
n_sampling = 100
model = lingam.RESIT(regressor=reg)
result = model.bootstrap(X, n_sampling=n_sampling)
"""
Explanation: Bootstrapping
We call bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap sampling.
End of explanation
"""
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
"""
Explanation: Causal Directions
Since a BootstrapResult object is returned, we can get the ranking of the extracted causal directions with the get_causal_direction_counts() method. In the following sample code, the n_directions option limits the output to the top 8 causal directions, and the min_causal_effect option keeps only causal directions with a coefficient of 0.01 or more.
End of explanation
"""
print_causal_directions(cdc, n_sampling)
"""
Explanation: We can check the result with a utility function.
End of explanation
"""
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
"""
Explanation: Directed Acyclic Graphs
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the extracted DAGs. In the following sample code, the n_dags option limits the output to the top 3 DAGs, and the min_causal_effect option keeps only causal directions with a coefficient of 0.01 or more.
End of explanation
"""
print_dagc(dagc, n_sampling)
"""
Explanation: We can check the result with a utility function.
End of explanation
"""
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
"""
Explanation: Probability
Using the get_probabilities() method, we can get the bootstrap probability of each causal relation.
End of explanation
"""
from_index = 0 # index of x0
to_index = 3 # index of x3
pd.DataFrame(result.get_paths(from_index, to_index))
"""
Explanation: Bootstrap Probability of Path
Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [0, 1, 3] shows the path from variable X0 through variable X1 to variable X3.
End of explanation
"""
|
ThunderShiviah/code_guild
|
interactive-coding-challenges/stacks_queues/queue_list/queue_list_challenge.ipynb
|
mit
|
class Node(object):
def __init__(self, data):
# TODO: Implement me
pass
class Queue(object):
def __init__(self):
# TODO: Implement me
pass
def enqueue(self, data):
# TODO: Implement me
pass
def dequeue(self):
# TODO: Implement me
pass
"""
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement a queue with enqueue and dequeue methods using a linked list.
Constraints
Test Cases
Algorithm
Code
Unit Test
Pythonic-Code
Solution Notebook
Constraints
If there is one item in the list, do you expect the first and last pointers to both point to it?
Yes
If there are no items on the list, do you expect the first and last pointers to be None?
Yes
If you dequeue on an empty queue, does that return None?
Yes
Test Cases
Enqueue
Enqueue to an empty queue
Enqueue to a non-empty queue
Dequeue
Dequeue an empty queue -> None
Dequeue a queue with one element
Dequeue a queue with more than one element
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
"""
# %load test_queue_list.py
from nose.tools import assert_equal
class TestQueue(object):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
print('Test: Dequeue an empty queue')
queue = Queue()
assert_equal(queue.dequeue(), None)
print('Test: Enqueue to an empty queue')
queue.enqueue(1)
print('Test: Dequeue a queue with one element')
assert_equal(queue.dequeue(), 1)
print('Test: Enqueue to a non-empty queue')
queue.enqueue(2)
queue.enqueue(3)
queue.enqueue(4)
print('Test: Dequeue a queue with more than one element')
assert_equal(queue.dequeue(), 2)
assert_equal(queue.dequeue(), 3)
assert_equal(queue.dequeue(), 4)
print('Success: test_end_to_end')
def main():
test = TestQueue()
test.test_end_to_end()
if __name__ == '__main__':
main()
"""
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
"""
|
deepmind/dm_alchemy
|
examples/AlchemyGettingStarted.ipynb
|
apache-2.0
|
import os
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import dm_alchemy
from dm_alchemy import io
from dm_alchemy import symbolic_alchemy
from dm_alchemy import symbolic_alchemy_bots
from dm_alchemy import symbolic_alchemy_trackers
from dm_alchemy import symbolic_alchemy_wrapper
from dm_alchemy.encode import chemistries_proto_conversion
from dm_alchemy.encode import symbolic_actions_proto_conversion
from dm_alchemy.encode import symbolic_actions_pb2
from dm_alchemy.types import stones_and_potions
from dm_alchemy.types import unity_python_conversion
from dm_alchemy.types import utils
width, height = 240, 200
level_name = 'alchemy/perceptual_mapping_randomized_with_rotation_and_random_bottleneck'
seed = 1023
settings = dm_alchemy.EnvironmentSettings(
seed=seed, level_name=level_name, width=width, height=height)
env = dm_alchemy.load_from_docker(settings)
"""
Explanation: Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
Alchemy Environment and Analysis
This is an introduction to the environment described in the paper Alchemy: A structured task distribution for meta-reinforcement learning. Please see the paper for full details.
Installation
Follow the instructions in the README.md to install dm_alchemy and its dependencies. Run in Jupyter or Colab with a local runtime. If you use a virtual environment, make sure the Python kernel for that environment is the one in use, for example by running python -m ipykernel install --user --name=dm_alchemy inside the virtual environment and then selecting the dm_alchemy kernel in Jupyter.
3d Environment
We create an instance of the 3d environment and inspect it.
End of explanation
"""
env.observation_spec()
"""
Explanation: We can see that observations consist of:
RGB_INTERLEAVED - This is the image that the agent can see. For our experiments we set the resolution to 96x72 but here we use 240x200 so that the image is easier to interpret.
ACCELERATION, HAND_FORCE, HAND_IS_HOLDING, HAND_DISTANCE - These are proprioceptive observations.
events - This is a variable sized array of protobufs containing information from the environment which can be used for analysis but should not be used by the agent.
End of explanation
"""
env.action_spec()
"""
Explanation: The actions available to the agent consist of looking left or right, up or down, moving left or right and backward or forward and actions for grabbing an object and moving it around.
End of explanation
"""
timestep = env.reset()
plt.figure()
plt.imshow(timestep.observation['RGB_INTERLEAVED'])
"""
Explanation: Let's start a new episode and look at the image observations.
End of explanation
"""
for _ in range(38):
timestep = env.step({'MOVE_BACK_FORWARD': 1.0})
for _ in range(6):
timestep = env.step({'LOOK_LEFT_RIGHT': -1.0})
for _ in range(3):
timestep = env.step({'LOOK_DOWN_UP': 1.0})
plt.figure()
plt.imshow(timestep.observation['RGB_INTERLEAVED'])
"""
Explanation: With this seed we have started far away from the table, let's take a few actions to get closer to it.
End of explanation
"""
env = symbolic_alchemy.get_symbolic_alchemy_level(level_name, seed=314)
"""
Explanation: Now we can see the potions of different colours which can be used to transform stones to increase their value before placing them in the big white cauldron to get reward.
Symbolic Environment
We can represent the same challenge in a symbolic form which eliminates the complex perceptual challenges, more difficult action space and longer timescales of the 3d environment.
End of explanation
"""
env.observation_spec()
"""
Explanation: Observations from the symbolic environment are concatenated together into a single array of length 39 with 5 dimensions for each stone:
3 perceptual features - colour, size and roundness. These can take values -1, 0 or 1 e.g. for small, medium and large.
1 reward. This takes values -1, -1/3, 1/3, 1 which correspond to actual rewards -3, -1, 1, 15.
1 dimension which shows if the stone has been used in the cauldron. This is 0 if the stone still exists 1 otherwise. If the stone is put into the cauldron all the other dimensions are set to default values.
2 dimensions for each potion:
* 1 for colour. This takes values -1, -2/3, -1/3, 0, 1/3, 2/3 for colours red, green, orange, yellow, pink, turquoise.
* 1 dimension which shows if the potion has been used by placing a stone in it. This is 0 if the potion is still full, 1 otherwise. If the potion is used the colour dimension is set to a default value.
End of explanation
"""
env.action_spec()
"""
Explanation: There are 40 possible actions in the symbolic environment which are selected using an int between 0 and 39.
These are:
0 represents doing nothing
The remaining integers represent putting a stone into a potion or into the cauldron, i.e. s * (num potion slots (= 12) + 1) + 1 represents putting the stone in slot s (from 0 - 2) into the cauldron and s * (num potion slots (= 12) + 1) + 2 + p represents putting stone s into the potion in slot p (from 0 - 11). For example putting stone 1 into potion 5 is represented by action 20.
End of explanation
"""
timestep = env.reset()
timestep.observation['symbolic_obs']
"""
Explanation: For example with this seed we start with stone 0 being purple, small and halfway between round and pointy and its reward indicator shows a -1. Potion 0 is turquoise.
End of explanation
"""
timestep = env.step(2)
timestep.observation['symbolic_obs']
"""
Explanation: If we put stone 0 into potion 0 and look at the resulting observation we can see that stone 0 has become purple, large and halfway between round and pointy and its reward indicator shows -3. From this an agent which understands the task and the generative process can deduce things like:
The turquoise potion makes stones large.
The pink potion makes stones small (because it is always paired with the turquoise potion).
The blue, small, halfway between round and pointy stone has maximum value (because it is in the opposite corner to the stone we have observed to have the minimum possible value).
End of explanation
"""
seed = 1023
settings = dm_alchemy.EnvironmentSettings(
seed=seed, level_name=level_name, width=width, height=height)
env3d = dm_alchemy.load_from_docker(settings)
env = symbolic_alchemy_wrapper.SymbolicAlchemyWrapper(
env3d, level_name=level_name, see_symbolic_observation=True)
"""
Explanation: Combined environment
We can create an environment which combines the 3d environment and the symbolic environment by using a wrapper around the 3d environment. The wrapper listens to the events output and uses it to initialise and take actions in a symbolic environment, keeping the two in sync. In this setting the agent takes actions in and receives observations from the 3d environment, but it can also optionally receive observations that are only produced by the symbolic environment.
End of explanation
"""
timestep = env.reset()
plt.figure()
plt.imshow(timestep.observation['RGB_INTERLEAVED'])
for _ in range(37):
timestep = env.step({'MOVE_BACK_FORWARD': 1.0})
for _ in range(4):
timestep = env.step({'LOOK_LEFT_RIGHT': -1.0})
for _ in range(3):
timestep = env.step({'LOOK_DOWN_UP': 1.0})
plt.figure()
plt.imshow(timestep.observation['RGB_INTERLEAVED'])
"""
Explanation: Using the same seed as before we create the environment and go and look at the table.
End of explanation
"""
print('Stones:')
for i in range(0, 15, 5):
print(timestep.observation['symbolic_obs'][i:i + 5])
print('Potions:')
for i in range(15, 39, 2):
potion_index = int(round(timestep.observation['symbolic_obs'][i] * 3 + 3))
potion = stones_and_potions.perceived_potion_from_index(potion_index)
print(unity_python_conversion.POTION_COLOUR_AT_PERCEIVED_POTION[potion])
"""
Explanation: Although a few potions are occluded, if we look at the picture above we can see that the symbolic observations match. i.e. we have:
* 2 stones which are blue, small with medium roundness
* 1 stone which is midway between blue and purple, large and pointy.
The potion colours also match.
End of explanation
"""
seed = 789
env = symbolic_alchemy.get_symbolic_alchemy_level(
level_name, seed=seed,
see_chemistries={
'input_chem': utils.ChemistrySeen(
content=utils.ElementContent.GROUND_TRUTH)
})
env.observation_spec()
timestep = env.reset()
input_chem = timestep.observation['input_chem']
print('Graph:', input_chem[:12])
print('Potion map dimension map:', input_chem[12:18])
print('Potion map direction map:', input_chem[18:21])
print('Stone map direction map:', input_chem[21:24])
print('Rotation:', input_chem[24:])
"""
Explanation: Additional Observations
With the symbolic or combined environment we can get additional observations of the ground truth chemistry or a belief state over the chemistry.
Ground Truth Chemistry
The ground truth chemistry observation consists of 4 parts:
Graph - this consists of 12 entries for the 12 edges on the graph, each entry is set to 0 if there is no edge or 1 if there is an edge.
The potion map - this is a 1 hot vector over the 6 possible assignments of potion colour pairs to latent space dimensions (called the dimension map) and then a 3 dimensional direction map (i.e. we assume a canonical direction for each potion colour pair and this value will be 1 if this is the direction in latent space or 0 if it is the reverse, e.g. if the red potion increases the value of the latent variable and the green potion decreases it this entry is set to 1, if the green potion increases the value of the latent variable and the red potion decreases this is set to 0).
Stone map - this is another 3 dimensional direction map (e.g. if a large size stone has a higher latent variable than a small size stone then the entry is set to 1, otherwise it is set to 0 etc).
Rotation - this is a 1 hot vector over the 4 possible rotations (no rotation or 45 degrees around x, y or z).
In total this is a 28 dimensional observation (12 + 6 + 3 + 3 + 4).
End of explanation
"""
seed = 22
env = symbolic_alchemy.get_symbolic_alchemy_level(
level_name, seed=seed,
see_chemistries={
'input_chem': utils.ChemistrySeen(
content=utils.ElementContent.BELIEF_STATE,
precomputed=level_name)
})
"""
Explanation: Belief state
We can also maintain a belief state over the underlying chemistry. This is a probability distribution over all possible chemistries which could be present in the current episode.
The belief state is updated to perfectly incorporate all information available to the agent both through:
Infinite training on the task distribution.
Observations within the current episode.
The first form of information means that with no other information the belief state starts as a prior distribution over all possible chemistries which accurately reflects the generative distribution.
The second form of information means that the belief state is correctly updated to eliminate chemistries which are inconsistent with observations during the episode and rescale the resulting distribution.
As soon as the agent is presented with a set of stones, it can rule out some chemistries. For example, some rotations will not include the stones that are present and the reward indicator on each stone can eliminate some of the possible stone maps.
As the agent takes actions in the environment the belief state is further updated. For example, if a small stone becomes large when put into a green potion all chemistries but those where red and green potions are mapped to the size variable and the green potion increases the size can be eliminated.
The belief state observation is a relaxation of the ground truth observation in which the values from the ground truth observation are used if the belief state assigns all probability to that possibility.
The graph - if the presence of an edge is unknown then the corresponding entry is set to 0.5.
The potion map - for the dimension map if the correct assignment is unknown then all possibilities are set to 0.5, for the direction map if the true direction for that dimension is unknown then we set the entry to 0.5.
The stone map - this is another direction map so this works the same way as in the potion map.
Rotation - if multiple rotations are possible then the entry corresponding to each possible rotation is set to 0.5.
End of explanation
"""
timestep = env.reset()
print('Stones:')
for i in range(0, 15, 5):
print(timestep.observation['symbolic_obs'][i:i + 5])
input_chem = timestep.observation['input_chem']
print('Graph:', input_chem[:12])
print('Potion map dimension map:', input_chem[12:18])
print('Potion map direction map:', input_chem[18:21])
print('Stone map direction map:', input_chem[21:24])
print('Rotation:', input_chem[24:])
"""
Explanation: Before any actions are taken the presence of each edge in the graph is unknown and nothing is known about the potion map.
However, the stones present narrow down the possible rotations and stone maps. The stones present are:
* blue, small, medium roundness with reward 1
* purple, small, medium roundness with reward 1
* blue, large, medium roundness with reward -1
There cannot be medium roundness stones in the unrotated chemistry or the chemistry which is rotated around the roundness axis so these are ruled out. The chemistry which is rotated around the colour axis can also be ruled out because the first and second stones differ only in colour and have the same reward. The chemistry which is rotated around the colour axis has 1 latent variable aligned with change in colour so 2 stones which are the same size and roundness but different colour must have different reward. Therefore, the rotation is known to be around the size axis.
Since the second and third stones differ only in size and the small stone has higher reward we can also determine in the stone map that increasing size decreases the value of the associated latent variable.
End of explanation
"""
timestep = env.step(3)
print('Stones:')
for i in range(0, 15, 5):
print(timestep.observation['symbolic_obs'][i:i + 5])
input_chem = timestep.observation['input_chem']
print('Graph:', input_chem[:12])
print('Potion map dimension map:', input_chem[12:18])
print('Potion map direction map:', input_chem[18:21])
print('Stone map direction map:', input_chem[21:24])
print('Rotation:', input_chem[24:])
"""
Explanation: When one of the stones is transformed by a potion the belief state updates to reflect that the corresponding edge in the graph is known to exist.
The potion dimension maps are narrowed down to 2 possibilities since this action has shown us the assignment of one pair of potions to one latent space axis but the other 2 potion pairs could be assigned to either of the remaining 2 latent space axes.
The potion direction map is updated on one dimension to reflect whether the potion used changed the associated latent variable in the canonical direction assigned to the pair of potions (e.g. that for the green and red pair that green is positive and red is negative).
End of explanation
"""
seed = 22
env = symbolic_alchemy.get_symbolic_alchemy_level(level_name, seed=seed)
env.add_trackers({
symbolic_alchemy_trackers.ItemGeneratedTracker.NAME:
symbolic_alchemy_trackers.ItemGeneratedTracker(),
symbolic_alchemy_trackers.ItemsUsedTracker.NAME:
symbolic_alchemy_trackers.ItemsUsedTracker(),
symbolic_alchemy_trackers.ScoreTracker.NAME:
symbolic_alchemy_trackers.ScoreTracker(env._reward_weights),
})
timestep = env.reset()
timestep = env.step_slot_based_action(utils.SlotBasedAction(
stone_ind=0, potion_ind=0))
timestep = env.step_slot_based_action(utils.SlotBasedAction(
stone_ind=0, potion_ind=1))
timestep = env.step_slot_based_action(utils.SlotBasedAction(
stone_ind=0, cauldron=True))
_ = env.end_trial()
"""
Explanation: Symbolic Alchemy Trackers
We can add trackers to the symbolic environment which have callbacks which are executed when the environment is reset or a step is taken.
We provide implementations of trackers which:
Keep a record of the symbolic actions that occurred in a matrix
Keep track of the stones and potions generated each trial
Keep track of the number of potions and stones used
Keep track of the score achieved
Update the belief state
Track the average value of stones put into the cauldron
Track the average improvement of the value of stones before they are put into the cauldron
Track the frequency of putting negative stones into the cauldron
Track the frequency of leaving stones with a specified reward (e.g. +1) on the table at the end of a trial
Track the frequency of actions where a stone is put into a potion which has no effect on the stone
We initialise the environment with the same seed as above and add a few of the above trackers. We then put the first stone into the first 2 potions and then into the cauldron and then end the trial by taking the "no operation" action for the rest of the trial.
End of explanation
"""
episode_returns = env.episode_returns()
print(episode_returns['items_generated'].trials[0])
print(episode_returns['items_used']['per_trial'][0])
print(episode_returns['score']['per_trial'][0])
"""
Explanation: The returns from the trackers show the potions and stones generated, that 1 stone and 2 potions were used, and that the score achieved was -1 (because that was the value of the stone put into the cauldron).
End of explanation
"""
env = symbolic_alchemy.get_symbolic_alchemy_level(level_name)
env.add_trackers({
symbolic_alchemy_trackers.ScoreTracker.NAME:
symbolic_alchemy_trackers.ScoreTracker(env._reward_weights)
})
bot = symbolic_alchemy_bots.RandomActionBot(env._reward_weights, env)
bot_returns = bot.run_episode()
bot_returns['score']['per_trial']
"""
Explanation: Symbolic Bots
We provide the following hand-coded policies for acting with the symbolic environment:
Random action bot - this bot randomly selects any stone which does not have the maximum possible value and puts it into randomly selected potions until either all potions are used or all stones have the maximum value. Then it puts any positive stones into the cauldron.
Search oracle bot - this bot is given the chemistry for the episode and exhaustively searches all possible actions to maximise the reward it will obtain.
Ideal observer bot - this bot maintains a belief state (as described above) and exhaustively searches over all possible actions and all possible outcomes of those actions using the belief state to track the probability of each outcome to maximise the expected reward it will obtain in the current trial.
Replay bot - this bot takes a record of actions taken and takes them again. This can be used to replay the actions an agent or another bot took to perform additional tracking.
End of explanation
"""
chems = chemistries_proto_conversion.load_chemistries_and_items(
'chemistries/perceptual_mapping_randomized_with_random_bottleneck/chemistries')
"""
Explanation: Evaluation Episodes
We have released a set of 1000 episodes on which we evaluated our agents. Each episode consists of a chemistry and a set of stones and potions generated at the start of each trial.
End of explanation
"""
env = symbolic_alchemy.get_symbolic_alchemy_fixed(
chemistry=chems[0][0], episode_items=chems[0][1])
env.add_trackers({
symbolic_alchemy_trackers.ItemGeneratedTracker.NAME:
symbolic_alchemy_trackers.ItemGeneratedTracker()})
env.reset()
env.end_trial()
first_trial_items = env.episode_returns()[symbolic_alchemy_trackers.ItemGeneratedTracker.NAME].trials[0]
assert first_trial_items == chems[0][1].trials[0]
assert env._chemistry == chems[0][0]
"""
Explanation: To run symbolic alchemy on these evaluation episodes the function get_symbolic_alchemy_fixed can be used.
End of explanation
"""
settings = dm_alchemy.EnvironmentSettings(
seed=seed, level_name='alchemy/evaluation_episodes/0', width=width, height=height)
env3d = dm_alchemy.load_from_docker(settings)
env = symbolic_alchemy_wrapper.SymbolicAlchemyWrapper(
env3d, level_name=level_name, see_symbolic_observation=True)
env.env_symbolic.add_trackers({
symbolic_alchemy_trackers.ItemGeneratedTracker.NAME:
symbolic_alchemy_trackers.ItemGeneratedTracker()})
env.reset()
# We need to take a step or 2 to ensure that the trial has started.
for _ in range(2):
env.step({})
# Let the symbolic environment trial end now so we can test the items generated were as expected.
env.env_symbolic.end_trial()
first_trial_items = env.env_symbolic.episode_returns()[symbolic_alchemy_trackers.ItemGeneratedTracker.NAME].trials[0]
first_trial_potions = {p.latent_potion() for p in first_trial_items.potions}
first_trial_stones = {s.latent_stone() for s in first_trial_items.stones}
chem_first_trial_potions = {p.latent_potion() for p in chems[0][1].trials[0].potions}
chem_first_trial_stones = {s.latent_stone() for s in chems[0][1].trials[0].stones}
assert first_trial_potions == chem_first_trial_potions
assert first_trial_stones == chem_first_trial_stones
assert env.env_symbolic._chemistry == chems[0][0]
"""
Explanation: To run the 3d environment on the evaluation episodes we have included them as named levels alchemy/evaluation_episodes/{episode_number}.
End of explanation
"""
serialized = io.read_proto('agent_events/ideal_observer')
proto = symbolic_actions_pb2.EvaluationSetEvents.FromString(serialized)
ideal_observer_events = symbolic_actions_proto_conversion.proto_to_evaluation_set_events(proto)
env = symbolic_alchemy.get_symbolic_alchemy_fixed(
chemistry=chems[0][0], episode_items=chems[0][1])
env.add_trackers({
symbolic_alchemy_trackers.ScoreTracker.NAME:
symbolic_alchemy_trackers.ScoreTracker(env._reward_weights)})
bot = symbolic_alchemy_bots.ReplayBot(ideal_observer_events[0], env)
episode_results = bot.run_episode()
episode_results['score']['per_trial']
"""
Explanation: Whilst you can run the ideal observer and search oracle yourself, these bots (particularly the ideal observer) are slow on full-size trials of 12 potions and 3 stones, due to the size of the exhaustive search performed. However, we have included in the release the symbolic actions taken on the evaluation episodes, so you can perform analysis and, for example, compare agent actions to those of the ideal observer.
End of explanation
"""
loaded_events = {}
for name, events_file in [('ideal_observer', 'agent_events/ideal_observer'),
('search_oracle', 'agent_events/search_oracle'),
('baseline', 'agent_events/baseline'),
('belief_state_predict', 'agent_events/belief_state_predict'),
('ground_truth_predict', 'agent_events/ground_truth_predict')]:
serialized = io.read_proto(events_file)
proto = symbolic_actions_pb2.EvaluationSetEvents.FromString(serialized)
events = symbolic_actions_proto_conversion.proto_to_evaluation_set_events(proto)
loaded_events[name] = events
random_action_events = []
for chem, items in chems:
env = symbolic_alchemy.get_symbolic_alchemy_fixed(
chemistry=chem, episode_items=items)
env.add_trackers({
symbolic_alchemy_trackers.AddMatrixEventTracker.NAME:
symbolic_alchemy_trackers.AddMatrixEventTracker()})
bot = symbolic_alchemy_bots.RandomActionBot(env._reward_weights, env)
episode_results = bot.run_episode()
random_action_events.append(episode_results['matrix_event']['event_tracker'])
plt.figure()
for name, events, colour in [
('ideal_observer', loaded_events['ideal_observer'], 'blue'),
('search_oracle', loaded_events['search_oracle'], 'red'),
('baseline', loaded_events['baseline'], 'orange'),
('belief_state_predict', loaded_events['belief_state_predict'], 'yellow'),
('ground_truth_predict', loaded_events['ground_truth_predict'], 'purple'),
('random_action', random_action_events, 'green')]:
scores = []
for (chem, items), ep_events in zip(chems, events):
env = symbolic_alchemy.get_symbolic_alchemy_fixed(
chemistry=chem, episode_items=items)
env.add_trackers({
symbolic_alchemy_trackers.ScoreTracker.NAME:
symbolic_alchemy_trackers.ScoreTracker(env._reward_weights)})
bot = symbolic_alchemy_bots.ReplayBot(ep_events, env)
episode_results = bot.run_episode()
scores.append(np.sum(episode_results['score']['per_trial']))
sns.histplot(scores, label=name, bins=12, kde=True, edgecolor=None,
stat='density', color=colour)
plt.xlabel('Episode reward')
plt.ylabel('Proportion of episodes')
plt.title('Histogram of strategies rewards')
plt.legend()
plt.show()
"""
Explanation: Plotting performance
We can compare an agent's performance on the 1000 evaluation episodes to that of the ideal observer, the search oracle and the random action bot. New agents can be added to these performance plots by simply running them on the 1000 evaluation episodes with an AddMatrixEventTracker added to the environment. The events for 3 task settings are included in the repository. Events for agents run on other settings can be downloaded at https://storage.googleapis.com/dm-alchemy/agent_events.tar.gz.
End of explanation
"""
|
saketkc/hatex
|
2015_Fall/MATH-578B/Homework5/Homework5.ipynb
|
mit
|
%matplotlib inline
from __future__ import division
import pandas as pd
import matplotlib
import itertools
matplotlib.rcParams['figure.figsize'] = (16,12)
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(1)
def propose(S):
r = np.random.choice(len(S), 2)
rs = np.sort(r)
j,k=rs[0],rs[1]
y=np.copy(S)
y[j:k+1] = y[j:k+1][::-1]
return y
def count_cycles(S):
sample_length = len(S)
n_cycles = 0
index = 0
length_travelled = 0
visited = []
while length_travelled < sample_length:
if S[index] == index and index < sample_length :
index+=1
n_cycles+=1
length_travelled+=1
else:
visited.append(index)
index = S[index]
length_travelled+=1
if index not in visited:
n_cycles+=1
return n_cycles
N = [2,3,4, 100]
alpha = 3
assert count_cycles([0,1]) == 2
assert count_cycles([0,2,1]) == 2
assert count_cycles([1,0]) == 1
N_iterations = 10000
def theoretical(S, alpha, denom):
n_cycles = count_cycles(S)
return n_cycles**alpha/denom
def run(n, show=True):
oldS = np.arange(n)
old_n_cycles = count_cycles(oldS)
count_dict = {}
if show:
denom = sum([count_cycles(x)**alpha for x in itertools.permutations(range(n))])
for i in range(N_iterations):
proposedS = propose(oldS)
new_n_cycles = count_cycles(proposedS)
pi_ab = new_n_cycles**alpha/(old_n_cycles**alpha)
q = min(1,pi_ab)
if q>= np.random.uniform():
oldS = proposedS
old_n_cycles = new_n_cycles
tkey = ','.join([str(x+1) for x in oldS.tolist()])
key="["+tkey+"]"
if key not in count_dict:
if show:
count_dict[key] = [0,0,0]
count_dict[key][1] = theoretical(oldS,alpha,denom)
count_dict[key][2] = old_n_cycles
else:
count_dict[key] = [0,0]
count_dict[key][1] = old_n_cycles
count_dict[key][0]+=1
df = pd.DataFrame(count_dict)
df=df.transpose()
if show:
df.columns=[r'Simulated $\pi(s)$', 'Theoretical', 'c(s)']
df[r'Simulated $\pi(s)$'] = df[r'Simulated $\pi(s)$']/N_iterations
df['Percentage Error'] = 100*(df[r'Simulated $\pi(s)$']/df['Theoretical']-1)
else:
df.columns=[r'Simulated $\pi(s)$', 'c(s)']
df[r'Simulated $\pi(s)$'] = df[r'Simulated $\pi(s)$']/N_iterations
df.index.name='State'
return df
"""
Explanation: Problem 1
Given: $N \sim Poisson(\lambda)$ and $X_1, \dots, X_N \sim \vec{\pi}$
$X_k(t)$ is a continuous time MC with $X_k(0) = X_k$
$N_t(a) = \#\{k : X_k(t) = a\}$
i.e. $N_t(a)$ is the number of chains in state $a$ at time $t$.
$\sum_a\pi(a)Q_{ab}=0$ for each $b$ with the constraint $\sum_a\pi(a)=1$
$\sum_a\pi(a)Q_{ab}=0$ $\implies$ $\vec{\pi}^TQ=0$ $\implies$
$$
\begin{align}
\vec{\pi}^TQ&=0\\
\Longleftrightarrow \vec{\pi}^TQ^n&=0\ \ \forall n \geq 1\\
\Longleftrightarrow \sum_{n\geq 1}\frac{t^n}{n!}\vec{\pi}^TQ^n &=0 \ \ \forall t \geq 0\\
\Longleftrightarrow \vec{\pi}^T\sum_{n\geq 0}\frac{t^n}{n!}Q^n &=\vec{\pi}^T\\
\Longleftrightarrow \vec{\pi}^TP_t &=\vec{\pi}^T\\
\Longleftrightarrow \vec{\pi}\ \text{is a stationary distribution}
\end{align}
$$
Now, $P(X_k(t)=a)=\pi(a)$ and $N_t(a) = \#\{k : X_k(t) = a\}$ $\implies$ $N_t(a)\,|\,N \sim Binom(N, \pi(a))$, and since
$N \sim Poisson(\lambda)$, we get $\boxed{N_t(a) \sim Poisson(\lambda\,\pi(a))}$
Problem 2
End of explanation
"""
df = run(N[0])
df
"""
Explanation: n=2
End of explanation
"""
df = run(N[1])
df
"""
Explanation: n=3
End of explanation
"""
count_dict = run(N[2])
count_dict
"""
Explanation: n=4
End of explanation
"""
df = run(N[3], show=False)
"""
Explanation: N=100
End of explanation
"""
expectation = sum(df[r'Simulated $\pi(s)$']*df['c(s)'])
expectation2 = sum(df[r'Simulated $\pi(s)$']*df['c(s)']*df['c(s)'])
t_expectation = np.mean(df['c(s)'])
t_expectation2 = np.var(df['c(s)'])+np.mean(df['c(s)'])**2
print('Simulated E[c(s)] = {}\t\t Theoretical (MoM) E[c(s)] = {}'.format(expectation, t_expectation))
print('Simulated E[c^2(s)] = {}\t\t Theoretical (MoM) E[c^2(s)] = {}'.format(expectation2, t_expectation2))
"""
Explanation: $$\sum_{s \in S_a}\pi(s)c(s)=E[c(s)]$$
and similarly,
$$\sum_{s \in S_a}\pi(s)c^2(s)=E[c^2(s)]=Var(c(s))+E^2[c(s)]$$
End of explanation
"""
cycles = df['c(s)']
plt.hist(cycles, density=True)  # 'density' replaces the deprecated 'normed' keyword in newer matplotlib
"""
Explanation: I use the method of moments (MoM) to calculate the theoretical values. They agree well with the simulated values; even though MoM is only a first approximation, the sample size is large enough to capture the dynamics of the population distribution.
End of explanation
"""
def run():
N = 1000
N_iterations = 200
chrom_length = 3*(10**9)
transposon_length = 3*1000
mu = 0.05
t_positions = []
n_initial = np.random.random_integers(N-1)
x_initial = np.random.random_integers(chrom_length-1)
offspring_positions = []
all_positions = [[] for t in range(N)]
all_positions[n_initial].append(x_initial)
all_t_count =[]
for nn in range(N_iterations):
for i in range(N):
indicator = np.random.binomial(1,mu,len(all_positions[i]))
temp_indices = []
for ind, ind_value in enumerate(indicator):
if ind_value == 1:
temp_indices.append(ind)
for j in temp_indices:
x_temp = np.random.random_integers(chrom_length-1)
all_positions[i][j] = x_temp
all_positions[i].append(np.random.random_integers(chrom_length-1))
offspring_positions = [[] for t in range(N)]
for j in range(N):
y,z = np.random.random_integers(0,N-1,2)
y_parent = np.random.binomial(1,0.5,len(all_positions[y]))
z_parent = np.random.binomial(1,0.5,len(all_positions[z]))
temp_y = []
temp_z = []
for index,value in enumerate(y_parent):
if value>=1:
temp_y.append(all_positions[y][index])
for index,value in enumerate(z_parent):
if value>=1:
temp_z.append(all_positions[z][index])
for t_y in temp_y:
offspring_positions[j].append(t_y)
for t_z in temp_z:
offspring_positions[j].append(t_z)
all_positions = offspring_positions
count_t = 0
count_x = []
for p in range(N):
count_t += len(all_positions[p])
count_x.append(all_positions[p])
survived_t = np.unique(count_x, return_counts=True)[1]
all_t_count.append((count_t, len(survived_t[survived_t>=N*mu])))
return all_t_count
all_t_count = run()
die_out_transposons = all_t_count
fig, axs = plt.subplots(2,2)
axs[0][0].plot([x[0] for x in die_out_transposons])
axs[0][0].set_title('No. of Transpososn v/s Generations')
axs[0][0].set_xlabel('Generations')
axs[0][0].set_ylabel('No. of Transpososn')
axs[0][1].plot([x[1] for x in die_out_transposons])
axs[0][1].set_title('No. of Common Transpososn v/s Generations')
axs[0][1].set_xlabel('Generations')
axs[0][1].set_ylabel('No. of Common Transposons')
increasing_rate = []
for i in range(1,len(die_out_transposons)):
increasing_rate.append(die_out_transposons[i][0]/(die_out_transposons[i-1][0]+0.000001))
axs[1][0].plot(increasing_rate)
axs[1][0].set_title('Increasing rate v/s Generations')
axs[1][0].set_xlabel('Generations')
axs[1][0].set_ylabel('Increasing Rate')
"""
Explanation: Problem 3
Part (A)
End of explanation
"""
all_t_count = run()
nondie_out_transposons = all_t_count
fig, axs = plt.subplots(2,2)
axs[0][0].plot([x[0] for x in nondie_out_transposons])
axs[0][0].set_title('No. of Transpososn v/s Generations')
axs[0][0].set_xlabel('Generations')
axs[0][0].set_ylabel('No. of Transpososn')
axs[0][1].plot([x[1] for x in die_out_transposons])
axs[0][1].set_title('No. of Surviving Transpososn v/s Generations')
axs[0][1].set_xlabel('Generations')
axs[0][1].set_ylabel('No. of Common Transposons')
increasing_rate = []
for i in range(1,len(die_out_transposons)):
increasing_rate.append(die_out_transposons[i][0]/(die_out_transposons[i-1][0]+0.000001))
axs[1][0].plot(increasing_rate)
axs[1][0].set_title('Increasing rate v/s Generations')
axs[1][0].set_xlabel('Generations')
axs[1][0].set_ylabel('Increasing Rate')
"""
Explanation: The above example shows one case where the transposon does not spread.
End of explanation
"""
from math import log
N=10**7
mu=0.01
L=3*(10**9)
t = log(0.1*N*L*L/2)/log(1+mu)
print(t)
"""
Explanation: The above example shows one case where the transposon does spread: its total count grows at an exponential rate, while the number of common transposons remains limited.
Part (B)
Treating the total number of transposons $N(t)$ at any time $t$ as a branching process, we have $N(t+1) = \sum_{i=1}^{N(t)} W_{t,i}$, where $W_{t,i}$ is the number of locations of the $i^{th}$ transposon in the offspring.
Now consider $E[N_t]$.
Claim: $E[N_t] = (1+\mu)^t$
Proof:
With probability $\mu$ a transposon duplicates (one copy becomes two), and hence $W_{t,i}$, the number of locations of the $i^{th}$ transposon in the offspring, is a Poisson random variable with mean $1+\mu$:
$W_{t,i} \sim Poisson(1+\mu)$
$$
\begin{align}
E[N_t] = E[\sum_{i=1}^{N(t-1)} W_{t,i}] &= E[E[\sum_{i=1}^{n} W_{t,i}|N(t-1)=n]]\\
&= E[N(t-1)] \times (1+\mu)\\
&= N(0)\,(1+\mu)^t = (1+\mu)^t \quad \text{(since we start from a single transposon, } N(0)=1)
\end{align}
$$
Thus, the expected number of total transposons is an exponential.
Part (C)
$P(X>0) \leq EX$
Consider $X_t$, the total number of transposon copies at location $x$ at generation $t$.
For each new generation, the number of new arrivals at $x$ is Poisson with mean $2 \times \mu \times N(t) \times \frac{1}{L}$. Let $R(t)$ represent the new arrivals at $x$.
$N(t)$ represents the number of copies of the transposon surviving after $t$ generations.
Then $E[N_1]=1+\mu$, and by induction $E[N_t] = (1+\mu)^t$.
Thus, $R(t) \sim \text{Poisson}(\frac{2\mu N(t)}{L})$
Now, using a branching process model for the number of transposon copies located at $x$, the mean number of offspring transposons per copy is $(1-\mu)\cdot 1 + \mu\cdot 2 = 1+\mu$.
Let $Z_{t,k}(u)$ = the number of offspring copies at time $u$ of the $k^{th}$ transposon inserted at $x$ at time $t$ ($u \geq t$).
By the branching process property, $E[Z_{t,k}(t+u)] = (1+\mu)^u$.
Then
$$
\begin{align}
EX_t &= \sum_{u \leq 0} \sum_{k=1}^{R(u)} Z_{u,k}(0)\\
&= \sum_{u \leq 0}E[R(u)]E[Z_{u,1}(0)]\\
&= \sum_{u \leq 0} (1+\mu)^t \frac{2 N \mu }{L} \times (1+\mu)^u \\
&= \sum_{u\geq 0}(1+\mu)^t \frac{2 N \mu }{L(1+\mu)^u}\\
&\approx \frac{2 \mu }{L}\times(1+\frac{1}{\mu})\\
&= \frac{2}{L}(1+\mu)^{t+1}
\end{align}
$$
Thus, $P(X>0) \leq \frac{2}{L}(1+\mu)^{t+1}$
Part (D)
$\mu = 10^{-2}$
$N = 10^7$
For an individual $ EX = \frac{2}{L}(1+\mu)^{t+1} \times \frac{1}{N} = \frac{2}{NL}(1+\mu)^{t+1} $
Now, $\frac{2}{NL}(1+\mu)^{t+1}=0.1L$ $\implies$ $(1+\mu)^{t+1}=0.1NL^2/2$
End of explanation
"""
|
statsmodels/statsmodels.github.io
|
v0.13.1/examples/notebooks/generated/discrete_choice_overview.ipynb
|
bsd-3-clause
|
import numpy as np
import statsmodels.api as sm
"""
Explanation: Discrete Choice Models Overview
End of explanation
"""
spector_data = sm.datasets.spector.load()
spector_data.exog = sm.add_constant(spector_data.exog, prepend=False)
"""
Explanation: Data
Load data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).
End of explanation
"""
print(spector_data.exog.head())
print(spector_data.endog.head())
"""
Explanation: Inspect the data:
End of explanation
"""
lpm_mod = sm.OLS(spector_data.endog, spector_data.exog)
lpm_res = lpm_mod.fit()
print("Parameters: ", lpm_res.params[:-1])
"""
Explanation: Linear Probability Model (OLS)
End of explanation
"""
logit_mod = sm.Logit(spector_data.endog, spector_data.exog)
logit_res = logit_mod.fit(disp=0)
print("Parameters: ", logit_res.params)
"""
Explanation: Logit Model
End of explanation
"""
margeff = logit_res.get_margeff()
print(margeff.summary())
"""
Explanation: Marginal Effects
End of explanation
"""
print(logit_res.summary())
"""
Explanation: As in all the discrete data models presented below, we can print a nice summary of results:
End of explanation
"""
probit_mod = sm.Probit(spector_data.endog, spector_data.exog)
probit_res = probit_mod.fit()
probit_margeff = probit_res.get_margeff()
print("Parameters: ", probit_res.params)
print("Marginal effects: ")
print(probit_margeff.summary())
"""
Explanation: Probit Model
End of explanation
"""
anes_data = sm.datasets.anes96.load()
anes_exog = anes_data.exog
anes_exog = sm.add_constant(anes_exog)
"""
Explanation: Multinomial Logit
Load data from the American National Election Studies:
End of explanation
"""
print(anes_data.exog.head())
print(anes_data.endog.head())
"""
Explanation: Inspect the data:
End of explanation
"""
mlogit_mod = sm.MNLogit(anes_data.endog, anes_exog)
mlogit_res = mlogit_mod.fit()
print(mlogit_res.params)
"""
Explanation: Fit MNL model:
End of explanation
"""
rand_data = sm.datasets.randhie.load()
rand_exog = rand_data.exog
rand_exog = sm.add_constant(rand_exog, prepend=False)
"""
Explanation: Poisson
Load the Rand data. Note that this example is similar to Cameron and Trivedi's Microeconometrics Table 20.5, but it is slightly different because of minor changes in the data.
End of explanation
"""
poisson_mod = sm.Poisson(rand_data.endog, rand_exog)
poisson_res = poisson_mod.fit(method="newton")
print(poisson_res.summary())
"""
Explanation: Fit Poisson model:
End of explanation
"""
mod_nbin = sm.NegativeBinomial(rand_data.endog, rand_exog)
res_nbin = mod_nbin.fit(disp=False)
print(res_nbin.summary())
"""
Explanation: Negative Binomial
The negative binomial model gives slightly different results.
End of explanation
"""
mlogit_res = mlogit_mod.fit(method="bfgs", maxiter=250)
print(mlogit_res.summary())
"""
Explanation: Alternative solvers
The default method for fitting discrete data MLE models is Newton-Raphson. You can use other solvers by using the method argument:
End of explanation
"""
|
googlegenomics/datalab-examples
|
datalab/genomics/Explore 1000 Genomes Samples.ipynb
|
apache-2.0
|
import gcp.bigquery as bq
samples_table = bq.Table('genomics-public-data:1000_genomes.sample_info')
samples_table.schema
"""
Explanation: <!-- Copyright 2015 Google Inc. All rights reserved. -->
<!-- Licensed under the Apache License, Version 2.0 (the "License"); -->
<!-- you may not use this file except in compliance with the License. -->
<!-- You may obtain a copy of the License at -->
<!-- http://www.apache.org/licenses/LICENSE-2.0 -->
<!-- Unless required by applicable law or agreed to in writing, software -->
<!-- distributed under the License is distributed on an "AS IS" BASIS, -->
<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
<!-- See the License for the specific language governing permissions and -->
<!-- limitations under the License. -->
Explore the 1000 Genomes Sample Information
This notebook demonstrates exploring the sample information for the 1000 Genomes dataset stored in BigQuery. Specifically you will:
* Use the %%sql statement to write and execute SQL statements within the notebook
* Extract data from BigQuery and create a local dataset that can be manipulated in Python
* Refine and pivot your local dataset via the Pandas Python library
* Visualize different aspects of your dataset via the Matplotlib and Seaborn Python libraries
Related Links:
* BigQuery
* BigQuery SQL reference
* 1,000 Genomes Data Description
* This notebook is based on the Google Genomics BigQuery example Exploring the Phenotypic Data
NOTE:
If you're new to notebooks, or want to check out additional samples, check out the full list of general notebooks.
For additional Genomics samples, check out the full list of Genomics notebooks.
Working with data in BigQuery
First let's take a look at the 1000 Genomes sample dataset table.
End of explanation
"""
%%sql --module pops
SELECT
population,
population_description,
super_population,
super_population_description,
COUNT(population) AS population_count,
FROM
$samples_table
GROUP BY
population,
population_description,
super_population,
super_population_description
bq.Query(pops, samples_table=samples_table).to_dataframe()
"""
Explanation: We can see in the table schema that a number of different annotations exist for each genomic sample.
To get a feel for the dataset, let's see how the samples are distributed across populations and super populations.
End of explanation
"""
pops_df = bq.Query(pops, samples_table=samples_table).to_dataframe()
pops_df[:5]
"""
Explanation: In order to further analyze our query results locally within the IPython notebook, let's convert the result set into a Pandas dataframe:
End of explanation
"""
import seaborn as sns
pops_df.plot(kind='bar', y='population_count')
"""
Explanation: Pandas dataframes have a dataframe.plot() method that allows us to quickly render a number of standard visualizations. Let's draw a bar chart of the per-population sample counts:
End of explanation
"""
superpop_df = pops_df.groupby('super_population').sum()
superpop_df.plot(kind='bar')
"""
Explanation: Now that we have a dataframe object, we can compute arbitrary rollups and aggregations locally. For example, let's aggregate the count of samples from the population level to the super population level. We'll do this via the dataframe.groupby operation, which is generally of the form:
dataframe.groupby("column name").aggregation_func()
End of explanation
"""
%%sql --module metrics
select
super_population
, total_lc_sequence -- lc=low coverage
, total_exome_sequence
, Main_Project_E_Centers
from $samples_table
where
total_exome_sequence is not null
and total_lc_sequence is not null
and Main_Project_E_Centers is not null
and Main_Project_E_Centers != 'BCM,BGI' -- remove this single outlier
"""
Explanation: We see that the distribution of genome samples across population and super population is relatively uniform, the exception being the AFR (African) super population.
Genome sample metrics
Let's explore a few quantitative attributes of the genomic samples. We'll keep our super population sample classification, but also add some metrics around the extent to which each sample was sequenced, for exomic and low-coverage regions. We'll further annotate our samples with a tag for the laboratory/center that performed the sequencing.
End of explanation
"""
df = bq.Query(metrics, samples_table=samples_table).to_dataframe()
df[:10]
"""
Explanation: Again, we can convert these results to a Pandas dataframe for local analysis and visualization
End of explanation
"""
df.hist(alpha=0.5, bins=30)
"""
Explanation: To get a feel for the quantitative sample attributes, let's see how their values are distributed among the dataset samples by plotting histograms. Note that a histogram of any dataframe attribute can be easily rendered with the pattern dataframe.attribute_name.hist()
End of explanation
"""
g = sns.jointplot('total_exome_sequence', 'total_lc_sequence', df)
"""
Explanation: What does the joint distribution of these two traits look like?
End of explanation
"""
g = sns.lmplot('total_exome_sequence', 'total_lc_sequence', hue='super_population', data=df, fit_reg=False)
"""
Explanation: Only a very slight positive correlation. Let's further annotate our scatter chart by rendering each mark with a color according to its super population assignment.
End of explanation
"""
g = sns.lmplot('total_exome_sequence', 'total_lc_sequence', hue='super_population', col='Main_Project_E_Centers', col_wrap=2, data=df, fit_reg=False)
"""
Explanation: Now, let's take the same plot as above, but facet the results by the genomic sequencing center that produced each sample, to look for inter-center variability in the dataset.
End of explanation
"""
g = sns.lmplot('total_exome_sequence', 'total_lc_sequence', hue='super_population', col='super_population', row='Main_Project_E_Centers', data=df, fit_reg=False)
"""
Explanation: The WUSGC (the Genome Institute at Washington University) shows a small outlier cluster that is distinct relative to the other centers. The BCM (Baylor College of Medicine) facet appears the least variable within the exome sequencing dimension.
Are there any super population trends here? We can facet our data a second time, this time by super population to dig deeper.
End of explanation
"""
|
ueapy/ueapy.github.io
|
content/notebooks/2018-02-19-debugging-profiling.ipynb
|
mit
|
from IPython.core.debugger import set_trace
"""
Explanation: Today we went through some basic tools to inspect Python scripts for errors and performance bottlenecks.
Debugging
Python DeBugger (PDB)
The standard Python tool for interactive debugging is pdb, the Python debugger.
This debugger lets the user step through the code line by line in order to see what might be causing a more difficult error.
The IPython-enhanced version of this is ipdb, the IPython debugger.
Using in the command line
We went through this excellent pdb tutorial
An even more convenient version is pdbpp
Using PDB in IPython or Jupyter
First, we import a function to insert breakpoints:
End of explanation
"""
def fun(foo, bar):
set_trace()
return foo + bar
# fun(1, 2)
"""
Explanation: Define a simple function, insert a breakpoint:
End of explanation
"""
def func1(a, b):
return a / b
def func2(x):
a = x
b = x - 1
return func1(a, b)
func2(1)
# %debug
"""
Explanation: (Uncomment the fun(1, 2) call above to step through it with the debugger.)
In IPython, perhaps the most convenient interface to debugging is the %debug magic command. If you call it after hitting an exception, it will automatically open an interactive debugging prompt at the point of the exception. The ipdb prompt lets you explore the current state of the stack, inspect the available variables, and even run Python commands!
Example: two functions, one of which calls the other
End of explanation
"""
%timeit sum(range(100))
"""
Explanation: If you'd like the debugger to launch automatically whenever an exception is raised, you can use the %pdb magic function to turn on this automatic behavior:
python
%pdb on
...some code to debug...
%pdb off
Timing and Profiling
Simple timing
In IPython and Jupyter you can use special magics to measure how much time a snippet of code takes to run.
End of explanation
"""
def sum_of_lists(N):
total = 0
for i in range(5):
L = [j ^ (j >> i) for j in range(N)]
total += sum(L)
return total
"""
Explanation: Note that because this operation is so fast, %timeit automatically does a large number of repetitions. For slower commands, %timeit will automatically adjust and perform fewer repetitions.
Profiling
Once you have your code working, it can be useful to dig into its efficiency a bit. Sometimes it's useful to check the execution time of a given command or set of commands; other times it's useful to dig into a multiline process and determine where the bottleneck lies in some complicated series of operations.
You can either use the built in profiler:
```python
import cProfile
cp = cProfile.Profile()
cp.enable()
some code to profile
cp.disable()
cp.print_stats()
```
Line-by-line profiling
Let's define a function with a big number of for-loop iterations.
End of explanation
"""
%load_ext line_profiler
"""
Explanation: We can now load the line_profiler extension (after installing the line_profiler package with pip or conda).
End of explanation
"""
%lprun -f sum_of_lists sum_of_lists(5000)
"""
Explanation: And call the function using the special profiler magic:
End of explanation
"""
%load_ext memory_profiler
%memit sum_of_lists(100000)
"""
Explanation: Memory profiling
In a similar manner, we can also do memory profiling (once the memory_profiler package is installed).
End of explanation
"""
def foo(a, b):
return a + b
"""
Explanation: Disassembling and inspecting Python code
Tracing CPython code execution
End of explanation
"""
import dis
dis.dis(foo)
"""
Explanation: Let's disassemble it with the dis module:
End of explanation
"""
%timeit foo(1, 2)
%timeit foo('a', 'b')
"""
Explanation: What's BINARY_ADD?
We crack open CPython's source code and take a look inside Python/ceval.c:
c
/* Python/ceval.c */
TARGET(BINARY_ADD) {
PyObject *right = POP();
PyObject *left = TOP();
PyObject *sum;
/* NOTE(haypo): Please don't try to micro-optimize int+int on
CPython using bytecode, it is simply worthless.
See http://bugs.python.org/issue21955 and
http://bugs.python.org/issue10044 for the discussion. In short,
no patch shown any impact on a realistic benchmark, only a minor
speedup on microbenchmarks. */
if (PyUnicode_CheckExact(left) &&
PyUnicode_CheckExact(right)) {
sum = unicode_concatenate(left, right, f, next_instr);
/* unicode_concatenate consumed the ref to left */
}
else {
sum = PyNumber_Add(left, right); // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Py_DECREF(left);
}
Py_DECREF(right);
SET_TOP(sum);
if (sum == NULL)
goto error;
DISPATCH();
}
What's PyNumber_Add(left, right)?
c
/* Objects/abstract.c */
PyObject *
PyNumber_Add(PyObject *v, PyObject *w)
{
PyObject *result = binary_op1(v, w, NB_SLOT(nb_add)); // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
if (result == Py_NotImplemented) {
PySequenceMethods *m = v->ob_type->tp_as_sequence;
Py_DECREF(result);
if (m && m->sq_concat) {
return (*m->sq_concat)(v, w);
}
result = binop_type_error(v, w, "+");
}
return result;
}
What's binary_op1()?
```c
static PyObject *
binary_op1(PyObject v, PyObject w, const int op_slot)
{
PyObject *x;
binaryfunc slotv = NULL;
binaryfunc slotw = NULL;
if (v->ob_type->tp_as_number != NULL)
slotv = NB_BINOP(v->ob_type->tp_as_number, op_slot);
if (w->ob_type != v->ob_type &&
w->ob_type->tp_as_number != NULL) {
slotw = NB_BINOP(w->ob_type->tp_as_number, op_slot); // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
if (slotw == slotv)
slotw = NULL;
}
if (slotv) {
if (slotw && PyType_IsSubtype(w->ob_type, v->ob_type)) {
x = slotw(v, w);
if (x != Py_NotImplemented)
return x;
Py_DECREF(x); /* can't do it */
slotw = NULL;
}
x = slotv(v, w);
if (x != Py_NotImplemented)
return x;
Py_DECREF(x); /* can't do it */
}
if (slotw) {
x = slotw(v, w);
if (x != Py_NotImplemented)
return x;
Py_DECREF(x); /* can't do it */
}
Py_RETURN_NOTIMPLEMENTED;
}
```
What's NB_BINOP()?
c
#define NB_BINOP(nb_methods, slot) \
(*(binaryfunc*)(& ((char*)nb_methods)[slot]))
Cut to the chase: where's the addition function for two ints (longs)?
```c
/ Objects/longobject.c /
static PyObject *
long_add(PyLongObject a, PyLongObject b)
{
PyLongObject *z;
CHECK_BINOP(a, b);
if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
return PyLong_FromLong(MEDIUM_VALUE(a) + MEDIUM_VALUE(b)); // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
}
if (Py_SIZE(a) < 0) {
if (Py_SIZE(b) < 0) {
z = x_add(a, b);
if (z != NULL) {
/* x_add received at least one multiple-digit int,
and thus z must be a multiple-digit int.
That also means z is not an element of
small_ints, so negating it in-place is safe. */
assert(Py_REFCNT(z) == 1);
Py_SIZE(z) = -(Py_SIZE(z));
}
}
else
z = x_sub(b, a);
}
else {
if (Py_SIZE(b) < 0)
z = x_sub(a, b);
else
z = x_add(a, b);
}
return (PyObject *)z;
}
```
What's MEDIUM_VALUE()?
c
/* Objects/longobject.c */
#define MEDIUM_VALUE(x) (assert(-1 <= Py_SIZE(x) && Py_SIZE(x) <= 1), \
Py_SIZE(x) < 0 ? -(sdigit)(x)->ob_digit[0] : \
(Py_SIZE(x) == 0 ? (sdigit)0 : \
(sdigit)(x)->ob_digit[0]))
Why so much code for such a simple operation?
The C level is interpreting bytecodes (BINARY_ADD in this case).
Polymorphism -- the code can handle foo('a', 'b') or any types that support +.
Works for user-defined types, too, with an __add__ or __radd__ magic method (see the sketch after this list).
Error checking everywhere...
For adding ints, it also does overflow checking, conversions, etc.
All these features mean a lot of code at the C level.
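As a small illustration of that polymorphism (a sketch, not part of the original benchmark), a user-defined class only needs an __add__ method and the very same BINARY_ADD opcode dispatches to it:
```python
class Meters:
    """Toy type: + goes through the same nb_add / __add__ slot machinery shown above."""
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return Meters(self.value + other.value)

total = Meters(2) + Meters(3)   # same bytecode as foo(2, 3), different type slot
print(total.value)              # 5
```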
What is the performance?
End of explanation
"""
HTML(html)
"""
Explanation: Summary
CPython slowdown 1: interpreted bytecode execution
Fetching CPython bytecode ops, managing the stack machine, all ops in ceval.c.
Extensive error checking and handling.
CPython slowdown 2: dynamic type resolution
Type introspection, dynamic dispatch on every operation / method call, supporting generic operations.
And extensive error checking and handling, up-and-down the call stack.
References
https://github.com/kwmsmith/scipy-2017-cython-tutorial
https://jakevdp.github.io/PythonDataScienceHandbook/01.06-errors-and-debugging.html
https://jakevdp.github.io/PythonDataScienceHandbook/01.07-timing-and-profiling.html
https://toucantoco.com/en/tech-blog/tech/python-performance-optimization
http://mortada.net/easily-profile-python-code-in-jupyter.html
End of explanation
"""
|
Petr-By/qtpyvis
|
notebooks/caffe/train.ipynb
|
mit
|
solver = caffe.SGDSolver('mnist_solver.prototxt')
solver.net.forward()
niter = 2500
test_interval = 100
# losses will also be stored in the log
train_loss = np.zeros(niter)
test_acc = np.zeros(int(np.ceil(niter / test_interval)))
output = np.zeros((niter, 8, 10))
# the main solver loop
for it in range(niter):
solver.step(1) # SGD by Caffe
# store the train loss
train_loss[it] = solver.net.blobs['loss'].data
# store the output on the first test batch
    # (start the forward pass at conv2d_1 to avoid loading new data)
solver.test_nets[0].forward(start='conv2d_1')
output[it] = solver.test_nets[0].blobs['dense_2'].data[:8]
# run a full test every so often
# (Caffe can also do this for us and write to a log, but we show here
# how to do it directly in Python, where more complicated things are easier.)
if it % test_interval == 0:
print ('Iteration', it, 'testing...')
correct = 0
test_iter = 100
for test_it in range(test_iter):
solver.test_nets[0].forward()
correct += sum(solver.test_nets[0].blobs['dense_2'].data.argmax(1)
== solver.test_nets[0].blobs['label'].data)
test_acc[it // test_interval] = correct / (64 * test_iter)
_, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(np.arange(niter), train_loss)
ax2.plot(test_interval * np.arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
ax2.set_title('Test Accuracy: {:.2f}'.format(test_acc[-1]))
"""
Explanation: In Caffe, models are specified in separate protobuf files.
Additionally, a solver has to be specified, which determines the training parameters (a rough sketch of such a solver definition is shown below).
Instantiate the solver and train the network.
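The mnist_solver.prototxt used above is not reproduced in this notebook; as a hedged illustration, a minimal SGD solver definition might look roughly like the string below (the field names are standard Caffe solver parameters, while the values and the net filename are made up):
```python
# Illustration only -- 'mnist_net.prototxt' and all values are hypothetical.
example_solver = """
net: "mnist_net.prototxt"     # path to the model definition
base_lr: 0.01                 # initial learning rate
momentum: 0.9
weight_decay: 0.0005
lr_policy: "inv"
gamma: 0.0001
power: 0.75
max_iter: 2500
snapshot_prefix: "mnist"
solver_mode: CPU
"""
```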
End of explanation
"""
solver.net.save('mnist.caffemodel')
"""
Explanation: The weights are saved in a .caffemodel file.
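To reuse the trained weights later, they can be loaded together with a network definition; a sketch (the prototxt filename here is hypothetical, standing in for whatever net definition was used):
```python
# Sketch only: 'mnist_net.prototxt' is a placeholder for the actual net definition file.
net = caffe.Net('mnist_net.prototxt', 'mnist.caffemodel', caffe.TEST)
```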
End of explanation
"""
|
AllenDowney/ModSimPy
|
soln/throwingaxe_soln.ipynb
|
mit
|
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Case study: Throwing Axe
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
radian = UNITS.radian
def make_system():
"""Makes a System object for the given conditions.
returns: System with init, ...
"""
P = Vector(0, 2) * m
V = Vector(8, 4) * m/s
theta = 2 * radian
omega = -7 * radian/s
init = State(P=P, V=V, theta=theta, omega=omega)
t_end = 1.0 * s
return System(init=init, t_end=t_end,
g = 9.8 * m/s**2,
mass = 1.5 * kg,
length = 0.7 * m)
"""
Explanation: Throwing axe
Our favorite event at Lumberjack Competitions is axe throwing. The axes used for this event typically weigh 1.5 to 2 kg, with handles roughly 0.7 m long. They are thrown overhead at a target typically 6 m away and 1.5 m off the ground. Normally, the axe makes one full rotation in the air to hit the target blade first, with the handle close to vertical.
Here's a version of make_system that sets the initial conditions.
The state variables are the position vector P, the velocity vector V, theta, and omega, where theta is the orientation (angle) of the axe in radians and omega is the angular velocity in radians per second.
I chose initial conditions based on videos of axe throwing.
End of explanation
"""
system = make_system()
system.init
"""
Explanation: Let's make a System
End of explanation
"""
def slope_func(state, t, system):
"""Computes derivatives of the state variables.
    state: State (P, V, theta, omega)
    t: time
    system: System object with g, mass, length
    returns: sequence (V, A, omega, alpha)
"""
P, V, theta, omega = state
A = Vector(0, -system.g)
alpha = 0 * radian / s**2
return V, A, omega, alpha
"""
Explanation: As a simple starting place, I ignore drag, so vx and omega are constant, and ay is just -g.
End of explanation
"""
slope_func(system.init, 0, system)
"""
Explanation: As always, let's test the slope function with the initial conditions.
End of explanation
"""
results, details = run_ode_solver(system, slope_func)
details
results.tail()
"""
Explanation: And then run the simulation.
End of explanation
"""
def plot_position(P):
x = P.extract('x')
y = P.extract('y')
plot(x, label='x')
plot(y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results.P)
"""
Explanation: Visualizing the results
The simplest way to visualize the results is to plot the state variables as a function of time.
End of explanation
"""
def plot_velocity(V):
vx = V.extract('x')
vy = V.extract('y')
plot(vx, label='vx')
plot(vy, label='vy')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results.V)
plot(results.theta, label='theta', color='C2')
decorate(xlabel='Time (s)',
ylabel='Angle (radian)')
plot(results.omega, label='omega', color='C2')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
"""
Explanation: We can plot the velocities the same way.
End of explanation
"""
def plot_trajectory(P, **options):
x = P.extract('x')
y = P.extract('y')
plot(x, y, **options)
decorate(xlabel='x position (m)',
ylabel='y position (m)')
plot_trajectory(results.P, label='trajectory')
"""
Explanation: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.
End of explanation
"""
def make_frame(theta):
rhat = Vector(pol2cart(theta, 1))
that = rhat.perp()
return rhat, that
P, V, theta, omega = results.first_row()
rhat, that = make_frame(theta)
rhat
that
np.dot(rhat, that)
O = Vector(0, 0)
plot_segment(O, rhat)
plot_segment(O, that)
plt.axis('equal')
xs = results.P.extract('x')
ys = results.P.extract('y')
l1 = 0.6 * m
l2 = 0.1 * m
def draw_func(state, t):
plt.axis('equal')
set_xlim([0,8])
set_ylim([0,6])
P, V, theta, omega = state
rhat, that = make_frame(theta)
# plot the handle
A = P - l1 * rhat
B = P + l2 * rhat
plot_segment(A, B, color='red')
# plot the axe head
C = B + l2 * that
D = B - l2 * that
plot_segment(C, D, color='black', linewidth=10)
# plot the COG
x, y = P
plot(x, y, 'bo')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
"""
Explanation: Animation
Animating this system is a little more complicated, if we want to show the shape and orientation of the axe.
It is useful to construct a frame with $\hat{r}$ along the handle of the axe and $\hat{\theta}$ perpendicular.
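Restating what make_frame computes (assuming pol2cart and perp behave as in the modsim library, i.e. perp rotates a vector by +90 degrees):
$$\hat{r} = (\cos\theta,\ \sin\theta), \qquad \hat{\theta} = (-\sin\theta,\ \cos\theta)$$
so in draw_func the handle runs from $A = P - l_1\hat{r}$ to $B = P + l_2\hat{r}$, and the head spans $C = B + l_2\hat{\theta}$ to $D = B - l_2\hat{\theta}$.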
Now we're ready to animate the results. The following figure shows the frame and the labeled points A, B, C, and D.
End of explanation
"""
state = results.first_row()
draw_func(state, 0)
animate(results, draw_func)
"""
Explanation: During the animation, the parts of the axe seem to slide around relative to each other. I think that's because the lines and circles get rounded off to the nearest pixel.
Here's the final state of the axe at the point of impact (assuming the target is 8 m away).
End of explanation
"""
|
xdze2/thermique_appart
|
drafts/get_sun_position.ipynb
|
mit
|
map_coords = (45.1973288, 5.7103223) #( 45.166672, 5.71667 )
import pysolar.solar as solar
import datetime as dt
d = dt.datetime.now()
#d = dt.datetime(2017, 6, 20, 13, 30, 0, 130320)
solar.get_altitude( *map_coords, d)
solar.get_azimuth(*map_coords, d)
Alt = [ solar.get_altitude(*map_coords, dt.datetime(2017, 12, 21, h, 0, 0, 0)) for h in range(0, 24) ]
Az = [ solar.get_azimuth(*map_coords, dt.datetime(2017, 12, 21, h, 0, 0, 0)) for h in range(0, 24) ]
Az = np.array( Az )
Az[ Az < -180 ] = Az[ Az < -180 ]+360
plt.plot( Alt )
plt.plot( Az )
plt.plot([0, 24], [0, 0], ':'); plt.ylim([-120, 120]); plt.xlabel('hour of the day');
import pysolar.radiation as radiation
radiation.get_radiation_direct( d, 15 ) # W/m2
"""
Explanation: Get the position of the Sun
There is a library for this: PySolar!
http://pysolar.org/
http://docs.pysolar.org/en/latest/
pip install pysolar
Note: see also an online JS app: http://suncalc.net/
End of explanation
"""
from numpy import genfromtxt
horizon_data = genfromtxt('horizon.csv', delimiter=',').T
horizon_data
def isUpperHorizon( azimuth, altitude_deg ):
h = np.interp(-azimuth, horizon_data[0, :], horizon_data[1, :])
if h > altitude_deg:
return 0
else:
return 1
isUpperHorizon( 20, 2 )
horizon_data[1, :].max()
"""
Explanation: Note: the solar flux above the atmosphere (the solar constant) is F = 1360.8 W/m2
https://fr.wikipedia.org/wiki/Constante_solaire
With the horizon
End of explanation
"""
import math
import pysolar.radiation as radiation
import pysolar.solar as solar
import datetime as dt
def get_radiation_direct(d, alt):
if alt>0:
return radiation.get_radiation_direct( d, alt ) # W/m2
else:
return 0
def get_flux_surface( coords, date, sigma, phi_C ):
# Surface orientation :
# sigma : deg, vertical angle of the surface, ref. to the horizontal
# phi_C : deg, azimuth, relative to south, with positive values in the southeast direction and negative values in
# the southwest
# Sun position
phi_S_deg = solar.get_azimuth( *coords, date ) # deg, azimuth of the sun,relative to south
beta_deg = solar.get_altitude( *coords, date ) # deg, altitude angle of the sun
    I0 = get_radiation_direct( date, beta_deg ) # W/m2
I0 = I0* isUpperHorizon( phi_S_deg, beta_deg )
beta = beta_deg*math.pi/180 # rad
phi_S = phi_S_deg*math.pi/180 #rad
sigma = sigma*math.pi/180
phi_C = phi_C*math.pi/180
cosTheta = math.cos(beta)*math.cos( phi_S - phi_C )*math.sin( sigma ) + math.cos( sigma )*math.sin( beta )
if cosTheta >0 :
        Isurf = I0*cosTheta # projected flux, W/m2
    else:
        Isurf = 0 # but diffuse radiation would remain...
return Isurf
def get_flux_total( coords, date ):
# Sun position
beta_deg = solar.get_altitude( *coords, date ) # deg, altitude angle of the sun
    I0 = get_radiation_direct( date, beta_deg ) # W/m2
return I0
get_radiation_direct( d, -4 )
d = dt.datetime(2017, 6, 22, 11, 0, 0, 0)
sigma = 37
phi_C = 50
F = get_flux_surface( map_coords, d, sigma, phi_C )
print( F )
d
import pandas as pd
start = dt.datetime(2017, 6, 22, 0, 0, 0, 0)
end = dt.datetime(2017, 6, 22, 23, 59, 0, 0)
d_range = pd.date_range( start=start, end=end, freq='5min' )
F_tot = [ get_flux_total(map_coords, d ) for d in d_range ]
F_est = [ get_flux_surface(map_coords, d, sigma, phi_C ) for d in d_range ]
F_ouest = [ get_flux_surface(map_coords, d, sigma, phi_C+180 ) for d in d_range ]
F_sud = [ get_flux_surface(map_coords, d, 90, phi_C-90 ) for d in d_range ]
x = d_range.hour + d_range.minute/60
plt.figure(figsize=(12, 5))
plt.plot( x, F_est )
plt.plot( x, F_ouest )
plt.plot( x, F_sud )
plt.plot( x, F_tot, 'k:' )
plt.xlabel('hour of the day');
plt.ylabel('projected solar flux');
d_range.hour + d_range.minute/60
# Sun position
phi_S = solar.get_azimuth( *map_coords, d ) # deg, azimuth of the sun,relative to south
beta = solar.get_altitude( *map_coords, d ) # deg, altitude angle of the sun
I0 = radiation.get_radiation_direct( d, 65 ) # W/m2
cosTheta = math.cos(beta)*math.cos( phi_S - phi_C )*math.sin( sigma ) + math.cos( sigma )*math.sin( beta )
Isurf = I0*cosTheta # flux projeté, W/m2
cosTheta
"""
Explanation: Projection onto a tilted surface
http://www.a-ghadimi.com/files/Courses/Renewable%20Energy/REN_Book.pdf
page 414
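Written out, the projection used in get_flux_surface above is the cosine of the angle of incidence between the sun direction and the surface normal (same variable names as the code):
$$\cos\theta_i = \cos\beta\,\cos(\phi_S - \phi_C)\,\sin\sigma + \cos\sigma\,\sin\beta$$
where $\beta$ is the solar altitude, $\phi_S$ the solar azimuth, $\phi_C$ the surface azimuth and $\sigma$ the tilt of the surface from the horizontal; the direct flux on the surface is then $I_{surf} = I_0\cos\theta_i$ whenever $\cos\theta_i > 0$.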
End of explanation
"""
Azi = np.array( [ solar.get_azimuth( *map_coords, d ) for d in d_range ] )
Azi[ Azi < -180 ] = Azi[ Azi < -180 ]+360
Alt = [ solar.get_altitude( *map_coords, d ) for d in d_range ]
Hor = [ np.interp(-a, horizon_data[0, :], horizon_data[1, :]) for a in Azi ]
plt.plot( Azi, Hor )
plt.plot( Azi, Alt )
plt.ylim([0, 80]);
Azi
"""
Explanation: Verification
End of explanation
"""
|
dolittle007/dolittle007.github.io
|
notebooks/GLM-poisson-regression.ipynb
|
gpl-3.0
|
## Interactive magics
%matplotlib inline
import sys
import warnings
warnings.filterwarnings('ignore')
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import patsy as pt
from scipy import optimize
# pymc3 libraries
import pymc3 as pm
import theano as thno
import theano.tensor as T
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = 14, 6
np.random.seed(0)
"""
Explanation: GLM: Poisson Regression
A minimal reproducable example of poisson regression to predict counts using dummy data.
This notebook is basically an excuse to demo Poisson regression using PyMC3, both manually and with the glm library, and to demo interactions using the patsy library. We will create some dummy data, Poisson distributed according to a linear model, and try to recover the coefficients of that linear model through inference.
For more statistical detail see:
Basic info on Wikipedia
GLMs: Poisson regression, exposure, and overdispersion in Chapter 6.2 of ARM, Gelmann & Hill 2006
This worked example from ARM 6.2 by Clay Ford
This very basic model is inspired by a project by Ian Osvald, which is concerned with understanding the various effects of external environmental factors upon the allergic sneezing of a test subject.
Contents
Setup
Local Functions
Generate Data
Poisson Regression
Create Design Matrices
Create Model
Sample Model
View Diagnostics and Outputs
Package Requirements (shown as a conda-env YAML):
```
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.5
- jupyter
- ipywidgets
- numpy
- scipy
- matplotlib
- pandas
- pytables
- scikit-learn
- statsmodels
- seaborn
- patsy
- requests
- pip
- pip:
- regex
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
```
Setup
End of explanation
"""
def strip_derived_rvs(rvs):
'''Convenience fn: remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
def plot_traces_pymc(trcs, varnames=None):
''' Convenience fn: plot traces with overlaid means and values '''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
"""
Explanation: Local Functions
End of explanation
"""
# decide poisson theta values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# create samples
q = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df.tail()
"""
Explanation: Generate Data
This dummy dataset is created to emulate some data created as part of a study into quantified self, and the real data is more complicated than this. Ask Ian Osvald if you'd like to know more https://twitter.com/ianozsvald
Assumptions:
The subject sneezes N times per day, recorded as nsneeze (int)
The subject may or may not drink alcohol during that day, recorded as alcohol (boolean)
The subject may or may not take an antihistamine medication during that day, recorded as the negative action nomeds (boolean)
I postulate (probably incorrectly) that sneezing occurs at some baseline rate, which increases if an antihistamine is not taken, and further increased after alcohol is consumed.
The data is aggregated per day, to yield a total count of sneezes on that day, with a boolean flag for alcohol and antihistamine usage, under the big assumption that these factors have a direct causal relationship with nsneeze.
Create 4000 days of data: daily counts of sneezes which are Poisson distributed w.r.t. alcohol consumption and antihistamine usage
End of explanation
"""
df.groupby(['alcohol','nomeds']).mean().unstack()
"""
Explanation: View means of the various combinations (poisson mean values)
End of explanation
"""
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df,
kind='count', size=4, aspect=1.5)
"""
Explanation: Briefly Describe Dataset
End of explanation
"""
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds' # full patsy formulation
fml = 'nsneeze ~ alcohol * nomeds' # lazy, alternative patsy formulation
"""
Explanation: Observe:
This looks a lot like poisson-distributed count data (because it is)
With nomeds == False and alcohol == False (top-left, aka antihistamines WERE used, alcohol was NOT drunk) the mean of the Poisson distribution of sneeze counts is low.
Changing alcohol == True (top-right) increases the sneeze count nsneeze slightly
Changing nomeds == True (lower-left) increases the sneeze count nsneeze further
Changing both alcohol == True and nomeds == True (lower-right) increases the sneeze count nsneeze a lot, increasing both the mean and variance.
Poisson Regression
Our model here is a very simple Poisson regression, allowing for interaction of terms:
$$ \theta = \exp(\beta X) $$
$$ Y_{\text{sneeze count}} \sim \text{Poisson}(\theta) $$
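As a quick sanity check on this notation, here is a tiny NumPy-only sketch of the same generative model (the coefficients are the multipliers used to build the dummy data above, not fitted estimates):
```python
import numpy as np

rng = np.random.RandomState(0)
X = np.array([[1, 0, 0, 0],        # columns: intercept, alcohol, nomeds, alcohol:nomeds
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 1, 1, 1]])
beta = np.log([1, 3, 6, 2])        # log of the theta multipliers
theta = np.exp(X @ beta)           # -> 1, 3, 6, 36
y = rng.poisson(theta)             # simulated daily sneeze counts
```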
Create linear model for interaction of terms
End of explanation
"""
(mx_en, mx_ex) = pt.dmatrices(fml, df, return_type='dataframe', NA_action='raise')
pd.concat((mx_ex.head(3),mx_ex.tail(3)))
"""
Explanation: 1. Manual method, create design matrices and manually specify model
Create Design Matrices
End of explanation
"""
with pm.Model() as mdl_fish:
# define priors, weakly informative Normal
b0 = pm.Normal('b0_intercept', mu=0, sd=10)
b1 = pm.Normal('b1_alcohol[T.True]', mu=0, sd=10)
b2 = pm.Normal('b2_nomeds[T.True]', mu=0, sd=10)
b3 = pm.Normal('b3_alcohol[T.True]:nomeds[T.True]', mu=0, sd=10)
# define linear model and exp link function
theta = (b0 +
b1 * mx_ex['alcohol[T.True]'] +
b2 * mx_ex['nomeds[T.True]'] +
b3 * mx_ex['alcohol[T.True]:nomeds[T.True]'])
## Define Poisson likelihood
y = pm.Poisson('y', mu=np.exp(theta), observed=mx_en['nsneeze'].values)
"""
Explanation: Create Model
End of explanation
"""
with mdl_fish:
trc_fish = pm.sample(2000, tune=1000, njobs=4)[1000:]
"""
Explanation: Sample Model
End of explanation
"""
rvs_fish = [rv.name for rv in strip_derived_rvs(mdl_fish.unobserved_RVs)]
plot_traces_pymc(trc_fish, varnames=rvs_fish)
"""
Explanation: View Diagnostics
End of explanation
"""
np.exp(pm.df_summary(trc_fish, varnames=rvs_fish)[['mean','hpd_2.5','hpd_97.5']])
"""
Explanation: Observe:
The model converges quickly and traceplots looks pretty well mixed
Transform coeffs and recover theta values
End of explanation
"""
with pm.Model() as mdl_fish_alt:
pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Poisson())
"""
Explanation: Observe:
The contributions from each feature as a multiplier of the baseline sneezecount appear to be as per the data generation:
exp(b0_intercept): mean=1.02 cr=[0.96, 1.08]
Roughly linear baseline count when no alcohol and meds, as per the generated data:
theta_noalcohol_meds = 1 (as set above)
theta_noalcohol_meds = exp(b0_intercept)
= 1
exp(b1_alcohol): mean=2.88 cr=[2.69, 3.09]
non-zero positive effect of adding alcohol, a ~3x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_meds = 3 (as set above)
theta_alcohol_meds = exp(b0_intercept + b1_alcohol)
= exp(b0_intercept) * exp(b1_alcohol)
= 1 * 3 = 3
exp(b2_nomeds[T.True]): mean=5.76 cr=[5.40, 6.17]
larger, non-zero positive effect of adding nomeds, a ~6x multiplier of
baseline sneeze count, as per the generated data:
theta_noalcohol_nomeds = 6 (as set above)
theta_noalcohol_nomeds = exp(b0_intercept + b2_nomeds)
= exp(b0_intercept) * exp(b2_nomeds)
= 1 * 6 = 6
exp(b3_alcohol[T.True]:nomeds[T.True]): mean=2.12 cr=[1.98, 2.30]
small, positive interaction effect of alcohol and meds, a ~2x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_nomeds = 36 (as set above)
theta_alcohol_nomeds = exp(b0_intercept + b1_alcohol + b2_nomeds + b3_alcohol:nomeds)
= exp(b0_intercept) * exp(b1_alcohol) * exp(b2_nomeds) * exp(b3_alcohol:nomeds)
= 1 * 3 * 6 * 2 = 36
2. Alternative method, using pymc.glm
Create Model
Alternative automatic formulation using pmyc.glm
End of explanation
"""
with mdl_fish_alt:
trc_fish_alt = pm.sample(4000, tune=2000)[2000:]
"""
Explanation: Sample Model
End of explanation
"""
rvs_fish_alt = [rv.name for rv in strip_derived_rvs(mdl_fish_alt.unobserved_RVs)]
plot_traces_pymc(trc_fish_alt, varnames=rvs_fish_alt)
"""
Explanation: View Traces
End of explanation
"""
np.exp(pm.df_summary(trc_fish_alt, varnames=rvs_fish_alt)[['mean','hpd_2.5','hpd_97.5']])
"""
Explanation: Transform coeffs
End of explanation
"""
np.percentile(trc_fish_alt['mu'], [25,50,75])
"""
Explanation: Observe:
The traceplots look well mixed
The transformed model coeffs look moreorless the same as those generated by the manual model
Note also that the mu coeff is for the overall mean of the dataset and has an extreme skew, if we look at the median value ...
End of explanation
"""
df['nsneeze'].mean()
"""
Explanation: ... of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of:
End of explanation
"""
|
weleen/mxnet
|
example/notebooks/basic/image_io.ipynb
|
apache-2.0
|
%matplotlib inline
import os
import subprocess
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
# change this to your mxnet location
MXNET_HOME = '/scratch/mxnet'
"""
Explanation: Image Data IO
This tutorial explains how to prepare, load and train with image data in MXNet. All IO in MXNet is handled via mx.io.DataIter and its subclasses, which is explained here. In this tutorial we focus on how to use pre-built data iterators as well as custom iterators to process image data.
There are mainly three ways of loading image data in MXNet:
- [NEW] mx.img.ImageIter: implemented in python, easily customizable, can load from both .rec files and raw image files.
- [OLD] mx.io.ImageRecordIter: implemented in backend (C++), less customizable but can be used in all language bindings, load from .rec files
- Custom iterator by inheriting mx.io.DataIter
First, we explain the record io file format used by mxnet:
RecordIO
Record IO is the main file format used by MXNet for data IO. It supports reading and writing on various file systems including distributed file systems like Hadoop HDFS and AWS S3.
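For instance, raw records can be written and read back directly (a minimal sketch using mx.recordio.MXRecordIO; the file name is arbitrary):
```python
# Write two raw records, then read them back in order.
writer = mx.recordio.MXRecordIO('tmp.rec', 'w')
writer.write(b'record 0')
writer.write(b'record 1')
writer.close()

reader = mx.recordio.MXRecordIO('tmp.rec', 'r')
print(reader.read())   # b'record 0'
print(reader.read())   # b'record 1'
reader.close()
```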
First, we download the Caltech 101 dataset that contains 101 classes of objects and convert them into record io format:
Setup:
End of explanation
"""
os.system('wget http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz -P data/')
os.chdir('data')
os.system('tar -xf 101_ObjectCategories.tar.gz')
os.chdir('../')
"""
Explanation: Download and unzip:
End of explanation
"""
os.system('python %s/tools/im2rec.py --list=1 --recursive=1 --shuffle=1 --test-ratio=0.2 data/caltech data/101_ObjectCategories'%MXNET_HOME)
"""
Explanation: Let's take a look at the data. As you can see, under the root folder every category has a subfolder.
Now let's convert them into record io format. First we need to make a list that contains all the image files and their categories:
End of explanation
"""
os.system("python %s/tools/im2rec.py --num-thread=4 --pass-through=1 data/caltech data/101_ObjectCategories"%MXNET_HOME)
"""
Explanation: The resulting list file is in the format index\t(one or more label)\tpath. In this case there is only one label for each image but you can modify the list to add in more for multi label training.
Then we can use this list to create our record io file:
End of explanation
"""
data_iter = mx.io.ImageRecordIter(
path_imgrec="./data/caltech_train.rec", # the target record file
data_shape=(3, 227, 227), # output data shape. An 227x227 region will be cropped from the original image.
batch_size=4, # number of samples per batch
resize=256 # resize the shorter edge to 256 before cropping
    # ... you can add more augmentation options here. use help(mx.io.ImageRecordIter) to see all possible choices
)
data_iter.reset()
batch = data_iter.next()
data = batch.data[0]
for i in range(4):
plt.subplot(1,4,i+1)
plt.imshow(data[i].asnumpy().astype(np.uint8).transpose((1,2,0)))
plt.show()
"""
Explanation: The record io files are now saved in the data folder.
ImageRecordIter
mx.io.ImageRecordIter can be used for loading image data saved in record io format. It is available in all frontend languages, but as it's implemented in C++, it is less flexible.
To use ImageRecordIter, simply create an instance by loading your record file:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/inm/cmip6/models/inm-cm4-8/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm4-8', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM4-8
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
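For example (hypothetical name and address, shown only to illustrate the call signature given in the cell above):
```python
# Hypothetical values -- replace with the real document author before publishing.
DOC.set_author("Jane Doe", "jane.doe@example.org")
```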
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
akshayrangasai/akshayrangasai.github.io
|
Blog Post Content/.ipynb_checkpoints/Airport Waiting Time-checkpoint.ipynb
|
mit
|
%matplotlib inline
#Imports for solution
import numpy as np
import scipy.stats as sp
from matplotlib.pyplot import *
#Setting Distribution variables
##All rates are in per Minute.
"""
Explanation: Airport Wait Time Simulation
End of explanation
"""
#Everything will be modeled as a Poisson process
SIM_TIME = 180
QUEUE_ARRIVAL_RATE = 15
N_SCANNERS =4
SCANNER_BAG_CHECKING_RATE = 3 #Takes 20 seconds to put your bag on Scanner
FRISK_MACHINES_PER_SCANNER = 3 #Number of frisking (people-checking) machines per scanner
N_FRISK_MACHINES = N_SCANNERS*FRISK_MACHINES_PER_SCANNER
FRISK_CHECKING_RATE = 2 #Half a minute per frisk
SCANNER_RATE = SCANNER_BAG_CHECKING_RATE*N_SCANNERS
FRISK_RATE = FRISK_CHECKING_RATE*N_FRISK_MACHINES
FRISK_ARRIVAL_RATE = SCANNER_RATE
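#Added sanity check (illustration, not part of the original write-up): the queue is stable only
#if the arrival rate stays below the slowest service stage. Here min(SCANNER_RATE, FRISK_RATE)
#is 12 per minute, below QUEUE_ARRIVAL_RATE = 15, so a growing backlog is expected.
print("Arrival rate: %d/min, bottleneck service rate: %d/min" % (QUEUE_ARRIVAL_RATE, min(SCANNER_RATE, FRISK_RATE)))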
"""
Explanation: For this simulation, we'll be using numpy and scipy for their statistical and matrix math prowess and matplotlib as our primary plotting tool
End of explanation
"""
#Queue Modeling
ARRIVAL_PATTERN = sp.poisson.rvs(QUEUE_ARRIVAL_RATE,size = SIM_TIME) #for an hour
ARRIVAL_LIST = []
for index, item in enumerate(ARRIVAL_PATTERN):
ARRIVAL_LIST += [index]*item
#print ARRIVAL_LIST
TIMEAXIS = np.linspace(1,SIM_TIME,SIM_TIME)
fig = figure()
arrivalplot = plot(TIMEAXIS,ARRIVAL_PATTERN,'go-')
ylabel('People arrived at time t')
xlabel("Time (minutes)")
show()
"""
Explanation: Setting the arrival rates for each of the steps in the airport arrival process. First is the arrival to the queue, then to the scanning machines, and then from scanning to the frisking booth.
We have discounted travel time in the queue, and assumed that all the frisking and scanning booths are identical, so the overall rate of scanning and frisking is the sum of the individual rates at each step.
End of explanation
"""
SCAN_PATTERN = sp.poisson.rvs(SCANNER_RATE,size=SIM_TIME)
SCAN_LIST = []
for index, item in enumerate(SCAN_PATTERN):
SCAN_LIST += [index]*item
arrivalfig = figure()
arrivalplot = plot(TIMEAXIS,SCAN_PATTERN,'o-')
ylabel('People arrived at time t for the scanner')
xlabel("Time (minutes)")
show()
"""
Explanation: We're taking the arrivals at each of the time intervals, generated by a poisson function and storing the number of people who have arrived at each minute.
The ARRIVAL_LIST variable is used to calculate the entry time of each of the people in the queue. This will be later used to assess overall wait time for people in the queue.
The time axis is used to help plot results as X-axis variable
End of explanation
"""
FRISK_PATTERN = sp.poisson.rvs(FRISK_RATE,size=SIM_TIME)
FRISK_LIST = []
for index, item in enumerate(FRISK_PATTERN):
FRISK_LIST += [index]*item
arrivalfig = figure()
arrivalplot = plot(TIMEAXIS,FRISK_PATTERN,'ro-')
ylabel('People Leaving at time t from frisking counter')
xlabel("Time (minutes)")
show()
"""
Explanation: And this is the pattern for the scanner
End of explanation
"""
EXIT_PAIRS = zip(FRISK_PATTERN,SCAN_PATTERN)
EXIT_NUMBER = [min(k) for k in EXIT_PAIRS]
plot(EXIT_NUMBER,'o')
show()
EXIT_PATTERN = []
"""
Explanation: It is critical to note that this ignores queuing and assumes that the sampled number of people is processed at the frisking counter in each time interval. This will be used in conjunction with the scanner output to determine the bottleneck at each point in time.
End of explanation
"""
for index, item in enumerate(EXIT_NUMBER):
EXIT_PATTERN += [index]*item
RESIDUAL_ARRIVAL_PATTERN = ARRIVAL_LIST[0:len(EXIT_PATTERN)]
WAIT_TIMES = [m-n for m,n in zip(EXIT_PATTERN,RESIDUAL_ARRIVAL_PATTERN)]
#print EXIT_PATTERN
'''
for i,val in EXIT_PATTERN:
WAIT_TIMES += [ARRIVAL_PATTERN(i) - val]
'''
plot(WAIT_TIMES,'r-')
ylabel('Wait times for people entering the queue')
xlabel("Order of entering the queue")
show()
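#Added summary (illustration only): aggregate statistics of the simulated wait times.
print("Mean wait: %.1f min, max wait: %.1f min over %d processed passengers"
      % (np.mean(WAIT_TIMES), np.max(WAIT_TIMES), len(WAIT_TIMES)))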
"""
Explanation: The minimum of the number of people processed by the scanners and by the frisking counters is the bottleneck at any given time, and this becomes the exit rate at that time.
End of explanation
"""
|
MaximMalakhov/coursera
|
Learning on marked data/Week 5/task_nn.ipynb
|
mit
|
# Initialize the main modules used
%matplotlib inline
import random
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
import numpy as np
"""
Explanation: In this assignment you will tune a two-layer neural network for a multi-class classification problem. You are asked to implement loading and splitting the input data, training the network, and computing the classification error. The goal is to determine the optimal number of neurons in the hidden layer: the number should be chosen so that the model is reasonably simple and at the same time gives a sufficiently accurate prediction without overfitting. The purpose of the assignment is to show how the accuracy and learning capacity of a network depend on its complexity.
For the multi-class classification problem we will use the pybrain neural network library. The library contains the main modules for initializing a two-layer feedforward neural network, estimating its parameters with the backpropagation algorithm, and computing the error.
The pybrain library can be installed with the standard pip package manager:
pip install pybrain
Other installation options are described in the documentation.
Data
We consider the task of predicting wine quality from its physico-chemical properties [1]. The data are publicly available in the UCI repository and contain 1599 samples of red wine described by 11 features, among them acidity, sugar and alcohol content, etc. Each sample also has a quality score on a scale from 0 to 10. The task is to predict the quality score from the feature description.
[1] P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
End of explanation
"""
with open('winequality-red.csv') as f:
f.readline() # skip the header line
data = np.loadtxt(f, delimiter=';')
"""
Explanation: Load the data
End of explanation
"""
import urllib
# URL for the Wine Quality Data Set (UCI Machine Learning Repository)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"
# download the file
f = urllib.urlopen(url)
f.readline() # skip the header line
data = np.loadtxt(f, delimiter=';')
"""
Explanation: Alternatively, the data can be downloaded directly from the UCI repository using the urllib library.
End of explanation
"""
TRAIN_SIZE = 0.7 # Split the data into training and validation parts in a 70/30% proportion
from sklearn.cross_validation import train_test_split
y = data[:, -1]
np.place(y, y < 5, 5)
np.place(y, y > 7, 7)
y -= min(y)
X = data[:, :-1]
X = normalize(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=TRAIN_SIZE, random_state=0)
"""
Explanation: Extract the target variable from the data. The classes in this task are imbalanced: the majority of samples have a quality score between 5 and 7. We reduce the task to three classes: samples with a quality score below five are assigned a score of 5, and samples with a quality score above seven are assigned a score of 7.
End of explanation
"""
from pybrain.datasets import ClassificationDataSet # Структура данных pybrain
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
from pybrain.utilities import percentError
"""
Explanation: Two-layer neural network
A two-layer neural network implements a decision function that can be written as the following composition:
$f(x,W)=h^{(2)}\left(\sum\limits_{i=1}^D w_i^{(2)}h^{(1)}\left(\sum\limits_{j=1}^n w_{ji}^{(1)}x_j+b_i^{(1)}\right)+b^{(2)}\right)$, where
$x$ -- the input object (a wine sample described by 11 features), $x_j$ -- the corresponding feature,
$n$ -- the number of neurons in the input layer, equal to the number of features,
$D$ -- the number of neurons in the hidden layer,
$w_i^{(2)}, w_{ji}^{(1)}, b_i^{(1)}, b^{(2)}$ -- the network parameters, i.e. the neuron weights,
$h^{(1)}, h^{(2)}$ -- activation functions.
A linear activation function is used in the hidden layer. The output layer uses the softmax activation function, which generalizes the sigmoid function to the multi-class case:
$y_k=\text{softmax}_k(a_1,\ldots,a_K)=\frac{\exp(a_k)}{\sum_{j=1}^K\exp(a_j)}.$
Training the network parameters
The optimal network parameters $W_{opt}$ are found by minimizing the error function:
$W_{opt}=\arg\min\limits_{W}L(W)+\lambda\|W\|^2$.
Here $L(W)$ is the multi-class classification error function,
$L(W)=- \sum^N_{n=1}\sum^K_{k=1} t_{kn} \log(y_{kn}),$
$t_{kn}$ -- the binary-encoded class labels, $K$ -- the number of classes, $N$ -- the number of objects,
and $\lambda\|W\|^2$ is a regularization term that controls the total weight of the network parameters and prevents overfitting.
The parameters are optimized with the backpropagation algorithm.
Load the main modules: ClassificationDataSet -- the pybrain data structure, buildNetwork -- neural network initialization, BackpropTrainer -- network parameter optimization via backpropagation, SoftmaxLayer -- the softmax function used in the output layer, percentError -- the classification error function (fraction of incorrect answers).
End of explanation
"""
# Define the main constants
HIDDEN_NEURONS_NUM = 100 # Number of neurons in the hidden layer of the network
MAX_EPOCHS = 100 # Maximum number of iterations of the network parameter optimization algorithm
"""
Explanation: Initialize the main parameters of the task: HIDDEN_NEURONS_NUM -- the number of neurons in the hidden layer, MAX_EPOCHS -- the maximum number of iterations of the optimization algorithm
End of explanation
"""
# Convert the data into the ClassificationDataSet structure
# Training part
ds_train = ClassificationDataSet(np.shape(X)[1], nb_classes=len(np.unique(y_train)))
# First argument -- the number of features np.shape(X)[1], second argument -- the number of class labels len(np.unique(y_train))
ds_train.setField('input', X_train) # Initialize the inputs
ds_train.setField('target', y_train[:, np.newaxis]) # Initialize the targets; np.newaxis creates a column vector
ds_train._convertToOneOfMany( ) # Binarize the target vector
# Validation part
ds_test = ClassificationDataSet(np.shape(X)[1], nb_classes=len(np.unique(y_train)))
ds_test.setField('input', X_test)
ds_test.setField('target', y_test[:, np.newaxis])
ds_test._convertToOneOfMany( )
"""
Explanation: Initialize the ClassificationDataSet data structure used by pybrain. The constructor takes two arguments: the number of features np.shape(X)[1] and the number of distinct class labels len(np.unique(y)).
In addition, binarize the target variable with the _convertToOneOfMany( ) function and split the data into training and validation parts.
End of explanation
"""
np.random.seed(0) # Fix the seed for reproducible results
# Build a feedforward network
net = buildNetwork(ds_train.indim, HIDDEN_NEURONS_NUM, ds_train.outdim, outclass=SoftmaxLayer)
# ds.indim -- the number of input-layer neurons, equal to the number of features
# ds.outdim -- the number of output-layer neurons, equal to the number of class labels
# SoftmaxLayer -- activation function suitable for multi-class classification
init_params = np.random.random((len(net.params))) # Initialize the network weights for reproducibility
net._setParameters(init_params)
"""
Explanation: Initialize the two-layer network and optimize its parameters. The initialization arguments are:
ds.indim -- the number of neurons in the input layer, equal to the number of features (11 in our case),
HIDDEN_NEURONS_NUM -- the number of neurons in the hidden layer,
ds.outdim -- the number of neurons in the output layer, equal to the number of distinct class labels (3 in our case),
SoftmaxLayer -- the softmax function, used in the output layer for multi-class classification.
End of explanation
"""
random.seed(0)
# pybrain's parameter-tuning module uses the random module; fix the seed for reproducible results
trainer = BackpropTrainer(net, dataset=ds_train) # Initialize the optimizer
err_train, err_val = trainer.trainUntilConvergence(maxEpochs=MAX_EPOCHS)
line_train = plt.plot(err_train, 'b', err_val, 'r') # Plot the training/validation error curves
xlab = plt.xlabel('Iterations')
ylab = plt.ylabel('Error')
"""
Explanation: Optimize the network parameters. The plot below shows the convergence of the error function on the training/validation parts.
End of explanation
"""
res_train = net.activateOnDataset(ds_train).argmax(axis=1) # Predictions on the training set
print 'Error on train: ', percentError(res_train, ds_train['target'].argmax(axis=1)), '%' # Compute the error
res_test = net.activateOnDataset(ds_test).argmax(axis=1) # Predictions on the test set
print 'Error on test: ', percentError(res_test, ds_test['target'].argmax(axis=1)), '%' # Compute the error
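# Added illustration (not part of the original assignment): a confusion matrix gives a more
# detailed picture of the errors than the overall error rate.
from sklearn.metrics import confusion_matrix
print 'Confusion matrix (test):'
print confusion_matrix(ds_test['target'].argmax(axis=1), res_test)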
"""
Explanation: Compute the fraction of incorrect answers on the training and validation sets.
End of explanation
"""
random.seed(0) # Fix the seed for reproducible results
np.random.seed(0)
def plot_classification_error(hidden_neurons_num, res_train_vec, res_test_vec):
    # hidden_neurons_num -- array of size h with the numbers of hidden neurons to try,
    # hidden_neurons_num = [50, 100, 200, 500, 700, 1000];
    # res_train_vec -- array of size h with the classification error rates on the training set;
    # res_test_vec -- array of size h with the classification error rates on the validation set
    plt.figure()
    plt.plot(hidden_neurons_num, res_train_vec)
    plt.plot(hidden_neurons_num, res_test_vec, '-r')
def write_answer_nn(optimal_neurons_num):
    with open("nnets_answer1.txt", "w") as fout:
        fout.write(str(optimal_neurons_num))
hidden_neurons_num = [50, 100, 200, 500, 700, 1000]
res_train_vec = list()
res_test_vec = list()
for nnum in hidden_neurons_num:
    # Put your code here
    # Do not forget to initialize the weights with np.random.random((len(net.params)))
    net = buildNetwork(ds_train.indim, nnum, ds_train.outdim, outclass=SoftmaxLayer)
    init_params = np.random.random((len(net.params))) # Initialize the network weights for reproducibility
    net._setParameters(init_params)
    trainer = BackpropTrainer(net, dataset=ds_train) # Initialize the optimizer
    err_train, err_val = trainer.trainUntilConvergence(maxEpochs=MAX_EPOCHS)
    res_train = net.activateOnDataset(ds_train).argmax(axis=1) # Predictions on the training set
    res_test = net.activateOnDataset(ds_test).argmax(axis=1) # Predictions on the test set
    res_train_vec.append(percentError(res_train, ds_train['target'].argmax(axis=1)))
    res_test_vec.append(percentError(res_test, ds_test['target'].argmax(axis=1)))
# Plot the training and validation errors versus the number of neurons
plot_classification_error(hidden_neurons_num, res_train_vec, res_test_vec)
# Write to the output file the number of neurons with the minimum validation error
write_answer_nn(hidden_neurons_num[res_test_vec.index(min(res_test_vec))])
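# Added illustration (not required by the assignment): report the best hidden-layer size found
print 'Best number of hidden neurons:', hidden_neurons_num[res_test_vec.index(min(res_test_vec))]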
"""
Explanation: Assignment. Determining the optimal number of neurons.
The assignment is to study how the error on the validation set depends on the number of neurons in the hidden layer. The numbers of neurons to try are given in the vector
hidden_neurons_num = [50, 100, 200, 500, 700, 1000]
For the fixed train/validation split, compute the fraction of incorrect answers (classification errors) on the training/validation sets as a function of the number of hidden-layer neurons. Store the results in the arrays res_train_vec and res_test_vec, respectively. Use the plot_classification_error function to plot the training/validation errors versus the number of neurons. Are the error curves increasing/decreasing? At what number of neurons is the classification error minimal?
Use the write_answer_nn function to write to the output file the number of hidden-layer neurons at which the classification error on the validation set is minimal.
End of explanation
"""
|
Zhenxingzhang/AnalyticsVidhya
|
Articles/Parameter_Tuning_GBM_with_Example/GBM model.ipynb
|
apache-2.0
|
import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import cross_validation, metrics
from sklearn.grid_search import GridSearchCV
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 4
"""
Explanation: GBM Discussion
The data here is taken form the Data Hackathon3.x - http://datahack.analyticsvidhya.com/contest/data-hackathon-3x
Import Libraries:
End of explanation
"""
train = pd.read_csv('train_modified.csv')
test = pd.read_csv('test_modified.csv') # assumed companion file; passed to modelfit below but not used by it
target='Disbursed'
IDcol = 'ID'
train['Disbursed'].value_counts()
"""
Explanation: Load Data:
The data has gone through the following pre-processing:
1. City variable dropped because of too many categories
2. DOB converted to Age | DOB dropped
3. EMI_Loan_Submitted_Missing created which is 1 if EMI_Loan_Submitted was missing else 0 | EMI_Loan_Submitted dropped
4. EmployerName dropped because of too many categories
5. Existing_EMI imputed with 0 (median) - 111 values were missing
6. Interest_Rate_Missing created which is 1 if Interest_Rate was missing else 0 | Interest_Rate dropped
7. Lead_Creation_Date dropped because made little intuitive impact on outcome
8. Loan_Amount_Applied, Loan_Tenure_Applied imputed with missing
9. Loan_Amount_Submitted_Missing created which is 1 if Loan_Amount_Submitted was missing else 0 | Loan_Amount_Submitted dropped
10. Loan_Tenure_Submitted_Missing created which is 1 if Loan_Tenure_Submitted was missing else 0 | Loan_Tenure_Submitted dropped
11. LoggedIn, Salary_Account removed
12. Processing_Fee_Missing created which is 1 if Processing_Fee was missing else 0 | Processing_Fee dropped
13. Source - top 2 kept as is and all others combined into different category
14. Numerical and One-Hot-Coding performed
End of explanation
"""
def modelfit(alg, dtrain, dtest, predictors, performCV=True, printFeatureImportance=True, cv_folds=5):
#Fit the algorithm on the data
alg.fit(dtrain[predictors], dtrain['Disbursed'])
#Predict training set:
dtrain_predictions = alg.predict(dtrain[predictors])
dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
#Perform cross-validation:
if performCV:
cv_score = cross_validation.cross_val_score(alg, dtrain[predictors], dtrain['Disbursed'], cv=cv_folds, scoring='roc_auc')
#Print model report:
print "\nModel Report"
print "Accuracy : %.4g" % metrics.accuracy_score(dtrain['Disbursed'].values, dtrain_predictions)
print "AUC Score (Train): %f" % metrics.roc_auc_score(dtrain['Disbursed'], dtrain_predprob)
if performCV:
print "CV Score : Mean - %.7g | Std - %.7g | Min - %.7g | Max - %.7g" % (np.mean(cv_score),np.std(cv_score),np.min(cv_score),np.max(cv_score))
#Print Feature Importance:
if printFeatureImportance:
feat_imp = pd.Series(alg.feature_importances_, predictors).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
"""
Explanation: Define a function for modeling and cross-validation
This function will do the following:
1. fit the model
2. determine training accuracy
3. determine training AUC
4. perform CV if performCV is True
5. plot Feature Importance if printFeatureImportance is True
(the dtest argument is accepted for convenience but is not used inside the function)
End of explanation
"""
#Choose all predictors except target & IDcols
predictors = [x for x in train.columns if x not in [target, IDcol]]
gbm0 = GradientBoostingClassifier(random_state=10)
modelfit(gbm0, train, test, predictors)
"""
Explanation: Baseline Model
Since here the criteria is AUC, simply predicting the most prominent class would give an AUC of 0.5 always. Another way of getting a baseline model is to use the algorithm without tuning, i.e. with default parameters.
End of explanation
"""
#Choose all predictors except target & IDcols
predictors = [x for x in train.columns if x not in [target, IDcol]]
param_test1 = {'n_estimators':range(20,81,10)}
gsearch1 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, min_samples_split=500,
min_samples_leaf=50,max_depth=8,max_features='sqrt', subsample=0.8,random_state=10),
param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch1.fit(train[predictors],train[target])
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
"""
Explanation: GBM Models:
There 2 types of parameters here:
1. Tree-specific parameters
* min_samples_split
* min_samples_leaf
* max_depth
* min_leaf_nodes
* max_features
* loss function
2. Boosting specific paramters
* n_estimators
* learning_rate
* subsample
Approach for tackling the problem
Decide on a relatively high learning rate and tune the number of estimators required for it.
Tune the tree-specific parameters for that learning rate
Tune subsample
Lower learning rate as much as possible computationally and increase the number of estimators accordingly.
Step 1- Find the number of estimators for a high learning rate
We will use the following benchmarks for parameters:
1. min_samples_split = 500 : ~0.5-1% of total values. Since this is an imbalanced class problem, we'll take a small value
2. min_samples_leaf = 50 : used just for preventing overfitting; will be tuned later.
3. max_depth = 8 : since there is a high number of observations and predictors, choose a relatively high value
4. max_features = 'sqrt' : a general rule of thumb to start with
5. subsample = 0.8 : typically used value (will be tuned later)
0.1 is assumed to be a good learning rate to start with. Let's try to find the optimum number of estimators required for this.
End of explanation
"""
#Grid search on max_depth and min_samples_split
param_test2 = {'max_depth':range(5,16,2), 'min_samples_split':range(200,1001,200)}
gsearch2 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2.fit(train[predictors],train[target])
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
"""
Explanation: So we got 60 as the optimal number of estimators for the 0.1 learning rate. Note that 60 is a reasonable value and can be used as it is. But it might not be the same in all cases. Other situations:
1. If the value is around 20, you might want to try lowering the learning rate to 0.05 and re-run the grid search
2. If the value is too high (~100), tuning the other parameters will take a long time and you can try a higher learning rate
Step 2- Tune tree-specific parameters
Now, let's move on to tuning the tree parameters. We will do this in 3 stages:
1. Tune max_depth and min_samples_split
2. Tune min_samples_leaf
3. Tune max_features
End of explanation
"""
#Grid search on min_samples_split and min_samples_leaf
param_test3 = {'min_samples_split':range(1000,2100,200), 'min_samples_leaf':range(30,71,10)}
gsearch3 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,max_depth=9,
max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch3.fit(train[predictors],train[target])
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
modelfit(gsearch3.best_estimator_, train, test, predictors)
"""
Explanation: Since we reached the maximum of min_sales_split, we should check higher values as well. Also, we can tune min_samples_leaf with it now as max_depth is fixed. One might argue that max depth might change for higher value but if you observe the output closely, a max_depth of 9 had a better model for most of cases.
So lets perform a grid search on them:
End of explanation
"""
#Grid search on max_features
param_test4 = {'max_features':range(7,20,2)}
gsearch4 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,max_depth=9,
min_samples_split=1200, min_samples_leaf=60, subsample=0.8, random_state=10),
param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch4.fit(train[predictors],train[target])
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
"""
Explanation: Tune max_features:
End of explanation
"""
#Grid search on subsample
param_test5 = {'subsample':[0.6,0.7,0.75,0.8,0.85,0.9]}
gsearch5 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,max_depth=9,
min_samples_split=1200, min_samples_leaf=60, subsample=0.8, random_state=10, max_features=7),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch5.fit(train[predictors],train[target])
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
"""
Explanation: Step3- Tune Subsample and Lower Learning Rate
End of explanation
"""
#Choose all predictors except target & IDcols
predictors = [x for x in train.columns if x not in [target, IDcol]]
gbm_tuned_1 = GradientBoostingClassifier(learning_rate=0.05, n_estimators=120,max_depth=9, min_samples_split=1200,
min_samples_leaf=60, subsample=0.85, random_state=10, max_features=7)
modelfit(gbm_tuned_1, train, test, predictors)
"""
Explanation: With all parameters tuned, let's try reducing the learning rate and proportionally increasing the number of estimators to get more robust results:
End of explanation
"""
#Choose all predictors except target & IDcols
predictors = [x for x in train.columns if x not in [target, IDcol]]
gbm_tuned_2 = GradientBoostingClassifier(learning_rate=0.01, n_estimators=600,max_depth=9, min_samples_split=1200,
min_samples_leaf=60, subsample=0.85, random_state=10, max_features=7)
modelfit(gbm_tuned_2, train, test, predictors)
"""
Explanation: 1/10th learning rate
End of explanation
"""
#Choose all predictors except target & IDcols
predictors = [x for x in train.columns if x not in [target, IDcol]]
gbm_tuned_3 = GradientBoostingClassifier(learning_rate=0.005, n_estimators=1200,max_depth=9, min_samples_split=1200,
min_samples_leaf=60, subsample=0.85, random_state=10, max_features=7,
warm_start=True)
modelfit(gbm_tuned_3, train, test, predictors, performCV=False)
#Choose all predictors except target & IDcols
predictors = [x for x in train.columns if x not in [target, IDcol]]
gbm_tuned_4 = GradientBoostingClassifier(learning_rate=0.005, n_estimators=1500,max_depth=9, min_samples_split=1200,
min_samples_leaf=60, subsample=0.85, random_state=10, max_features=7,
warm_start=True)
modelfit(gbm_tuned_4, train, test, predictors, performCV=False)
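#Added sketch (assumption: the preprocessed test DataFrame loaded above shares the predictor
#columns): scoring new data with the final tuned model, which modelfit has already fitted.
final_predprob = gbm_tuned_4.predict_proba(test[predictors])[:,1]
print "Sample of predicted probabilities:", final_predprob[:5]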
"""
Explanation: 1/20th learning rate
End of explanation
"""
|