repo_name | path | license | content
---|---|---|---|
wanderer2/pymc3 | docs/source/notebooks/bayesian_neural_network_advi.ipynb | apache-2.0 |
%matplotlib inline
import theano
theano.config.floatX = 'float64'
import pymc3 as pm
import theano.tensor as T
import sklearn
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
from sklearn.datasets import make_moons
X, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)
X = scale(X)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)
fig, ax = plt.subplots()
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')
sns.despine(); ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');
"""
Explanation: Variational Inference: Bayesian Neural Networks
(c) 2016 by Thomas Wiecki
Original blog post: http://twiecki.github.io/blog/2016/06/01/bayesian-deep-learning/
Current trends in Machine Learning
There are currently three big trends in machine learning: Probabilistic Programming, Deep Learning and "Big Data". Inside of PP, a lot of innovation is in making things scale using Variational Inference. In this blog post, I will show how to use Variational Inference in PyMC3 to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.
Probabilistic Programming at scale
Probabilistic Programming allows very flexible creation of custom probabilistic models and is mainly concerned with insight and learning from your data. The approach is inherently Bayesian so we can specify priors to inform and constrain our models and get uncertainty estimation in the form of a posterior distribution. Using MCMC sampling algorithms we can draw samples from this posterior to very flexibly estimate these models. PyMC3 and Stan are the current state-of-the-art tools to construct and estimate these models. One major drawback of sampling, however, is that it's often very slow, especially for high-dimensional models. That's why more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms fit a distribution (e.g. normal) to the posterior, turning a sampling problem into an optimization problem. ADVI -- Automatic Differentiation Variational Inference -- is implemented in PyMC3 and Stan, as well as in a new package called Edward which is mainly concerned with Variational Inference.
Unfortunately, when it comes to traditional ML problems like classification or (non-linear) regression, Probabilistic Programming often plays second fiddle (in terms of accuracy and scalability) to more algorithmic approaches like ensemble learning (e.g. random forests or gradient boosted regression trees).
Deep Learning
Now in its third renaissance, deep learning has been making headlines repeatedly by dominating almost any object recognition benchmark, kicking ass at Atari games, and beating the world-champion Lee Sedol at Go. From a statistical point of view, Neural Networks are extremely good non-linear function approximators and representation learners. While mostly known for classification, they have been extended to unsupervised learning with AutoEncoders and in all sorts of other interesting ways (e.g. Recurrent Networks, or MDNs to estimate multimodal distributions). Why do they work so well? No one really knows as the statistical properties are still not fully understood.
A large part of the innovation in deep learning is the ability to train these extremely complex models. This rests on several pillars:
* Speed: offloading computation to the GPU allowed for much faster processing.
* Software: frameworks like Theano and TensorFlow allow flexible creation of abstract models that can then be optimized and compiled to CPU or GPU.
* Learning algorithms: training on sub-sets of the data -- stochastic gradient descent -- allows us to train these models on massive amounts of data. Techniques like drop-out avoid overfitting.
* Architectural: A lot of innovation comes from changing the input layers, like for convolutional neural nets, or the output layers, like for MDNs.
Bridging Deep Learning and Probabilistic Programming
On one hand we have Probabilistic Programming, which allows us to build rather small and focused models in a very principled and well-understood way to gain insight into our data; on the other hand we have deep learning, which uses many heuristics to train huge and highly complex models that are amazing at prediction. Recent innovations in variational inference allow probabilistic programming to scale model complexity as well as data size. We are thus at the cusp of being able to combine these two approaches to hopefully unlock new innovations in Machine Learning. For more motivation, see also Dustin Tran's recent blog post.
While this would allow Probabilistic Programming to be applied to a much wider set of interesting problems, I believe this bridging also holds great promise for innovations in Deep Learning. Some ideas are:
* Uncertainty in predictions: As we will see below, the Bayesian Neural Network informs us about the uncertainty in its predictions. I think uncertainty is an underappreciated concept in Machine Learning as it's clearly important for real-world applications. But it could also be useful in training. For example, we could train the model specifically on samples it is most uncertain about.
* Uncertainty in representations: We also get uncertainty estimates of our weights which could inform us about the stability of the learned representations of the network.
* Regularization with priors: Weights are often L2-regularized to avoid overfitting; this corresponds very naturally to a Gaussian prior on the weight coefficients (see the short note after this list). We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm).
* Transfer learning with informed priors: If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like GoogLeNet.
* Hierarchical Neural Networks: A very powerful approach in Probabilistic Programming is hierarchical modeling that allows pooling of things that were learned on sub-groups to the overall population (see my tutorial on Hierarchical Linear Regression in PyMC3). Applied to Neural Networks, in hierarchical data sets, we could train individual neural nets to specialize on sub-groups while still being informed about representations of the overall population. For example, imagine a network trained to classify car models from pictures of cars. We could train a hierarchical neural network where a sub-neural network is trained to tell apart models from only a single manufacturer. The intuition is that all cars from a certain manufacturer share certain similarities so it would make sense to train individual networks that specialize on brands. However, due to the individual networks being connected at a higher layer, they would still share information with the other specialized sub-networks about features that are useful to all brands. Interestingly, different layers of the network could be informed by various levels of the hierarchy -- e.g. early layers that extract visual lines could be identical in all sub-networks while the higher-order representations would be different. The hierarchical model would learn all that from the data.
* Other hybrid architectures: We can more freely build all kinds of neural networks. For example, Bayesian non-parametrics could be used to flexibly adjust the size and shape of the hidden layers to optimally scale the network architecture to the problem at hand during training. Currently, this requires costly hyper-parameter optimization and a lot of tribal knowledge.
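A short note on the L2/Gaussian-prior correspondence mentioned above (a standard result stated here for reference, not taken from the original post): for a zero-mean Gaussian prior $w_i \sim \mathcal{N}(0, \sigma^2)$ on every weight, the negative log posterior is
$$-\log p(w \mid \mathcal{D}) = -\log p(\mathcal{D} \mid w) + \frac{1}{2\sigma^2}\lVert w \rVert_2^2 + \text{const},$$
so MAP estimation under this prior is exactly L2 regularization with penalty $\lambda = \frac{1}{2\sigma^2}$.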
Bayesian Neural Networks in PyMC3
Generating data
First, let's generate some toy data -- a simple binary classification problem that's not linearly separable.
End of explanation
"""
# Trick: Turn inputs and outputs into shared variables.
# It's still the same thing, but we can later change the values of the shared variable
# (to switch in the test-data later) and pymc3 will just use the new data.
# Kind-of like a pointer we can redirect.
# For more info, see: http://deeplearning.net/software/theano/library/compile/shared.html
ann_input = theano.shared(X_train)
ann_output = theano.shared(Y_train)
n_hidden = 5
# Initialize random weights between each layer
init_1 = np.random.randn(X.shape[1], n_hidden)
init_2 = np.random.randn(n_hidden, n_hidden)
init_out = np.random.randn(n_hidden)
with pm.Model() as neural_network:
# Weights from input to hidden layer
weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
shape=(X.shape[1], n_hidden),
testval=init_1)
# Weights from 1st to 2nd layer
weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
shape=(n_hidden, n_hidden),
testval=init_2)
# Weights from hidden layer to output
weights_2_out = pm.Normal('w_2_out', 0, sd=1,
shape=(n_hidden,),
testval=init_out)
# Build neural-network using tanh activation function
act_1 = T.tanh(T.dot(ann_input,
weights_in_1))
act_2 = T.tanh(T.dot(act_1,
weights_1_2))
act_out = T.nnet.sigmoid(T.dot(act_2,
weights_2_out))
# Binary classification -> Bernoulli likelihood
out = pm.Bernoulli('out',
act_out,
observed=ann_output)
"""
Explanation: Model specification
A neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.
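In symbols, the model in the code cell above (a restatement of that cell, with $W_1$, $W_2$ and $w_{\text{out}}$ standing in for w_in_1, w_1_2 and w_2_out) computes
$$p(y=1 \mid x) = \sigma\!\left(\tanh\!\big(\tanh(x W_1)\, W_2\big)\, w_{\text{out}}\right), \qquad y \sim \text{Bernoulli}(p),$$
with independent $\mathcal{N}(0, 1)$ priors on every weight.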
End of explanation
"""
%%time
with neural_network:
# Run ADVI which returns posterior means, standard deviations, and the evidence lower bound (ELBO)
v_params = pm.variational.advi(n=50000)
"""
Explanation: That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.
Variational Inference: Scaling model complexity
We could now just run an MCMC sampler like NUTS, which works pretty well in this case, but as I already mentioned, this will become very slow as we scale our model up to deeper architectures with more layers.
Instead, we will use the brand-new ADVI variational inference algorithm which was recently added to PyMC3. This is much faster and will scale better. Note that this is a mean-field approximation, so we ignore correlations in the posterior.
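Concretely, mean-field ADVI approximates the posterior over all weights $w$ with a fully factorized Gaussian
$$q(w) = \prod_i \mathcal{N}(w_i \mid \mu_i, \sigma_i^2),$$
and optimizes the variational parameters $(\mu_i, \sigma_i)$ to maximize the ELBO; any correlations between weights in the true posterior are discarded.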
End of explanation
"""
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
"""
Explanation: < 40 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.
As samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so not at all the same as MCMC):
End of explanation
"""
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
"""
Explanation: Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.
End of explanation
"""
# Replace shared variables with testing set
ann_input.set_value(X_test)
ann_output.set_value(Y_test)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
# Use posterior predictive mean probability > 0.5 to predict class 1
pred = ppc['out'].mean(axis=0) > 0.5
fig, ax = plt.subplots()
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
sns.despine()
ax.set(title='Predicted labels in testing set', xlabel='X', ylabel='Y');
print('Accuracy = {}%'.format((Y_test == pred).mean() * 100))
"""
Explanation: Now that we have trained our model, let's predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
End of explanation
"""
grid = np.mgrid[-3:3:100j,-3:3:100j]
grid_2d = grid.reshape(2, -1).T
dummy_out = np.ones(grid.shape[1], dtype=np.int8)
ann_input.set_value(grid_2d)
ann_output.set_value(dummy_out)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
"""
Explanation: Hey, our neural network did all right!
Let's look at what the classifier has learned
For this, we evaluate the class probability predictions on a grid over the whole input space.
End of explanation
"""
cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');
"""
Explanation: Probability surface
End of explanation
"""
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');
"""
Explanation: Uncertainty in predicted value
So far, everything I showed we could have done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like:
End of explanation
"""
# Set back to original data to retrain
ann_input.set_value(X_train)
ann_output.set_value(Y_train)
# Tensors and RVs that will use mini-batches
minibatch_tensors = [ann_input, ann_output]
minibatch_RVs = [out]
# Generator that returns mini-batches in each iteration
def create_minibatch(data):
rng = np.random.RandomState(0)
while True:
# Return random data samples of size 50 each iteration
ixs = rng.randint(len(data), size=50)
yield data[ixs]
minibatches = zip(
create_minibatch(X_train),
create_minibatch(Y_train),
)
total_size = len(Y_train)
"""
Explanation: We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.
Mini-batch ADVI: Scaling data size
So far, we have trained our model on all data at once. Obviously this won't scale to something like ImageNet. Moreover, training on mini-batches of data (stochastic gradient descent) avoids local minima and can lead to faster convergence.
Fortunately, ADVI can be run on mini-batches as well. It just requires some setting up:
End of explanation
"""
%%time
with neural_network:
# Run advi_minibatch
v_params = pm.variational.advi_minibatch(
n=50000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-2, epsilon=1.0
)
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
sns.despine()
"""
Explanation: While the above might look a bit daunting, I really like the design. In particular, the fact that you define a generator allows for great flexibility. In principle, we could just pull data from a database inside the generator and not have to keep all the data in RAM.
Lets pass those to advi_minibatch():
End of explanation
"""
pm.traceplot(trace);
"""
Explanation: As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.
For fun, we can also look at the trace. The point is that we also get uncertainty estimates for our Neural Network weights.
End of explanation
"""
|
jhonatanoliveira/pgmpy | examples/Learning from data.ipynb | mit |
# Generate data
import numpy as np
import pandas as pd
raw_data = np.array([0] * 30 + [1] * 70) # Representing heads by 0 and tails by 1
data = pd.DataFrame(raw_data, columns=['coin'])
print(data)
# Defining the Bayesian Model
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
model = BayesianModel()
model.add_node('coin')
# Fitting the data to the model using Maximum Likelihood Estimator
model.fit(data, estimator=MaximumLikelihoodEstimator)
print(model.get_cpds('coin'))
# Fitting the data to the model using Bayesian Estimator with Dirichlet prior with equal pseudo counts.
model.fit(data, estimator=BayesianEstimator, prior_type='dirichlet', pseudo_counts={'coin': [50, 50]})
print(model.get_cpds('coin'))
"""
Explanation: We will try to learn from data using a very simple example of tossing a coin. We will first generate some data (30% heads and 70% tails) and will try to learn the CPD of the coin using Maximum Likelihood Estimator and Bayesian Estimator with Dirichlet prior.
End of explanation
"""
# Generating random data where each variable has 2 states with equal probabilities for each state
import numpy as np
import pandas as pd
raw_data = np.random.randint(low=0, high=2, size=(1000, 5))
data = pd.DataFrame(raw_data, columns=['D', 'I', 'G', 'L', 'S'])
print(data)
# Defining the model
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
model = BayesianModel([('D', 'G'), ('I', 'G'), ('I', 'S'), ('G', 'L')])
# Learning CPDs using the Maximum Likelihood Estimator
model.fit(data, estimator=MaximumLikelihoodEstimator)
for cpd in model.get_cpds():
print("CPD of {variable}:".format(variable=cpd.variable))
print(cpd)
"""
Explanation: We can see that we get the results as expected. In the maximum likelihood case we got the probability based purely on the data, whereas in the Bayesian case we had a prior of $ P(H) = 0.5 $ and $ P(T) = 0.5 $; therefore, with 30% heads and 70% tails in the data we got a posterior of $ P(H) = 0.4 $ and $ P(T) = 0.6 $.
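Spelling out that arithmetic: the Dirichlet pseudo counts act like 50 extra heads and 50 extra tails added to the observed data, so the posterior estimates are
$$P(H) = \frac{30 + 50}{100 + 100} = 0.4, \qquad P(T) = \frac{70 + 50}{100 + 100} = 0.6.$$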
Similarly, we can learn in the case of a more complex model. Let's take the example of the student model and compare the results for the Maximum Likelihood Estimator and the Bayesian Estimator.
TODO: Add fig for Student example
End of explanation
"""
# Learning with the Bayesian Estimator using a Dirichlet prior for each variable.
pseudo_counts = {'D': [300, 700], 'I': [500, 500], 'G': [800, 200], 'L': [500, 500], 'S': [400, 600]}
model.fit(data, estimator=BayesianEstimator, prior_type='dirichlet', pseudo_counts=pseudo_counts)
for cpd in model.get_cpds():
print("CPD of {variable}:".format(variable=cpd.variable))
print(cpd)
"""
Explanation: As the data was randomly generated with equal probabilities for each state, we can see here that all the probability values are close to 0.5, as expected. Now coming to the Bayesian Estimator:
End of explanation
"""
|
fortyninemaps/karta | doc/source/tutorial.ipynb | mit |
from karta import Point, Line, Polygon, Multipoint, Multiline, Multipolygon
"""
Explanation: Karta tutorial
Introduction
Karta provides a set of tools for analysing geographical data. The organization of Karta is around a set of classes for representing vector and raster data. These classes contain built-in methods for common tasks, and are easily extended for more specialized processing. This tutorial provides a brief introduction to the elements of Karta.
Should you come across any mistakes, please file a bug or submit a pull request on Github!
The following examples are shown using Python 3, however Karta is supported on Python 2.7+ and Python 3.4+.
Definitions
Vector data are data that can be treated as a set of connected or disconnected vertices. Examples might be road networks, a set of borders, geophysical survey lines, or the path taken by a bottle floating in an ocean current. In Karta, these data are classified as belonging to Point, Line and Polygon classes, and their Multipart equivalents Multipoint, Multiline, and Multipolygon. Some questions that might be asked
of vector data include
which of these points are contained in this Polygon?
do these Lines intersect, and where?
what is the average distance travelled by a particle?
what municipalities does this river flow through?
Raster data are data typically thought of in terms of pixels or a grid of values covering a surface. Examples might be an elevation model, a satellite image, or an upstream area map. Depending on what the data represents, one might
compute slope, aspect, and hillshades on an elevation model
resample or interpolate a grid
mask a land cover map according to management boundaries
apply a pansharpening algorithm to multispectral satellite imagery
extract an elevation profile along a path
The term coordinate reference system refers to a system of relating numerical coordinates to actual positions on Earth. Karta includes methods for geodetic calculations and basic support of projected and geographical coordinates, as well as coordinate system classes backed by pyproj.
Vector data
This section demonstrates the creation and manipulation of vector data.
End of explanation
"""
pt = Point((-123.1, 49.25))
print(pt)
mpt = Multipoint([(-122.93, 48.62),
(-123.10, 48.54),
(-122.90, 48.49),
(-122.81, 48.56)],
data={"color": ["red", "blue", "green", "yellow"],
"value": [2, 1, 3, 5]})
print(mpt)
print(mpt.data)
line = Line([(-124.35713, 49.31437),
(-124.37857, 49.31720),
(-124.39442, 49.31833),
(-124.40311, 49.31942),
(-124.41052, 49.32203),
(-124.41681, 49.32477),
(-124.42278, 49.32588)])
print(line)
poly = Polygon([(-25.41, 67.03),
(-24.83, 62.92),
(-12.76, 63.15),
(-11.44, 66.82)])
print(poly)
"""
Explanation: The Point, Line, and Polygon classes can all be instantiated by providing vertices, and optionally, associated properties. The Multipart Multipoint, Multiline, and Multipolygon classes are similar, and can additionally include part-specific tabular metadata.
End of explanation
"""
print(poly.contains(pt)) # False
# but this one is:
pt2 = Point((-25, 65))
print(poly.contains(pt2)) # True
"""
Explanation: Each geometrical object now contains a vertex/vertices in a cartesian plane.
We may be interested in determining whether the Point created above is within the Polygon:
End of explanation
"""
print(line.intersects(poly)) # False
"""
Explanation: We also can test whether the Line from above crosses the Polygon:
End of explanation
"""
print(line.shortest_distance_to(pt))
"""
Explanation: Or compute the shortest distance between the Point and the Line:
End of explanation
"""
pt = Point((0.0, 60.0))
print(poly.nearest_vertex_to(pt))
print(poly.nearest_on_boundary(pt))
"""
Explanation: There are methods for computing the nearest vertex to an external point, or the nearest point on an edge to an external point:
End of explanation
"""
subline = line[2:-2]
print(subline)
for pt in subline:
print(pt)
"""
Explanation: The positions of objects with multiple vertices can be sliced and iterated through:
End of explanation
"""
print(poly[:2])
"""
Explanation: A slice that takes part of a polygon returns a line.
End of explanation
"""
pt = Point((-123.1, 49.25))
pt2 = Point((-70.66, 41.52))
print(pt.distance(pt2))
"""
Explanation: Points have a distance() method that calculates the distance to another point.
End of explanation
"""
from karta.crs import LonLatWGS84
pt = Point((-123.1, 49.25), crs=LonLatWGS84)
pt2 = Point((-70.66, 41.52), crs=LonLatWGS84)
pt.distance(pt2)
"""
Explanation: By default, geometries in Karta use a planar cartesian coordinate system. If our positions are meant to be geographical coordinates, then we can provide the crs argument to each geometry at creation, as in
End of explanation
"""
from karta.crs import WebMercator
pt_web = Point((-14000000, 6300000), crs=WebMercator)
print(pt.distance(pt_web)) # distance in coordinate system units of *pt*
"""
Explanation: which now gives the great circle distance between point on the Earth, in meters. We can mix coordinate systems to some degree, with Karta performing the necessary transformations in the background:
End of explanation
"""
from karta.examples import us_capitols
mexico_city = Point((-99.13, 19.43), crs=LonLatWGS84)
# Filter those within 2000 km of Mexico City
nearby = list(filter(lambda pt: pt.distance(mexico_city) < 2000e3, us_capitols))
for capitol in nearby:
print("{0:4.0f} km {1}".format(mexico_city.distance(capitol)/1e3, capitol.properties["n"]))
# Or, list capitols from nearest to furthest from Mexico City
distances = map(lambda pt: mexico_city.distance(pt), us_capitols)
distances_capitols = sorted(zip(distances, us_capitols))
for d, pt in distances_capitols:
print("{km:.0f} km {name}".format(km=d/1e3, name=pt.properties["n"]))
"""
Explanation: When the coordinate system is specified, all geometrical methods obey that coordinate system. We can use this to perform queries, such as: which American state capitols are within 2000 km of Mexico City?
End of explanation
"""
mp = Multipoint([(1, 1), (3, 1), (4, 3), (2, 2)],
data={"species": ["T. officianale", "C. tectorum",
"M. alba", "V. cracca"]})
"""
Explanation: All of the above calculations are performed on a geoid. The LonLatWGS84 coordinate system means to use geographical (longitude and latitude) coordinates on the WGS 84 ellipsoid.
Associated data
By using the data keyword argument, additional data can be associated with a Multipart vector geometry. The data can be a list or a dictionary of lists.
End of explanation
"""
mp.d
mp.d["species"]
mp.d[1:3]
"""
Explanation: These data live in the .data attribute, which is a Table instance. For convenience, the data can also be accessed via the .d attribute, which provides a streamlined syntax supporting key-lookups, indexing, and slicing.
End of explanation
"""
pt = mp[2]
print(pt, "-", pt.properties["species"])
"""
Explanation: The data are propagated through indexing operations on their parent geometry:
End of explanation
"""
poly = Polygon([(-25.41, 67.03),
(-24.83, 62.92),
(-12.76, 63.15),
(-11.44, 66.82)],
properties={"geology": "volcanic",
"alcohol": "brennivin"})
print(poly[0:3].properties)
"""
Explanation: Metadata at the geometry level can be provided using the properties keyword argument, which accepts a dictionary. Derived geometries carry the properties of their parent geometry.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(*line.coords())
"""
Explanation: Visualizing and importing/exporting data
The get_coordinate_lists method and coordinates attribute provide lists of coordinates for plotting or data export.
Higher-level plotting operations are provided by the separate karta.plotting submodule, not described here.
End of explanation
"""
line.to_shapefile("line.shp")
pt_web.to_geojson("point.geojson")
"""
Explanation: Data can be read from and written to several common formats, including ESRI shapefiles (through bindings to the pyshp module), GeoJSON, and GPX. Convenience functions are kept in the karta.vector.read namespace.
Each geometry has appropriate methods to write data:
End of explanation
"""
import numpy as np
from karta.raster import RegularGrid, SimpleBand, CompressedBand, read_gtiff
ls8 = read_gtiff("LC08_L1TP_011031_20180930_20181010_01_T1_B8.TIF")
print(ls8.bands) # list of one CompressedBand instance
# Print grid dimensions
print(ls8.size)
# Print grid extent
print(ls8.extent())
# Visualize data
plt.imshow(ls8[::10,::10, 0], origin="bottom", extent=ls8.extent(), cmap=plt.cm.binary, vmin=3e3, vmax=10e3)
plt.colorbar()
"""
Explanation: Raster data
Raster data are primarily represented by the karta.RegularGrid class. RegularGrid instances have a CRS, a Null-data value, a geotransform, and one or more bands, which containing the actual data.
Bands
To provide flexibility, different band classes are provided by karta.raster.bands using different strategies for data storage.
The simplest case, SimpleBand, uses a numpy array to store all data. This makes it reasonably fast, but can be memory-hungry with large rasters.
The default case, CompressedBand, uses chunking and compression via the blosc library to reduce the memory footprint of the raster data at a small speed cost.
GdalFileBand reads data directly from a valid GDAL datasource, using the least memory but performing the slowest.
Note: GdalFileBand doesn't currently handle all raster operations supported by the other band types. Many operations implicitly convert to in-memory CompressedBand representation.
End of explanation
"""
ls8_numpy = read_gtiff("LC08_L1TP_011031_20180930_20181010_01_T1_B8.TIF", bandclass=SimpleBand)
np.all(ls8[:,:] == ls8_numpy[:,:]) # True
"""
Explanation: When opening or creating a RegularGrid, a non-default band type can be specified as a keyword argument. The following code re-opens the same grid as a SimpleBand and verifies that all data are the same.
End of explanation
"""
subgrid = ls8[2000:3000, 4000:4500]
print(subgrid.shape)
every_other = ls8[::2, ::2]
print(every_other.shape)
ls8.transform
"""
Explanation: In the above, the slice syntax [:,:] is used to get an array of all grid data. Because the grid ls8 has only a single band in this case, the data array has two dimensions. The normal simple slicing rules apply, i.e. one can do things like:
End of explanation
"""
print(ls8.bbox())
print(ls8.extent())
"""
Explanation: Grid geolocation is based on an affine matrix transformation represented by the .transform attribute, as well as an associated coordinate system under the .crs attribute. The extent and bounding box of the grid can be retrieved using the respective properties:
End of explanation
"""
coords = ls8.coordinates(crs=LonLatWGS84)
print(coords[0,0])
print(coords[-1,-1])
print(coords[:5,:5])
"""
Explanation: The extent is of the form (xmin, xmax, ymin, ymax), and refers to grid centers. The bbox is of the form (xmin, ymin, xmax, ymax), and refers to grid edges.
To get swaths of grid coordinates conveniently and in arbitrary coordinate systems, use the .coordinates() method, which returns a CoordinateGenerator instance that can be indexed to generate coordinate pairs.
End of explanation
"""
val = ls8.sample(Point((500000, 8750000), crs=ls8.crs)) # sample a point using the grid CRS
print(val)
# Resample grid at 100 m postings:
ls8_coarse = ls8.resample(100, 100)
print("Original resolution:", ls8.resolution)
print("Resampled resolution:", ls8_coarse.resolution)
# Generate a line and sample at intervals along it
transit = Line(zip(np.linspace(350000, 450000), np.linspace(4550000, 4700000)), crs=ls8.crs)
mp, val = ls8.profile(transit)
plt.subplot(2, 1, 1)
x, y = mp.coords()
plt.scatter(x, y, c=val.flatten(), edgecolor="none", cmap=plt.cm.binary)
plt.subplot(2, 1, 2)
dist = Line(mp).cumulength() # convert sample points to a line and extract
# the cumulative distance along it
plt.plot(dist, val[0])
"""
Explanation: Grid sampling
Nearest-neighbour and bilinear sampling of grid points is supported via the .sample() method. It is also possible to resample the full grid at a new resolution, or to sample along a profile.
End of explanation
"""
ls8_small = ls8.resize((350000, 4600000, 450000, 4650000))
plt.imshow(ls8_small[:,:,0], origin="bottom", extent=ls8_small.extent(), cmap=plt.cm.binary, vmin=3e3, vmax=12e3)
"""
Explanation: Grid resizing
Grids can be trimmed or expanded using the .resize() method, which takes a new bounding box as an argument.
Note
When getting raster data, the array provided by slicing is not necessarily a view of the underlying data, and may be a copy instead. Modifying the array is not guaranteed to modify the raster. When the raster data must be replaced by an element-wise computation, use the Grid.apply(func) method, which operates in-place. The apply method may be chained.
```
Example
grid.apply(lambda x: x**2) \
.apply(np.sin) \
.apply(lambda x: np.where(x < 0.5, grid.nodata, x))
```
This handles nodata pixels automatically. If the raster data must be replaced by arbitrary data, set it explicitly with Grid[:,:] = ....
```
Example
from scipy.signal import convolve2d  # np.convolve is 1-D only, so use a 2-D convolution
grid[:,:] = convolve2d(grid[:,:], np.ones([3,3])/9.0, mode='same')
```
End of explanation
"""
from karta.crs import LonLatWGS84, GallPetersEqualArea
newgrid = RegularGrid((-180, -80, 5, 5, 0, 0), values=np.zeros((160//5, 360//5)), crs=LonLatWGS84)
# visualize the coordinate positions on a Gall-Peters projection
coords = newgrid.coordinates(crs=GallPetersEqualArea)
x, y = coords[:,:]
_ = plt.plot(x, y, ".k", ms=2)
"""
Explanation: Creating RegularGrid instances
New RegularGrid instances are created by specifying a geotransform. The geotransform is represented by a tuple of the form
transform = (xll, yll, dx, dy, sx, sy)
where xll and yll are the coordinates of the lower left grid corner, dx and dy specify resolution, and sx and sy specify grid skew and rotation.
The following creates an empty global grid with 5 degree resolution, oriented "north-up" and "east-right", and then plots the pixel centers:
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Interactions/Selectors.ipynb | apache-2.0 |
import pandas as pd
import numpy as np
symbol = 'Security 1'
symbol2 = 'Security 2'
price_data = pd.DataFrame(np.cumsum(np.random.randn(150, 2).dot([[0.5, 0.4], [0.4, 1.0]]), axis=0) + 100,
columns=[symbol, symbol2],
index=pd.date_range(start='01-01-2007', periods=150))
dates_actual = price_data.index.values
prices = price_data[symbol].values
from bqplot import *
from bqplot.interacts import (
FastIntervalSelector, IndexSelector, BrushIntervalSelector,
BrushSelector, MultiSelector, LassoSelector,
)
from ipywidgets import ToggleButtons, VBox, HTML
"""
Explanation: Selectors
Index
Introduction
Brush Selectors
FastIntervalSelector
LassoSelector
IndexSelector
MultiSelector
End of explanation
"""
# Define scales for the rest of the notebook
scales = {'x': DateScale(), 'y': LinearScale()}
"""
Explanation: Introduction <a class="anchor" id="introduction"></a>
Selectors are part of the Interaction Layer (link).
They are used to select subparts of Marks that correspond to different regions on the Figure canvas. Different types of selectors select different types of regions:
- BrushSelector, FastIntervalSelector and MultiSelector select rectangular regions
- IndexSelector selects the elements closest to an abscissa
- LassoSelector selects elements in a region drawn by the user
How they work
bqplot Selectors need to be tied to two other widgets:
- One or several marks. Their selected attribute, a list of data indices, will be set by the Selector instance.
- One (1d selection) or two (2d selection) Scales. These are the scales that the Selector operates on. The Selector's selected attribute will be expressed as values of those scales.
The Selector must then be passed to the desired Figure, as its interaction attribute.
Hopefully this will be clear in the following examples.
End of explanation
"""
# The Mark we want to select subsamples of
scatter = Scatter(x=dates_actual, y=prices, scales=scales, colors=['orange'],
selected_style={'opacity': '1'}, unselected_style={'opacity': '0.2'})
# Create the brush selector, passing it its corresponding scale.
# Notice that we do not pass it any marks for now
brushintsel = BrushIntervalSelector(scale=scales['x'])
x_ax = Axis(label='Index', scale=scales['x'])
x_ay = Axis(label=(symbol + ' Price'), scale=scales['y'], orientation='vertical')
# Pass the Selector instance to the Figure
fig = Figure(marks=[scatter], axes=[x_ax, x_ay],
title='''Brush Interval Selector Example. Click and drag on the Figure to interact.''',
interaction=brushintsel)
# The following text widgets are used to display the `selected` attributes
text_brush = HTML()
text_scatter = HTML()
# This function updates the text, triggered by a change in the selector
def update_brush_text(*args):
text_brush.value = "The Brush's selected attribute is {}".format(brushintsel.selected)
def update_scatter_text(*args):
text_scatter.value = "The scatter's selected indices are {}".format(scatter.selected)
brushintsel.observe(update_brush_text, 'selected')
scatter.observe(update_scatter_text, 'selected')
update_brush_text()
update_scatter_text()
# Display
VBox([fig, text_brush, text_scatter])
"""
Explanation: Brush Selectors <a class="anchor" id="brushselectors"></a>
Selects a rectangular region of the Figure.
Usage:
Click and drag to create a new brush
Drag the edge of the brush to change its width
Drag the inside of the brush to translate it
Clicking and dragging outside of the brush deletes it and creates a new one.
End of explanation
"""
brushintsel.marks = [scatter]
"""
Explanation: Linking the brush to the scatter
Passing a mark (or several) to the selector will link the mark's selected indices to the selector.
End of explanation
"""
def create_figure(selector, **selector_kwargs):
'''
Returns a Figure with a Scatter and a Selector.
Arguments
---------
selector: The type of Selector, one of
{'BrushIntervalSelector', 'BrushSelector', 'FastIntervalSelector', 'IndexSelector', 'LassoSelector'}
selector_kwargs: Arguments to be passed to the Selector
'''
scatter = Scatter(x=dates_actual, y=prices, scales=scales, colors=['orange'],
selected_style={'opacity': '1'}, unselected_style={'opacity': '0.2'})
sel = selector(marks=[scatter], **selector_kwargs)
text_brush = HTML()
if selector != LassoSelector:
def update_text(*args):
text_brush.value = '{}.selected = {}'.format(selector.__name__, sel.selected)
sel.observe(update_text, 'selected')
update_text()
x_ax = Axis(label='Index', scale=scales['x'])
x_ay = Axis(label=(symbol + ' Price'), scale=scales['y'], orientation='vertical')
fig = Figure(marks=[scatter], axes=[x_ax, x_ay], title='{} Example'.format(selector.__name__),
interaction=sel)
return VBox([fig, text_brush])
"""
Explanation: From now on we will stop printing out the selected indices, but rather use the selected_style and unselected_style attributes of the Marks to check which elements are selected.
End of explanation
"""
create_figure(BrushIntervalSelector, orientation='vertical', scale=scales['y'])
"""
Explanation: BrushIntervalSelector on the y-axis
The attribute orientation can be set to 'vertical' to select on the y-axis. Be careful to pass the corresponding y-scale.
End of explanation
"""
create_figure(BrushSelector, x_scale=scales['x'], y_scale=scales['y'])
"""
Explanation: 2d BrushSelector
The BrushSelector is 2d, and must be fed 2 scales, x_scale and y_scale.
Note that BrushSelector.selected is now 2x2. It is the coordinates of the lower left-hand and upper right-hand corners of the rectangle.
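As a hypothetical sketch (assuming you keep a direct reference, here called sel, to the BrushSelector instance itself rather than only the VBox returned by create_figure), the two corners can be unpacked like this:
```python
# sel is assumed to be a BrushSelector with an active selection
if sel.selected is not None:
    (x_min, y_min), (x_max, y_max) = sel.selected  # lower-left and upper-right corners
    print('x range:', x_min, x_max, '  y range:', y_min, y_max)
```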
End of explanation
"""
create_figure(FastIntervalSelector, scale=scales['x'])
"""
Explanation: FastIntervalSelector <a class="anchor" id="fastintervalselector"></a>
The FastIntervalSelector is functionally like a BrushIntervalSelector, but provides a more fluid and rapid interaction.
Usage:
The first click creates the selector.
Moving the mouse up and down widens and narrows the interval width.
Moving the mouse left and right translates the interval left and right.
Subsequent clicks will freeze/unfreeze the interval width
A double-click will freeze both the width and the translation
Experiment and get a feel for it in the example below.
End of explanation
"""
create_figure(LassoSelector)
"""
Explanation: As of the latest version, FastIntervalSelector is only supported for 1d interaction along the x-axis
LassoSelector <a class="anchor" id="lassoselector"></a>
This 2-D selector enables the user to select multiple sets of data points
by drawing lassos on the figure.
Usage:
Click and drag to draw a new lasso
Click a lasso to select or de-select it; multiple lassos can be selected at the same time
Press the 'Delete' button to delete the selected lassos
End of explanation
"""
create_figure(IndexSelector, scale=scales['x'])
"""
Explanation: IndexSelector <a class="anchor" id="indexselector"></a>
This 1-D selector selects a unique value on its scale. The attached Mark's selected element is the closest element to that value.
Usage:
First click creates and activates the selector
Moving the mouse translates the selector
Subsequent clicks freeze/unfreeze the selector
End of explanation
"""
create_figure(MultiSelector, scale=scales['x'])
"""
Explanation: As of the latest version, IndexSelector is only supported for interaction along the x-axis.
MultiSelector <a class="anchor" id="multiselector"></a>
This 1-D selector is equivalent to multiple brush selectors.
Usage:
The first brush works like a regular brush.
Ctrl + click creates a new brush, which works like the regular brush.
The active brush has a Green border while all the inactive brushes have a Red border.
Shift + click deactivates the current active brush. Now, click on any inactive brush to make it active.
Ctrl + Shift + click clears and resets all the brushes.
Each brush has a name (0, 1, 2, ... by default), and the selected attribute is a dict {brush_name: brush_extent}
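As a hypothetical sketch (assuming multi_sel is a reference to the MultiSelector instance itself), the per-brush extents can be inspected like this:
```python
# multi_sel is assumed to be a MultiSelector; selected maps brush names to extents
for brush_name, brush_extent in (multi_sel.selected or {}).items():
    print(brush_name, brush_extent)
```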
End of explanation
"""
|
thalesians/tsa | src/jupyter/python/conditions.ipynb | apache-2.0 |
import os, sys
sys.path.append(os.path.abspath('../../main/python'))
from thalesians.tsa.conditions import precondition, postcondition
"""
Explanation: Conditions
Introduction
Python lacks the power, flexibility — and also the quirks — of the C++ preprocessor. It does not support conditional compilation. When implementing numerical routines, one faces the dilemma: many sanity checks are essential during the research and development phase, but introduce a prohibitive performance hit in production. Yet, some checks are also essential in production — as evidenced by the relatively recent electronic trading disasters, which they could have helped avoid.
As we said, there is no conditional compilation in Python. But to some extent it may be simulated using decorators. We make extensive use of decorators in thalesians.tsa. One place where they are particularly useful is the evaluation of pre- and post-conditions.
First, let us load some modules...
End of explanation
"""
class Subtractor(object):
@precondition(lambda self, arg1, arg2: arg1 >= 0, 'arg1 must be greater than or equal to 0')
@precondition(lambda self, arg1, arg2: arg2 >= 0, 'arg2 must be greater than or equal to 0')
@postcondition(lambda result: result >= 0, 'result must be greater than or equal to 0')
def subtract(self, arg1, arg2):
return arg1 - arg2
"""
Explanation: Pre- and post-conditions using decorators
Consider the following (somewhat contrived) example:
End of explanation
"""
subtractor = Subtractor()
subtractor.subtract(300, 200)
"""
Explanation: (Notice how lambdas facilitate lazy evaluation. We often use them in thalesians.tsa to avoid computing things unnecessarily.)
Now, the following will pass the conditions:
End of explanation
"""
MIN_PRECONDITION_LEVEL = 5
MIN_POSTCONDITION_LEVEL = 7
"""
Explanation: Whereas the following would raise an AssertionError:
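For instance (a small sketch reusing the Subtractor defined above, not an original notebook cell):
```python
# The result would be -100, so the postcondition 'result >= 0' fails
# and an AssertionError is raised.
subtractor.subtract(200, 300)
```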
How can we selectively enable/disable pre- and post-conditions? Notice that the decorators precondition and postcondition take the optional argument level, which defaults to 1. In tsa_settings.py we declare MIN_PRECONDITION_LEVEL and MIN_POSTCONDITION_LEVEL. They default to 1 if __debug__ and sys.maxsize if not. The user can override these in a local_tsa_settings module of his/her project, e.g.
End of explanation
"""
|
CNS-OIST/STEPS_Example | other_tutorials/OCNC2017/OCNC2017 STEPS tutorial execises.ipynb | gpl-2.0 |
# Import biochemical model module
import steps.model as smod
# Create model container
mdl = smod.Model()
# Create chemical species
A = smod.Spec('A', mdl)
B = smod.Spec('B', mdl)
C = smod.Spec('C', mdl)
# Create reaction set container
vsys = smod.Volsys('vsys', mdl)
# Create reaction
# A + B - > C with rate 200 /uM.s
reac_f = smod.Reac('reac_f', vsys, lhs=[A,B], rhs = [C])
reac_f.setKcst(200e6)
"""
Explanation: OCNC2017 STEPS Tutorial Exercises
In this notebook we will try to create a STEPS simulation script from scratch by modifying the examples given in the tutorial. Please follow the tutor's instructions step by step.
Build a biochemical model
Here is an example of a 2nd order reaction $A+B\overset{k}{\rightarrow}C$, where the reaction constant $k$ is set to 200 /uM.s:
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
execise_mdl = smod.Model()
# Create chemical species
MEKp = smod.Spec('MEKp', execise_mdl)
ERK = smod.Spec('ERK', execise_mdl)
MEKpERK = smod.Spec('MEKpERK', execise_mdl)
ERKp = smod.Spec('ERKp', execise_mdl)
# Create reaction set container (volume system)
execise_vsys = smod.Volsys('execise_vsys', execise_mdl)
# Create reactions (Do it yourself)
# MEKp + ERK -> MEKpERK, rate constant 16.2e6
# MEKpERK -> MEKp + ERK, rate constant 0.6
# MEKpERK -> MEKp + ERKp, rate constant 0.15
"""
Explanation: For a complex model, we can break it down into elementary reactions. For example, the following model
$E+S\underset{k_{-1}}{\overset{k_{1}}{\rightleftarrows}}ES\overset{k_{2}}{\rightarrow}E+P$
is broken down into 3 reactions in STEPS (a sketch of the corresponding STEPS code follows the list):
1: $E+S\overset{k_{1}}{\rightarrow}ES$
2: $ES\overset{k_{-1}}{\rightarrow}E+S$
3: $ES\overset{k_{2}}{\rightarrow}E+P$
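As a rough sketch of how those three elementary reactions could be declared in STEPS (the species names E, S, ES, P and the numerical rate constants below are placeholders, not values from the tutorial):
```python
import steps.model as smod

mm_mdl = smod.Model()
E  = smod.Spec('E',  mm_mdl)
S  = smod.Spec('S',  mm_mdl)
ES = smod.Spec('ES', mm_mdl)
P  = smod.Spec('P',  mm_mdl)
mm_vsys = smod.Volsys('mm_vsys', mm_mdl)

# 1: E + S -> ES with rate constant k1 (placeholder value)
r1 = smod.Reac('r1', mm_vsys, lhs=[E, S], rhs=[ES])
r1.setKcst(1.0e6)
# 2: ES -> E + S with rate constant k-1 (placeholder value)
r2 = smod.Reac('r2', mm_vsys, lhs=[ES], rhs=[E, S])
r2.setKcst(0.5)
# 3: ES -> E + P with rate constant k2 (placeholder value)
r3 = smod.Reac('r3', mm_vsys, lhs=[ES], rhs=[E, P])
r3.setKcst(0.1)
```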
Exercise 1: Create a kinase reaction model in STEPS
Modify the script below for this kinase reaction system:
$MEKp+ERK\underset{0.6}{\overset{16.2*10^{6}}{\rightleftarrows}}MEKpERK\overset{0.15}{\rightarrow}MEKp+ERKp$
Hint: Break it down into these elementary reactions
1: $MEKp+ERK\overset{16.2*10^{6}}{\rightarrow}MEKpERK$
2: $MEKpERK\overset{0.6}{\rightarrow}MEKp+ERK$
3: $MEKpERK\overset{0.15}{\rightarrow}MEKp+ERKp$
End of explanation
"""
# Import geometry module
import steps.geom as sgeom
# Create well-mixed geometry container
wmgeom = sgeom.Geom()
# Create cytosol compartment
cyt = sgeom.Comp('cyt', wmgeom)
# Give volume to cyt (1um^3)
cyt.setVol(1.0e-18)
# Assign reaction set to compartment
cyt.addVolsys('vsys')
"""
Explanation: Setup geometry
You can easily set up the geometry of a well-mixed model by providing its volume as well as the volume system it is associated with.
End of explanation
"""
# Import random number generator module
import steps.rng as srng
# Create random number generator, with buffer size as 256
r = srng.create('mt19937', 256)
# Initialise with some seed
r.initialize(899)
# Could use time to get random seed
#import time
#r.initialize(int(time.time()))
"""
Explanation: Create a random number generator
You can use the following code to create a random number generator for the simulation; the currently available generators are "mt19937" and "r123".
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
execise_mdl = smod.Model()
# Create chemical species
MEKp = smod.Spec('MEKp', execise_mdl)
ERK = smod.Spec('ERK', execise_mdl)
MEKpERK = smod.Spec('MEKpERK', execise_mdl)
ERKp = smod.Spec('ERKp', execise_mdl)
# Create reaction set container (volume system)
execise_vsys = smod.Volsys('execise_vsys', execise_mdl)
# Create reactions (Do it yourself)
# MEKp + ERK -> MEKpERK, rate constant 16.2e6
MEKp_ERK_to_MEKpERK = smod.Reac('MEKp_ERK_to_MEKpERK', execise_vsys, lhs=[MEKp,ERK], rhs = [MEKpERK])
MEKp_ERK_to_MEKpERK.setKcst(16.2e6)
# MEKpERK -> MEKp + ERK, rate constant 0.6
MEKpERK_to_MEKp_ERK = smod.Reac('MEKpERK_to_MEKp_ERK', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERK])
MEKpERK_to_MEKp_ERK.setKcst(0.6)
# MEKpERK -> MEKp + ERKp, rate constant 0.15
MEKpERK_to_MEKp_ERKp = smod.Reac('MEKpERK_to_MEKp_ERKp', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERKp])
MEKpERK_to_MEKp_ERKp.setKcst(0.15)
####### Your script after exercise 1 should look like the above #######
# Create a compartment of 0.1um^3
# Associate the compartment with the volume system 'vsys'
# Create and initialize a 'r123' random number generator
"""
Explanation: Exercise 2: Create the geometry and random number generator for the kinase reaction model
Let's continue our kinase reaction simulation script; here are the tasks:
1. Create a compartment of $0.1um^{3}$ (note that STEPS uses S.I. units)
2. Associate the compartment with the volume system we've previously created
3. Create a "r123" random number generator and initialize it with some seed
End of explanation
"""
# Import solver module
import steps.solver as ssolv
# Create Well-mixed Direct solver
sim_direct = ssolv.Wmdirect(mdl, wmgeom, r)
# Inject 10 ‘A’ molecules
sim_direct.setCompCount('cyt','A', 10)
# Set concentration of ‘B’ molecules
sim_direct.setCompConc('cyt', 'B', 0.0332e-6)
"""
Explanation: Create and initialize a solver
For well-mixed simulation, we create a "wmdirect" solver and initialize it by adding molecules to the compartment.
End of explanation
"""
# Run simulation for 0.1s
sim_direct.run(0.1)
# Return the number of A molecules
sim_direct.getCompCount('cyt', 'A')
"""
Explanation: Run the solver and gather simulation data
After that we can run the solver until it reaches a specific time point, say 0.1 second. You can gather simulation data such as molecule counts using many STEPS APIs, for example
End of explanation
"""
# Reset the solver and reinitialize molecule counts
sim_direct.reset()
# Inject 10 ‘A’ molecules
sim_direct.setCompCount('cyt','A', 10)
# Set concentration of ‘B’ molecules
sim_direct.setCompConc('cyt', 'B', 0.0332e-6)
# Import numpy
import numpy as np
# Create a time-point numpy array, starting at 0, ending at 0.5 seconds, recording data every 0.001 seconds
tpnt = np.arange(0.0, 0.501, 0.001)
# Calculate number of time points
n_tpnts = len(tpnt)
# Create data array, initialised with zeros
res_direct = np.zeros([n_tpnts, 3])
# Run simulation and record data
for t in range(0, n_tpnts):
sim_direct.run(tpnt[t])
res_direct[t,0] = sim_direct.getCompCount('cyt','A')
res_direct[t,1] = sim_direct.getCompCount('cyt','B')
res_direct[t,2] = sim_direct.getCompCount('cyt','C')
"""
Explanation: In practice, it is often necessary to store simulation data in a numpy array or a file for plotting or further analysis. For example, here we record the number of molecules using a numpy array.
End of explanation
"""
print(res_direct)
"""
Explanation: Let's check what is inside the array now:
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
execise_mdl = smod.Model()
# Create chemical species
MEKp = smod.Spec('MEKp', execise_mdl)
ERK = smod.Spec('ERK', execise_mdl)
MEKpERK = smod.Spec('MEKpERK', execise_mdl)
ERKp = smod.Spec('ERKp', execise_mdl)
# Create reaction set container (volume system)
execise_vsys = smod.Volsys('execise_vsys', execise_mdl)
# Create reactions (Do it yourself)
# MEKp + ERK -> MEKpERK, rate constant 16.2e6
MEKp_ERK_to_MEKpERK = smod.Reac('MEKp_ERK_to_MEKpERK', execise_vsys, lhs=[MEKp,ERK], rhs = [MEKpERK])
MEKp_ERK_to_MEKpERK.setKcst(16.2e6)
# MEKpERK -> MEKp + ERK, rate constant 0.6
MEKpERK_to_MEKp_ERK = smod.Reac('MEKpERK_to_MEKp_ERK', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERK])
MEKpERK_to_MEKp_ERK.setKcst(0.6)
# MEKpERK -> MEKp + ERKp, rate constant 0.15
MEKpERK_to_MEKp_ERKp = smod.Reac('MEKpERK_to_MEKp_ERKp', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERKp])
MEKpERK_to_MEKp_ERKp.setKcst(0.15)
####### Your script after exercise 1 should look like the above #######
# Create a compartment of 0.1um^3
import steps.geom as sgeom
execise_wmgeom = sgeom.Geom()
execise_cyt = sgeom.Comp('execise_cyt', execise_wmgeom)
execise_cyt.setVol(0.1e-18)
# Associate the compartment with the volume system 'vsys'
execise_cyt.addVolsys('execise_vsys')
# Create and initialize a 'r123' random number generator
import steps.rng as srng
execise_r = srng.create('r123', 256)
execise_r.initialize(1)
####### Your script after exercise 2 should look like the above #######
# Create a "wmdirect" solver and set the initial condition:
# MEKp = 1uM
# ERK = 1.5uM
# Run the simulation for 30 seconds, record concentrations of each molecule every 0.01 seconds.
"""
Explanation: Exercise 3: Run your kinase model in STEPS
Here are the tasks:
1. Create a "wmdirect" solver and set the initial condition:
* MEKp = 1uM
* ERK = 1.5uM
2. Run the simulation for 30 seconds, record concentrations of each molecule every 0.01 seconds.
End of explanation
"""
from pylab import *
%matplotlib inline
plot(tpnt, res_direct[:,0], label='A')
plot(tpnt, res_direct[:,1], label='B')
plot(tpnt, res_direct[:,2], label='C')
ylabel('Number of molecules')
xlabel('Time(sec)')
legend()
show()
"""
Explanation: Visualize simulation data
Visualization is often needed to analyze the behavior of the simulation; here we use Matplotlib to plot the data.
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
execise_mdl = smod.Model()
# Create chemical species
MEKp = smod.Spec('MEKp', execise_mdl)
ERK = smod.Spec('ERK', execise_mdl)
MEKpERK = smod.Spec('MEKpERK', execise_mdl)
ERKp = smod.Spec('ERKp', execise_mdl)
# Create reaction set container (volume system)
execise_vsys = smod.Volsys('execise_vsys', execise_mdl)
# Create reactions (Do it yourself)
# MEKp + ERK -> MEKpERK, rate constant 16.2e6
MEKp_ERK_to_MEKpERK = smod.Reac('MEKp_ERK_to_MEKpERK', execise_vsys, lhs=[MEKp,ERK], rhs = [MEKpERK])
MEKp_ERK_to_MEKpERK.setKcst(16.2e6)
# MEKpERK -> MEKp + ERK, rate constant 0.6
MEKpERK_to_MEKp_ERK = smod.Reac('MEKpERK_to_MEKp_ERK', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERK])
MEKpERK_to_MEKp_ERK.setKcst(0.6)
# MEKpERK -> MEKp + ERKp, rate constant 0.15
MEKpERK_to_MEKp_ERKp = smod.Reac('MEKpERK_to_MEKp_ERKp', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERKp])
MEKpERK_to_MEKp_ERKp.setKcst(0.15)
####### Your script after exercise 1 should look like the above #######
# Create a compartment of 0.1um^3
import steps.geom as sgeom
execise_wmgeom = sgeom.Geom()
execise_cyt = sgeom.Comp('execise_cyt', execise_wmgeom)
execise_cyt.setVol(0.1e-18)
# Associate the compartment with the volume system 'vsys'
execise_cyt.addVolsys('execise_vsys')
# Create and initialize a 'r123' random number generator
import steps.rng as srng
execise_r = srng.create('r123', 256)
execise_r.initialize(143)
####### Your script after exercise 2 should look like the above #######
# Create a "wmdirect" solver and set the initial condition:
# MEKp = 1uM
# ERK = 1.5uM
import steps.solver as ssolv
execise_sim = ssolv.Wmdirect(execise_mdl, execise_wmgeom, execise_r)
execise_sim.setCompConc('execise_cyt','MEKp', 1e-6)
execise_sim.setCompConc('execise_cyt','ERK', 1.5e-6)
# Run the simulation for 30 seconds, record concentrations of each molecule every 0.01 seconds.
import numpy as np
execise_tpnts = np.arange(0.0, 30.01, 0.01)
n_tpnts = len(execise_tpnts)
execise_res = np.zeros([n_tpnts, 4])
# Run simulation and record data
for t in range(0, n_tpnts):
execise_sim.run(execise_tpnts[t])
execise_res[t,0] = execise_sim.getCompCount('execise_cyt','MEKp')
execise_res[t,1] = execise_sim.getCompCount('execise_cyt','ERK')
execise_res[t,2] = execise_sim.getCompCount('execise_cyt','MEKpERK')
execise_res[t,3] = execise_sim.getCompCount('execise_cyt','ERKp')
####### Your script after exercise 3 should look like the above #######
# Plot execise_res
"""
Explanation: Exercise 4: Plot the results of the kinase simulation
Let's now plot the results of our exercise.
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
execise_mdl = smod.Model()
# Create chemical species
MEKp = smod.Spec('MEKp', execise_mdl)
ERK = smod.Spec('ERK', execise_mdl)
MEKpERK = smod.Spec('MEKpERK', execise_mdl)
ERKp = smod.Spec('ERKp', execise_mdl)
# Create reaction set container (volume system)
execise_vsys = smod.Volsys('execise_vsys', execise_mdl)
# Create reactions (Do it yourself)
# MEKp + ERK -> MEKpERK, rate constant 16.2e6
MEKp_ERK_to_MEKpERK = smod.Reac('MEKp_ERK_to_MEKpERK', execise_vsys, lhs=[MEKp,ERK], rhs = [MEKpERK])
MEKp_ERK_to_MEKpERK.setKcst(16.2e6)
# MEKpERK -> MEKp + ERK, rate constant 0.6
MEKpERK_to_MEKp_ERK = smod.Reac('MEKpERK_to_MEKp_ERK', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERK])
MEKpERK_to_MEKp_ERK.setKcst(0.6)
# MEKpERK -> MEKp + ERKp, rate constant 0.15
MEKpERK_to_MEKp_ERKp = smod.Reac('MEKpERK_to_MEKp_ERKp', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERKp])
MEKpERK_to_MEKp_ERKp.setKcst(0.15)
####### Your script after exercise 1 should look like the above #######
# Create a compartment of 0.1um^3
import steps.geom as sgeom
execise_wmgeom = sgeom.Geom()
execise_cyt = sgeom.Comp('execise_cyt', execise_wmgeom)
execise_cyt.setVol(0.1e-18)
# Associate the compartment with the volume system 'vsys'
execise_cyt.addVolsys('execise_vsys')
# Create and initialize a 'r123' random number generator
import steps.rng as srng
execise_r = srng.create('r123', 256)
execise_r.initialize(143)
####### Your script after exercise 2 should look like the above #######
# Create a "wmdirect" solver and set the initial condition:
# MEKp = 1uM
# ERK = 1.5uM
import steps.solver as ssolv
execise_sim = ssolv.Wmdirect(execise_mdl, execise_wmgeom, execise_r)
execise_sim.setCompConc('execise_cyt','MEKp', 1e-6)
execise_sim.setCompConc('execise_cyt','ERK', 1.5e-6)
# Run the simulation for 30 seconds, record concentrations of each molecule every 0.01 seconds.
import numpy as np
execise_tpnts = np.arange(0.0, 30.01, 0.01)
n_tpnts = len(execise_tpnts)
execise_res = np.zeros([n_tpnts, 4])
# Run simulation and record data
for t in range(0, n_tpnts):
execise_sim.run(execise_tpnts[t])
execise_res[t,0] = execise_sim.getCompCount('execise_cyt','MEKp')
execise_res[t,1] = execise_sim.getCompCount('execise_cyt','ERK')
execise_res[t,2] = execise_sim.getCompCount('execise_cyt','MEKpERK')
execise_res[t,3] = execise_sim.getCompCount('execise_cyt','ERKp')
####### Your script after exercise 3 should look like the above #######
# Plot execise_res
from pylab import *
plot(execise_tpnts, execise_res[:,0], label='MEKp')
plot(execise_tpnts, execise_res[:,1], label='ERK')
plot(execise_tpnts, execise_res[:,2], label='MEKpERK')
plot(execise_tpnts, execise_res[:,3], label='ERKp')
ylabel('Number of molecules')
xlabel('Time(sec)')
legend()
show()
####### Your script after exercise 4 should look like the above #######
"""
Explanation: Here is the complete script for our well-mixed kinase simulation:
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
mdl = smod.Model()
# Create chemical species
A = smod.Spec('A', mdl)
B = smod.Spec('B', mdl)
C = smod.Spec('C', mdl)
# Create reaction set container
vsys = smod.Volsys('vsys', mdl)
# Create reaction
# A + B - > C with rate 200 /uM.s
reac_f = smod.Reac('reac_f', vsys, lhs=[A,B], rhs = [C])
reac_f.setKcst(200e6)
###### Above is the previous well-mixed biochemical model
# We add diffusion rules for species A, B and C
diff_a = smod.Diff('diff_a', vsys, A)
diff_a.setDcst(0.02e-9)
diff_b = smod.Diff('diff_b', vsys, B)
diff_b.setDcst(0.02e-9)
diff_c = smod.Diff('diff_c', vsys, C)
diff_c.setDcst(0.02e-9)
"""
Explanation: From well-mixed simulation to spatial simulation
To convert a well-mixed simulation to a spatial one, here are the basic steps:
1. Add diffusion rules for every diffusive species in the biochemical model.
2. Change the well-mixed geometry to a tetrahedral mesh.
3. Change the solver to "Tetexact".
First, let's see how to add diffusion rules in our example well-mixed model:
End of explanation
"""
'''
# Import geometry module
import steps.geom as sgeom
# Create well-mixed geometry container
wmgeom = sgeom.Geom()
# Create cytosol compartment
cyt = sgeom.Comp('cyt', wmgeom)
# Give volume to cyt (1um^3)
cyt.setVol(1.0e-18)
# Assign reaction set to compartment
cyt.addVolsys('vsys')
'''
##### above is the old well-mixed geometry ##########
import steps.geom as sgeom
import steps.utilities.meshio as meshio
# Import the mesh
mesh = meshio.importAbaqus('meshes/1x1x1_cube.inp', 1.0e-6)[0]
# Create mesh-based compartment
cyt = sgeom.TmComp('cyt', mesh, range(mesh.ntets))
# Add volume system to the compartment
cyt.addVolsys('vsys')
"""
Explanation: We now import a tetrahedral mesh using the steps.utilities.meshio module to replace the well-mixed geometry:
End of explanation
"""
# Import solver module
import steps.solver as ssolv
'''
# Create Well-mixed Direct solver
sim_direct = ssolv.Wmdirect(mdl, wmgeom, r)
'''
##### above is the old well-mixed Wmdirect solver ##########
# Create a spatial Tetexact solver
sim_tetexact = ssolv.Tetexact(mdl, mesh, r)
"""
Explanation: Finally, we replace the "Wmdirect" solver with the spatial "Tetexact" solver:
End of explanation
"""
# Inject 10 ‘A’ molecules
sim_tetexact.setCompCount('cyt','A', 10)
# Set concentration of ‘B’ molecules
sim_tetexact.setCompConc('cyt', 'B', 0.0332e-6)
# Import numpy
import numpy as np
# Create time-point numpy array, starting at time 0, end at 0.5 second and record data every 0.001 second
tpnt = np.arange(0.0, 0.501, 0.001)
# Calculate number of time points
n_tpnts = len(tpnt)
# Create data array, initialised with zeros
res_tetexact = np.zeros([n_tpnts, 3])
# Run simulation and record data
for t in range(0, n_tpnts):
sim_tetexact.run(tpnt[t])
res_tetexact[t,0] = sim_tetexact.getCompCount('cyt','A')
res_tetexact[t,1] = sim_tetexact.getCompCount('cyt','B')
res_tetexact[t,2] = sim_tetexact.getCompCount('cyt','C')
from pylab import *
plot(tpnt, res_tetexact[:,0], label='A')
plot(tpnt, res_tetexact[:,1], label='B')
plot(tpnt, res_tetexact[:,2], label='C')
ylabel('Number of molecules')
xlabel('Time(sec)')
legend()
show()
"""
Explanation: The "Wmdirect" solver and the "Tetexact" solver share most of the APIs, so we can reuse our old script for simulation control and plotting:
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
execise_mdl = smod.Model()
# Create chemical species
MEKp = smod.Spec('MEKp', execise_mdl)
ERK = smod.Spec('ERK', execise_mdl)
MEKpERK = smod.Spec('MEKpERK', execise_mdl)
ERKp = smod.Spec('ERKp', execise_mdl)
# Create reaction set container (volume system)
execise_vsys = smod.Volsys('execise_vsys', execise_mdl)
# Create reactions (Do it yourself)
# MEKp + ERK -> MEKpERK, rate constant 16.2*10e6
MEKp_ERK_to_MEKpERK = smod.Reac('MEKp_ERK_to_MEKpERK', execise_vsys, lhs=[MEKp,ERK], rhs = [MEKpERK])
MEKp_ERK_to_MEKpERK.setKcst(16.2e6)
# MEKpERK -> MEKp + ERK, rate constant 0.6
MEKpERK_to_MEKp_ERK = smod.Reac('MEKpERK_to_MEKp_ERK', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERK])
MEKpERK_to_MEKp_ERK.setKcst(0.6)
# MEKpERK -> MEKp + ERKp, rate constant 0.15
MEKpERK_to_MEKp_ERKp = smod.Reac('MEKpERK_to_MEKp_ERKp', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERKp])
MEKpERK_to_MEKp_ERKp.setKcst(0.15)
########### exercise 5.1: Add diffusion constants
# * MEKp = 30e-12 m^2/s
# * ERK = 30e-12 m^2/s
# * MEKpERK = 10e-12 m^2/s
####### Your script after exercise 1 should look like the above #######
########### exercise 5.2: Replace the geometry to use mesh 'meshes/sp_0.1v_1046.inp'
# Create a compartment of 0.1um^3
import steps.geom as sgeom
execise_wmgeom = sgeom.Geom()
execise_cyt = sgeom.Comp('execise_cyt', execise_wmgeom)
execise_cyt.setVol(0.1e-18)
# Associate the compartment with the volume system 'vsys'
execise_cyt.addVolsys('execise_vsys')
# Create and initialize a 'r123' random number generator
import steps.rng as srng
execise_r = srng.create('r123', 256)
execise_r.initialize(143)
####### Your script after exercise 2 should look like the above #######
# Create a "wmdirect" solver and set the initial condition:
# MEKp = 1uM
# ERK = 1.5uM
import steps.solver as ssolv
########### exercise 5.3: Change the solver to Tetexact
execise_sim = ssolv.Wmdirect(execise_mdl, execise_wmgeom, execise_r)
execise_sim.setCompConc('execise_cyt','MEKp', 1e-6)
execise_sim.setCompConc('execise_cyt','ERK', 1.5e-6)
# Run the simulation for 30 seconds, record concentrations of each molecule every 0.01 seconds.
import numpy as np
execise_tpnts = np.arange(0.0, 30.01, 0.01)
n_tpnts = len(execise_tpnts)
execise_res = np.zeros([n_tpnts, 4])
# Run simulation and record data
for t in range(0, n_tpnts):
execise_sim.run(execise_tpnts[t])
execise_res[t,0] = execise_sim.getCompCount('execise_cyt','MEKp')
execise_res[t,1] = execise_sim.getCompCount('execise_cyt','ERK')
execise_res[t,2] = execise_sim.getCompCount('execise_cyt','MEKpERK')
execise_res[t,3] = execise_sim.getCompCount('execise_cyt','ERKp')
####### Your script after exercise 3 should look like the above #######
# Plot execise_res
from pylab import *
plot(execise_tpnts, execise_res[:,0], label='MEKp')
plot(execise_tpnts, execise_res[:,1], label='ERK')
plot(execise_tpnts, execise_res[:,2], label='MEKpERK')
plot(execise_tpnts, execise_res[:,3], label='ERKp')
ylabel('Number of molecules')
xlabel('Time(sec)')
legend()
show()
####### Your script after exercise 4 should look like the above #######
"""
Explanation: Exercise 5: Modify your well-mixed kinase simulation into a spatial one
Let's now convert the well-mixed kinase model below into a spatial one. Here are the tasks:
Add diffusion constants:
MEKp = 30e-12 $m^2/s$
ERK = 30e-12 $m^2/s$
MEKpERK = 10e-12 $m^2/s$
Replace the geometry to use mesh 'meshes/sp_0.1v_1046.inp'
Change the solver to Tetexact
Run the simulation again
End of explanation
"""
# Import biochemical model module
import steps.model as smod
# Create model container
execise_mdl = smod.Model()
# Create chemical species
MEKp = smod.Spec('MEKp', execise_mdl)
ERK = smod.Spec('ERK', execise_mdl)
MEKpERK = smod.Spec('MEKpERK', execise_mdl)
ERKp = smod.Spec('ERKp', execise_mdl)
# Create reaction set container (volume system)
execise_vsys = smod.Volsys('execise_vsys', execise_mdl)
# Create reactions (Do it yourself)
# MEKp + ERK -> MEKpERK, rate constant 16.2*10e6
MEKp_ERK_to_MEKpERK = smod.Reac('MEKp_ERK_to_MEKpERK', execise_vsys, lhs=[MEKp,ERK], rhs = [MEKpERK])
MEKp_ERK_to_MEKpERK.setKcst(16.2e6)
# MEKpERK -> MEKp + ERK, rate constant 0.6
MEKpERK_to_MEKp_ERK = smod.Reac('MEKpERK_to_MEKp_ERK', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERK])
MEKpERK_to_MEKp_ERK.setKcst(0.6)
# MEKpERK -> MEKp + ERKp, rate constant 0.15
MEKpERK_to_MEKp_ERKp = smod.Reac('MEKpERK_to_MEKp_ERKp', execise_vsys, lhs = [MEKpERK], rhs=[MEKp,ERKp])
MEKpERK_to_MEKp_ERKp.setKcst(0.15)
########### exercise 5.1: Add diffusion constants
# * MEKp = 30e-12 m^2/s
# * ERK = 30e-12 m^2/s
# * MEKpERK = 10e-12 m^2/s
diff_MEKp = smod.Diff('diff_MEKp', execise_vsys, MEKp)
diff_MEKp.setDcst(30e-12)
diff_ERK = smod.Diff('diff_ERK', execise_vsys, ERK)
diff_ERK.setDcst(30e-12)
diff_MEKpERK = smod.Diff('diff_MEKpERK', execise_vsys, MEKpERK)
diff_MEKpERK.setDcst(10e-12)
####### Your script after exercise 1 should look like the above #######
########### exercise 5.2: Replace the geometry to use mesh 'meshes/sp_0.1v_1046.inp'
import steps.geom as sgeom
import steps.utilities.meshio as meshio
mesh = meshio.importAbaqus('meshes/sp_0.1v_1046.inp', 1.0e-6)[0]
execise_cyt = sgeom.TmComp('execise_cyt', mesh, range(mesh.ntets))
execise_cyt.addVolsys('execise_vsys')
# Create and initialize a 'r123' random number generator
import steps.rng as srng
execise_r = srng.create('r123', 256)
execise_r.initialize(143)
####### Your script after exercise 2 should look like the above #######
# Create a "wmdirect" solver and set the initial condition:
# MEKp = 1uM
# ERK = 1.5uM
import steps.solver as ssolv
########### exercise 5.3: Change the solver to Tetexact
execise_sim = ssolv.Tetexact(execise_mdl, mesh, execise_r)
execise_sim.setCompConc('execise_cyt','MEKp', 1e-6)
execise_sim.setCompConc('execise_cyt','ERK', 1.5e-6)
# Run the simulation for 30 seconds, record concentrations of each molecule every 0.01 seconds.
import numpy as np
execise_tpnts = np.arange(0.0, 30.01, 0.01)
n_tpnts = len(execise_tpnts)
execise_res = np.zeros([n_tpnts, 4])
# Run simulation and record data
for t in range(0, n_tpnts):
execise_sim.run(execise_tpnts[t])
execise_res[t,0] = execise_sim.getCompCount('execise_cyt','MEKp')
execise_res[t,1] = execise_sim.getCompCount('execise_cyt','ERK')
execise_res[t,2] = execise_sim.getCompCount('execise_cyt','MEKpERK')
execise_res[t,3] = execise_sim.getCompCount('execise_cyt','ERKp')
####### Your script after exercise 3 should look like the above #######
# Plot execise_res
from pylab import *
plot(execise_tpnts, execise_res[:,0], label='MEKp')
plot(execise_tpnts, execise_res[:,1], label='ERK')
plot(execise_tpnts, execise_res[:,2], label='MEKpERK')
plot(execise_tpnts, execise_res[:,3], label='ERKp')
ylabel('Number of molecules')
xlabel('Time(sec)')
legend()
show()
####### Your script after exercise 4 should look like the above #######
"""
Explanation: Here is the modified script
End of explanation
"""
|
damienstanton/nanodegree
|
CarND-LaneLines-P1/P1.ipynb
|
mit
|
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
"""
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection, and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
"""
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
os.listdir("test_images/")
"""
Explanation: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
"""
Explanation: Run your solution on all the test_images and save annotated copies into the test_images directory.
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
    # you should return the final output (image with lines drawn on the lanes)
    result = image  # placeholder pass-through; replace with your annotated pipeline output
    return result
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/strings_to_datetime.ipynb
|
mit
|
from datetime import datetime
from dateutil.parser import parse
import pandas as pd
"""
Explanation: Title: Converting Strings To Datetime
Slug: strings_to_datetime
Summary: Converting Strings To Datetime
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Import modules
End of explanation
"""
war_start = '2011-01-03'
"""
Explanation: Create a string variable with the war start time
End of explanation
"""
datetime.strptime(war_start, '%Y-%m-%d')
"""
Explanation: Convert the string to datetime format
End of explanation
"""
attack_dates = ['7/2/2011', '8/6/2012', '11/13/2013', '5/26/2011', '5/2/2001']
"""
Explanation: Create a list of strings as dates
End of explanation
"""
[datetime.strptime(x, '%m/%d/%Y') for x in attack_dates]
"""
Explanation: Convert attack_dates strings into datetime format
End of explanation
"""
parse(war_start)
"""
Explanation: Use parse() to attempt to auto-convert common string formats
End of explanation
"""
[parse(x) for x in attack_dates]
"""
Explanation: Use parse() on every element of the attack_dates list
End of explanation
"""
parse(war_start, dayfirst=True)
"""
Explanation: Use parse, but designate that the day is first
End of explanation
"""
data = {'date': ['2014-05-01 18:47:05.069722', '2014-05-01 18:47:05.119994', '2014-05-02 18:47:05.178768', '2014-05-02 18:47:05.230071', '2014-05-02 18:47:05.230071', '2014-05-02 18:47:05.280592', '2014-05-03 18:47:05.332662', '2014-05-03 18:47:05.385109', '2014-05-04 18:47:05.436523', '2014-05-04 18:47:05.486877'],
'value': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
df = pd.DataFrame(data, columns = ['date', 'value'])
print(df)
"""
Explanation: Create a dataframe
End of explanation
"""
pd.to_datetime(df['date'])
"""
Explanation: Convert df['date'] from string to datetime
End of explanation
"""
|
fraserw/PyMOP
|
tutorial/trippytutorial.ipynb
|
gpl-2.0
|
#%matplotlib inline
import numpy as num, astropy.io.fits as pyf,pylab as pyl
from trippy import psf, pill, psfStarChooser
from trippy import scamp,MCMCfit
import scipy as sci
from os import path
import os
from astropy.visualization import interval, ZScaleInterval
"""
Explanation: TRIPPy examples
Introduction: SExtractor and emcee
To perform photometry and source subtraction, in addition to having a good PSF (which trippy will generate) one needs three very important parameters: x, y, and m, or source position and amplitude.
When one has the PSF and TSF already generated, one can run a fitting routine to solve for these. For this purpose, we use emcee. emcee is an MCMC routine which allows for good estimates of (x,y,m) and their uncertainties. We use a likelihood definition as the natural log likelihood of the exponential flux, basically exactly what you'd expect. If you are uncertain of what this means, or care for more detail, please go read the emcee documentation.
If the PSF or TSF is not yet known, to get a centroid (x,y), we need to use some other software. We haven't included this inside trippy because there is no point in reinventing a wheel that has already been nearly perfected. For this purpose, we use the venerable SExtractor. All jokes on its name aside, sextractor does exactly what we need, as well as we would ever need it to be done.
Trippy includes a module trippy.scamp with functions defined in scamp.py and makeParFiles.py that merely provide convenient wrappers to call sextractor. This has been done in a couple of other packages, but not in a way that satisfies me. Hence my own implementation. A couple of details to note: makeParFiles creates all the parameter files in the working directory (eg. makeParFiles.writeConv()), and scamp is responsible for sextractor execution and catalog reading (scamp.runSex() and scamp.getCatalog). Catalogs are stored in FITS_LDAC format. This choice was made to facilitate execution of the sextractor sister program scamp, though we won't need to know what that means for full use of trippy. If you are unfamiliar with sextractor and its use, don't adopt trippy as a blackbox. RTFM!
With that out of the way, on to actual business.
The trippy tutorial
The first thing to do is import all the necessary packages. Note that this notebook assumes you have the optional packages installed, as well as SExtractor available on your command line.
NOTE: proper use of psfStarChooser requires plot interaction. So for this tutorial you'd best comment out the first line, %matplotlib inline. But for my web presentation, I leave inline.
End of explanation
"""
def trimCatalog(cat):
good=[]
for i in range(len(cat['XWIN_IMAGE'])):
try:
a = int(cat['XWIN_IMAGE'][i])
b = int(cat['YWIN_IMAGE'][i])
m = num.max(data[b-4:b+5,a-4:a+5])
except: pass
dist = num.sort(((cat['XWIN_IMAGE']-cat['XWIN_IMAGE'][i])**2+(cat['YWIN_IMAGE']-cat['YWIN_IMAGE'][i])**2)**0.5)
d = dist[1]
if cat['FLAGS'][i]==0 and d>30 and m<70000:
good.append(i)
good=num.array(good)
outcat = {}
for i in cat:
outcat[i] = cat[i][good]
return outcat
"""
Explanation: The function trimCatalog is a convenience function that simply returns only those sources that are isolated enough for PSF generation. It rejects any source within 30 pixels of another source, any source with a peak pixel above 70,000, and any source that sextractor has flagged for whatever reason. We may fold this into psfStarChooser in the future.
End of explanation
"""
inputFile='Polonskaya.fits'
if not path.isfile(inputFile):
os.system('wget -O Polonskaya.fits http://www.canfar.phys.uvic.ca/vospace/nodes/fraserw/Polonskaya.fits?view=data')
else:
print("We already have the file.")
"""
Explanation: Get the image this tutorial assumes you have. If wget fails, you are likely on a Mac; either download the file manually or use the urllib sketch below.
End of explanation
"""
with pyf.open(inputFile) as han:
data = han[0].data
header = han[0].header
EXPTIME = header['EXPTIME']
"""
Explanation: First load the fits image and get out the header, data, and exposure time.
End of explanation
"""
scamp.makeParFiles.writeSex('example.sex',
minArea=3.,
threshold=5.,
zpt=27.8,
aperture=20.,
min_radius=2.0,
catalogType='FITS_LDAC',
saturate=55000)
scamp.makeParFiles.writeConv()
scamp.makeParFiles.writeParam(numAps=1) #numAps is the number of apertures that you want to use. Here we use 1
scamp.runSex('example.sex', inputFile ,options={'CATALOG_NAME':'example.cat'},verbose=False)
catalog = trimCatalog(scamp.getCatalog('example.cat',paramFile='def.param'))
"""
Explanation: Next run sextractor on the images, and use trimCatalog to create a trimmed down list of isolated sources.
makeParFiles handles the creation of all the sextractor files, including the .sex file which we call example.sex, the default.conv, the param file which is saved as def.param.
.runSex creates example.cat which is read by .getCatalog. getCatalog takes as input the catalog name and the parameter file "def.param".
The parameters that are actually used by psfStarChooser and psf.genLookupTable are XWIN_IMAGE, YWIN_IMAGE, FLUX_AUTO, and FLUXERR_AUTO, which are the x,y coordinates, the flux, and the flux uncertainty estimate respectively. The latter two are used in the SNR cut that psfStarChooser makes.
End of explanation
"""
dist = ((catalog['XWIN_IMAGE']-811)**2+(catalog['YWIN_IMAGE']-4005)**2)**0.5
args = num.argsort(dist)
xt = catalog['XWIN_IMAGE'][args][0]
yt = catalog['YWIN_IMAGE'][args][0]
rate = 18.4588 # "/hr
angle = 31.11+1.1 # degrees counter clockwise from horizontal, right
"""
Explanation: Finally, find the source closest to 811, 4005 which is the bright asteroid, 2006 Polonskaya. Also, set the rate and angle of motion. These were found from JPL horizons. The 1 degree increase is to account for the slight rotation of the image.
Note: in this image the asteroid is near (x, y) = (811, 4005), and we apply a distance sort to the catalog to find the correct catalog entry and the source centroid, which we store in (xt, yt).
Setting the important asteroid parameters: xt, yt contain the location of the asteroid itself (near 811, 4005); rate and angle are the rate and angle of trailing, in "/hr and degrees.
End of explanation
"""
starChooser=psfStarChooser.starChooser(data,
catalog['XWIN_IMAGE'],catalog['YWIN_IMAGE'],
catalog['FLUX_AUTO'],catalog['FLUXERR_AUTO'])
(goodFits,goodMeds,goodSTDs) = starChooser(30,200,noVisualSelection=False,autoTrim=True,
bgRadius=15, quickFit = False,
printStarInfo = True,
repFact = 5, ftol=1.49012e-08)
print(goodFits)
print(goodMeds)
"""
Explanation: Now use psfStarChooser to select the PSF stars. The first and second parameters to starChooser are the fitting box width in pixels, and the SNR minimum required for a star to be considered as a potential PSF star.
Optional but important inputs are autoTrim and noVisualSelection. The former, when True, uses bgFinder.fraserMode to attempt to determine what FWHM corresponds to actual stars, and rejects all sources with FWHM outside +-0.5 pixels of the modal value. noVisualSelection determines if manual input is required. When set to false, all stars are considered. Until you know the software, I suggest you use noVisualSelection=True for manual selection, and autoTrim=False to see all sources in the plot window.
For each star provided to psfStarChooser, it will print a line to screen of x,y and best fit alpha, beta, and FWHM of the moffat profile fit.
Then psfStarChooser will pop-up a multipanel window. Top left: histogram of fit chi values. Top right: chi vs. FWHM for each fitted source. Middle right: histogram of FWHM. Bottom right: image display of the currently selected source. Bottom left: Radial profiles of all sources displayed in the top right scatter plot.
The point of this window is to select only good stars for PSF generation, done by zooming to the good sources, and rejecting those that are bad.
Use the zoom tool to select the region containing the stars. In this image, that's a cluster at FWHM~3.5 pixels.
Left and right clicks will select a source, now surrounded by a diamond, displaying the radial profile bottom left, and the actual image bottom right.
Right click will oscillate between accepted source and rejected source (blue and red respectively).
Keyboard functionality is now also implemented. Use the left/right arrow keys (or a/d) to cycle through each source, and the up/down keys (or w/d) to mark a source as rejected (red) or accepted (blue). This is probably the fastest way to cycle through sources. Note that for some mac python installs, key presses won't be recognized inside a pylab window. To solve this, invoke your trippy script with pythonw instead of python.
When the window is closed, only those sources shown as blue points, and within the zoom of the top right plot will be used to generate the PSF.
The array goodFits is returned for convenience and contains the moffat fit details of each accepted source. Each entry is [FWHM, chi, alpha, beta, x, y, local background value].
The array goodMeds is just the median of goodFits, and provides the median moffat alpha and beta of the selected stars.
Note on a couple starChooser options:
--bgRadius is the radius outside of which the image background level is sampled. The fitting is relatively insensitive to this value, however, if you happen to know what the FWHM is approximately, then the best fitting results can be had with bgRadius>~3xFWHM in pixels.
--ftol is the least squares fitting tolerance parameter passed to the scipy least squares fitter. Increasing this number can result in dramatic performance improvements. Default is 1.4e-8 to provide an extremely accurate fit. Good enough fits can be had with 1.e-7 or even 1.e-6 if one has a need for speed.
--repFact defaults to 5. If you want to run faster but still preserve most accuracy in the fitting procedure, use repFact = 3
--quickFit = True will provide the fastest moffat fitting. The speed improvement over quickFit = False is dramatic, but results in slightly less accurate moffat fit parameters. For the majority of use cases, where the number of good psf stars is more than a few, the degradation in PSF accuracy will not be appreciable because a lookup table is used. But the user should confirm this by comparing PSFs generated in both circumstances.
--printStarInfo = True will display an inset in the starchooser plot that shows the parameters of the selected source, such as alpha, beta, and FWHM, among others.
End of explanation
"""
goodPSF = psf.modelPSF(num.arange(61),num.arange(61), alpha=goodMeds[2],beta=goodMeds[3],repFact=10)
goodPSF.genLookupTable(data,goodFits[:,4],goodFits[:,5],verbose=False)
fwhm = goodPSF.FWHM() ###this is the FWHM with lookuptable included
fwhm = goodPSF.FWHM(fromMoffatProfile=True) ###this is the pure moffat FWHM.
print("Full width at half maximum {:5.3f} (in pix).".format(fwhm))
zscale = ZScaleInterval()
(z1, z2) = zscale.get_limits(goodPSF.lookupTable)
normer = interval.ManualInterval(z1,z2)
pyl.imshow(normer(goodPSF.lookupTable))
pyl.show()
"""
Explanation: Generate the PSF. We want a 61 pixel wide PSF, adopt a repFactor of 10, and use the mean star fits chosen above.
always use odd values for the dimensions. Even values (eg. 60 instead of 61) result in off centered lookup tables.
Repfactors of 5 and 10 have been tested thoroughly. Larger is pointless, smaller is inaccurate. 5 is faster than 10, 10 is more accurate than 5.
The PSF has to be wide/tall enough to handle both the trailing length and the seeing disk. For Polonskaya the trailing length is the larger of the two, at ~19"/hr * 480 s / 3600 / 0.185"/pix ≈ 14 pixels, so choose something a few times larger. Also, stick with odd-width PSFs, as the even ones have some funny centroid stuff that I haven't fully sorted out.
The full PSF is created with instantiation, and running both genLookupTable and genPSF.
End of explanation
"""
goodPSF.line(rate,angle,EXPTIME/3600.,pixScale=0.185,useLookupTable=True)
"""
Explanation: Now generate the TSF, which we call the line/long PSF interchangeably through the code...
Rate is in units of length/time and pixScale is in units of length/pixel, time and length are in units of your choice. Sanity suggests arcseconds and hours. Then rate in "/hr and pixScale in "/pix. Angle is in degrees counter clockwise from horizontal between +-90 degrees.
This can be rerun to create a TSF with different rate/angle of motion, though keep in mind that the psf class only contains one longPSF (one rate/angle) at any given time.
End of explanation
"""
goodPSF.computeRoundAperCorrFromPSF(psf.extent(0.8*fwhm,4*fwhm,10),display=False,
displayAperture=False,
useLookupTable=True)
roundAperCorr = goodPSF.roundAperCorr(1.4*fwhm)
goodPSF.computeLineAperCorrFromTSF(psf.extent(0.1*fwhm,4*fwhm,10),
l=(EXPTIME/3600.)*rate/0.185,a=angle,display=False,displayAperture=False)
lineAperCorr = goodPSF.lineAperCorr(1.4*fwhm)
print(lineAperCorr,roundAperCorr)
"""
Explanation: Now calculate aperture corrections for the PSF and TSF. Store for values of r=1.4*FWHM.
Note that the precision of the aperture correction depends lightly on the sampling from the compute functions. 10 is generally enough to preserve 1% precision in the .roundAperCorr() and lineAperCorr() functions which use linear interpolation to get the value one actually desires.
NOTE: Set useLookupTable=False if one wants to calculate from the moffat profile alone. Generally, this is not accurate for small apertures, however.
End of explanation
"""
goodPSF.psfStore('psf.fits', psfV2=True)
"""
Explanation: Store the PSF. In TRIPPy v1.0 we introduced a new psf save format which decreases the storage requirements by roughly half, at the cost of increased CPU time when restoring the stored PSF. The difference is that the moffat component of the PSF was originally saved in the fits file's first extension. This is no longer saved, as it's pretty quick to calculate.
Default behaviour is the old PSF format, but the new format can be flagged with psfV2=True as shown below.
End of explanation
"""
#goodPSF = psf.modelPSF(restore='psf.fits')
"""
Explanation: If we've already done the above once, we can skip doing it again by restoring the previously constructed PSF with the following commented-out code.
End of explanation
"""
#goodPSF.line(new_rate,new_angle,EXPTIME/3600.,pixScale=0.185,useLookupTable=True)
"""
Explanation: And we could generate a new line psf by recalling .line with a new rate and angle
End of explanation
"""
#initiate the pillPhot object
phot = pill.pillPhot(data,repFact=10)
#get photometry, assume ZPT=26.0
#enableBGselection=True allows you to zoom in on a good background region in the aperture display window
#trimBGhighPix is a sigma cut to get rid of the cosmic rays. They get marked as blue in the display window
#background is selected inside the box and outside the skyRadius value
#mode is the background mode selection. Options are median, mean, histMode (JJ's jjkmode technique), fraserMode (ask me about it), gaussFit, and "smart". Smart does a gaussian fit first, and if the gaussian fit value is discrepant compared to the expectation from the background std, it resorts to the fraserMode. "smart" seems quite robust to nearby bright sources
#examples of round sources
phot(goodFits[0][4], goodFits[0][5],radius=3.09*1.1,l=0.0,a=0.0,
skyRadius=4*3.09,width=6*3.09,
zpt=26.0,exptime=EXPTIME,enableBGSelection=True,display=True,
backupMode="fraserMode",trimBGHighPix=3.)
#example of a trailed source
phot(xt,yt,radius=fwhm*1.4,l=(EXPTIME/3600.)*rate/0.185,a=angle,
skyRadius=4*fwhm,width=6*fwhm,
zpt=26.0,exptime=EXPTIME,enableBGSelection=True,display=True,
backupMode="smart",trimBGHighPix=3.)
"""
Explanation: Now let's do some pill aperture photometry. Instantiate the class, then call the object you created to get photometry of Polonskaya. Again assume repFact=10.
pillPhot takes as input the same coordinates as outputted by sextractor.
First example is of a round star which I have manually taken the coordinates from above. Second example is for the asteroid itself.
New feature! The input radii can either be singletons like in the example below, or a numpy array of radii. If photometry of the same source using multiple radii are needed, the numpy array is much much faster than passing individual singletons.
enableBGselection=True will cause a popup display of the source, in which one can zoom to a section with no background source.
The default background selection technique is "smart". See the bgFinder documentation for what that means. If you want to change this away from 'fraserMode', take a look at the options in bgFinder.
display=True to see the image subsection
r is the radius of the pill, l is the length, a is the angle. Sky radius is the radius of a larger pill aperture. The pixels in this larger aperture, but outside the smaller aperture are ignored. Anything outside the larger pill, but inside +-width is used for background estimation.
trimBGHighPix is mostly unimportant if mode="smart". But if you want to use a mean or median for some reason, then this value is used to reject pixels with values more than trimBGHighPix standard deviations above the mean of the cutout.
End of explanation
"""
phot.SNR(verbose=True)
#get those values
print(phot.magnitude)
print(phot.dmagnitude)
print(phot.sourceFlux)
print(phot.snr)
print(phot.bg)
"""
Explanation: The SNR function calculates the SNR of the aperture, as well as providing an estimate of the magnitude/flux uncertainties. Select useBGstd=True if you wish to use the background noise level instead of the square root of the background level in your uncertainty estimate. Note: currently, this uncertainty estimate is approximate, good to a few percent. Future improvements will be made to get this a bit more accurate.
If the photometry radius was an array, then so are the products created using the SNR function.
verbose=True puts some nice terminal output in your face. These values can be accessed with their internal names.
End of explanation
"""
phot.computeRoundAperCorrFromSource(goodFits[0,4],goodFits[0,5],num.linspace(1*fwhm,4*fwhm,10),
skyRadius=5*fwhm, width=6*fwhm,displayAperture=False,display=True)
print('Round aperture correction for a 4xFWHM aperture is {:.3f}.'.format(phot.roundAperCorr(1.4*fwhm)))
"""
Explanation: Let's get aperture corrections measured directly from a star.
End of explanation
"""
Data = data[int(yt)-200:int(yt)+200,int(xt)-200:int(xt)+200]-phot.bg
zscale = ZScaleInterval()
(z1, z2) = zscale.get_limits(Data)
normer = interval.ManualInterval(z1,z2)
pyl.imshow(normer(Data))
pyl.show()
"""
Explanation: Finally, let's do some PSF source subtraction. This is only possible with emcee and sextractor installed.
First get the cutout. This makes everything faster later. Also, remove the background, just because.
This also provides an example of how to use zscale now built into trippy and astropy.visualization to display an astronomy image using the zscale scaling.
End of explanation
"""
fitter = MCMCfit.MCMCfitter(goodPSF,Data)
fitter.fitWithModelPSF(200+xt-int(xt)-1,200+yt-int(yt)-1, m_in=1000.,
fitWidth=10,
nWalkers=20, nBurn=20, nStep=20,
bg=phot.bg, useLinePSF=True, verbose=False,useErrorMap=False)
"""
Explanation: Now instantiate the MCMCfitter class, and then perform the fit. Verbose=False will not put anything to terminal. Setting to true will dump the result of each step. Only good idea if you insist on seeing what's happening. Do you trust black boxes?
Set useLinePSF to True if you are fitting a trailed source, False if a point source.
Set useErrorMap to True if you care to use an estimate of the poisson noise in each pixel during your fit. This produces honest confidence ranges.
I personally like nWalkers=nBurn=nStep=40. To get a reasonable fit however, that's overkill. But to get the best... your mileage will vary.
This will take a while: roughly a minute on a modern i5 processor, and much longer if your computer is a few years old. You can reduce the number of walkers, nBurn, and nStep to ~10 each if you are impatient; this will drop the run time by ~4x.
End of explanation
"""
(fitPars, fitRange) = fitter.fitResults(0.67)
print(fitPars)
print(fitRange)
"""
Explanation: Now get the fits results, including best fit and confidence region using the input value. 0.67 for 1-sigma is shown
End of explanation
"""
modelImage = goodPSF.plant(fitPars[0],fitPars[1],fitPars[2],Data,addNoise=False,useLinePSF=True,returnModel=True)
pyl.imshow(normer(modelImage))
pyl.show()
"""
Explanation: Finally, let's produce the best-fit model image and perform a subtraction. Plant will plant a fake source with the given input x, y, amplitude into the input data. If returnModel=True, then no source is planted, but the model image that would have been planted is returned.
remove will do the opposite of plant given input data (it actually just calls plant).
End of explanation
"""
removed = goodPSF.remove(fitPars[0],fitPars[1],fitPars[2],Data,useLinePSF=True)
pyl.imshow(normer(removed))
pyl.show()
"""
Explanation: Now show the image and the image with model removed for comparison.
End of explanation
"""
|
liganega/Gongsu-DataSci
|
previous/y2017/GongSu08_Files_and_Lists.ipynb
|
gpl-3.0
|
result_f = open("data/scores_list.txt") # 파일 열기
for line in result_f: # 각 줄 내용 출력하기
print(line)
result_f.close() # 파일 닫기
"""
Explanation: 텍스트 파일 불러오기와 리스트 활용
수정 사항
적절한 연습문제 추가 필요
처리해야 할 데이터 양이 많아지면 파일에 저장한 후에 필요한 경우 재활용해야 한다.
또한 개별 데이터를 따로따로 처리하기 보다는 하나의 데이터로 묶어서 처리할 수 있어야 한다.
많은 데이처를 하나의 데이터로 묶어서 처리하는 다양한 자료형이 제공되며 여기서는 파이썬의 리스트 자료형의 활용을 알아본다.
주요 내용
텍스트 파일의 내용을 읽어드리는 방법과 파이썬에 내장되어 있는 컬렉션 자료형 중의 하나인 리스트(list)를
활용하는 방법에 대해 알아본다.
리스트(lists): 파이썬에서 사용할 수 있는 임의의 값들을 모아서
하나의 값으로 취급하는 자료형
사용 형태: 대괄호 사용
even_numbers_list = [2, 4, 6, 8, 10]
todays_datatypes_list = ['list', 'tuple', 'dictionary']
특징: 임의의 자료형 값들을 섞어서 항목으로 사용 가능
mixed_list = [1, 'abs', [2.1, 4.5]]
인덱스 또는 슬라이싱을 이용하여 각각의 항목에 또는 여러 개의 항목에 대한
정보를 활용할 수 있다. 사용법은 문자열의 경우와 동일.
리스트는 수정 가능하다. 즉, 가변 자료형이다.
리스트와 관련되어 많이 사용되는 메소드는 다음과 같다.
append(): 기존의 리스트 끝에 항목 추가
extend(): 두 개의 리스트 이어붙이기
insert(): 기존의 리스트 중간에 항목 삽입
pop(), remove(), del: 항목 삭제
count(): 리스트에 포함된 특정 항목이 몇 번 나타나는지 세어 줌.
index(): 특정 항목의 인덱스가 몇 번인지 확인해 줌.
오늘의 주요 예제
data 디렉토리에 위치한
scores_list.txt 파일은 선수 여덟 명의 점수를 담고 있다.
txt
Name Score
player1 21.09
player2 20.32
player3 21.81
player4 22.97
player5 23.29
player6 22.09
player7 21.20
player8 22.16
목표: 위 파일로부터 1~3등 선수의 점수를 아래와 같이 확인하기
txt
1등 23.29
2등 22.97
3등 22.16
참조: Head First Programming(한빛미디어) 4장
준비 사항
파일에 저장된 데이터를 불러오거나 파일에 데이터를 저장하는 방법에 대한 설명은
여기를
참조한다.
리스트의 정의와 기초적인 활용법에 대한 자세한 설명은
여기를
참조한다.
파일에 저장된 데이터 불러오기
즉, 첫째 줄은 선수이름(Name)과 점수(Score)의 항목이 표시되어 있으며
둘째 줄부터 선수이름과 점수가 작성되어 있다.
위 파일의 내용을 아래와 같이 파이썬 코드로 확인할 수 있다.
End of explanation
"""
result_f = open("data/scores_list.txt")
for line in result_f:
print(line.strip()) # strip 메소드 활용하기
result_f.close()
"""
Explanation: 주의사항
줄 사이에 새로운 줄이 포함된 이유는 파일을 작성하면서 줄바꾸기를 할 때 사용하는 엔터에 의해 줄바꾸기 기호(\n)가
각 줄의 맨 끝에 포함되기 때문이다.
따라서 줄바꾸기를 한 번 더 하는 것을 방지하기 위해서 strip 메소드를 활용하는 것이 좋다.
End of explanation
"""
file = open('data/no_file.txt')
"""
Explanation: 주의사항
strip 메소드를 활용하여 데이터를 보다 깔끔하게 정리하는 것은 좋은 버릇이다.
하지만 반드시 필요한 것은 아닐 수도 있기 때문에 사용여부를 판단해야 한다.
경우에 따라 strip 메소드를 사용해도 되고 그렇지 않아도 된다.
이제 1등 점수를 확인하고자 한다. 이때 이전에 배운 예외처리 기술을 활용해보자.
예제
아래 예제는 없는 파일을 open 함수로 열려고 할 때 발생하는 문제를 처리하는 기술이다.
먼저 없는 파일을 열려고 할 때 오류가 발생함을 확인하자.
End of explanation
"""
try:
file = open('data/no_file.txt')
except:
print("열고자 하는 파일이 존재하지 않습니다.")
"""
Explanation: 이런 경우에는 열고자 하는 파일이 존재하지 않는다는 정보를 전달하는 것이 단순히 오류가 발생하면서 실행이 멈추는 것보다 훨씬 유익하다.
End of explanation
"""
'Name Score'.split()
"""
Explanation: 1, 2, 3등 점수 확인하기
1등 점수 확인하기
앞서 파일 내용을 확인해 보았듯 각 줄마다 선수이름과 점수가 공백을 사이로 두고 각 줄에 적혀 있다.
따라서 아래와 같이 split 메소드를 활용하여 각 줄을 쪼개어 두 번째 항목을 확인할 수 있다.
주의사항
split 메소드의 기능을 확인해야 한다.
예를 들어 Name Score라는 문자열을 공백을 기준으로 쪼개면 길이가 두 개의 단어로 구성된 리스트가 생성된다.
End of explanation
"""
result_f = open("data/scores_list.txt")
for line in result_f:
record = line.split()
print(record[1])
result_f.close()
"""
Explanation: 파일의 각 줄이 동일한 모양을 갖고 있다는 점에 착안하여 아래와 같이 각줄의 내용 중에서 점수에 해당하는 부분을
아래와 같이 확인할 수 있다.
주의: 리스트의 색인도 문자열의 경우처럼 0부터 시작한다. 따라서 리스트의 둘째 항목의 색인은 1인다.
End of explanation
"""
result_f = open("data/scores_list.txt")
highest_score = 0 # 1등 점수 저장
for line in result_f:
record = line.split()
try: # 첫줄 제외 용도
score = float(record[1])
except:
continue
if highest_score < score: # 1등 점수 갱신 경우 확인
highest_score = score
else:
continue
result_f.close()
print("1등 점수는", highest_score, "입니다.")
"""
Explanation: 그런데 첫째 줄 내용은 점수를 비교하는 데에 필요없다.
따라서 무시하는 방법을 사용하도록 하자.
특정 줄을 무시하는 방법은 여러 기술이 있지만 여기서는 try ... except ... 명령문을 이용한 예외처리 기술을 활용한다.
주의:
여기서 예외처리 기술을 이용하는 이유는 다음과 같다.
* split 메소드로 쪼개진 값들은 모두 문자열로 처리된다.
* 하지만 점수를 비교하기 위해서는 부동소수점으로 형변환 시키는 것이 좋다.
* 그런데 첫째 줄에 float 함수를 적용하면 오류가 발생한다.
* 따라서 오류가 발생할 때 프로그램의 실행을 멈추지 않고 다른 일을 하도록 예외처리를 해주어야 한다.
* 아래 코드에서는 float 함수를 실행할 때 오류가 발생하면 무시하고 다음 줄로 넘어가는 식으로 오류처리를 하였다.
End of explanation
"""
result_f = open("data/scores_list.txt")
highest_score = 0
second_highest_score = 0 # 2등 점수 저장
for line in result_f:
record = line.split()
try:
score = float(record[1])
except:
continue
if highest_score < score: # 1, 2등 점수 갱신 경우 확인
second_highest_score = highest_score
highest_score = score
elif second_highest_score < score: # 2등 점수 갱신 경우 확인
second_highest_score = score
else:
continue
result_f.close()
print("1등 점수는", highest_score, "입니다.")
print("2등 점수는", second_highest_score, "입니다.")
"""
Explanation: 2등 점수 확인하기
2등 점수까지 확인하려면 2등 점수를 기억할 변수가 하나 더 필요하며
확인된 점수가 기존의 1등 점수보다 큰지, 2등 점수보다 큰지 여부에 따라 1, 2등 점수를 기억하는 변수의 값들을
업데이트 해야 한다.
End of explanation
"""
result_f = open("data/scores_list.txt")
highest_score = 0
second_highest_score = 0
third_highest_score = 0 # 3등 점수 저장
for line in result_f:
record = line.split()
try:
score = float(record[1])
except:
continue
if highest_score < score: # 1, 2, 3등 점수 갱신 확인
third_highest_score = second_highest_score
second_highest_score = highest_score
highest_score = score
elif second_highest_score < score: # 2, 3등 점수 갱신 확인
third_highest_score = second_highest_score
second_highest_score = score
elif third_highest_score < score: # 3등 점수 갱신 확인
third_highest_score = score
else:
continue
result_f.close()
print("1등 점수는", highest_score, "입니다.")
print("2등 점수는", second_highest_score, "입니다.")
print("3등 점수는", third_highest_score, "입니다.")
"""
Explanation: 3등 점수 확인하기
이제 3등 점수까지 확인하려면 코드를 더 많이 수정해야 하며, 더 많은 변수와 조건문을 사용해야 한다.
End of explanation
"""
result_f = open("data/scores_list.txt")
score_list = [] # 점수 저장 리스트 생성
for line in result_f:
(name, score) = line.split() # 각 줄을 두 단어의 리스트로 쪼개기
try:
score_list.append(float(score)) # 첫째 줄 제외. 숫자만 scores 리스트에 추가
except:
continue
result_f.close()
score_list.sort() # 리스트를 크기순으로 정렬(오름차순)
score_list.reverse() # 리스트의 항목들의 순서 뒤집기
print("The top scores were:")
print(score_list[0]) # 0번 색인값 = 1등 점수
print(score_list[1]) # 1번 색인값 = 2등 점수
print(score_list[2]) # 2번 색인값 = 3등 점수
"""
Explanation: 나쁜 프로그래밍
앞서 1등까지, 2등까지, 3등까지 점수를 확인하는 코드는 각자 다르며, 점처 길어지고 복잡해졌다.
코드를 이런 식으로 구현하면 안된다.
무엇보다도 원하는 등수에 따라 코드 자체가 수정되어야 하는 방식으로 프로그래밍을 하면 절대 안된다.
그럼 어떻게 할까?
앞선 코드의 근본적인 문제점은 각 선수의 점수를 따라따로 관리하기 때문에 발생한다.
따라서 선수의 점수를 모아서 한꺼번에 처리하는 기술이 요구된다.
여기서는 리스트 자료형을 활용하여 원하는 등수와 선수의 수에 상관없이 동일한 코드로 원하는 결과를
리턴하는 프로그램을 구현하고자 한다.
리스트 활용
몇 등 점수를 알아내야 하는가와 상관없이 모든 질문에 답을 하는 하나의 프로그램을 리스트를 활용하여
구현하고자 하며, 아이디어는 다음과 같다.
서핑 대회 참가선수들의 점수만을 따로 모아 놓은 리스트를 생성한다.
리스트의 항목들을 숫자크기 역순으로 정렬(sorting)한다.
역순, 즉 내림차순으로 정렬된 리스트의 색인을 이용하여 원하는 등수의 점수를 확인한다.
기본 아이디어
질문: 그렇다면 점수만 뽑아서 모은 다음에 점수들을 순서대로 나열하는 방법이 있으면 좋지 않을까?
답: 매우 그렇다.
방법: split(), append() 메소드를 아래와 같이 for 문과 함께 활용하면 됨.
End of explanation
"""
result_f = open("data/scores_list.txt")
score_list = []
for line in result_f:
(name, score) = line.split()
try:
score_list.append(float(score))
except:
continue
result_f.close()
score_list.sort(reverse=True) # 리스트를 내림차순으로 정렬
print("The top scores were:")
print(score_list[0])
print(score_list[1])
print(score_list[2])
"""
Explanation: 주의사항
위 코드의 4번 줄에 사용된 line.split()이 선수이름과 점수를 쪼개는 과정이다.
아래 코드는 위 코드를 좀 더 세련되게 구현한 것이다.
아래 코드의 4번 줄 내용은 split() 메소드를 이용하여 선수 이름과 점수로 쪼개진
각각의 값을 갖는 변수를 동시에 선언하고 있다.
(주의: split()의 결과로 길이가 2인 리스트를 얻는다는 것을 미리 예상하였음에 주의하라.)
python
(name, score) = line.split()
위와 같이 하면 다음 처럼 한 것과 동일한 일을 하게 된다.
python
name = line.split()[0]
score = line.split()[1]
주의사항
아래 두 줄의 코드는 리스트를 내림차순으로 정렬한다.
python
score_list.sort()
score_list.reverse()
위 두 줄의 코드를 아래와 같이 한 줄로 구현할 수 있다.
python
score_list.sort(reverse=True)
End of explanation
"""
def ranking(rank): # 원하는 등수를 인자로 사용
result_f = open("data/scores_list.txt")
score_list = []
for line in result_f:
(name, score) = line.split()
try:
score_list.append(float(score))
except:
continue
result_f.close()
score_list.sort(reverse=True)
return score_list[rank-1] # 원하는 등수의 점수 리턴
"""
Explanation: 함수 활용
앞서 살펴본 코드를 함수로 추상화하면 원하는 등수의 점수를 함수호출로 간단하게 확인할 수 있다.
주의: 함수의 정의화 기초적인 활용법에 대한 자세한 설명은
여기를
참조한다.
End of explanation
"""
print(ranking(1), ranking(2), ranking(3))
"""
Explanation: 이제 1, 2, 3등 점수를 가볍게 확인 할 수 있다.
End of explanation
"""
empty_list = []
"""
Explanation: 연습문제
연습
End of explanation
"""
len(empty_list)
"""
Explanation: 빈 리스트의 길이는 0이다.
End of explanation
"""
empty_list[0]
"""
Explanation: 빈 리스트는 아무 것도 포함하지 않는다.
따라서 0번 인덱스 값도 없다.
End of explanation
"""
empty_list = list()
"""
Explanation: 주의
빈 리스트를 아래와 같이 작성할 수도 있다.
End of explanation
"""
a_singleton = [[]]
"""
Explanation: 반면에 아래 리스트는 빈 리스트가 아니다.
End of explanation
"""
len(a_singleton)
"""
Explanation: 위 리스트는 빈 리스트를 포함한 리스트이다.
따라서 길이가 1이다.
End of explanation
"""
a_singleton[0]
"""
Explanation: 포함된 유일한 항목은 빈 리스트이다.
End of explanation
"""
a_nested_list = [1, 2, [3, 4], [[5, 6, 7], 8]]
"""
Explanation: 연습
리스트는 중첩을 허용한다.
아래 리스트는 3중 리스트이다.
End of explanation
"""
a_nested_list[1]
"""
Explanation: 첫째, 둘째 항목은 정수인 1과 2이다.
셋쩨 항목은 3과 4로 이루어진 길이가 2인 리스트 [3, 4]이다.
넷째 항목은 리스트 [5, 6, 7]과 정수 8로 이루어진 리스트 [[5, 6, 7], 8]이다.
질문: 위 리스트에서 2를 인덱스로 얻는 방법은?
견본답안:
End of explanation
"""
a_nested_list[2]
"""
Explanation: 질문: [3, 4]를 인덱스로 얻는 방법은?
견본답안:
End of explanation
"""
a_nested_list[2][0]
"""
Explanation: 질문: 3을 인덱스로 얻는 방법은?
견본답안: 인덱스를 연속해서 적용한다.
End of explanation
"""
a_nested_list[3][0]
"""
Explanation: 질문: [5, 6, 7]을 인덱스로 얻는 방법은?
견본답안: 역시 인덱스를 연속해서 적용한다.
End of explanation
"""
a_nested_list[3][0][2]
"""
Explanation: 질문: 7을 인덱스로 얻는 방법은?
견본답안: 역시 인덱스를 연속해서 적용한다.
End of explanation
"""
animals = ['dog', 'cat', 'pig']
"""
Explanation: 연습: 슬라이싱과 인덱싱의 차이점
아래 예제는 슬라이싱과 인덱싱의 작동방식이 다르다는 것을 잘 보여준다.
동물들의 리스트 animals를 아래와 같이 정의하자.
End of explanation
"""
animals[1] = ['tiger', 'lion', 'rabbit']
"""
Explanation: 이제 인덱싱을 사용하여 1번 색인값으로 cat 대신에 새로운 리스트인 ['tiger', 'lion', 'rabbit']를 지정해보자.
End of explanation
"""
animals
"""
Explanation: 그러면 animals는 이 경우 2중 리스트가 된다.
End of explanation
"""
animals = ['dog', 'cat', 'pig']
animals[1:2] = ['tiger', 'lion', 'rabbit']
"""
Explanation: 반면에 아래와 같이 슬라이싱을 사용하면 전혀 다른 결과를 얻는다.
End of explanation
"""
animals
"""
Explanation: 슬라이싱을 활용하면 2중 리스트 대신에 확장된 리스트를 얻게 된다.
End of explanation
"""
animals[2:4] = []
animals
"""
Explanation: 슬라이싱을 활용하여 특정 항목을 삭제할 수도 있다.
예를 들어, 2번 ~ 3번 색인값인 tiger와 lion을 삭제하려면 아래와 같이 할 수 있다.
End of explanation
"""
animals = ['dog', 'cat', 'pig']
"""
Explanation: 연습: 리스트의 중요 메소드 활용
End of explanation
"""
animals.append('coq')
animals
"""
Explanation: Just as the order of the characters in a string matters, the order of the items in a list is absolutely important.
Unlike strings, lists can be modified (they are mutable).
For example, the append() method adds a single item to the end of a list.
End of explanation
"""
animals.append(['eagle', 'bear'])
animals
"""
Explanation: When you want to add several items at once, do not assume that you can simply use the append() method as shown below.
End of explanation
"""
animals.remove(['eagle', 'bear'])
animals
"""
Explanation: What the code above did is append another list to the original list as its last item.
If instead you want to add the two items eagle and bear to the original list, either call the append() method twice
or use the extend() method.
First, let us remove the item that was just added.
End of explanation
"""
animals.extend(['eagle', 'bear'])
animals
"""
Explanation: The extend() method is used as follows.
End of explanation
"""
animals[1] = 'cow'
animals
"""
Explanation: Two lists can also be combined using the addition operator (+). Note, however, that this does not modify the original list; it creates a new one.
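For instance (an illustrative snippet, not part of the original code):
python
new_animals = animals + ['eagle', 'bear']   # a new list; animals itself is unchanged
animals.extend(['eagle', 'bear'])           # extend() modifies animals in place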
Besides adding and removing items, you can also change an item itself.
Let us change cat to cow.
The method is simple: use indexing.
End of explanation
"""
animals.index('pig')
"""
Explanation: To find the index of an item contained in a list, use the index() method.
End of explanation
"""
animals.append('pig')
animals
animals.index('pig')
"""
Explanation: Note: if 'pig' appears more than once, the index() method returns the smallest index.
End of explanation
"""
animals.pop()
animals.pop(2)
"""
Explanation: The pop() method removes the last item when called without an argument; when an index is given as an argument, it removes the item at that index.
End of explanation
"""
animals
"""
Explanation: Check how the list assigned to animals has changed.
End of explanation
"""
animals.insert(5, 'leopard')
animals
"""
Explanation: To add an item at a specific index position, use the insert() method.
End of explanation
"""
animals.insert(2, 'hamster')
print(animals)
removed_pet = animals.remove('hamster')
print(animals)
"""
Explanation: Note: pay attention to the return value of each method.
pop(): returns the item that was removed from the list.
append(), remove(), insert() and the like modify the existing list but return None, i.e. they return nothing.
Note: the pop() method removes either the item at a given index or, with no argument, only the last item. To delete a specific item when you do not know its index, use the remove()
method.
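A quick illustration (hypothetical snippet, not part of the original code):
python
last = animals.pop()            # pop() returns the removed item
result = animals.append('owl')  # append() returns None
print(last, result)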
End of explanation
"""
animals.remove('hamster')
animals
"""
Explanation: Note:
If an item occurs several times, the remove() method deletes only the leftmost occurrence.
To delete more occurrences it must be called again.
remove(), index() and similar methods raise an error when the item to be removed or looked up is not in the list.
End of explanation
"""
del animals[-1]
animals
animals_sample = ['dog']
del animals_sample
animals_sample
"""
Explanation: In addition, the del statement can be used to delete part of a list or the entire list.
Note: del (a statement, not a method) must be used with great care. A mistake can delete the data itself from memory.
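For example, a slice of a list can be deleted in a single statement (illustrative snippet):
python
del animals[0:2]   # removes the first two items in place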
End of explanation
"""
print('기존 동물 리스트: ', animals)
animals.reverse()
print('뒤집어진 동물 리스트: ', animals)
"""
Explanation: The reverse() method reverses the order of a list in place.
End of explanation
"""
print('기존 동물 리스트', animals)
animals.sort()
print('정렬된 동물 리스트', animals)
"""
Explanation: The sort() method can be used to sort the items of a list.
Numbers are sorted by magnitude.
Strings are sorted in dictionary (lexicographic) order.
End of explanation
"""
animals.append('horse')
print(animals)
print(sorted(animals))
print(animals)
"""
Explanation: Note:
The sort() and reverse() methods change the original list itself.
If you want a sorted or reversed copy without touching the original list, use the sorted() or reversed() functions (not methods).
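A brief illustration (hypothetical snippet, not part of the original code):
python
print(sorted(animals))           # a new sorted list; animals is unchanged
print(list(reversed(animals)))   # a new reversed list; animals is unchanged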
End of explanation
"""
menu_input = 0
name_list = []
while menu_input != 9:
    print("==========")
    print("1. Check registration")
    print("2. Register")
    print("3. Cancel registration")
    print("4. Change a participant's name")
    print("5. Show the participant list")
    print("6. Show the number of participants")
    print("9. Quit")
    print("==========")
    try:
        menu_input = int(input("Enter the number of the desired menu item: "))
        if menu_input == 1:
            name = input("Enter the name to check: ")
            if name in name_list:
                print("The name is on the participant list.")
            else:
                print("The name is not on the participant list.")
        if menu_input == 2:
            name = input("Enter the name to register: ")
            name_list.append(name)
        if menu_input == 3:
            name = input("Enter the name to cancel: ")
            name_list.remove(name)
        if menu_input == 4:
            name = input("Enter the registered name: ")
            name_re = input("Enter the corrected name: ")
            index = name_list.index(name)
            name_list[index] = name_re
        if menu_input == 5:
            print("The participant list is", name_list)
        if menu_input == 6:
            print("The number of participants is", len(name_list))
        if menu_input == 9:
            print("Exiting the program.")
            break
    except:
        print("Please enter an integer.")
"""
Explanation: We will not look into the reversed() function in detail.
Exercise
A simple program for managing the list of participants who register for a meeting can be implemented as follows.
End of explanation
"""
|
hektor-monteiro/python-notebooks
|
aula-10_Eq_nao_lineares.ipynb
|
gpl-2.0
|
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return 2-x-np.exp(-x)
x = np.linspace(-10, 10, 400)
y = f(x)
plt.figure()
plt.plot(x, y)
# adjust the scale to visualize the possible roots
plt.figure()
plt.plot(x, y)
plt.hlines(0,x.min(),x.max(),colors='C1',linestyles='dashed')
plt.ylim(-5,5)
# numerical implementation of fixed-point iteration
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return 2-x-np.exp(-x)
xplot = np.linspace(-10, 10, 400)
yplot = f(xplot)
# initial guess for the iterative method
x = 1
for i in range(10):
x = 2 - np.exp(-x)
print(x)
# plot the function to show the solution
plt.figure()
plt.plot(xplot, yplot)
plt.hlines(0,xplot.min(),xplot.max(),colors='C1',linestyles='dashed')
plt.vlines(x,yplot.min(),yplot.max(),colors='C1',linestyles='dashed')
plt.ylim(-5,5)
"""
Explanation: Nonlinear Equations and Roots
Many of the equations we encounter in physics are not linear, and several problems we are interested in solving are described by nonlinear equations. These equations are generally more laborious to handle numerically. Here we present some techniques for solving this type of equation.
Given a continuous real function f, we want to find a solution x that satisfies the nonlinear equation:
$$ f(x) = 0 $$
In general, the search for a solution can be organized in 3 steps:
find a region where solutions of the equation may exist and, if possible, isolate intervals that contain exactly 1 solution;
given an interval of interest containing a solution, determine an initial approximation $x_{0}$ of the solution for each interval;
starting from the initial approximation, build a sequence $x_n$ that in principle should converge to the solution.
In general the methods used are iterative.
Fixed-point method
For details on this method see: https://pt.wikipedia.org/wiki/Itera%C3%A7%C3%A3o_de_ponto_fixo
In short, this method consists of rewriting the function of interest so that we obtain an expression of the form $u(x)=x$.
For example, consider $f(x)=2-x-e^{-x}$. We can rewrite the expression as:
$$ x = 2-e^{-x} $$
Before starting the iterative procedure, let us analyze the function graphically:
End of explanation
"""
# numerical fixed-point implementation using the alternative expression
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return 2-x-np.exp(-x)
xplot = np.linspace(-10, 10, 400)
yplot = f(xplot)
# initial guess for the iterative method
x = -1
for i in range(10):
x = -np.log(2-x)
print(x)
# plot the function to show the solution
plt.figure()
plt.plot(xplot, yplot)
plt.hlines(0,xplot.min(),xplot.max(),colors='C1',linestyles='dashed')
plt.vlines(x,yplot.min(),yplot.max(),colors='C1',linestyles='dashed')
plt.ylim(-5,5)
"""
Explanation: Note that the function has two roots; however, with this iteration we cannot converge to the negative root. One alternative is to find different ways of writing the expression $u(x)=x$. In our example we can use:
$$ x = -\ln(2-x) $$
"""
import numpy as np
def f(x):
return x**2 - x - 1
# define the initial interval
a = 1.0
b = 2.0
# check whether a root is bracketed by the initial interval
if f(a)*f(b) >= 0:
    print("the function does not change sign on the initial interval.")
a_n = a
b_n = b
N = 10 # number of iterations
for n in range(N):
    m_n = (a_n + b_n)/2 # midpoint of the current interval
    f_m_n = f(m_n) # value of f(x) at the midpoint
if f(a_n)*f_m_n < 0:
a_n = a_n
b_n = m_n
elif f(b_n)*f_m_n < 0:
a_n = m_n
b_n = b_n
else:
print("não foi encontrada raiz.")
print('The root found was: %8.6f +/- %8.6f'%(m_n,(b-a)/2**(N+1)))
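# Optional cross-check (not in the original notebook): SciPy's bracketing root
# finders agree with the hand-written bisection for f(x) = x**2 - x - 1 on [1, 2],
# whose root is the golden ratio.
from scipy.optimize import bisect, brentq
print('scipy bisect: %8.6f' % bisect(f, a, b))
print('scipy brentq: %8.6f' % brentq(f, a, b))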
"""
Explanation: http://docs.scipy.org/doc/scipy/reference/optimize.html
Bisection method
One of the simplest methods for finding roots of equations is the bisection method. The algorithm can be used for any continuous function on an interval over which the function changes sign. The basic idea is: 1) split the interval into two halves; 2) check in which half the sign change occurs; and 3) repeat until the desired precision is reached.
This method does not produce an exact solution, and the error committed is related to the size of the intervals after $N$ bisections:
$$\left| \ x_{\text{real}} - x_N \, \right| \leq \frac{b-a}{2^{N+1}}$$
See more details here: https://pt.wikipedia.org/wiki/M%C3%A9todo_da_bisse%C3%A7%C3%A3o
End of explanation
"""
# Example application of the Newton–Raphson method
import numpy as np
def f(x):
return x**2 - x - 1
def df(x):
return 2*x - 1
# define the initial point x_0
x0 = 1.
# set the tolerance with which the root will be determined
eps = 1.0e-3
x = x0
count=0
while abs(f(x)) > eps:
x = x - f(x)/df(x)
print('solution for iteration %i is %f'%(count,x))
count += 1
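# Illustrative variant (not from the original notebook): the same Newton-Raphson
# iteration with an explicit iteration limit, so that a poorly chosen starting
# point cannot loop forever. The max_iter value is an assumption for this sketch.
def newton_raphson(f, df, x0, eps=1.0e-3, max_iter=50):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) <= eps:
            break
        x = x - f(x)/df(x)
    return x

print(newton_raphson(f, df, 1.0))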
"""
Explanation: Newton–Raphson method
This is another well-known and widely used root-finding method. It is one of the fastest methods; however, there is no guarantee of convergence to a solution.
Essentially, the method searches for the root starting from an initial value $x_0$, at which the equation of the tangent line is computed using the derivative.
We use the fact that, for a given function $f(x)$, the slope of the tangent line at the point $x_0$ is $f'(x_0)$. With this we can write:
taking $y=ax+b$ we obtain $y = f(x_0) + f'(x_0)(x - x_0)$. Since we want an approximation to the root, we need the point where this line intersects the $x$ axis, so we set $y=0$. This gives: $ x = x_0 - \frac{f(x_0)}{f'(x_0)}$
To find the root we repeat this procedure until $f(x_0)$ is close enough to zero. The recurrence formula is then given by:
$$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} $$
For a more detailed description of the method see:
https://pt.wikipedia.org/wiki/M%C3%A9todo_de_Newton%E2%80%93Raphson
Let us look at an example below
End of explanation
"""
|
LeoArruda/Titanic
|
Titanic Predict.ipynb
|
apache-2.0
|
import warnings
warnings.filterwarnings('ignore')
# SKLearn Model Algorithms
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression , Perceptron
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC
# SKLearn ensemble classifiers
from sklearn.ensemble import RandomForestClassifier , GradientBoostingClassifier
from sklearn.ensemble import ExtraTreesClassifier , BaggingClassifier
from sklearn.ensemble import VotingClassifier , AdaBoostClassifier
# SKLearn Modelling Helpers
from sklearn.preprocessing import Imputer , Normalizer , scale
from sklearn.cross_validation import train_test_split , StratifiedKFold
from sklearn.feature_selection import RFECV
# Handle table-like data and matrices
import numpy as np
import pandas as pd
# Visualisation
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
# plot functions
import pltFunctions as pfunc
# Configure visualisations
%matplotlib inline
mpl.style.use( 'ggplot' )
sns.set_style( 'white' )
pylab.rcParams[ 'figure.figsize' ] = 8 , 6
"""
Explanation: Challenge understanding
Objective
Predict survival on Titanic dataset
Competition Description
The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
https://www.kaggle.com/c/titanic
Initial Idea
Load Library Modules
Load Datasets
Explore datasets
Analyse relations between features
Analyse missing values
Analyse features
Prepare for modelling
Modelling
Prepare the prediction for submission
1. Loading Library Modules
End of explanation
"""
train = pd.read_csv("./input/train.csv")
test = pd.read_csv("./input/test.csv")
#combined = pd.concat([train.drop('Survived',1),test])
#combined = train.append( test, ignore_index = True)
full = train.append( test, ignore_index = True)
del train, test
#train = full[ :891 ]
#combined = combined.drop( 'Survived',1)
#print ('Datasets:' , 'combined:' , combined.shape , 'full:' , full.shape , 'train:' , train.shape)
"""
Explanation: 2. Loading Datasets
End of explanation
"""
full.head(10)
print(full.isnull().sum())
pd.crosstab(full['Pclass'], full['Sex'])
print( full.groupby(['Sex','Pclass'])['Age'].mean() )
agedf = full.groupby(['Sex','Pclass'])['Age'].mean()
type( agedf )
#for age in full:
# if full['Age'].isnull():
# print (agedf.where(agedf['Sex'] == full['Sex'])&(agedf['Pclass']==full['Pclass']))
def fillMissingAge(dframe):
dframe['Age'] = dframe['Age'].fillna( dframe['Age'].mean())
return dframe
def fillMissingFare(dframe):
dframe['Fare'] = dframe['Fare'].fillna( dframe['Fare'].mean() )
return dframe
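# Alternative sketch (an assumption, not the method applied below): impute missing
# ages with the mean age of each Sex/Pclass group, matching the group means printed
# above, instead of the single overall mean used by fillMissingAge.
def fillMissingAgeByGroup(dframe):
    dframe['Age'] = dframe.groupby(['Sex', 'Pclass'])['Age'].transform(
        lambda s: s.fillna(s.mean()))
    return dframe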
full = fillMissingAge(full)
full = fillMissingFare(full)
print(full.isnull().sum())
print(full[full['Embarked'].isnull()])
pd.crosstab(full['Embarked'], full['Sex'].where(full['Sex'] == 1))
full.where((full['Sex']==1) & (full['Pclass']==1)).groupby(['Embarked','Pclass','Parch','SibSp']).size()
nt=(115+60+291)
pC=115/nt
pQ=60/nt
pS=291/nt
print('Prob C :', pC, 'Prob Q :', pQ ,'Prob S :' , pS)
nC=(30+2+20)
p0C=30/nC
p0Q=2/nC
p0S=20/nC
print('Prob C :', p0C, 'Prob Q :', p0Q ,'Prob S :' , p0S)
print( 'Sum of probabilities')
print('Prob C :', pC+p0C, 'Prob Q :', pQ+p0Q ,'Prob S :' , pS+p0S)
# Trying S for both passengers
full['Embarked'].iloc[61] = "S"
full['Embarked'].iloc[829] = "S"
print(full.isnull().sum())
def fillCabin(dframe):
dframe[ 'Cabin' ] = dframe['Cabin'].fillna( 'U' )
dframe[ 'Cabin' ] = dframe[ 'Cabin' ].map( lambda c : c[0] )
# dummy encoding ...
dframe = pd.get_dummies( dframe['Cabin'] , prefix = 'Cabin' )
return dframe
print(fillCabin(full))
newDF = fillCabin(full)
full = pd.concat([full, newDF], axis=1)
#full = full.drop('Cabin',1)
full
#print( full.where((full['Sex'] == 0) & (full['Pclass'] == 1)).groupby(['Pclass','Sex'])['Age'].mean() )
print( full['Sex'].isnull().sum() )
#byTicket = full.where(full['Cabin'].isnull()).groupby(['Name'])['Ticket']
#byFare = full.where(full['Cabin'].isnull()).groupby(['Pclass'])['Fare']
#byTicket.head(5)
#byFare.head(5)
full = pfunc.convertSexToNum(full)
full.head()
# Naming the Deck according to the Cabin description
# Naming the Deck as U due to unknown Cabin description
full = pfunc.fillDeck(full)
pd.crosstab(full['Deck'], full['Survived'])
print(full.isnull().sum())
print("========================================")
print(full.info())
print(pfunc.featureEng( full ))
full = pfunc.featureEng( full )
#pfunc.pltCorrel( combined )
#pfunc.pltCorrel( full )
#pfunc.pltCorrel( full )
"""
Explanation: 3. Exploring datasets
End of explanation
"""
# Plot distributions of Age of passengers who survived or did not survive
#pfunc.pltDistro( train , var = 'Age' , target = 'Survived' , row = 'Sex' )
# Plot distributions of Fare of passengers who survived or did not survive
#pfunc.pltDistro( train , var = 'Survived' , target = 'Pclass' , row = 'Sex' )
# Plot distributions of Parch of passengers who survived or did not survive
#pfunc.pltDistro( train , var = 'Parch' , target = 'Survived' , row = 'Sex' )
full.head(5)
# Plot distributions of Age of passengers who survived or did not survive
#pfunc.pltCategories( train , cat = 'Embarked' , target = 'Survived' )
#pfunc.pltCategories( train , cat = 'Pclass' , target = 'Survived' )
#pfunc.pltCategories( train , cat = 'Sex' , target = 'Survived' )
#pfunc.pltCategories( train , cat = 'Parch' , target = 'Survived' )
#pfunc.pltCategories( train , cat = 'SibSp' , target = 'Survived' )
#pfunc.pltDistro( train , var = 'Age' , target = 'Survived' , row = 'Sex' )
full = full.drop('Survived',1)
def getTitles(dframe):
dframe['Title'] = dframe['Name'].map(lambda name:name.split(',')[1].split('.')[0].strip())
myDict = { "Capt": "Officer",
"Col": "Officer",
"Major": "Officer",
"Dr": "Officer",
"Rev": "Officer",
"Lady" : "Royalty",
"Jonkheer": "Royalty",
"Don": "Royalty",
"Sir" : "Royalty",
"the Countess":"Royalty",
"Dona": "Royalty",
"Mme": "Mrs",
"Mlle": "Miss",
"Ms": "Mrs",
"Mr" : "Mr",
"Mrs" : "Mrs",
"Miss" : "Miss",
"Master" : "Master"
}
dframe['Title'] = dframe.Title.map(myDict)
return dframe
full = getTitles(full)
full.head()
# plot functions
import pltFunctions as pfunc
train_X, test_X, target_y = pfunc.prepareTrainTestTarget(full)
#train_valid_X = full[ 0:891 ]
#train_valid_y = full.Survived
#test_X = full[ 891: ]
#train_X , valid_X , train_y , valid_y = train_test_split( train_X , train_valid_y , train_size = .7 )
print (full.shape , train_X.shape , target_y.shape , test_X.shape)
model = RandomForestClassifier(n_estimators=100)
#model = SVC()
model.fit( train_X , target_y )
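# Quick sanity check (not in the original notebook): cross-validated accuracy of
# the same estimator on the training data, using the sklearn.cross_validation
# module already imported above.
from sklearn.cross_validation import cross_val_score
cv_scores = cross_val_score(RandomForestClassifier(n_estimators=100), train_X, target_y, cv=5)
print(cv_scores.mean(), cv_scores.std())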
"""
Explanation: Correlations to Investigate
Pclass is correlated with Fare (1st class tickets would be more expensive than those of other classes)
Pclass x Age
SibSp x Age
SibSp x Fare
SibSp is correlated with Parch (large families would have high values of parents aboard, while solo travellers would have zero parents aboard)
Pclass noticeably correlates with Survived (higher classes are expected to have better survival odds, as is well known)
End of explanation
"""
|
akhambhati/rs-NMF_CogControl
|
Analysis_Notebooks/e01-Measure_Dynamic_Functional_Networks.ipynb
|
gpl-3.0
|
try:
%load_ext autoreload
%autoreload 2
%reset
except:
print 'NOT IPYTHON'
from __future__ import division
import os
import sys
import glob
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import statsmodels.api as sm
import scipy.io as io
import h5py
import matplotlib
import matplotlib.pyplot as plt
echobase_path = '/Users/akhambhati/Developer/hoth_research/Echobase'
#echobase_path = '/data/jag/akhambhati/hoth_research/Echobase'
sys.path.append(echobase_path)
import Echobase
convert_conn_vec_to_adj_matr = Echobase.Network.Transforms.configuration.convert_conn_vec_to_adj_matr
convert_adj_matr_to_cfg_matr = Echobase.Network.Transforms.configuration.convert_adj_matr_to_cfg_matr
rcParams = Echobase.Plotting.fig_format.update_rcparams(matplotlib.rcParams)
path_Remotes = '/Users/akhambhati/Remotes'
#path_Remotes = '/data/jag/bassett-lab/akhambhati'
path_CoreData = path_Remotes + '/CORE.fMRI_cogcontrol.medaglia'
path_PeriphData = path_Remotes + '/RSRCH.NMF_CogControl'
path_ExpData = path_PeriphData + '/e01-FuncNetw'
path_AtlasData = path_Remotes + '/CORE.MRI_Atlases'
path_Figures = './e01-Figures'
for path in [path_CoreData, path_PeriphData, path_ExpData, path_Figures]:
if not os.path.exists(path):
print('Path: {}, does not exist'.format(path))
os.makedirs(path)
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Measure-Dynamic-Functional-Connectivity" data-toc-modified-id="Measure-Dynamic-Functional-Connectivity-1"><span class="toc-item-num">1 </span>Measure Dynamic Functional Connectivity</a></div><div class="lev2 toc-item"><a href="#Initialize-Environment" data-toc-modified-id="Initialize-Environment-11"><span class="toc-item-num">1.1 </span>Initialize Environment</a></div><div class="lev2 toc-item"><a href="#Load-CoreData" data-toc-modified-id="Load-CoreData-12"><span class="toc-item-num">1.2 </span>Load CoreData</a></div><div class="lev2 toc-item"><a href="#Compute-Functional-Connectivity" data-toc-modified-id="Compute-Functional-Connectivity-13"><span class="toc-item-num">1.3 </span>Compute Functional Connectivity</a></div><div class="lev3 toc-item"><a href="#Functional-Connectivity-FuncDef" data-toc-modified-id="Functional-Connectivity-FuncDef-131"><span class="toc-item-num">1.3.1 </span>Functional Connectivity FuncDef</a></div><div class="lev3 toc-item"><a href="#Process-Navon" data-toc-modified-id="Process-Navon-132"><span class="toc-item-num">1.3.2 </span>Process Navon</a></div><div class="lev3 toc-item"><a href="#Process-Stroop" data-toc-modified-id="Process-Stroop-133"><span class="toc-item-num">1.3.3 </span>Process Stroop</a></div><div class="lev2 toc-item"><a href="#Generate-Population-Configuration-Matrix" data-toc-modified-id="Generate-Population-Configuration-Matrix-14"><span class="toc-item-num">1.4 </span>Generate Population Configuration Matrix</a></div><div class="lev3 toc-item"><a href="#Dictionary-of-all-adjacency-matrices" data-toc-modified-id="Dictionary-of-all-adjacency-matrices-141"><span class="toc-item-num">1.4.1 </span>Dictionary of all adjacency matrices</a></div><div class="lev3 toc-item"><a href="#Create-Lookup-Table-and-Full-Configuration-Matrix" data-toc-modified-id="Create-Lookup-Table-and-Full-Configuration-Matrix-142"><span class="toc-item-num">1.4.2 </span>Create Lookup-Table and Full Configuration Matrix</a></div><div class="lev2 toc-item"><a href="#Checking-Correlation-Biases" data-toc-modified-id="Checking-Correlation-Biases-15"><span class="toc-item-num">1.5 </span>Checking Correlation Biases</a></div><div class="lev3 toc-item"><a href="#Across-Subjects" data-toc-modified-id="Across-Subjects-151"><span class="toc-item-num">1.5.1 </span>Across Subjects</a></div><div class="lev3 toc-item"><a href="#Positive-vs-Negative" data-toc-modified-id="Positive-vs-Negative-152"><span class="toc-item-num">1.5.2 </span>Positive vs Negative</a></div><div class="lev3 toc-item"><a href="#Fixation-vs-Task" data-toc-modified-id="Fixation-vs-Task-153"><span class="toc-item-num">1.5.3 </span>Fixation vs Task</a></div><div class="lev3 toc-item"><a href="#Within-Experiment-(Hi-vs-Lo)" data-toc-modified-id="Within-Experiment-(Hi-vs-Lo)-154"><span class="toc-item-num">1.5.4 </span>Within Experiment (Hi vs Lo)</a></div><div class="lev3 toc-item"><a href="#Between-Experiment-(Stroop-vs-Navon)" data-toc-modified-id="Between-Experiment-(Stroop-vs-Navon)-155"><span class="toc-item-num">1.5.5 </span>Between Experiment (Stroop vs Navon)</a></div><div class="lev3 toc-item"><a href="#Performance-Between-Experiment" data-toc-modified-id="Performance-Between-Experiment-156"><span class="toc-item-num">1.5.6 </span>Performance Between Experiment</a></div><div class="lev1 toc-item"><a href="#System-Level-Connectivity" data-toc-modified-id="System-Level-Connectivity-2"><span class="toc-item-num">2 </span>System-Level Connectivity</a></div><div class="lev2 
toc-item"><a href="#Assign-Lausanne-to-Yeo-Systems" data-toc-modified-id="Assign-Lausanne-to-Yeo-Systems-21"><span class="toc-item-num">2.1 </span>Assign Lausanne to Yeo Systems</a></div><div class="lev2 toc-item"><a href="#System-Level-Adjacency-Matrices" data-toc-modified-id="System-Level-Adjacency-Matrices-22"><span class="toc-item-num">2.2 </span>System-Level Adjacency Matrices</a></div><div class="lev3 toc-item"><a href="#Plot-Population-Average-Adjacency-Matrices-(Expr-+-Pos/Neg)" data-toc-modified-id="Plot-Population-Average-Adjacency-Matrices-(Expr-+-Pos/Neg)-221"><span class="toc-item-num">2.2.1 </span>Plot Population Average Adjacency Matrices (Expr + Pos/Neg)</a></div><div class="lev3 toc-item"><a href="#Construct-System-Adjacency-Matrices" data-toc-modified-id="Construct-System-Adjacency-Matrices-222"><span class="toc-item-num">2.2.2 </span>Construct System Adjacency Matrices</a></div><div class="lev2 toc-item"><a href="#Check-Contrasts" data-toc-modified-id="Check-Contrasts-23"><span class="toc-item-num">2.3 </span>Check Contrasts</a></div><div class="lev3 toc-item"><a href="#Stroop-vs-Navon" data-toc-modified-id="Stroop-vs-Navon-231"><span class="toc-item-num">2.3.1 </span>Stroop vs Navon</a></div><div class="lev3 toc-item"><a href="#Lo-vs-Hi" data-toc-modified-id="Lo-vs-Hi-232"><span class="toc-item-num">2.3.2 </span>Lo vs Hi</a></div>
# Measure Dynamic Functional Connectivity
## Initialize Environment
End of explanation
"""
# Load BOLD
df_navon = io.loadmat('{}/NavonBlockedSeriesScale125.mat'.format(path_CoreData), struct_as_record=False)
df_stroop = io.loadmat('{}/StroopBlockedSeriesScale125.mat'.format(path_CoreData), struct_as_record=False)
n_subj = 28
n_fix_block = 12 # Disregard the final fixation block
n_tsk_block = 6
n_roi = 262
bad_roi = [242]
n_good_roi = n_roi-len(bad_roi)
# Load Motion Data
df_motion = {'Stroop': io.loadmat('{}/StroopMove.mat'.format(path_CoreData))['move'][:, 0],
'Navon': io.loadmat('{}/NavonMove.mat'.format(path_CoreData))['move'][:, 0]}
# Load Behavioral Data
df_blk = io.loadmat('{}/BlockwiseDataCorrectTrialsOnly.mat'.format(path_CoreData))
bad_subj_ix = [1, 6]
good_subj_ix = np.setdiff1d(np.arange(n_subj+2), bad_subj_ix)
df_perf = {'Stroop': {'lo': {'accuracy': df_blk['StroopData'][good_subj_ix, 1, :],
'meanRT': df_blk['StroopData'][good_subj_ix, 4, :],
'medianRT': df_blk['StroopData'][good_subj_ix, 5, :]},
'hi': {'accuracy': df_blk['StroopData'][good_subj_ix, 0, :],
'meanRT': df_blk['StroopData'][good_subj_ix, 2, :],
'medianRT': df_blk['StroopData'][good_subj_ix, 3, :]}
},
'Navon' : {'lo': {'accuracy': df_blk['NavonData'][good_subj_ix, 1, :],
'meanRT': df_blk['NavonData'][good_subj_ix, 4, :],
'medianRT': df_blk['NavonData'][good_subj_ix, 5, :]},
'hi': {'accuracy': df_blk['NavonData'][good_subj_ix, 0, :],
'meanRT': df_blk['NavonData'][good_subj_ix, 2, :],
'medianRT': df_blk['NavonData'][good_subj_ix, 3, :]}
}
}
"""
Explanation: Load CoreData
End of explanation
"""
def comp_fconn(bold, alpha=0.05, dependent=False):
n_roi, n_tr = bold.shape
adj = np.arctanh(np.corrcoef(bold))
cfg_vec = convert_adj_matr_to_cfg_matr(adj.reshape(-1, n_roi, n_roi))[0, :]
# Separate edges based on sign
cfg_vec_pos = cfg_vec.copy()
cfg_vec_pos[cfg_vec_pos < 0] = 0
cfg_vec_neg = -1*cfg_vec.copy()
cfg_vec_neg[cfg_vec_neg < 0] = 0
adj_pos = convert_conn_vec_to_adj_matr(cfg_vec_pos)
adj_neg = convert_conn_vec_to_adj_matr(cfg_vec_neg)
return adj_pos, adj_neg
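# Minimal sanity check (illustrative, not part of the original analysis): applying
# comp_fconn to random data should return two square, non-negative adjacency
# matrices. The array sizes below are arbitrary assumptions for this example.
_test_bold = np.random.randn(10, 200)
_adj_pos, _adj_neg = comp_fconn(_test_bold)
print(_adj_pos.shape)
print((_adj_pos >= 0).all() and (_adj_neg >= 0).all())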
"""
Explanation: Compute Functional Connectivity
Functional Connectivity FuncDef
End of explanation
"""
for subj_id in xrange(n_subj):
proc_item = '{}/Subject_{}.Navon'.format(path_ExpData, subj_id)
print(proc_item)
adj_dict = {'lo': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))}
},
'hi': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
}}
# Process Fixation Blocks
cnt = 0
for fix_block in xrange(n_fix_block):
data = np.array(df_navon['data'][subj_id][fix_block].NFix, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
if (fix_block % 2) == 0:
adj_dict['lo']['fix']['pos'][cnt, :, :], adj_dict['lo']['fix']['neg'][cnt, :, :] = comp_fconn(data)
if (fix_block % 2) == 1:
adj_dict['hi']['fix']['pos'][cnt, :, :], adj_dict['hi']['fix']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
# Process Task Blocks
cnt = 0
for tsk_block in xrange(n_tsk_block):
# Low demand
data = np.array(df_navon['data'][subj_id][tsk_block].NS, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['lo']['task']['pos'][cnt, :, :], adj_dict['lo']['task']['neg'][cnt, :, :] = comp_fconn(data)
# High demand
data = np.array(df_navon['data'][subj_id][tsk_block].S, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['hi']['task']['pos'][cnt, :, :], adj_dict['hi']['task']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
np.savez(proc_item, adj_dict=adj_dict)
"""
Explanation: Process Navon
End of explanation
"""
for subj_id in xrange(n_subj):
proc_item = '{}/Subject_{}.Stroop'.format(path_ExpData, subj_id)
print(proc_item)
adj_dict = {'lo': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))}
},
'hi': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
}}
# Process Fixation Blocks
cnt = 0
for fix_block in xrange(n_fix_block):
data = np.array(df_stroop['data'][subj_id][fix_block].SFix, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
if (fix_block % 2) == 0:
adj_dict['lo']['fix']['pos'][cnt, :, :], adj_dict['lo']['fix']['neg'][cnt, :, :] = comp_fconn(data)
if (fix_block % 2) == 1:
adj_dict['hi']['fix']['pos'][cnt, :, :], adj_dict['hi']['fix']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
# Process Task Blocks
cnt = 0
for tsk_block in xrange(n_tsk_block):
# Low demand
data = np.array(df_stroop['data'][subj_id][tsk_block].IE, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['lo']['task']['pos'][cnt, :, :], adj_dict['lo']['task']['neg'][cnt, :, :] = comp_fconn(data)
# High demand
data = np.array(df_stroop['data'][subj_id][tsk_block].E, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['hi']['task']['pos'][cnt, :, :], adj_dict['hi']['task']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
np.savez(proc_item, adj_dict=adj_dict)
"""
Explanation: Process Stroop
End of explanation
"""
expr_dict = {}
for expr_id in ['Stroop', 'Navon']:
df_list = glob.glob('{}/Subject_*.{}.npz'.format(path_ExpData, expr_id))
for df_subj in df_list:
subj_id = int(df_subj.split('/')[-1].split('.')[0].split('_')[1])
if subj_id not in expr_dict.keys():
expr_dict[subj_id] = {}
expr_dict[subj_id][expr_id] = df_subj
"""
Explanation: Generate Population Configuration Matrix
Dictionary of all adjacency matrices
End of explanation
"""
# Generate a dictionary of all key names
cfg_key_names = ['Subject_ID', 'Experiment_ID', 'Condition_ID', 'Task_ID', 'CorSign_ID', 'Block_ID']
cfg_key_label = {'Subject_ID': np.arange(n_subj),
'Experiment_ID': ['Stroop', 'Navon'],
'Condition_ID': ['lo', 'hi'],
'Task_ID': ['fix', 'task'],
'CorSign_ID': ['pos', 'neg'],
'Block_ID': np.arange(n_tsk_block)}
cfg_obs_lut = np.zeros((len(cfg_key_label[cfg_key_names[0]]),
len(cfg_key_label[cfg_key_names[1]]),
len(cfg_key_label[cfg_key_names[2]]),
len(cfg_key_label[cfg_key_names[3]]),
len(cfg_key_label[cfg_key_names[4]]),
len(cfg_key_label[cfg_key_names[5]])))
# Iterate over all cfg key labels and generate a LUT matrix and a config matrix
key_cnt = 0
cfg_matr = []
for key_0_ii, key_0_id in enumerate(cfg_key_label[cfg_key_names[0]]):
for key_1_ii, key_1_id in enumerate(cfg_key_label[cfg_key_names[1]]):
adj_dict = np.load(expr_dict[key_0_id][key_1_id])['adj_dict'][()]
for key_2_ii, key_2_id in enumerate(cfg_key_label[cfg_key_names[2]]):
for key_3_ii, key_3_id in enumerate(cfg_key_label[cfg_key_names[3]]):
for key_4_ii, key_4_id in enumerate(cfg_key_label[cfg_key_names[4]]):
for key_5_ii, cfg_vec in enumerate(convert_adj_matr_to_cfg_matr(adj_dict[key_2_id][key_3_id][key_4_id])):
cfg_obs_lut[key_0_ii, key_1_ii, key_2_ii,
key_3_ii, key_4_ii, key_5_ii] = key_cnt
cfg_matr.append(cfg_vec)
key_cnt += 1
cfg_matr = np.array(cfg_matr)
cfg_matr_orig = cfg_matr.copy()
# Normalize sum of edge weights to 1
cfg_L1 = np.linalg.norm(cfg_matr, axis=1, ord=1)
cfg_L1[cfg_L1 == 0] = 1.0
cfg_matr = (cfg_matr.T / cfg_L1).T
# Rescale edge weight to unit L2-Norm
cfg_L2 = np.zeros_like(cfg_matr)
for subj_ii in xrange(len(cfg_key_label['Subject_ID'])):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :].reshape(-1), dtype=int)
cfg_L2[grp_ix, :] = np.linalg.norm(cfg_matr[grp_ix, :], axis=0, ord=2)
cfg_L2[cfg_L2 == 0] = 1.0
cfg_matr = cfg_matr / cfg_L2
np.savez('{}/Population.Configuration_Matrix.npz'.format(path_ExpData),
cfg_matr_orig=cfg_matr_orig,
cfg_matr=cfg_matr,
cfg_L2=cfg_L2,
cfg_obs_lut=cfg_obs_lut,
cfg_key_label=cfg_key_label,
cfg_key_names=cfg_key_names)
"""
Explanation: Create Lookup-Table and Full Configuration Matrix
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = []
for grp_ii in xrange(n_grp):
grp_ix = np.array(cfg_obs_lut[grp_ii, :, :, :, :, :].reshape(-1), dtype=int)
grp_edge_wt.append(np.mean(cfg_matr[grp_ix, :], axis=1))
grp_edge_wt = np.array(grp_edge_wt)
mean_grp_edge_wt = np.mean(grp_edge_wt, axis=1)
grp_ord_ix = np.argsort(mean_grp_edge_wt)[::-1]
### Plot Subject Distribution
print(stats.f_oneway(*(grp_edge_wt)))
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt[grp_ord_ix, :].T, sym='', patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels([])
ax.set_xlabel('Subjects')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Subjects.svg'.format(path_Figures))
plt.show()
"""
Explanation: Checking Correlation Biases
Across Subjects
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['CorSign_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, :, :, grp_ii, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
""
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['CorSign_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.CorSign.svg'.format(path_Figures))
plt.show()
"""
Explanation: Positive vs Negative
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Task_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, :, grp_ii, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Task_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Task.svg'.format(path_Figures))
plt.show()
"""
Explanation: Fixation vs Task
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Condition_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, grp_ii, :, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Condition_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Condition.svg'.format(path_Figures))
plt.show()
"""
Explanation: Within Experiment (Hi vs Lo)
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Experiment_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][grp_ii, :, :, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Experiment_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Expriment.svg'.format(path_Figures))
plt.show()
"""
Explanation: Between Experiment (Stroop vs Navon)
End of explanation
"""
perf_stroop_hi = df_perf['Stroop']['hi']['meanRT'].mean(axis=1)
perf_stroop_lo = df_perf['Stroop']['lo']['meanRT'].mean(axis=1)
perf_stroop_cost = perf_stroop_hi-perf_stroop_lo
perf_navon_hi = df_perf['Navon']['hi']['meanRT'].mean(axis=1)
perf_navon_lo = df_perf['Navon']['lo']['meanRT'].mean(axis=1)
perf_navon_cost = perf_navon_hi-perf_navon_lo
print(stats.ttest_rel(perf_stroop_cost, perf_navon_cost))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot([perf_stroop_cost, perf_navon_cost], patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(2)])
#ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(['Stroop', 'Navon'])
ax.set_xlabel('')
ax.set_ylabel('Reaction Time Cost (Hi-Lo)')
plt.savefig('{}/RT_Cost.Expriment.svg'.format(path_Figures))
plt.show()
"""
Explanation: Performance Between Experiment
End of explanation
"""
import nibabel as nib
df_yeo_atlas = nib.load('{}/Yeo_JNeurophysiol11_MNI152/Yeo2011_7Networks_MNI152_FreeSurferConformed1mm_LiberalMask.nii.gz'.format(path_AtlasData))
yeo_matr = df_yeo_atlas.get_data()[..., 0]
yeo_roi = np.unique(yeo_matr)[1:]
yeo_names = ['VIS', 'SMN', 'DAN', 'VAN', 'LIM', 'FPN', 'DMN']
yeo_xyz = {}
M = df_yeo_atlas.affine[:3, :3]
abc = df_yeo_atlas.affine[:3, 3]
for yeo_id in yeo_roi:
yeo_ijk = np.array(np.nonzero(yeo_matr == yeo_id)).T
yeo_xyz[yeo_id] = M.dot(yeo_ijk.T).T + abc.T
df_laus_atlas = nib.load('{}/Lausanne/ROIv_scale125_dilated.nii.gz'.format(path_AtlasData))
laus_matr = df_laus_atlas.get_data()
laus_roi = np.unique(laus_matr)[1:]
laus_xyz = {}
M = df_laus_atlas.affine[:3, :3]
abc = df_laus_atlas.affine[:3, 3]
for laus_id in laus_roi:
laus_ijk = np.array(np.nonzero(laus_matr == laus_id)).T
laus_xyz[laus_id] = M.dot(laus_ijk.T).T + abc.T
laus_yeo_assign = []
for laus_id in laus_roi:
dists = []
for yeo_id in yeo_roi:
dists.append(np.min(np.sum((yeo_xyz[yeo_id] - laus_xyz[laus_id].mean(axis=0))**2, axis=1)))
laus_yeo_assign.append(yeo_names[np.argmin(dists)])
laus_yeo_assign = np.array(laus_yeo_assign)
pd.DataFrame(laus_yeo_assign).to_csv('{}/Lausanne/ROIv_scale125_dilated.Yeo2011_7Networks_MNI152.csv'.format(path_AtlasData))
# Manually replaced subcortical and cerebellar structures as SUB and CBR, respectively.
"""
Explanation: System-Level Connectivity
Assign Lausanne to Yeo Systems
End of explanation
"""
# Read in Yeo Atlas
df_laus_yeo = pd.read_csv('{}/LausanneScale125.csv'.format(path_CoreData))
df_laus_yeo = df_laus_yeo[df_laus_yeo.Label_ID != bad_roi[0]+1]
system_lbl = np.array(df_laus_yeo['Yeo2011_7Networks'].as_matrix())
system_name = np.unique(df_laus_yeo['Yeo2011_7Networks'])
n_system = len(system_name)
n_roi = len(system_lbl)
triu_ix, triu_iy = np.triu_indices(n_roi, k=1)
sys_triu_ix, sys_triu_iy = np.triu_indices(n_system, k=0)
# Reorder System Labels and Count ROIs per System
system_srt_ix = np.argsort(system_lbl)
system_cnt = np.array([len(np.flatnonzero(system_lbl == sys_name))
for sys_name in system_name])
system_demarc = np.concatenate(([0], np.cumsum(system_cnt)))
np.savez('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData),
df_laus_yeo=df_laus_yeo,
yeo_lbl=system_lbl,
yeo_name=system_name,
sort_laus_to_yeo=system_srt_ix,
yeo_adj_demarc=system_demarc,
laus_triu=np.triu_indices(n_roi, k=1),
yeo_triu=np.triu_indices(n_system, k=0))
"""
Explanation: System-Level Adjacency Matrices
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
plt.figure(figsize=(5,5));
cnt = 0
for expr_ii, expr_id in enumerate(df['cfg_key_label'][()]['Experiment_ID']):
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
grp_ix = np.array(cfg_obs_lut[:, expr_ii, :, :, :, :][:, :, :, sgn_ii, :].reshape(-1), dtype=int)
sel_cfg_matr = cfg_matr[grp_ix, :].mean(axis=0)
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
# Plot
ax = plt.subplot(2, 2, cnt+1)
mat = ax.matshow(adj_yeo,
cmap='magma', vmin=0.025)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='w', lw=0.5)
ax.hlines(xx, 0, n_laus, color='w', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_title('{}-{}'.format(expr_id, sgn_id), fontsize=5.0)
cnt += 1
plt.show()
"""
Explanation: Plot Population Average Adjacency Matrices (Expr + Pos/Neg)
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_matr = df['cfg_matr']
# Compute Brain System Adjacency Matrices
sys_adj_matr = np.zeros((cfg_matr.shape[0], n_system, n_system))
for sys_ii, (sys_ix, sys_iy) in enumerate(zip(sys_triu_ix, sys_triu_iy)):
sys1 = system_name[sys_ix]
sys2 = system_name[sys_iy]
sys1_ix = np.flatnonzero(system_lbl[triu_ix] == sys1)
sys2_iy = np.flatnonzero(system_lbl[triu_iy] == sys2)
inter_sys_ii = np.intersect1d(sys1_ix, sys2_iy)
if len(inter_sys_ii) == 0:
sys1_ix = np.flatnonzero(system_lbl[triu_ix] == sys2)
sys2_iy = np.flatnonzero(system_lbl[triu_iy] == sys1)
inter_sys_ii = np.intersect1d(sys1_ix, sys2_iy)
mean_conn_sys1_sys2 = np.mean(cfg_matr[:, inter_sys_ii], axis=1)
sys_adj_matr[:, sys_ix, sys_iy] = mean_conn_sys1_sys2
sys_adj_matr[:, sys_iy, sys_ix] = mean_conn_sys1_sys2
np.savez('{}/Full_Adj.Yeo2011_7Networks.npz'.format(path_ExpData),
sys_adj_matr=sys_adj_matr,
cfg_obs_lut=df['cfg_obs_lut'],
cfg_key_label=df['cfg_key_label'],
cfg_key_names=df['cfg_key_names'])
"""
Explanation: Construct System Adjacency Matrices
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
coef_ix = np.array(cfg_obs_lut, dtype=int)
cfg_matr_reshape = cfg_matr[coef_ix, :]
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
sel_cfg_matr = (cfg_matr_reshape[:, :, :, 1, sgn_ii, :, :]).mean(axis=-2).mean(axis=-2)
sel_cfg_matr_tv = np.nan*np.zeros(cfg_matr.shape[1])
sel_cfg_matr_pv = np.nan*np.zeros(cfg_matr.shape[1])
for cc in xrange(cfg_matr.shape[1]):
tv, pv = stats.ttest_rel(*sel_cfg_matr[:, :, cc].T)
mean_stroop = np.mean(sel_cfg_matr[:, :, cc], axis=0)[0]
mean_navon = np.mean(sel_cfg_matr[:, :, cc], axis=0)[1]
dv = (mean_stroop - mean_navon) / np.std(sel_cfg_matr[:, :, cc].reshape(-1))
sel_cfg_matr_tv[cc] = dv
sel_cfg_matr_pv[cc] = pv
sig_pv = Echobase.Statistics.FDR.fdr.bhp(sel_cfg_matr_pv, alpha=0.05, dependent=True)
sel_cfg_matr_tv[sig_pv == False] = 0.0
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr_tv)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
adj_yeo[np.diag_indices_from(adj_yeo)] = np.nan
# Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
mat = ax.matshow(adj_yeo,
cmap='PuOr', vmin=-1.0, vmax=1.0)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='k', lw=0.5)
ax.hlines(xx, 0, n_laus, color='k', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
plt.savefig('{}/Contrast.Expr.{}.svg'.format(path_Figures, sgn_id))
plt.show()
"""
Explanation: Check Contrasts
Stroop vs Navon
End of explanation
"""
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
for expr_ii, expr_id in enumerate(df['cfg_key_label'][()]['Experiment_ID']):
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
coef_ix = np.array(cfg_obs_lut, dtype=int)
cfg_matr_reshape = cfg_matr[coef_ix, :]
sel_cfg_matr = cfg_matr_reshape[:, expr_ii, :, 1, sgn_ii, :, :].mean(axis=-2)
sel_cfg_matr_tv = np.nan*np.zeros(cfg_matr.shape[1])
sel_cfg_matr_pv = np.nan*np.zeros(cfg_matr.shape[1])
for cc in xrange(cfg_matr.shape[1]):
tv, pv = stats.ttest_rel(*sel_cfg_matr[:, :, cc].T)
mean_lo = np.mean(sel_cfg_matr[:, :, cc], axis=0)[0]
mean_hi = np.mean(sel_cfg_matr[:, :, cc], axis=0)[1]
dv = (mean_hi - mean_lo) / np.std(sel_cfg_matr[:, :, cc].reshape(-1))
sel_cfg_matr_tv[cc] = dv
sel_cfg_matr_pv[cc] = pv
sig_pv = Echobase.Statistics.FDR.fdr.bhp(sel_cfg_matr_pv, alpha=0.05, dependent=True)
sel_cfg_matr_tv[sig_pv == False] = np.nan
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr_tv)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
adj_yeo[np.diag_indices_from(adj_yeo)] = np.nan
# Plot
plt.figure(figsize=(3,3), dpi=300)
ax = plt.subplot(111)
mat = ax.matshow(adj_yeo,
cmap='coolwarm', vmin=-0.5, vmax=0.5)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='k', lw=0.5)
ax.hlines(xx, 0, n_laus, color='k', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_title('{}-{}'.format(expr_id, sgn_id), fontsize=5.0)
plt.savefig('{}/Contrast.{}.{}.Hi_Lo.svg'.format(path_Figures, expr_id, sgn_id))
plt.show()
"""
Explanation: Lo vs Hi
End of explanation
"""
|
ivazquez/clonal-heterogeneity
|
src/figure5.ipynb
|
mit
|
# Load external dependencies
from setup import *
# Load internal dependencies
import config,plot,utils
%load_ext autoreload
%autoreload 2
%matplotlib inline
"""
Explanation: Supplemental Information:
"Clonal heterogeneity influences the fate of new adaptive mutations"
Ignacio Vázquez-García, Francisco Salinas, Jing Li, Andrej Fischer, Benjamin Barré, Johan Hallin, Anders Bergström, Elisa Alonso-Pérez, Jonas Warringer, Ville Mustonen, Gianni Liti
Figure 5
This IPython notebook is provided for reproduction of Figure 5 of the paper. It can be viewed by copying its URL to nbviewer and it can be run by opening it in binder.
End of explanation
"""
# Load data
loh_length_df = pd.read_csv(dir_data+'seq/loh/homozygosity_length.csv')
loh_length_df = loh_length_df.set_index("50kb_bin_center")
loh_length_df = loh_length_df.reindex(columns=['HU','RM','YPD'])
loh_length_df.head()
"""
Explanation: Data import
Length distribution of homozygosity tracts
End of explanation
"""
# Read csv file containing the competition assay data
loh_fluctuation_df = pd.read_csv(dir_data+'fluctuation/fluctuation_assay_rates.csv')
loh_fluctuation_df = loh_fluctuation_df.sort_values('background', ascending=False)
loh_fluctuation_df = loh_fluctuation_df.groupby(['background','environment'],sort=False)[['mean_LOH_rate','lower_LOH_rate','upper_LOH_rate']].mean()
loh_fluctuation_df = loh_fluctuation_df.ix[['WA/WA','NA/NA','WA/NA']].unstack('background')
loh_fluctuation_df = loh_fluctuation_df.ix[['HU','RM','YPD']]
loh_fluctuation_df
"""
Explanation: Fluctuation assay
Luria-Delbrück fluctuation assay.
End of explanation
"""
fig = plt.figure(figsize=(4,6))
grid = gridspec.GridSpec(nrows=3, ncols=2, height_ratios=[15, 7, 5], hspace=0.7, wspace=0.3)
gs = {}
gs['length'] = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=grid[0,0])
gs['fluctuation'] = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=grid[0,1])
gs[('loh','WAxNA_F12_1_HU_3')] = gridspec.GridSpecFromSubplotSpec(7, 1, subplot_spec=grid[1:2,:], hspace=0)
gs[('loh','WAxNA_F12_2_RM_1')] = gridspec.GridSpecFromSubplotSpec(5, 1, subplot_spec=grid[2:3,:], hspace=0)
### Left panel ###
ax = plt.subplot(gs['length'][:])
ax.text(-0.185, 1.055, 'A', transform=ax.transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
data = loh_length_df.rename(columns=config.selection['short_label'])
kwargs = {
'color': [config.selection['color'][e] for e in loh_length_df.columns]
}
plot.loh_length(data, ax, **kwargs)
### Right panel ###
ax = plt.subplot(gs['fluctuation'][:])
ax.text(-0.2, 1.05, 'B', transform=ax.transAxes,
fontsize=9, fontweight='bold', va='top', ha='right')
data = loh_fluctuation_df['mean_LOH_rate']
kwargs = {
'yerr': loh_fluctuation_df[['lower_LOH_rate','upper_LOH_rate']].T.values,
'color': [config.background['color'][b] for b in loh_fluctuation_df['mean_LOH_rate'].columns]
}
plot.loh_fluctuation(data, ax, **kwargs)
# Axes limits
for ax in fig.get_axes():
ax.xaxis.label.set_size(6)
ax.yaxis.label.set_size(6)
ax.tick_params(axis='both', which='major', size=3, labelsize=6)
ax.tick_params(axis='both', which='minor', size=2, labelsize=4)
plot.save_figure(dir_paper+'figures/figure5/figure5')
plt.show()
"""
Explanation: Figure 5 - Loss of heterozygosity
End of explanation
"""
|
davebshow/DH3501
|
class19.ipynb
|
mit
|
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
g = nx.Graph([("A", "B")])
nx.draw_networkx(g)
"""
Explanation: <div align="left">
<h4><a href="index.ipynb">RETURN TO INDEX</a></h4>
</div>
<div align="center">
<h1><a href="index.ipynb">DH3501: Advanced Social Networks</a><br/><br/><em>Class 19</em>: Bargaining, Stability, and Balance in Networks</h1>
</div>
<div style="float:left">
<b>Western University</b><br/>
<b>Department of Modern Languages and Literatures</b><br/>
<b>Digital Humanities – DH 3501</b><br/>
<br/>
<b>Instructor</b>: David Brown<br/>
<b>E-mail</b>: <a href="mailto:dbrow52@uwo.ca">dbrow52@uwo.ca</a><br/>
<b>Office</b>: AHB 1R14<br/>
</div>
<div style="float:left">
<img style="width:200px; margin-left:100px" src="http://www.bsr.org/images/blog/networks.jpg" />
</div>
What determines an individual's power?
Is it an individual characteristic?
A network property?
"Indeed, as Richard Emerson has observed in his fundamental work on this subject, power is not so much a property of an individual as it is a property of a relation between two individuals -- it makes more sense to study the conditions under which one person has power over another, rather than simply asserting that a particular person is "powerful". E & K, 340
End of explanation
"""
g.add_edges_from([("B", "C"), ("B", "D"), ("D", "E")])
nx.draw_networkx(g)
"""
Explanation: Value in relationship
If we assume that a relationship holds some sort of value, how is that value divided?
Think about it...what kinds of value could a relationship hold?
If we think about power in terms of an imbalance in social exchange, how is the value of a relationship distributed based on the power of the individuals of the network?
Where does power come from?
Network Exchange Theory addresses questions of social imbalance and its relation to network structure.
<img style="float:left; width: 400px" src="img/Nelson_and_bart.gif" />
Principles of power
End of explanation
"""
g = nx.Graph([("A", "B")])
nx.draw_networkx(g)
plt.title("2-Node Path")
g = nx.Graph([("A", "B"), ("B", "C")])
nx.draw_networkx(g)
plt.title("3-Node Path")
g = nx.Graph([("A", "B"), ("B", "C"), ("C", "D")])
nx.draw_networkx(g)
plt.title("4-Node Path")
g = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")])
nx.draw_networkx(g)
plt.title("5-Node Path")
"""
Explanation: Dependence - if relationships confer value, nodes A and C are completely dependent on node B for value.
Exclusion - node B can easily exclude node A or C from the value conferred by the network.
Satiation - at a certain point, nodes like B begin to see diminishing returns and only maintain relations from which they can receive an unequal share of the value.
Betweenness - can confer power; this sort of centrality allows nodes like B to take advantage of structural holes and also to control the flow of information throughout the network. Note: high betweenness does not always confer an advantage in bargaining situations (as we will soon see).
Experimental methodology: Riddle me this...
<img style="float:left; width: 300px" src="img/experiment_comic.jpg" />
Recall the experimental methodology typically used to study power and exchange? Get together with your pods and refresh your memories...there are five steps.
Are the results of these experiments considered to be robust?
Why or why not (according to E & K)?
Application: The following visualizations show 4 commonly tested paths. What were the experimental results for each path?
End of explanation
"""
g = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("C", "D")])
nx.draw_networkx(g)
plt.title("Triangle with outlier")
"""
Explanation: How about power in a network that looks like this?
End of explanation
"""
g = nx.Graph([("A", "B"), ("B", "C"), ("C", "A")])
nx.draw_networkx(g)
plt.title("Triangle")
"""
Explanation: Or this?
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/ukesm1-0-mmh/aerosol.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-mmh', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NERC
Source ID: UKESM1-0-MMH
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework in the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
ernestyalumni/MLgrabbag
|
LogReg-sklearn.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # take the first two features. # EY : 20160503 type(X) is numpy.ndarray
Y = iris.target # EY : 20160503 type(Y) is numpy.ndarray
h = .02 # step size in the mesh
print "X shape: %s, Y shape: %s" % X.shape, Y.shape
logreg = linear_model.LogisticRegression(C=1e5)
# we create an instance of Neighbours Classifier and fit the data.
logreg.fit(X,Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mest [x_min, x_max]x[y_min, y_max]
x_min, x_max = X[:,0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:,1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4,3))
plt.pcolormesh(xx,yy,Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
"""
Explanation: Logistic Regression
cf. sklearn.linear_model.LogisticRegression documentation
Let's take a look at the examples in the LogisticRegression documentation of sklearn.
The Logistic Regression 3-class Classifier¶ has been credited to
Code source: Gaël Varoquaux
Modified for documentation by Jaques Grobler
License: BSD 3 clause
End of explanation
"""
import os
print os.getcwd()
print os.path.abspath("./") # find out "where you are" and "where Data folder is" with these commands
"""
Explanation: Loading files and dealing with local I/O
End of explanation
"""
ex2data1 = np.loadtxt("./Data/ex2data1.txt",delimiter=',') # you, the user, may have to change this, if the directory that you're running this from is somewhere else
ex2data2 = np.loadtxt("./Data/ex2data2.txt",delimiter=',')
X_ex2data1 = ex2data1[:,0:2]
Y_ex2data1 = ex2data1[:,2]
X_ex2data2 = ex2data2[:,:2]
Y_ex2data2 = ex2data2[:,2]
logreg.fit(X_ex2data1,Y_ex2data1)
def trainingdat2mesh(X,marginsize=.5, h=0.2):
rows, features = X.shape
ranges = []
for feature in range(features):
minrange = X[:,feature].min()-marginsize
maxrange = X[:,feature].max()+marginsize
ranges.append((minrange,maxrange))
if len(ranges) == 2:
xx, yy = np.meshgrid(np.arange(ranges[0][0], ranges[0][1], h), np.arange(ranges[1][0], ranges[1][1], h))
return xx, yy
else:
return ranges
xx_ex2data1, yy_ex2data1 = trainingdat2mesh(X_ex2data1,h=0.2)
Z_ex2data1 = logreg.predict(np.c_[xx_ex2data1.ravel(),yy_ex2data1.ravel()])
Z_ex2data1 = Z_ex2data1.reshape(xx_ex2data1.shape)
plt.figure(2)
plt.pcolormesh(xx_ex2data1,yy_ex2data1,Z_ex2data1)
plt.scatter(X_ex2data1[:, 0], X_ex2data1[:, 1], c=Y_ex2data1, edgecolors='k')
plt.show()
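# A quick sanity check (not part of the original exercise): mean training accuracy
# of the fitted classifier on ex2data1 via sklearn's LogisticRegression.score method.
print "Training accuracy on ex2data1: %s" % logreg.score(X_ex2data1, Y_ex2data1)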
"""
Explanation: Let's load the data for Exercise 2 of Machine Learning, taught by Andrew Ng, of Coursera.
End of explanation
"""
logreg.predict_proba(np.array([[45,85]])).flatten()
print "The student has a probability of no admission of %s and probability of admission of %s" % tuple( logreg.predict_proba(np.array([[45,85]])).flatten() )
"""
Explanation: Get the probability estimates; say a student has an Exam 1 score of 45 and an Exam 2 score of 85.
End of explanation
"""
logreg2 = linear_model.LogisticRegression()
logreg2.fit(X_ex2data2,Y_ex2data2)
xx_ex2data2, yy_ex2data2 = trainingdat2mesh(X_ex2data2,h=0.02)
Z_ex2data2 = logreg2.predict(np.c_[xx_ex2data2.ravel(),yy_ex2data2.ravel()])
Z_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)
plt.figure(3)
plt.pcolormesh(xx_ex2data2,yy_ex2data2,Z_ex2data2)
plt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')
plt.show()
"""
Explanation: Let's change the regularization strength via the C parameter/option of LogisticRegression; call this new classifier logreg2.
End of explanation
"""
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_features = PolynomialFeatures(degree=6,include_bias=False)
pipeline = Pipeline([("polynomial_features", polynomial_features),("logistic_regression",logreg2)])
pipeline.fit(X_ex2data2,Y_ex2data2)
Z_ex2data2 = pipeline.predict(np.c_[xx_ex2data2.ravel(),yy_ex2data2.ravel()])
Z_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)
plt.figure(3)
plt.pcolormesh(xx_ex2data2,yy_ex2data2,Z_ex2data2)
plt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')
plt.show()
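# A quick check (not part of the original exercise): mean training accuracy of the
# degree-6 polynomial pipeline on ex2data2, using the Pipeline.score method.
print "Training accuracy on ex2data2: %s" % pipeline.score(X_ex2data2, Y_ex2data2)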
"""
Explanation: As one can see, the "dataset cannot be separated into positive and negative examples by a straight-line through the plot" (cf. ex2.pdf).
We're going to need to map the features onto polynomial terms.
Use this code: cf. Underfitting vs. Overfitting
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/csir-csiro/cmip6/models/sandbox-2/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-2', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the functions that the snow albedo depends on*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
donaghhorgan/COMP9033
|
labs/08a - k nearest neighbours classification.ipynb
|
gpl-3.0
|
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
"""
Explanation: Lab 08a: $k$ nearest neighbours classification
Introduction
This lab focuses on SMS message spam detection using $k$ nearest neighbours classification. It's a direct counterpart to the rule-based spam detection from Lab 05 and the decision tree models from Lab 07a. At the end of the lab, you should be able to use scikit-learn to:
Create a $k$ nearest neighbours classification model.
Use the model to predict new values.
Measure the accuracy of the model.
Getting started
Let's start by importing the packages we'll need. This week, we're going to use the neighbors subpackage from scikit-learn to build $k$ nearest neighbours models. We'll also use the dummy package to build a baseline model from which we can gauge how good our final model is.
End of explanation
"""
data_file = 'data/sms.csv'
"""
Explanation: Next, let's load the data. Write the path to your sms.csv file in the cell below:
End of explanation
"""
sms = pd.read_csv(data_file, sep='\t', header=None, names=['label', 'message'])
sms.head()
"""
Explanation: Execute the cell below to load the CSV data into a pandas data frame with the columns label and message.
Note: This week, the CSV file is not comma separated, but instead tab separated. We can tell pandas about the different format using the sep argument, as shown in the cell below. For more information, see the read_csv documentation.
End of explanation
"""
sample = sms.sample(frac=0.25, random_state=0) # Randomly subsample a quarter of the available data
X = sample['message']
y = sample['label']
"""
Explanation: Next, let's select our feature ($X$) and target ($y$) variables from the data. Usually, we would use all of the available data but, for speed ($k$ nearest neighbours can be CPU intensive), let's just select a random sample. We can do this using the sample method in pandas, as follows:
End of explanation
"""
KNeighborsClassifier().get_params()
"""
Explanation: $k$ nearest neighbours
Let's build a nearest neighbours classification model of the SMS message data. scikit-learn supports nearest neighbours functionality via the neighbors subpackage. This subpackage supports both nearest neighbours regression and classification. We can use the KNeighborsClassifier class to build our model.
KNeighborsClassifier accepts a number of different hyperparameters and the model we build may be more or less accurate depending on their values. We can get a list of these modelling parameters using the get_params method of the estimator (this works on any scikit-learn estimator), like this:
End of explanation
"""
pipeline = make_pipeline(
TfidfVectorizer(stop_words='english'),
KNeighborsClassifier()
)
# Build models for different values of n_neighbors (k), distance metric and weight scheme
parameters = {
'kneighborsclassifier__n_neighbors': [2, 5, 10],
'kneighborsclassifier__metric': ['manhattan', 'euclidean'],
'kneighborsclassifier__weights': ['uniform', 'distance']
}
# Use inner CV to select the best model
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0) # K = 5
clf = GridSearchCV(pipeline, parameters, cv=inner_cv, n_jobs=-1) # n_jobs=-1 uses all available CPUs = faster
clf.fit(X, y)
# Use outer CV to evaluate the error of the best model
outer_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0) # K = 10, doesn't have to be the same
y_pred = cross_val_predict(clf, X, y, cv=outer_cv)
print(classification_report(y, y_pred)) # Print the classification report
"""
Explanation: You can find a more detailed description of each parameter in the scikit-learn documentation.
Let's use a grid search to select the optimal nearest neighbours classification model from a set of candidates. First, we need to build a pipeline, just as we did last week. Next, we define the parameter grid. Finally, we use a grid search to select the best model via an inner cross validation and an outer cross validation to measure the accuracy of the selected model.
Note: When using grid search with pipelines, we have to adjust the names of our hyperparameters, prepending the name of the class they apply to (in lowercase). This is so that scikit-learn can distinguish which hyperparameters apply to what classes. Below, we prepend the string 'kneighborsclassifier__' to each hyperparameter name because they all apply to the KNeighborsClassifier class.
End of explanation
"""
clf.best_params_
"""
Explanation: The model is much more accurate than the rule-based model from Lab 05, but not as accurate as the random forest model from Lab 07a. Specifically, we can say that:
92% of the messages we labelled as ham were actually ham (precision for ham = 0.92).
100% of the messages we labelled as spam were actually spam (precision for spam = 1.00).
We labelled every actual ham as ham (recall for ham = 1.00).
We labelled 44% of spam as spam (recall for spam = 0.44).
While no ham was misclassified as spam, we only managed to filter 44% of spam messages (fewer than one in every two).
As before, we can check the parameters of the selected model using the best_params_ attribute of the fitted grid search:
End of explanation
"""
|
juanshishido/tufte
|
tufte-in-python.ipynb
|
gpl-2.0
|
%matplotlib inline
import string
import random
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import tufte
"""
Explanation: Tufte
A Jupyter notebook with examples of how to use tufte.
Introduction
Currently, there are four supported plot types:
* bar
* boxplot
* line
* scatter
The designs are based on Edward R. Tufte's designs in The Visual Display of Quantitative Information.
This module is built on top of matplotlib, which means that it's possible to use those functions or methods in conjunction with tufte plots. In addition, an effort has been made to keep most changes to matplotlibrc properties contained within the module. That is, we try not to make global changes that will affect other plots.
Use
Let's start by importing several libraries.
End of explanation
"""
tufte.line(range(3), range(3), figsize=(5, 5))
"""
Explanation: tufte plots can take inputs of several types: list, np.ndarray, pd.Series, and, in some cases, pd.DataFrame.
To create a line plot, do the following. (Note: if you'd like higher resolution plots, use mpl.rc('savefig', dpi=200).)
End of explanation
"""
x = range(1967, 1977 + 1)
y = [310.2, 330, 375, 385, 385.6, 395, 387.5, 380, 392, 407.1, 380]
tufte.line(x, y, figsize=(8, 4))
"""
Explanation: You'll notice that the default Tufte line style includes circle markers with gaps between line segments. You are also able to specify the figure size directly to the line function.
There are several other differences. We'll create another plot below as an example.
End of explanation
"""
np.random.seed(8675309)
fig, ax = tufte.scatter(np.random.randint(5, 95, 100), np.random.randint(1000, 1234, 100), figsize=(8, 4))
plt.title('Title')
ax.set_xlabel('x-axis')
"""
Explanation: First, we use Tufte's range-frame concept, which aims to make the frame (axis) lines "effective data-communicating element[s]" by showing the minimum and maximum values in each axis. This way, the tick labels are more informative. In this example, the range of the outcome variable is 96.9 units (407.1 - 310.2). Similarly, this data covers the years 1967 through 1977, inclusive.
The range-frame is applied to both axes for line and scatter plots.
End of explanation
"""
np.random.seed(8675309)
tufte.bar(range(10),
np.random.randint(1, 25, 10),
label=['First', 'Second', 'Third', 'Fourth', 'Fifth',
'Sixth', 'Seventh', 'Eight', 'Ninth', 'Tenth'],
figsize=(8, 4))
"""
Explanation: You'll also notice that tufte.scatter() returns figure and axis objects. This is true for all tufte plots. With this, we can add a title to the figure and a label to the x-axis, for example. tufte plots are meant to be able to interact with matplotlib functions and methods.
When you need to create a bar plot, do the following.
End of explanation
"""
np.random.seed(8675309)
tufte.bar(range(10),
np.random.randint(1, 25, 10),
label=['First', 'Second', 'Third', 'Fourth', 'Fifth',
'Sixth', 'Lucky 7th', 'Eight', 'Ninth', 'Tenth'],
figsize=(8, 4))
"""
Explanation: A feature of the bar() function is the ability for x-axis labels to auto-rotate. We can see this when we change one of the labels.
End of explanation
"""
n_cols = 10 # Must be less than or equal to 26
size = 100
letters = string.ascii_lowercase
df_dict = defaultdict(list)
for c in letters[:n_cols]:
df_dict[c] = np.random.randint(random.randint(25, 50), random.randint(75, 100), size)
df = pd.DataFrame(df_dict)
tufte.bplot(df, figsize=(8, 4))
"""
Explanation: Tufte's boxplot is, perhaps, the most radical redesign of an existing plot. His approach is to maximize data-ink, the "non-erasable core of a graphic," by removing unnecessary elements. The boxplot removes boxes (which is why we refer to it as bplot()) and caps and simply shows a dot between two lines. This plot currently only takes a list, np.ndarray, or pd.DataFrame.
Let's create a DataFrame.
End of explanation
"""
np.random.seed(8675309)
tufte.scatter(np.random.randn(100), np.random.randn(100), figsize=(8, 4))
"""
Explanation: The dot represents the median and the lines correspond to the top and bottom 25% of the data. The empty space between the lines is the interquartile range.
Issues
Range-Frame
You may have noticed (if you cloned this repo and ran the notebook) that the range-frame feature isn't perfect. It is possible, for example, for a minimum or maximum value to be too close to an existing tick label, causing overlap.
Additionally, in cases where the data in a given dimension (x or y) contains float values, the tick labels are converted to float. (This isn't the issue.)
End of explanation
"""
|
Brunel-Visualization/Brunel
|
python/src/examples/.ipynb_checkpoints/Whiskey-checkpoint.ipynb
|
apache-2.0
|
import pandas as pd
from numpy import log, abs, sign, sqrt
import ibmcognitive
ibmcognitive.brunel.set_brunel_service_url("http://localhost:8080/BrunelServices")
data = pd.read_csv("data/whiskey.csv")
print('Data on whiskies:', ', '.join(data.columns))
"""
Explanation: Whiskey Data
This data set contains data on a small number of whiskies
End of explanation
"""
brunel x(country, category) color(rating) treemap label(name:3) tooltip(#all) style('.label {font-size:7pt}') legends(none):: width=900, height=600
brunel bubble color(rating:red) sort(rating) size(abv) label(name:6) tooltip(#all) filter(price, category) :: height=500
%%brunel
line x(age) y(rating) mean(rating) label(country) split(country) using(interpolate) bin(age:8) color(#selection) legends(none)
| treemap x(category) interaction(select) size(#count) color(#selection) legends(none) sort(#count:ascending) bin(category:9)
tooltip(country) list(country) label(#count) style('.labels .label {font-size:14px}')
:: width=900
%%brunel
bubble label(country:3) bin(country) size(#count) color(#selection) sort(#count) interaction(select) tooltip(name) list(name) legends(none)
| x(abv) y(rating) color(#count:blue) legends(none) bin(abv:8) bin(rating:5) style('symbol:rect; stroke:none; size:100%') at(0,10,70,100)
interaction(select) label(#selection) list(#selection) at(60,15,100,100) tooltip(rating, abv,#count) legends(none)
| bar label(brand:70) list(brand) at(0,0, 100, 10) axes(none) color(#selection) legends(none) interaction(filter)
:: width=900, height=600
"""
Explanation: Summaries
Shown below are the following charts:
A treemap display for each whiskey, broken down by country and category. The cells are colored by the rating, with lower-rated whiskies in blue, and higher-rated in reds. Missing data for ratings show as black.
A filtered chart allowing you to select whiskeys based on price and category
A line chart showing the relationship between age and rating. A simple treemap of categories is linked to this chart
A bubble chart of countries linked to a heatmap of alcohol level (ABV) by rating
End of explanation
"""
from sklearn import tree
D = data[['Name', 'ABV', 'Age', 'Rating', 'Price']].dropna()
X = D[ ['ABV', 'Age', 'Rating'] ]
y = D['Price']
clf = tree.DecisionTreeRegressor(min_samples_leaf=4)
clf.fit(X, y)
D['Predicted'] = clf.predict(X)
f = D['Predicted'] - D['Price']
D['Diff'] = sqrt(abs(f)) * sign(f)
D['LPrice'] = log(y)
%brunel y(diff) x(LPrice) tooltip(name, price, predicted, rating) color(rating) :: width=700
"""
Explanation: Some Analysis
Here we use the scikit-learn decision tree regression tool to predict the price of a whiskey given its age, rating and ABV value.
We transform the output for plotting purposes, but note that the tooltips give the original data
End of explanation
"""
%%brunel bar x(country) y(#count)
| bar color(category) y(#count) polar stack label(category) legends(none)
:: width=900, height=300
"""
Explanation: Simple Linked Charts
End of explanation
"""
|
GoogleCloudPlatform/practical-ml-vision-book
|
04_detect_segment/04ab_retinanet_arthropods_train.ipynb
|
apache-2.0
|
# Use your own GCS bucket here. GCS is required if training on TPU.
# On GPU, a local folder will work.
MODEL_ARTIFACT_BUCKET = 'gs://ml1-demo-martin/arthropod_jobs/'
MODEL_DIR = MODEL_ARTIFACT_BUCKET + str(int(time.time()))
# If you are running on Colaboratory, you must authenticate
# for Colab to have write access to the bucket.
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user()
"""
Explanation: GCS bucket
This bucket will receive:
- Tensorboard summaries that allow you to follow the training
- checkpoints
- the saved model after training
End of explanation
"""
try: # detect TPUs
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError: # detect GPUs or multi-GPU machines
strategy = tf.distribute.MirroredStrategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
"""
Explanation: TPU / GPU detection
End of explanation
"""
TRAIN_DATA_PATH_PATTERN = 'gs://practical-ml-vision-book/arthropod_detection_tfr/size_w1024px/*.train.tfrec'
VALID_DATA_PATH_PATTERN = 'gs://practical-ml-vision-book/arthropod_detection_tfr/size_w1024px/*.test.tfrec'
SPINET_MOBILE_CHECKPOINT = 'gs://practical-ml-vision-book/arthropod_detection_tfr/spinenet49mobile_checkpoint'
# The spinet mobile 49 checkpoint is published here:
# https://github.com/tensorflow/models/blob/master/official/vision/MODEL_GARDEN.md#mobile-size-retinanet-trained-from-scratch
BATCH_SIZE = 32 * strategy.num_replicas_in_sync
EPOCHS = 80
RAW_CLASSES = ['Lepidoptera', 'Hymenoptera', 'Hemiptera', 'Odonata', 'Diptera', 'Araneae', 'Coleoptera',
'_truncated', '_blurred', '_occluded', ]
CLASSES = [klass for klass in RAW_CLASSES if klass not in ['_truncated', '_blurred', '_occluded']]
# Lepidoptera = butterflies and moths
# Hymenoptera = wasps, bees and ants
# Hemiptera = true bugs (cicadas, aphids, shield bugs, ...)
# Odonata = dragonflies
# Diptera = flies
# Araneae = spiders
# Coleoptera = beetles
# NOT IN DATASET
# Orthoptera = grasshoppers
print("Model dir:", MODEL_DIR)
"""
Explanation: Configuration
End of explanation
"""
def count_data_items(filenames):
# the number of data items is written in the name of the .tfrec files, i.e. flowers00-230.tfrec = 230 data items
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
return int(np.sum(n))
TRAIN_FILENAMES = tf.io.gfile.glob(TRAIN_DATA_PATH_PATTERN)
NB_TRAIN_IMAGES = count_data_items(TRAIN_FILENAMES)
STEPS_PER_EPOCH = NB_TRAIN_IMAGES // BATCH_SIZE
VALID_FILENAMES = tf.io.gfile.glob(VALID_DATA_PATH_PATTERN)
NB_VALID_IMAGES = count_data_items(VALID_FILENAMES)
VALID_STEPS = NB_VALID_IMAGES // BATCH_SIZE
print("Training dataset:")
print(f" {len(TRAIN_FILENAMES)} TFRecord files.")
print(f" {NB_TRAIN_IMAGES} images")
print(" Steps per epoch:", STEPS_PER_EPOCH)
print()
print("Validation dataset:")
print(f" {len(VALID_FILENAMES)} TFRecord files.")
print(f" {NB_VALID_IMAGES} images")
print(" Validation steps:", VALID_STEPS)
print()
print("Global batch size:", BATCH_SIZE)
"""
Explanation: Load data files
The dataset is already prepared in TFRecord format.<br/>
The script that prepared the data is in "04aa_retinanet_arthropods_dataprep.ipynb"<br/>
To parse the TFRecord files by hand and visualize their contents, see code in "04ac_retinanet_arthropods_predict.ipynb"
End of explanation
"""
IMAGE_SIZE = [384, 384]
# default parameters can be overriden in two ways:
# 1) params.override({'task': {'model': {'backbone': backbone_cfg.as_dict()}}})
# 2) params.task.model.backbone = backbone_cfg
# params.override checks that the dictionary keys exist
# the second option will silently add new keys
params = tfm.core.exp_factory.get_exp_config('retinanet')
params.task.model.num_classes = len(CLASSES)+1 # class 0 is reserved for backgrounds
params.task.model.input_size = [*IMAGE_SIZE, 3] # this automatically configures the input reader to random crop training images
params.task.init_checkpoint = SPINET_MOBILE_CHECKPOINT
params.task.init_checkpoint_modules = 'backbone'
params.task.model.backbone = tfm.vision.configs.backbones.Backbone(type='spinenet_mobile',
spinenet_mobile=tfm.vision.configs.backbones.SpineNetMobile())
train_data_cfg=tfm.vision.configs.retinanet.DataConfig(
input_path=TRAIN_DATA_PATH_PATTERN,
is_training=True,
global_batch_size=BATCH_SIZE,
parser=tfm.vision.configs.retinanet.Parser(aug_rand_hflip=True, aug_scale_min=0.7, aug_scale_max=2.0))
valid_data_cfg=tfm.vision.configs.retinanet.DataConfig(
input_path=VALID_DATA_PATH_PATTERN,
is_training=False,
global_batch_size=BATCH_SIZE)
params.override({'task': {'train_data': train_data_cfg.as_dict(), 'validation_data': valid_data_cfg.as_dict()}})
trainer_cfg=tfm.core.config_definitions.TrainerConfig(
train_steps=EPOCHS * STEPS_PER_EPOCH,
validation_steps=VALID_STEPS,
validation_interval=8*STEPS_PER_EPOCH,
steps_per_loop=STEPS_PER_EPOCH,
summary_interval=STEPS_PER_EPOCH,
checkpoint_interval=8*STEPS_PER_EPOCH)
optim_cfg = tfm.optimization.OptimizationConfig({
'optimizer': {
'type': 'sgd',
'sgd': {'momentum': 0.9}},
'learning_rate': {'type': 'stepwise',
'stepwise': {'boundaries': [15 * STEPS_PER_EPOCH,
30 * STEPS_PER_EPOCH,
45 * STEPS_PER_EPOCH,
60 * STEPS_PER_EPOCH,
75 * STEPS_PER_EPOCH],
'values': [0.016, #0.01,
0.008, #0.005,
0.004, #0.0025,
0.002, #0.001,
0.001, #0.0005,
0.0005]} #0.00025]}
},
#'warmup': {'type': 'linear','linear': {'warmup_steps': 5*STEPS_PER_EPOCH, 'warmup_learning_rate': 0.00001}}
})
trainer_cfg.override({'optimizer_config': optim_cfg})
params.override({'trainer': trainer_cfg})
pp.pprint(params.as_dict())
"""
Explanation: Model configuration
End of explanation
"""
task = tfm.core.task_factory.get_task(params.task, logging_dir=MODEL_DIR)
# this works too:
#task = official.vision.beta.tasks.retinanet.RetinaNetTask(params.task)
# this returns a RetinaNetModel
#task.build_model()
# note: none of the expected model functionalities work: model.fit(), model.predict(), model.save()
# this returns the training dataset
#train_dataset = task.build_inputs(train_data_cfg)
# note: the dataset already includes FPN level and anchor pairing and is therefore not very readable
# this returns the validation dataset
#valid_dataset = task.build_inputs(valid_data_cfg)
# note: the dataset already includes FPN level and anchor pairing and is therefore not very readable
# this code allows you to see if the TFRecord fields are read correctly
#ds = tf.data.TFRecordDataset(tf.io.gfile.glob(TRAIN_DATA_PATH_PATTERN))
#dec = official.vision.beta.dataloaders.tf_example_decoder.TfExampleDecoder()
#ds = ds.map(dec.decode)
# training and validation data parsing happens in:
# official.vision.beta.dataloaders.retinanet_input.Parser._parse_train_data
# official.vision.beta.dataloaders.retinanet_input.Parser._parse_eval_data
# official.vision.beta.dataloaders.Parser.parse() # dispatches between _parse_train_data and _parse_eval_data
"""
Explanation: Create the model
End of explanation
"""
print(MODEL_DIR)
model,_ = tfm.core.train_lib.run_experiment(
distribution_strategy=strategy,
task=task,
mode="train_and_eval", # 'train', 'eval', 'train_and_eval' or 'continuous_eval'
params=params,
model_dir=MODEL_DIR)
"""
Explanation: Train the model
Training takes approximately 30 minutes on a TPUv3-8 and 40 minutes on a TPUv2-8 on Colab.
End of explanation
"""
tfm.vision.serving.export_saved_model_lib.export_inference_graph(
input_type='image_tensor',
batch_size=4,
input_image_size=IMAGE_SIZE,
params=params,
checkpoint_path=MODEL_DIR,
export_dir=MODEL_DIR,
export_checkpoint_subdir='saved_chkpt',
export_saved_model_subdir='saved_model')
"""
Explanation: Export the model
To test the exported model, please use the notebook "04ac_retinanet_arthropods_predict.ipynb"
End of explanation
"""
|
astyonax/IPyNotebooks
|
quakes.ipynb
|
gpl-2.0
|
#xyz=records[['Latitude','Longitude','Magnitude','Depth/Km','deltaT']].values[1:].T
lxyz=xyz.T.copy()
lxyz=lxyz[:,2:]
lxyz/=lxyz.std(axis=0)
"Magnitude,Depth,deltaT"
print lxyz.shape
l,e,MD=pma.pma(lxyz)
X=pma.get_XY(lxyz,e)
sns.plt.plot(np.cumsum(l)/np.sum(l),'o-')
sns.plt.figure()
sns.plt.plot(e[:,:3])
sns.plt.legend("123")
ax=sns.plt.gca()
ax.xaxis.set_ticks([0,1,2])
ax.xaxis.set_ticklabels(["Magnitude","Depth","Dt"])
sns.plt.scatter(X[0],X[1],s=X[2]*10)
# colors=sns.plt.cm.jet(xyz[2]**.25)
#f,a=sns.plt.subplots(1,1,figsize=(10,8))
x1,x2,x3,x4=records['Latitude'].min(),records['Latitude'].max(),records['Longitude'].min(),records['Longitude'].max()
print x1,x2,x3,x4
m=Basemap(projection='mill',llcrnrlat=x1,urcrnrlat=x2,llcrnrlon=x3/2,urcrnrlon=x4,resolution = 'c')
m.drawcoastlines()
m.drawcountries()
#m.bluemarble()
m.fillcontinents(color="#dbc8b2")
print
txyz=xyz[:,xyz[0]<40]
x,y=m(txyz[1],txyz[0])
m.scatter(x,y,alpha=1,lw=0,s=xyz[2]*10,zorder=1e5,c=colors)
import scipy.spatial.distance as ssd
import scipy
print txyz.shape
pdists=ssd.squareform(ssd.pdist(xyz.T[:,:2]))
#zz=np.asarray([xyz[-1],xyz[-1]]).T
#tdists=ssd.squareform(ssd.pdist(zz,'braycurtis'))
print pdists.shape
#print tdists.shape
mx,my=scipy.meshgrid(xyz[-2],xyz[-2])
tdists=lambda u,v:u-v
tdists=tdists(mx,my)
tdists[tdists<0]=np.nan
print tdists.shape
print (tdists<0).sum()/tdists.shape[0]**2.
d_n_t=pdists/(tdists+1)
print np.isnan(d_n_t).sum()
sns.plt.imshow((d_n_t),origin="bottom")
sns.plt.colorbar()
sns.plt.figure()
#_=sns.plt.hist(np.ma.masked_invalid(d_n_t[:,0]),bins=np.arange(0,6,.2))
#_=sns.plt.hist(np.ma.masked_invalid(pdists[d_n_t[:,0]<2,0]),bins=np.arange(0,6,.2))
# colors=sns.plt.cm.jet(xyz[2]**.25)
#f,a=sns.plt.subplots(1,1,figsize=(10,8))
x1,x2,x3,x4=records['Latitude'].min(),records['Latitude'].max(),records['Longitude'].min(),records['Longitude'].max()
print x1,x2,x3,x4
m=Basemap(projection='mill',llcrnrlat=x1,urcrnrlat=x2,llcrnrlon=x3/2,urcrnrlon=x4,resolution = 'c')
m.drawcoastlines()
m.drawcountries()
#m.bluemarble()
m.fillcontinents(color="#dbc8b2")
print txyz.shape
trs=.25
t=np.where(d_n_t[0]>trs)[0]
t2=np.where(d_n_t[t[0]]>trs)[0]
t3=np.where(d_n_t[t2[0]]<trs)[0]
tfxyz=xyz[:,t3]
x,y=m(tfxyz[1],tfxyz[0])
N=x.shape[0]
colors=sns.plt.cm.BrBG(np.arange(0.,N,1.)/N)
m.scatter(x,y,alpha=1,lw=0,s=30,zorder=1e5,c=colors)
m.scatter(x[0],y[0],alpha=1,lw=0,s=80,zorder=1e5,c=colors)
pl=sns.plt
pl.figure()
pl.scatter(tfxyz[0],tfxyz[1])
"""
Explanation: Linear PCA
End of explanation
"""
from sklearn.decomposition import KernelPCA
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=100, fit_inverse_transform=1)
kpca_in =xyz[:,:2]
kpca_out=scikit_kpca.fit_transform(kpca_in)
kpca_out_inv=scikit_kpca.inverse_transform(kpca_out)
print "doing pca"
l,e,_=pma.pma(kpca_in)
pca_out=np.asarray(pma.get_XY(kpca_in,e))
sns.plt.scatter(*kpca_in.T,c='r')
#sns.plt.scatter(*kpca_out.T,c='r')
sns.plt.scatter(*kpca_out_inv.T,s=xyz[:,3]*20)
#sns.plt.scatter(*pca_out,c='g')
"""
Explanation: Non-linear PCA (kernel PCA)
http://sebastianraschka.com/Articles/2014_kernel_pca.html#nonlinear-dimensionality-reduction
End of explanation
"""
|
hanezu/cs231n-assignment
|
assignment2/BatchNormalization.ipynb
|
mit
|
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
"""
Explanation: Batch Normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.
End of explanation
"""
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
|
NathanYee/ThinkBayes2
|
code/chap05soln.ipynb
|
gpl-2.0
|
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Beta
import thinkplot
"""
Explanation: Think Bayes: Chapter 5
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
def Odds(p):
return p / (1-p)
"""
Explanation: Odds
The following function converts from probabilities to odds.
End of explanation
"""
def Probability(o):
return o / (o+1)
"""
Explanation: And this function converts from odds to probabilities.
End of explanation
"""
p = 0.2
Odds(p)
"""
Explanation: If 20% of bettors think my horse will win, that corresponds to odds of 1:4, or 0.25.
End of explanation
"""
o = 1/5
Probability(o)
"""
Explanation: If the odds against my horse are 5:1 (that is, odds in favor of 1:5), that corresponds to a probability of 1/6.
End of explanation
"""
prior_odds = 1
likelihood_ratio = 0.75 / 0.5
post_odds = prior_odds * likelihood_ratio
post_odds
"""
Explanation: We can use the odds form of Bayes's theorem to solve the cookie problem:
End of explanation
"""
post_prob = Probability(post_odds)
post_prob
"""
Explanation: And then we can compute the posterior probability, if desired.
End of explanation
"""
likelihood_ratio = 0.25 / 0.5
post_odds *= likelihood_ratio
post_odds
"""
Explanation: If we draw another cookie and it's chocolate, we can do another update:
End of explanation
"""
post_prob = Probability(post_odds)
post_prob
"""
Explanation: And convert back to probability.
End of explanation
"""
like1 = 0.01
like2 = 2 * 0.6 * 0.01
likelihood_ratio = like1 / like2
likelihood_ratio
"""
Explanation: Oliver's blood
The likelihood ratio is also useful for talking about the strength of evidence without getting bogged down talking about priors.
As an example, we'll solve this problem from MacKay's Information Theory, Inference, and Learning Algorithms:
Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type 'O' blood. The blood groups of the two traces are found to be of type 'O' (a common type in the local population, having frequency 60%) and of type 'AB' (a rare type, with frequency 1%). Do these data [the traces found at the scene] give evidence in favor of the proposition that Oliver was one of the people [who left blood at the scene]?
If Oliver is
one of the people who left blood at the crime scene, then he
accounts for the 'O' sample, so the probability of the data
is just the probability that a random member of the population
has type 'AB' blood, which is 1%.
If Oliver did not leave blood at the scene, then we have two
samples to account for. If we choose two random people from
the population, what is the chance of finding one with type 'O'
and one with type 'AB'? Well, there are two ways it might happen:
the first person we choose might have type 'O' and the second
'AB', or the other way around. So the total probability is
$2 \times 0.6 \times 0.01 = 1.2\%$.
So the likelihood ratio is:
End of explanation
"""
post_odds = 1 * like1 / like2
Probability(post_odds)
"""
Explanation: Since the ratio is less than 1, it is evidence against the hypothesis that Oliver left blood at the scene.
But it is weak evidence. For example, if the prior odds were 1 (that is, 50% probability), the posterior odds would be 0.83, which corresponds to a probability of:
End of explanation
"""
# Solution
post_odds = Odds(0.9) * like1 / like2
Probability(post_odds)
# Solution
post_odds = Odds(0.1) * like1 / like2
Probability(post_odds)
"""
Explanation: So this evidence doesn't "move the needle" very much.
Exercise: Suppose other evidence had made you 90% confident of Oliver's guilt. How much would this exculpatory evidence change your beliefs? What if you initially thought there was only a 10% chance of his guilt?
Notice that evidence with the same strength has a different effect on probability, depending on where you started.
End of explanation
"""
rhode = Beta(1, 1, label='Rhode')
rhode.Update((22, 11))
wei = Beta(1, 1, label='Wei')
wei.Update((21, 12))
"""
Explanation: Comparing distributions
Let's get back to the Kim Rhode problem from Chapter 4:
At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. They each hit 15 of 25 targets, sending the match into sudden death. In the first round, both hit 1 of 2 targets. In the next two rounds, they each hit 2 targets. Finally, in the fourth round, Rhode hit 2 and Wei hit 1, so Rhode won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games.
But after all that shooting, what is the probability that Rhode is actually a better shooter than Wei? If the same match were held again, what is the probability that Rhode would win?
I'll start with a uniform distribution for x, the probability of hitting a target, but we should check whether the results are sensitive to that choice.
First I create a Beta distribution for each of the competitors, and update it with the results.
End of explanation
"""
thinkplot.Pdf(rhode.MakePmf())
thinkplot.Pdf(wei.MakePmf())
thinkplot.Config(xlabel='x', ylabel='Probability')
"""
Explanation: Based on the data, the distribution for Rhode is slightly farther right than the distribution for Wei, but there is a lot of overlap.
End of explanation
"""
iters = 1000
count = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
if x1 > x2:
count += 1
count / iters
"""
Explanation: To compute the probability that Rhode actually has a higher value of p, there are two options:
Sampling: we could draw random samples from the posterior distributions and compare them.
Enumeration: we could enumerate all possible pairs of values and add up the "probability of superiority".
I'll start with sampling. The Beta object provides a method that draws a random value from a Beta distribution:
End of explanation
"""
rhode_sample = rhode.Sample(iters)
wei_sample = wei.Sample(iters)
np.mean(rhode_sample > wei_sample)
"""
Explanation: Beta also provides Sample, which returns a NumPy array, so we can perform the comparisons using array operations:
End of explanation
"""
def ProbGreater(pmf1, pmf2):
total = 0
for x1, prob1 in pmf1.Items():
for x2, prob2 in pmf2.Items():
if x1 > x2:
total += prob1 * prob2
return total
pmf1 = rhode.MakePmf(1001)
pmf2 = wei.MakePmf(1001)
ProbGreater(pmf1, pmf2)
pmf1.ProbGreater(pmf2)
pmf1.ProbLess(pmf2)
"""
Explanation: The other option is to make Pmf objects that approximate the Beta distributions, and enumerate pairs of values:
End of explanation
"""
import random
def flip(p):
return random.random() < p
"""
Explanation: Exercise: Run this analysis again with a different prior and see how much effect it has on the results.
Simulation
To make predictions about a rematch, we have two options again:
Sampling. For each simulated match, we draw a random value of x for each contestant, then simulate 25 shots and count hits.
Computing a mixture. If we knew x exactly, the distribution of hits, k, would be binomial. Since we don't know x, the distribution of k is a mixture of binomials with different values of x.
I'll do it by sampling first.
End of explanation
"""
iters = 1000
wins = 0
losses = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
count1 = count2 = 0
for _ in range(25):
if flip(x1):
count1 += 1
if flip(x2):
count2 += 1
if count1 > count2:
wins += 1
if count1 < count2:
losses += 1
wins/iters, losses/iters
"""
Explanation: flip returns True with probability p and False with probability 1-p
Now we can simulate 1000 rematches and count wins and losses.
End of explanation
"""
rhode_rematch = np.random.binomial(25, rhode_sample)
thinkplot.Hist(Pmf(rhode_rematch))
wei_rematch = np.random.binomial(25, wei_sample)
np.mean(rhode_rematch > wei_rematch)
np.mean(rhode_rematch < wei_rematch)
"""
Explanation: Or, realizing that the distribution of k is binomial, we can simplify the code using NumPy:
End of explanation
"""
from thinkbayes2 import MakeBinomialPmf
def MakeBinomialMix(pmf, label=''):
mix = Pmf(label=label)
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
for k, p in binom.Items():
mix[k] += prob * p
return mix
rhode_rematch = MakeBinomialMix(rhode.MakePmf(), label='Rhode')
wei_rematch = MakeBinomialMix(wei.MakePmf(), label='Wei')
thinkplot.Pdf(rhode_rematch)
thinkplot.Pdf(wei_rematch)
thinkplot.Config(xlabel='hits')
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
"""
Explanation: Alternatively, we can make a mixture that represents the distribution of k, taking into account our uncertainty about x:
End of explanation
"""
from thinkbayes2 import MakeMixture
def MakeBinomialMix2(pmf):
binomials = Pmf()
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
binomials[binom] = prob
return MakeMixture(binomials)
"""
Explanation: Alternatively, we could use MakeMixture:
End of explanation
"""
rhode_rematch = MakeBinomialMix2(rhode.MakePmf())
wei_rematch = MakeBinomialMix2(wei.MakePmf())
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
"""
Explanation: Here's how we use it.
End of explanation
"""
iters = 1000
pmf = Pmf()
for _ in range(iters):
k = rhode_rematch.Random() + wei_rematch.Random()
pmf[k] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
"""
Explanation: Exercise: Run this analysis again with a different prior and see how much effect it has on the results.
Distributions of sums and differences
Suppose we want to know the total number of targets the two contestants will hit in a rematch. There are two ways we might compute the distribution of this sum:
Sampling: We can draw samples from the distributions and add them up.
Enumeration: We can enumerate all possible pairs of values.
I'll start with sampling:
End of explanation
"""
ks = rhode_rematch.Sample(iters) + wei_rematch.Sample(iters)
pmf = Pmf(ks)
thinkplot.Hist(pmf)
"""
Explanation: Or we could use Sample and NumPy:
End of explanation
"""
def AddPmfs(pmf1, pmf2):
pmf = Pmf()
for v1, p1 in pmf1.Items():
for v2, p2 in pmf2.Items():
pmf[v1 + v2] += p1 * p2
return pmf
"""
Explanation: Alternatively, we could compute the distribution of the sum by enumeration:
End of explanation
"""
pmf = AddPmfs(rhode_rematch, wei_rematch)
thinkplot.Pdf(pmf)
"""
Explanation: Here's how it's used:
End of explanation
"""
pmf = rhode_rematch + wei_rematch
thinkplot.Pdf(pmf)
"""
Explanation: The Pmf class provides a + operator that does the same thing.
End of explanation
"""
# Solution
pmf = rhode_rematch - wei_rematch
thinkplot.Pdf(pmf)
# Solution
# On average, we expect Rhode to win by about 1 clay.
pmf.Mean(), pmf.Median(), pmf.Mode()
# Solution
# But there is, according to this model, about a 2% chance that she could win by 10 or more.
sum([p for (x, p) in pmf.Items() if x >= 10])
"""
Explanation: Exercise: The Pmf class also provides the - operator, which computes the distribution of the difference in values from two distributions. Use the distributions from the previous section to compute the distribution of the differential between Rhode and Wei in a rematch. On average, how many clays should we expect Rhode to win by? What is the probability that Rhode wins by 10 or more?
End of explanation
"""
iters = 1000
pmf = Pmf()
for _ in range(iters):
ks = rhode_rematch.Sample(6)
pmf[max(ks)] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
"""
Explanation: Distribution of maximum
Suppose Kim Rhode continues to compete in six more Olympics. What should we expect her best result to be?
Once again, there are two ways we can compute the distribution of the maximum:
Sampling.
Analysis of the CDF.
Here's a simple version by sampling:
End of explanation
"""
iters = 1000
ks = rhode_rematch.Sample((6, iters))
ks
"""
Explanation: And here's a version using NumPy. I'll generate an array with 6 rows and 1000 columns (one column per simulated set of six competitions):
End of explanation
"""
maxes = np.max(ks, axis=0)
maxes[:10]
"""
Explanation: Compute the maximum in each column:
End of explanation
"""
pmf = Pmf(maxes)
thinkplot.Hist(pmf)
"""
Explanation: And then plot the distribution of maximums:
End of explanation
"""
pmf = rhode_rematch.Max(6).MakePmf()
thinkplot.Hist(pmf)
"""
Explanation: Or we can figure it out analytically. If the maximum is less-than-or-equal-to some value k, all 6 random selections must be less-than-or-equal-to k, so:
$ CDF_{max}(x) = CDF(x)^6 $
Pmf provides a method that computes and returns this Cdf, so we can compute the distribution of the maximum like this:
End of explanation
"""
def Min(pmf, k):
cdf = pmf.MakeCdf()
cdf.ps = 1 - (1-cdf.ps)**k
return cdf
pmf = Min(rhode_rematch, 6).MakePmf()
thinkplot.Hist(pmf)
"""
Explanation: Exercise: Here's how Pmf.Max works:
def Max(self, k):
"""Computes the CDF of the maximum of k selections from this dist.
k: int
returns: new Cdf
"""
cdf = self.MakeCdf()
cdf.ps **= k
return cdf
Write a function that takes a Pmf and an integer k and returns a Pmf that represents the distribution of the minimum of k values drawn from the given Pmf. Use your function to compute the distribution of the minimum score Kim Rhode would be expected to shoot in six competitions.
End of explanation
"""
# Solution
n_allergic = 4
n_non = 6
p_allergic = 0.5
p_non = 0.1
pmf = MakeBinomialPmf(n_allergic, p_allergic) + MakeBinomialPmf(n_non, p_non)
thinkplot.Hist(pmf)
# Solution
pmf.Mean()
"""
Explanation: Exercises
Exercise: Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze?
End of explanation
"""
# Solution
# Here's a class that models the study
class Gluten(Suite):
def Likelihood(self, data, hypo):
"""Computes the probability of the data under the hypothesis.
data: tuple of (number who identified, number who did not)
hypothesis: number of participants who are gluten sensitive
"""
# compute the number who are gluten sensitive, `gs`, and
# the number who are not, `ngs`
gs = hypo
yes, no = data
n = yes + no
ngs = n - gs
pmf1 = MakeBinomialPmf(gs, 0.95)
pmf2 = MakeBinomialPmf(ngs, 0.4)
pmf = pmf1 + pmf2
return pmf[yes]
# Solution
prior = Gluten(range(0, 35+1))
thinkplot.Pdf(prior)
# Solution
posterior = prior.Copy()
data = 12, 23
posterior.Update(data)
# Solution
thinkplot.Pdf(posterior)
thinkplot.Config(xlabel='# who are gluten sensitive',
ylabel='PMF', legend=False)
# Solution
posterior.CredibleInterval(95)
"""
Explanation: Exercise This study from 2015 showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.
Here is a description of the study:
"We studied 35 non-CD subjects (31 females) that were on a gluten-free diet (GFD), in a double-blind challenge study. Participants were randomised to receive either gluten-containing flour or gluten-free flour for 10 days, followed by a 2-week washout period and were then crossed over. The main outcome measure was their ability to identify which flour contained gluten.
"The gluten-containing flour was correctly identified by 12 participants (34%)..."
Since 12 out of 35 participants were able to identify the gluten flour, the authors conclude "Double-blind gluten challenge induces symptom recurrence in just one-third of patients fulfilling the clinical diagnostic criteria for non-coeliac gluten sensitivity."
This conclusion seems odd to me, because if none of the patients were sensitive to gluten, we would expect some of them to identify the gluten flour by chance. So the results are consistent with the hypothesis that none of the subjects are actually gluten sensitive.
We can use a Bayesian approach to interpret the results more precisely. But first we have to make some modeling decisions.
Of the 35 subjects, 12 identified the gluten flour based on resumption of symptoms while they were eating it. Another 17 subjects wrongly identified the gluten-free flour based on their symptoms, and 6 subjects were unable to distinguish. So each subject gave one of three responses. To keep things simple I follow the authors of the study and lump together the second two groups; that is, I consider two groups: those who identified the gluten flour and those who did not.
I assume (1) people who are actually gluten sensitive have a 95% chance of correctly identifying gluten flour under the challenge conditions, and (2) subjects who are not gluten sensitive have only a 40% chance of identifying the gluten flour by chance (and a 60% chance of either choosing the other flour or failing to distinguish).
Using this model, estimate the number of study participants who are sensitive to gluten. What is the most likely number? What is the 95% credible interval?
End of explanation
"""
# Solution
# Solution
# Solution
# Solution
# Solution
# Solution
# Solution
# Solution
"""
Explanation: Exercise Coming soon: the space invaders problem.
End of explanation
"""
|
zzsza/Datascience_School
|
19. 문서 전처리/01. Python 문자열 인코딩.ipynb
|
mit
|
c = "a"
c
print(c)
x = "가"
x
print(x)
print(x.__repr__())
x = ["가"]
print(x)
x = "가"
len(x)
x = "ABC"
y = "가나다"
print(len(x), len(y))
print(x[0], x[1], x[2])
print(y[0], y[1], y[2])
print(y[0], y[1], y[2], y[3])
"""
Explanation: Python String Encoding
Characters and encodings
What a character consists of
Byte sequence: the data stored on the computer; each character is assigned a byte sequence
Glyph: the picture of the character that you actually see
http://www.asciitable.com/
http://www.kreativekorp.com/charset/encoding.php?name=CP949
Code point: each character is assigned a number that is independent of any byte sequence (Unicode)
Encodings (schemes)
Rules for assigning byte sequences
Basic ASCII encoding
Korean (Hangul) encodings
euc-kr
cp949
utf-8
References
http://d2.naver.com/helloworld/19187
http://d2.naver.com/helloworld/76650
Python 2 strings
string type (the default)
A byte string in the encoding configured for the computer environment
unicode type
Stored internally as Unicode code points
Use the encode / decode commands to convert to and from string (byte string)
In Python 3 the unicode type is the default
How Python displays strings
__repr__()
The representation shown when you just type the variable name
Also used when the string is an element of another object
Characters that cannot be shown with the ASCII table are displayed in escaped string format
print() command
Looks up an available glyph (font) and renders the character
End of explanation
"""
y = u"가"
y
print(y)
y = u"가나다"
print(y[0], y[1], y[2])
"""
Explanation: Unicode literals
Prefixing the quotes with a u makes the string a unicode string
Stored internally as Unicode code points
End of explanation
"""
print(type(y))
z1 = y.encode("cp949")
print(type(z1))
print(z1)
print(type(y))
z2 = y.encode("utf-8")
print(type(z2))
print(z2)
print(type(z1))
y1 = z1.decode("cp949")
print(type(y1))
print(y1)
print(type(z2))
y2 = z2.decode("utf-8")
print(type(y2))
print(y2)
"""
Explanation: Unicode encoding / decoding
encode
a method of the unicode type
unicode -> string (byte sequence)
decode
a method of the str type
str -> unicode
End of explanation
"""
"가".encode("utf-8")
unicode("가", "ascii").encode("utf-8")
u"가".decode("utf-8")
u"가".encode("ascii").decode("utf-8")
"""
Explanation: What happens if we apply the encode method to a str, or the decode method to a unicode object?
End of explanation
"""
u"가".encode("utf-8"), u"가".encode("cp949"), "가"
import sys
print(sys.getdefaultencoding())
print(sys.stdin.encoding)
print(sys.stdout.encoding)
import locale
print(locale.getpreferredencoding())
"""
Explanation: Applying the encode method to a str:
Python first tries to convert it to unicode internally (using the default encoding)
Applying the decode method to a unicode object:
Python simply assumes the byte sequence is already a string
Default encoding
End of explanation
"""
|
thinkingmachines/deeplearningworkshop
|
codelab_1_NN_Numpy.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Creating a 2 Layer Neural Network in 30 Lines of Python
Modified from an existing exercise. Credit for the original code to Stanford CS 231n
To demonstrate with code the math we went over earlier, we're going to generate some data that is not linearly separable, train a linear classifier on it, train a 2 layer neural network with a ReLU activation function, and then compare the results for both, all in plain old Python with NumPy!
Importing Libraries
End of explanation
"""
N = 100 # points per class
D = 2 # dimensionality at 2 so we can eyeball it
K = 3 # number of classes
X = np.zeros((N*K, D)) # generate an empty matrix to hold X features
y = np.zeros(N*K, dtype='uint8') # generate an empty vector to hold y labels
# for 3 classes, evenly generates spiral arms
for j in xrange(K):
ix = range(N*j, N*(j+1))
r = np.linspace(0.0,1,N) #radius
t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
"""
Explanation: Generating a Spiral Training Dataset
We'll be using this 2D dataset because it's easy to visually see the classifier performance, and because it's impossible to linearly separate the classes nicely.
End of explanation
"""
plt.scatter(X[:,0], X[:,1], c=y, s=20, cmap=plt.cm.Spectral)
plt.show()
"""
Explanation: Quick question, what are the dimensions of X and y?
Let's visualize this. Setting s=20 (the point size) so that the color/label differences are more visible.
End of explanation
"""
# random initialization of starting params. recall that it's best to randomly initialize at a small value.
# how many parameters should this linear classifier have? remember there are K output classes, and 2 features per observation.
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
print "W shape", W.shape
print "W values", W
# Here are some hyperparameters that we're not going to worry about too much right now
learning_rate = 1e-0 # the step size in the descent
reg = 1e-3
scores = np.dot(X, W) + b
print scores.shape
"""
Explanation: Training a Linear Classifier
Let's start by training a a simple y = WX + b linear classifer on this dataset. We need to compute some Weights (W) and a bias vector (b) for all classes.
End of explanation
"""
num_examples = X.shape[0]
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Let's look at one example to verify the softmax transform
print "Score: ", scores[50]
print "Class Probabilities: ", probs[50]
"""
Explanation: We're going to compute the normalized softmax of these scores...
End of explanation
"""
correct_logprobs = -np.log(probs[range(num_examples),y])
# total loss is the average cross-entropy (data) loss plus the L2 regularization loss
data_loss = np.sum(correct_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
# this gets the gradient of the scores
# class probabilities minus - divided by num_examples
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# this backpropagates the gradient into W and b
dW = np.dot(X.T, dscores) # don't forget to transpose! otherwise, you'll be forwarding the gradient
dW += reg*W # regularization gradient (derivative of 0.5*reg*W^2), matching the full training loop below
db = np.sum(dscores, axis=0, keepdims=True)
"""
Explanation: The array correct_logprobs is a 1D array of the negative log probabilities assigned to the correct classes for each example.
End of explanation
"""
# this updates the W and b parameters
W += -learning_rate * dW
b += -learning_rate * db
"""
Explanation: Updating the Parameters
We update the parameters W and b in the direction of the negative gradient in order to decrease the loss.
End of explanation
"""
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# gradient descent loop
num_examples = X.shape[0]
# evaluated for 200 steps
for i in xrange(200):
# evaluate class scores, [N x K]
scores = np.dot(X, W) + b
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
corect_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(corect_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
# for every 10 iterations print the loss
if i % 10 == 0:
print "iteration %d: loss %f" % (i, loss)
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# backpropate the gradient to the parameters (W,b)
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg*W # regularization gradient
# perform a parameter update
W += -step_size * dW
b += -step_size * db
"""
Explanation: Full Code for the Training the Linear Softmax Classifier
Using gradient descent for optimization.
Using the average cross-entropy (softmax) loss with L2 regularization as the loss function.
This ought to converge to a loss of around 0.78 after 150 iterations
End of explanation
"""
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
"""
Explanation: Evaluating the Training Accuracy
The training accuracy here ought to be at around 0.5
This is better than chance for 3 classes, where the expected accuracy of randomly selecting one of the 3 labels is 0.33. But not that much better.
End of explanation
"""
# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
"""
Explanation: Let's eyeball the decision boundaries to get a better feel for the split.
End of explanation
"""
# init parameters
np.random.seed(100) # so we all have the same numbers
h = 100 # size of hidden layer. a hyperparam in itself. (defined before W so the shapes below work)
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
"""
Explanation: Training a 2 Layer Neural Network
Let's see what kind of improvement we'll get with adding a single hidden layer.
End of explanation
"""
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
"""
Explanation: Let's use a ReLU activation function. See how the output of the hidden layer is passed into the score computation of the second layer.
End of explanation
"""
# backpropate the gradient to the parameters of the hidden layer
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)
# gradient of the outputs of the hidden layer (the local gradient)
dhidden = np.dot(dscores, W2.T)
# backprop through the ReLU function
dhidden[hidden_layer <= 0] = 0
# back right into the parameters W and b
dW = np.dot(X.T, dhidden)
db = np.sum(dhidden, axis=0, keepdims=True)
"""
Explanation: The loss computation and the dscores gradient computation remain the same. The major difference lies in the chaining backpropagation of the dscores all the way back up to the parameters W and b.
End of explanation
"""
# initialize parameters randomly
np.random.seed(100) # so we all have the same numbers
h = 100 # size of hidden layer
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# optimization: gradient descent loop
num_examples = X.shape[0]
for i in xrange(10000):
# feed forward
# evaluate class scores, [N x K]
hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation
scores = np.dot(hidden_layer, W2) + b2
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
corect_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(corect_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2)
loss = data_loss + reg_loss
if i % 1000 == 0:
print "iteration %d: loss %f" % (i, loss)
# backprop
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# backpropate the gradient to the parameters
# first backprop into parameters W2 and b2
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)
# next backprop into hidden layer
dhidden = np.dot(dscores, W2.T)
# backprop the ReLU non-linearity
dhidden[hidden_layer <= 0] = 0
# finally into W,b
dW = np.dot(X.T, dhidden)
db = np.sum(dhidden, axis=0, keepdims=True)
# add regularization gradient contribution
dW2 += reg * W2
dW += reg * W
# perform a parameter update
W += -step_size * dW
b += -step_size * db
W2 += -step_size * dW2
b2 += -step_size * db2
"""
Explanation: Full Code for Training the 2 Layer NN with ReLU activation
Very similar to the linear classifier!
End of explanation
"""
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
"""
Explanation: Evaluating the Training Set Accuracy
This should be around 0.98, which is hugely better than the 0.50 we were getting from the linear classifier!
End of explanation
"""
# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b), W2) + b2
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
#fig.savefig('spiral_net.png')
"""
Explanation: Let's visualize this to get a more dramatic sense of just how good the split is.
End of explanation
"""
|
streety/biof509
|
Wk04-Data-retrieval-and-preprocessing-Solutions.ipynb
|
mit
|
# required packages:
import numpy as np
import pandas as pd
import sklearn
import skimage
import sqlalchemy as sa
import urllib.request
import requests
import sys
import json
import pickle
import gzip
from pathlib import Path
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
!pip install pymysql
import pymysql
"""
Explanation: Week 4 - Data retrieval and dataset preprocessing
Exercise Solutions
End of explanation
"""
ICGC_API = 'https://dcc.icgc.org/api/v1/download?fn=/release_18/Projects/BRCA-US/'
expression_fname = 'protein_expression.BRCA-US.tsv.gz'
if not Path(expression_fname).is_file():
print("Downloading file", ICGC_API + expression_fname, "saving it as", expression_fname)
urllib.request.urlretrieve(ICGC_API + expression_fname, expression_fname);
else:
print("Local file exists:", expression_fname)
def get_genome_sequence_ensembl(chrom, start, end):
"""
API described here http://rest.ensembl.org/documentation/info/sequence_region
"""
url = 'https://rest.ensembl.org/sequence/region/human/{0}:{1}..{2}:1?content-type=application/json'.format(chrom, start, end)
r = requests.get(url, headers={"Content-Type": "application/json"}, timeout=10.000)
if not r.ok:
print("REST Request FAILED")
decoded = r.json()
print(decoded['error'])
return
else:
print("REST Request OK")
decoded = r.json()
return decoded['seq']
sequence = get_genome_sequence_ensembl(7, 200000,200100)
print(sequence)
# Loading data into pandas
E = pd.read_csv(expression_fname, delimiter='\t')
# E[E['gene_name'] == 'EGFR'].head()
E[E['gene_name'] == 'EGFR']['normalized_expression_level'].hist()
engine = sa.create_engine('mysql+pymysql://genome@genome-mysql.cse.ucsc.edu/hg38', poolclass=sa.pool.NullPool)
"""
Explanation: Retrieving remote data
End of explanation
"""
meta = sa.MetaData()  # metadata container (not defined earlier in this excerpt)
snp_table = sa.Table('snp147Common',
                     meta,
                     sa.PrimaryKeyConstraint('name'),
                     autoload=True, autoload_with=engine,  # reflect columns such as chrom from UCSC
                     extend_existing=True)
expr = sa.select([snp_table]).where(snp_table.c.chrom == 'chrY').count()
# pd.read_sql(expr, engine)
# pd_table.group_by('chrom').name.nunique()
"""
Explanation: Pandas can read data directly from the database
End of explanation
"""
import sklearn.linear_model
x = np.array([[0, 0], [1, 1], [2, 2]])
y = np.array([0, 1, 2])
print(x,y)
clf = sklearn.linear_model.LinearRegression()
clf.fit(x, y)
print(clf.coef_)
x_missing = np.array([[0, 0], [1, np.nan], [2, 2]])
print(x_missing, y)
print()
try:
clf = sklearn.linear_model.LinearRegression()
clf.fit(x_missing, y)
print(clf.coef_)
except ValueError as e:
print(e)
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
[4,1,7,9,0,2,np.nan]], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
y = pd.Series([29.0,
31.2,
63.25,
57.27,
66.3,
26.21,
48.24])
print(x, y)
x1 = x.dropna()
x.fillna(value={'A':1000,'B':2000,'C':3000,'D':4000,'E':5000})
x.fillna(value=x.mean())
"""
Explanation: Exercises:
Plot the distribution of expression levels for EGFR (filter by gene_name) in breast cancer samples from protein_expression.BRCA-US.tsv.gz
Count the number of common SNPs on Chromosome M (based on snp147Common table)
Tabular data
Missing data
Normalization
Categorical data
Missing data
There are a number of ways to handle missing data:
Drop all records with a value missing
Substitute all missing values with an average value
Substitute all missing values with some placeholder value, e.g. 0, 1e9, -1e9, etc
Predict missing values based on other attributes
Add additional feature indicating when a value is missing
If the machine learning model will be used with new data it is important to consider the possibility of receiving records with values missing that we have not observed previously in the training dataset.
The simplest approach is to remove any records that have missing data. Unfortunately missing values are often not randomly distributed through a dataset and removing them can introduce bias.
An alternative approach is to substitute the missing values. This can be with the mean of the feature across all the records or the value can be predicted based on the values of the other features in the dataset. Placeholder values can also be used with decision trees but do not work as well for most other algorithms.
Finally, missing values can themselves be useful features. Adding an additional feature indicating when a value is missing is often used to include this information.
End of explanation
"""
x_filled = x.fillna(value=x.mean())
print(x_filled)
x_norm = (x_filled - x_filled.min()) / (x_filled.max() - x_filled.min())
print(x_norm)
scaling = sklearn.preprocessing.MinMaxScaler().fit(x_filled)
scaling.transform(x_filled)
"""
Explanation: Normalization
Many machine learning algorithms expect features to have similar distributions and scales.
A classic example is gradient descent: if features are on different scales, some weights will update faster than others because the feature values scale the weight updates.
There are two common approaches to normalization:
Z-score standardization
Min-max scaling
Z-score standardization
Z-score standardization rescales values so that they have a mean of zero and a standard deviation of 1. Specifically we perform the following transformation:
$$z = \frac{x - \mu}{\sigma}$$
Min-max scaling
An alternative is min-max scaling that transforms data into the range of 0 to 1. Specifically:
$$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$$
Min-max scaling is less commonly used but can be useful for image data and in some neural networks.
End of explanation
"""
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
['Green','Red','Blue','Blue','Green','Red','Green']], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
print(x)
x_cat = x.copy()
for val in x['E'].unique():
x_cat['E_{0}'.format(val)] = x_cat['E'] == val
x_cat
"""
Explanation: Categorical data
Categorical data can take one of a number of possible values. The different categories may be related to each other or be largely independent and unordered.
Continuous variables can be converted to categorical variables by applying a threshold.
End of explanation
"""
# http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#example-color-exposure-plot-equalize-py
import matplotlib
matplotlib.rcParams['font.size'] = 8
import skimage.data
import skimage.exposure
import warnings
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
def plot_img_and_hist(img, axes, bins=256):
"""Plot an image along with its histogram and cumulative histogram.
"""
img = skimage.img_as_float(img)
ax_img, ax_hist = axes
ax_cdf = ax_hist.twinx()
# Display image
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_axis_off()
ax_img.set_adjustable('box')
# Display histogram
ax_hist.hist(img.ravel(), bins=bins, histtype='step', color='black')
ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
ax_hist.set_xlabel('Pixel intensity')
ax_hist.set_xlim(0, 1)
ax_hist.set_yticks([])
# Display cumulative distribution
img_cdf, bins = skimage.exposure.cumulative_distribution(img, bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_yticks([])
return ax_img, ax_hist, ax_cdf
# Load an example image
img = skimage.data.moon()
# Contrast stretching
p2, p98 = np.percentile(img, (2, 98))
img_rescale = skimage.exposure.rescale_intensity(img, in_range=(p2, p98))
# Equalization
img_eq = skimage.exposure.equalize_hist(img)
# Adaptive Equalization
img_adapteq = skimage.exposure.equalize_adapthist(img, clip_limit=0.03)
# Display results
fig = plt.figure(figsize=(8, 5))
axes = np.zeros((2,4), dtype=object)
axes[0,0] = fig.add_subplot(2, 4, 1)
for i in range(1,4):
axes[0,i] = fig.add_subplot(2, 4, 1+i, sharex=axes[0,0], sharey=axes[0,0])
for i in range(0,4):
axes[1,i] = fig.add_subplot(2, 4, 5+i)
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0])
ax_img.set_title('Low contrast image')
y_min, y_max = ax_hist.get_ylim()
ax_hist.set_ylabel('Number of pixels')
ax_hist.set_yticks(np.linspace(0, y_max, 5))
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1])
ax_img.set_title('Contrast stretching')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2])
ax_img.set_title('Histogram equalization')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3])
ax_img.set_title('Adaptive equalization')
ax_cdf.set_ylabel('Fraction of total intensity')
ax_cdf.set_yticks(np.linspace(0, 1, 5))
# prevent overlap of y-axis labels
fig.tight_layout()
plt.show()
img = skimage.data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
import sklearn.feature_extraction
patches = sklearn.feature_extraction.image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
import sklearn.datasets
digits = sklearn.datasets.load_digits()
# print(digits.DESCR)
fig, ax = plt.subplots(1,1, figsize=(1,1))
ax.imshow(digits.data[0].reshape((8,8)), cmap=plt.cm.gray, interpolation='nearest')
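# Sketch (not from the original notebook) of an image pyramid, as mentioned
# in the notes below; skimage.transform is assumed to be importable. Each
# level halves the resolution so a fixed-size sliding window sees features
# at different scales.
import skimage.transform
page_for_pyramid = skimage.data.page()
for level, scaled in enumerate(skimage.transform.pyramid_gaussian(page_for_pyramid, downscale=2, max_layer=3)):
    print(level, scaled.shape)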
"""
Explanation: Exercises
Substitute missing values in x with the column mean and add an additional column to indicate when missing values have been substituted. The isnull method on the pandas dataframe may be useful.
Convert x to the z-scaled values. The StandardScaler method in the preprocessing module can be used or the z-scaled values calculated directly.
Convert x['C'] into a categorical variable using a threshold of 0.125
Image data
Depending on the type of task being performed there are a variety of steps we may want to take in working with images:
Histogram normalization
Windows and pyramids (for detection at different scales)
Centering
Occasionally the camera used to generate an image will capture only 10 to 14 bits of intensity while a 16-bit file format is used to store it. In this situation all the pixel intensities will sit in the lower part of the value range. Rescaling to the full range (or to 0-1) can be useful.
Further processing can be done to alter the histogram of the image.
When looking for particular features in an image a sliding window can be used to check different locations. This can be combined with an image pyramid to detect features at different scales. This is often needed when objects can be at different distances from the camera.
If objects are sparsely distributed in an image a faster approach than using sliding windows is to identify objects with a simple threshold and then test only the bounding boxes containing objects. Before running these through a model centering based on intensity can be a useful approach. Small offsets, rotations and skewing can be used to generate additional training data.
End of explanation
"""
twenty_train = sklearn.datasets.fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
count_vect = sklearn.feature_extraction.text.CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(X_train_counts.shape)
tfidf_transformer = sklearn.feature_extraction.text.TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
# print(X_train_tfidf.shape, X_train_tfidf[:5,:15].toarray())
print(twenty_train.data[0])
count_vect = sklearn.feature_extraction.text.CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
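# Sketch (not part of the original notebook): TfidfVectorizer combines the
# counting and IDF-scaling steps shown above into a single transformer.
tfidf_vect = sklearn.feature_extraction.text.TfidfVectorizer()
X_train_tfidf2 = tfidf_vect.fit_transform(twenty_train.data)
print(X_train_tfidf2.shape)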
"""
Explanation: Text
When working with text the simplest approach is known as bag of words. In this approach we simply count the number of instances of each word, and then adjust the values based on how commonly the word is used.
The first task is to break a piece of text up into individual tokens. The number of occurrences of each word is then recorded. More rarely used words are likely to be more interesting and so word counts are scaled by the inverse document frequency.
We can extend this to look at not just individual words but also bigrams and trigrams.
End of explanation
"""
import skimage.data
import sklearn.feature_extraction
import skimage.transform
page_img = skimage.data.page()
plt.imshow(page_img, cmap=plt.cm.gray)
plt.show()
patches_s10 = sklearn.feature_extraction.image.extract_patches_2d(page_img, (10, 10), max_patches=10, random_state=0)
# print(patches_s10)
plt.imshow(patches_s10[3], cmap=plt.cm.gray)
plt.show()
print("OK")
for patch_size in (10, 20, 40):
patches = sklearn.feature_extraction.image.extract_patches_2d(page_img, (patch_size, patch_size), max_patches=3, random_state=0)
for i, patch in enumerate(patches):
scaling_factor = 20.0 / patch_size
rescaled_patch = skimage.transform.rescale(patch, scale=scaling_factor)
plt.imshow(rescaled_patch, cmap=plt.cm.gray)
plt.show()
import sklearn.datasets
twenty_train = sklearn.datasets.fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
count_vect = sklearn.feature_extraction.text.CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(count_vect.get_feature_names()[-10:])
print(X_train_counts.shape)
count_vect = sklearn.feature_extraction.text.CountVectorizer(stop_words=('Hi', 'Best', 'software'))
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(count_vect.get_feature_names()[-10:])
print(X_train_counts.shape)
print()
count_vect = sklearn.feature_extraction.text.CountVectorizer(ngram_range=(1, 2))
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(count_vect.get_feature_names()[-10:])
print(X_train_counts.shape)
"""
Explanation: Exercises
Choose one of the histogram processing methods and apply it to the page example.
Take patches for the page example used above at different scales (10, 20 and 40 pixels). The resulting patches should be rescaled to have the same size.
Change the vectorization approach to ignore very common words such as 'the' and 'a'. These are known as stop words. Reading the documentation should help.
Change the vectorization approach to consider both single words and sequences of 2 words. Reading the documentation should help.
End of explanation
"""
|
sz2472/foundations-homework
|
data and database/.ipynb_checkpoints/database class 8 June16-checkpoint.ipynb
|
mit
|
input_str = "Yes, my zip code is 12345. I heard that Gary's zip code is 23456. But 212 is not a zip code."
import re
zips= re.findall(r"\d{5}", input_str)
zips
from urllib.request import urlretrieve
urlretrieve("https://raw.githubusercontent.com/ledeprogram/courses/master/databases/data/enronsubjects.txt", "enronsubjects.txt")
subjects = [x.strip() for x in open("enronsubjects.txt").readlines()] # x.strip() removes the trailing "\n"
subjects[:10]
subjects[-10:]
[line for line in subjects if line.startswith("Hi!")]
import re
[line for line in subjects if re.search("shipping", line)] # if line string match the "" parameter
"""
Explanation: regular expressions
End of explanation
"""
[line for line in subjects if re.search("sh.pping", line)] #. means any single character sh.pping is called class
# subjects that contain a time, e.g., 5: 52pm or 12:06am
[line for line in subjects if re.search("\d:\d\d\wm", line)] # \d:\d\d\wm a template read character by character
[line for line in subjects if re.search("\.\.\.\.\.", line)]
# subject lines that have dates, e.g. 12/01/99
[line for line in subjects if re.search("\d\d/\d\d/\d\d", line)]
[line for line in subjects if re.search("6/\d\d/\d\d", line)]
"""
Explanation: metacharacters
special characters that you can use in regular expressions that have a
special meaning: they stand in for multiple different characters of the same "class"
.: any char
\w any alphanumeric char (a-z, A-Z, 0-9)
\s any whitespace char (" ", \t, \n)
\S any non-whitespace char
\d any digit(0-9)
\. actual period
End of explanation
"""
[line for line in subjects if re.search("[aeiou][aeiou][aeiou][aeiou]",line)]
[line for line in subjects if re.search("F[wW]:", line)] #F followed by either a lowercase w followed by a uppercase W
# subjects that contain a time, e.g., 5: 52pm or 12:06am
[line for line in subjects if re.search("\d:[012345]\d[apAP][mM]", line)]
"""
Explanation: define your own character classes
inside your regular expression, write [aeiou]
End of explanation
"""
# subjects that begin with "New York"; ^ anchors the search to the start of the string
[line for line in subjects if re.search("^[Nn]ew [Yy]ork", line)]
[line for line in subjects if re.search("\.\.\.$", line)]
[line for line in subjects if re.search("!!!!!$", line)]
# find sequence of characters that match "oil"
[line for line in subjects if re.search("\boil\b", line)]
"""
Explanation: metacharacters 2: anchors
^ beginning of a str
$ end of str
\b word boundary
End of explanation
"""
x = "this is \na test"
print(x)
x= "this is\t\t\tanother test"
print(x)
# ascii backspace
print("hello there\b\b\b\b\bhi")
print("hello\nthere")
print("hello\\nthere")
normal = "hello\nthere"
raw = r"hello\nthere" #don't interpret any escape character in the raw string
print("normal:", normal)
print("raw:", raw)
[line for line in subjects if re.search(r"\boil\b", line)] #r for regular expression, include r for regular expression all the time
[line for line in subjects if re.search(r"\b\.\.\.\b", line)]
[line for line in subjects if re.search(r"\banti", line)] #\b only search anti at the beginning of the word
"""
Explanation: aside: metacharacters and escape characters
escape sequences: \n: new line; \t: tab; \\: backslash. Note that \b means backspace in a normal string but word boundary in a regular expression.
End of explanation
"""
[line for line in subjects if re.search(r"[A-Z]{15,}", line)]
[line for line in subjects if re.search(r"[aeiou]{4}", line)] #find words that have 4 characters from aeiou for each line
[line for line in subjects if re.search(r"^F[wW]d?:", line)] # find method that begins with F followed by either w or W and either a d or not d is not there; ? means find that character d or not
[line for line in subjects if re.search(r"[nN]ews.*!$", line)] # * means any characters between ews and !
[line in line in subjects if re.search(r"^R[eE]:.*\b[iI]nvestor", line)]
### more metacharacters: alternation
(?:x|y) match either x or y
(?:x|y|z) match x,y, or z
[line for line in subjects if re.search(r"\b(?:[Cc]at|[kK]itty|[kK]itten)\b", line)]
[line for line in subjects if re.search(r"(energy|oil|electricity)\b", line)]
"""
Explanation: metacharacters 3: quantifiers
{n} matches exactly n times
{n,m} matches at least n times, but no more than m times
{n,} matches at least n times, but maybe infinite times
+ match at least once ({1,})
* match zero or more times
? match one time or zero times
End of explanation
"""
all_subjects = open("enronsubjects.txt").read()
all_subjects[:1000]
# domain names: foo.org, cheese.net, stuff.com
re.findall(r"\b\w+\.(?:com|net|org)\b", all_subjects)
## differences between re.search() yes/no
##re.findall []
input_str = "Yes, my zip code is 12345. I heard that Gary's zip code is 23456. But 212 is not a zip code."
re.search(r"\b\d{5}\b", input_str)
re.findall(r"New York \b\w+\b", all_subjects)
re.findall(r"New York (\b\w+\b)", all_subjects) #the things in (): to group for the findall method
"""
Explanation: capturing
read the whole corpus in as one big string
End of explanation
"""
src = "this example has been used 423 times"
if re.search(r"\d\d\d\d", src):
print("yup")
else:
print("nope")
src = "this example has been used 423 times"
match = re.search(r"\d\d\d", src)
type(match)
print(match.start())
print(match.end())
print(match.group())
for line in subjects:
match = re.search(r"[A-Z]{15,}", line)
if match: #if find that match
print(match.group())
courses=[
]
print "Course catalog report:"
for item in courses:
match = re.search(r"^(\w+) (\d+): (.*)$", item)
print(match.group(1)) #group 1: find the item in first group
print("Course dept", match.group(1))
print("Course #", match.group(2))
print("Course title", match.group(3))
"""
Explanation: using re.search to capture
End of explanation
"""
|
AllenDowney/ModSimPy
|
soln/chap01soln.ipynb
|
mit
|
try:
import pint
except ImportError:
!pip install pint
import pint
try:
from modsim import *
except ImportError:
!pip install modsimpy
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Chapter 1
Copyright 2020 Allen Downey
License: Creative Commons Attribution 4.0 International
Jupyter
Welcome to Modeling and Simulation, welcome to Python, and welcome to Jupyter.
This is a Jupyter notebook, which is a development environment where you can write and run Python code. Each notebook is divided into cells. Each cell contains either text (like this cell) or Python code.
Selecting and running cells
To select a cell, click in the left margin next to the cell. You should see a blue frame surrounding the selected cell.
To edit a code cell, click inside the cell. You should see a green frame around the selected cell, and you should see a cursor inside the cell.
To edit a text cell, double-click inside the cell. Again, you should see a green frame around the selected cell, and you should see a cursor inside the cell.
To run a cell, hold down SHIFT and press ENTER.
If you run a text cell, Jupyter formats the text and displays the result.
If you run a code cell, Jupyter runs the Python code in the cell and displays the result, if any.
To try it out, edit this cell, change some of the text, and then press SHIFT-ENTER to format it.
Adding and removing cells
You can add and remove cells from a notebook using the buttons in the toolbar and the items in the menu, both of which you should see at the top of this notebook.
Try the following exercises:
From the Insert menu select "Insert cell below" to add a cell below this one. By default, you get a code cell, as you can see in the pulldown menu that says "Code".
In the new cell, add a print statement like print('Hello'), and run it.
Add another cell, select the new cell, and then click on the pulldown menu that says "Code" and select "Markdown". This makes the new cell a text cell.
In the new cell, type some text, and then run it.
Use the arrow buttons in the toolbar to move cells up and down.
Use the cut, copy, and paste buttons to delete, add, and move cells.
As you make changes, Jupyter saves your notebook automatically, but if you want to make sure, you can press the save button, which looks like a floppy disk from the 1990s.
Finally, when you are done with a notebook, select "Close and Halt" from the File menu.
Using the notebooks
The notebooks for each chapter contain the code from the chapter along with additional examples, explanatory text, and exercises. I recommend you
Read the chapter first to understand the concepts and vocabulary,
Run the notebook to review what you learned and see it in action, and then
Attempt the exercises.
If you try to work through the notebooks without reading the book, you're gonna have a bad time. The notebooks contain some explanatory text, but it is probably not enough to make sense if you have not read the book. If you are working through a notebook and you get stuck, you might want to re-read (or read!) the corresponding section of the book.
Installing modules
These notebooks use standard Python modules like NumPy and SciPy. I assume you already have them installed in your environment.
They also use two less common modules: Pint, which provides units, and modsim, which contains code I wrote specifically for this book.
The following cells check whether you have these modules already and tries to install them if you don't.
End of explanation
"""
!python --version
!jupyter-notebook --version
"""
Explanation: The first time you run this on a new installation of Python, it might produce a warning message in pink. That's probably ok, but if you get a message that says modsim.py depends on Python 3.7 features, that means you have an older version of Python, and some features in modsim.py won't work correctly.
If you need a newer version of Python, I recommend installing Anaconda. You'll find more information in the preface of the book.
You can find out what version of Python and Jupyter you have by running the following cells.
End of explanation
"""
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
"""
Explanation: Configuring Jupyter
The following cell:
Uses a Jupyter "magic command" to specify whether figures should appear in the notebook, or pop up in a new window.
Configures Jupyter to display some values that would otherwise be invisible.
Select the following cell and press SHIFT-ENTER to run it.
End of explanation
"""
meter = UNITS.meter
second = UNITS.second
"""
Explanation: The penny myth
The following cells contain code from the beginning of Chapter 1.
modsim defines UNITS, which contains variables representing pretty much every unit you've ever heard of. It uses Pint, which is a Python library that provides tools for computing with units.
The following lines create new variables named meter and second.
End of explanation
"""
a = 9.8 * meter / second**2
"""
Explanation: To find out what other units are defined, type UNITS. (including the period) in the next cell and then press TAB. You should see a pop-up menu with a list of units.
Create a variable named a and give it the value of acceleration due to gravity.
End of explanation
"""
t = 4 * second
"""
Explanation: Create t and give it the value 4 seconds.
End of explanation
"""
a * t**2 / 2
"""
Explanation: Compute the distance a penny would fall after t seconds with constant acceleration a. Notice that the units of the result are correct.
End of explanation
"""
# Solution
a * t
"""
Explanation: Exercise: Compute the velocity of the penny after t seconds. Check that the units of the result are correct.
End of explanation
"""
# Solution
# a + t
"""
Explanation: Exercise: Why would it be nonsensical to add a and t? What happens if you try?
End of explanation
"""
h = 381 * meter
"""
Explanation: The error messages you get from Python are big and scary, but if you read them carefully, they contain a lot of useful information.
Start from the bottom and read up.
The last line usually tells you what type of error happened, and sometimes additional information.
The previous lines are a "traceback" of what was happening when the error occurred. The first section of the traceback shows the code you wrote. The following sections are often from Python libraries.
In this example, you should get a DimensionalityError, which is defined by Pint to indicate that you have violated a rule of dimensional analysis: you cannot add quantities with different dimensions.
Before you go on, you might want to delete the erroneous code so the notebook can run without errors.
Falling pennies
Now let's solve the falling penny problem.
Set h to the height of the Empire State Building:
End of explanation
"""
t = sqrt(2 * h / a)
"""
Explanation: Compute the time it would take a penny to fall, assuming constant acceleration.
$ a t^2 / 2 = h $
$ t = \sqrt{2 h / a}$
End of explanation
"""
v = a * t
"""
Explanation: Given t, we can compute the velocity of the penny when it lands.
$v = a t$
End of explanation
"""
mile = UNITS.mile
hour = UNITS.hour
v.to(mile/hour)
"""
Explanation: We can convert from one set of units to another like this:
End of explanation
"""
# Solution
foot = UNITS.foot
pole_height = 10 * foot
h + pole_height
# Solution
pole_height + h
"""
Explanation: Exercise: Suppose you bring a 10 foot pole to the top of the Empire State Building and use it to drop the penny from h plus 10 feet.
Define a variable named foot that contains the unit foot provided by UNITS. Define a variable named pole_height and give it the value 10 feet.
What happens if you add h, which is in units of meters, to pole_height, which is in units of feet? What happens if you write the addition the other way around?
End of explanation
"""
# Solution
v_terminal = 18 * meter / second
# Solution
t1 = v_terminal / a
print('Time to reach terminal velocity', t1)
# Solution
h1 = a * t1**2 / 2
print('Height fallen in t1', h1)
# Solution
t2 = (h - h1) / v_terminal
print('Time to fall remaining distance', t2)
# Solution
t_total = t1 + t2
print('Total falling time', t_total)
"""
Explanation: Exercise: In reality, air resistance limits the velocity of the penny. At about 18 m/s, the force of air resistance equals the force of gravity and the penny stops accelerating.
As a simplification, let's assume that the acceleration of the penny is a until the penny reaches 18 m/s, and then 0 afterwards. What is the total time for the penny to fall 381 m?
You can break this question into three parts:
How long until the penny reaches 18 m/s with constant acceleration a.
How far would the penny fall during that time?
How long to fall the remaining distance with constant velocity 18 m/s?
Suggestion: Assign each intermediate result to a variable with a meaningful name. And assign units to all quantities!
End of explanation
"""
|
Mashimo/datascience
|
03-NLP/introNLTK.ipynb
|
apache-2.0
|
sampleText1 = "The Elephant's 4 legs: THE Pub! You can't believe it or can you, the believer?"
sampleText2 = "Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29."
"""
Explanation: Introduction to NLTK
We have seen how to do some basic text processing in Python, now we introduce an open source framework for natural language processing that can further help to work with human languages: NLTK (Natural Language ToolKit).
Tokenise a text
Let's start with a simple text in a Python string:
End of explanation
"""
import nltk
s1Tokens = nltk.word_tokenize(sampleText1)
s1Tokens
len(s1Tokens)
"""
Explanation: Tokens
The basic atomic part of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group.
We have seen how we can extract tokens by splitting the text at the blank spaces.
NLTK has a function word_tokenize() for it:
End of explanation
"""
s2Tokens = nltk.word_tokenize(sampleText2)
s2Tokens
"""
Explanation: 21 tokens extracted, which include words and punctuation.
Note that the tokens are different from what a split by blank spaces would obtain, e.g. "can't" is considered by NLTK to be TWO tokens: "can" and "n't" (= "not") while a tokeniser that splits text by spaces would consider it a single token: "can't".
Let's see another example:
End of explanation
"""
# If you would like to work with the raw text you can use 'bookRaw'
with open('../datasets/ThePrince.txt', 'r') as f:
bookRaw = f.read()
bookTokens = nltk.word_tokenize(bookRaw)
bookText = nltk.Text(bookTokens) # special format
nBookTokens= len(bookTokens) # or alternatively len(bookText)
print ("*** Analysing book ***")
print ("The book is {} chars long".format (len(bookRaw)))
print ("The book has {} tokens".format (nBookTokens))
"""
Explanation: And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time:
End of explanation
"""
text1 = "This is the first sentence. A liter of milk in the U.S. costs $0.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text1)
len(sentences)
sentences
"""
Explanation: As mentioned above, the NLTK tokeniser works in a more sophisticated way than just splitting by spaces, therefore this time we got more tokens.
Sentences
NLTK has a function to tokenise a text not in words but in sentences.
End of explanation
"""
sentences = nltk.sent_tokenize(bookRaw) # extract sentences
nSent = len(sentences)
print ("The book has {} sentences".format (nSent))
print ("and each sentence has in average {} tokens".format (nBookTokens / nSent))
"""
Explanation: As you see, it is not splitting just after each full stop but checks whether the full stop is part of an acronym (U.S.) or a number (0.99).
It also correctly splits sentences after question or exclamation marks but not after commas.
End of explanation
"""
def get_top_words(tokens):
# Calculate frequency distribution
fdist = nltk.FreqDist(tokens)
return fdist.most_common()
topBook = get_top_words(bookTokens)
# Output top 20 words
topBook[:20]
"""
Explanation: Most common tokens
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
The NLTK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token.
Its most_common() method then returns a list of tuples where each tuple is of the form (token, frequency). The list is sorted in descending order of frequency.
End of explanation
"""
topWords = [(freq, word) for (word,freq) in topBook if word.isalpha() and freq > 400]
topWords
"""
Explanation: Comma is the most common: we need to remove the punctuation.
Most common alphanumeric tokens
We can use isalpha() to check if the token is a word and not punctuation.
End of explanation
"""
def preprocessText(text, lowercase=True):
if lowercase:
tokens = nltk.word_tokenize(text.lower())
else:
tokens = nltk.word_tokenize(text)
return [word for word in tokens if word.isalpha()]
bookWords = preprocessText(bookRaw)
topBook = get_top_words(bookWords)
# Output top 20 words
topBook[:20]
print ("*** Analysing book ***")
print ("The text has now {} words (tokens)".format (len(bookWords)))
"""
Explanation: We can also remove any capital letters before tokenising:
End of explanation
"""
meaningfulWords = [word for (word,freq) in topBook if len(word) > 5 and freq > 80]
sorted(meaningfulWords)
"""
Explanation: Now we removed the punctuation and the capital letters but the most common token is "the", not a significant word ...
As we have seen last time, these are so-called stop words that are very common and are normally stripped from a text when doing these kind of analysis.
Meaningful most common tokens
A simple approach could be to filter the tokens that have a length greater than 5 and a frequency of more than 80.
End of explanation
"""
from nltk.corpus import stopwords
stopwordsEN = set(stopwords.words('english')) # english language
betterWords = [w for w in bookWords if w not in stopwordsEN]
topBook = get_top_words(betterWords)
# Output top 20 words
topBook[:20]
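# Quick check (a sketch, assuming the stopwords corpus is installed): the
# languages NLTK ships stop word lists for, as mentioned in the notes below.
print(stopwords.fileids())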
"""
Explanation: This would work but would also leave out tokens such as I and you which are actually significant.
The better approach - as we have seen earlier - is to remove stopwords using external files containing the stop words.
NLTK has a corpus of stop words in several languages:
End of explanation
"""
'princes' in betterWords
betterWords.count("prince") + betterWords.count("princes")
"""
Explanation: Now we excluded words such as the but we can improve further the list by looking at semantically similar words, such as plural and singular versions.
End of explanation
"""
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
"""
Explanation: Stemming
Above, in the list of words we have both prince and princes which are respectively the singular and plural version of the same word (the stem). The same would happen with verb conjugation (love and loving are considered different words but are actually inflections of the same verb).
Stemmer is the tool that reduces such inflectional forms into their stem, base or root form and NLTK has several of them (each with a different heuristic algorithm).
End of explanation
"""
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
"""
Explanation: And now we apply one of the NLTK stemmer, the Porter stemmer:
End of explanation
"""
stemmedWords = [porter.stem(w) for w in betterWords]
topBook = get_top_words(stemmedWords)
topBook[:20] # Output top 20 words
"""
Explanation: As you see, all 5 different words have been reduced to the same stem and would be now the same lexical token.
End of explanation
"""
from nltk.stem.snowball import SnowballStemmer
stemmerIT = SnowballStemmer("italian")
inputIT = "Io ho tre mele gialle, tu hai una mela gialla e due pere verdi"
wordsIT = inputIT.split(' ')
[stemmerIT.stem(w) for w in wordsIT]
"""
Explanation: Now the word princ is counted 281 times, exactly like the sum of prince and princes.
A note here: Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes.
Prince and princes become princ.
A different flavour is the lemmatisation that we will see in one second, but first a note about stemming in other languages than English.
Stemming in other languages
Snowball is an improvement created by Porter: a language to create stemmers and have rules for many more languages than English.
For example Italian:
End of explanation
"""
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
words1
[lemmatizer.lemmatize(w, 'n') for w in words1] # n = nouns
"""
Explanation: Lemma
Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.
While a stemmer operates on a single word without knowledge of the context, a lemmatiser can take the context in consideration.
NLTK also has a built-in lemmatiser, so let's see it in action:
End of explanation
"""
[lemmatizer.lemmatize(w, 'v') for w in words1] # v = verbs
"""
Explanation: We tell the lemmatiser that the words are nouns. In this case it maps words such as list (singular noun) and lists (plural noun) to the same lemma but leaves the other words as they are.
End of explanation
"""
words2 = ['good', 'better']
[porter.stem(w) for w in words2]
[lemmatizer.lemmatize(w, 'a') for w in words2]
"""
Explanation: We get a different result if we say that the words are verbs.
They all have the same lemma; in fact they could all be different inflections or conjugations of the same verb.
The type of words that can be used are:
'n' = noun, 'v'=verb, 'a'=adjective, 'r'=adverb
End of explanation
"""
lemmatisedWords = [lemmatizer.lemmatize(w, 'n') for w in betterWords]
topBook = get_top_words(lemmatisedWords)
topBook[:20] # Output top 20 words
"""
Explanation: It works with different adjectives: it doesn't look only at prefixes and suffixes.
You would wonder why stemmers are used, instead of always using lemmatisers: stemmers are much simpler, smaller and faster and for many applications good enough.
Now we lemmatise the book:
End of explanation
"""
text1 = "Children shouldn't drink a sugary drink before bed."
tokensT1 = nltk.word_tokenize(text1)
nltk.pos_tag(tokensT1)
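# Sketch (not in the original notebook): POS-aware lemmatisation, combining
# nltk.pos_tag with the WordNet lemmatiser. penn_to_wordnet is a hypothetical
# helper that maps Penn Treebank tags to the POS letters the lemmatiser
# expects; unknown tags fall back to noun.
def penn_to_wordnet(tag):
    if tag.startswith('J'):
        return 'a'
    elif tag.startswith('V'):
        return 'v'
    elif tag.startswith('R'):
        return 'r'
    return 'n'

taggedT1 = nltk.pos_tag(tokensT1)
[lemmatizer.lemmatize(w, penn_to_wordnet(t)) for (w, t) in taggedT1]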
"""
Explanation: Yes, the lemma now is prince.
But note that we consider all words in the book as nouns, while actually a proper way would be to apply the correct type to each single word.
Part of speech (PoS)
In traditional grammar, a part of speech (abbreviated form: PoS or POS) is a category of words which have similar grammatical properties.
For example, an adjective (red, big, quiet, ...) describe properties while a verb (throw, walk, have) describe actions or states.
Commonly listed parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection.
End of explanation
"""
nltk.help.upenn_tagset('RB')
"""
Explanation: The NLTK function pos_tag() will tag each token with the estimated PoS.
NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function:
End of explanation
"""
tokensAndPos = nltk.pos_tag(bookTokens)
posList = [thePOS for (word, thePOS) in tokensAndPos]
fdistPos = nltk.FreqDist(posList)
fdistPos.most_common(5)
nltk.help.upenn_tagset('IN')
"""
Explanation: Which are the most common PoS in The Prince book?
End of explanation
"""
# Parsing sentence structure
text2 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text2)
for tree in trees:
print(tree)
"""
Explanation: It's not nouns (NN) but IN tags, i.e. prepositions or subordinating conjunctions.
Extra note: Parsing the grammar structure
Words can be ambiguous and sometimes is not easy to understand which kind of POS is a word, for example in the sentence "visiting aunts can be a nuisance", is visiting a verb or an adjective?
Tagging a PoS depends on the context, which can be ambiguous.
Making sense of a sentence is easier if it follows a well-defined grammatical structure, such as : subject + verb + object
NLTK allows to define a formal grammar which can then be used to parse a text. The NLTK ChartParser is a procedure for finding one or more trees (sentences have internal organisation that can be represented using a tree) corresponding to a grammatically well-formed sentence.
End of explanation
"""
|
nimagh/MachineLearning
|
GaussianProcesses/GRP.ipynb
|
gpl-2.0
|
# missing imports added here (assumed from usage later in this notebook)
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fmin

def get_kernel(X1,X2,sigmaf,l,sigman):
k = lambda x1,x2,sigmaf,l,sigman:(sigmaf**2)*np.exp(-(1/float(2*(l**2)))*np.dot((x1-x2),(x1-x2).T)) + (sigman**2);
K = np.zeros((X1.shape[0],X2.shape[0]))
for i in range(0,X1.shape[0]):
for j in range(0,X2.shape[0]):
if i==j:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,sigman);
else:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,0);
return K
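# Sketch (not part of the original notebook): draw a few random functions
# from the GP prior defined by this kernel, to visualise the "distribution
# over functions" idea described below. The grid and hyperparameter values
# here are illustrative assumptions.
x_demo = np.linspace(-2, 2, 50).reshape(-1, 1)
K_demo = get_kernel(x_demo, x_demo, sigmaf=1.0, l=0.5, sigman=1e-3)
for _ in range(3):
    f_sample = np.random.multivariate_normal(np.zeros(x_demo.shape[0]), K_demo)
    plt.plot(x_demo, f_sample)
plt.title('Samples from the GP prior')
plt.show()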
"""
Explanation: Gaussian Process Regression (GPR):
We have noisy sensor readings (indicated by errorbars) and we want to predict readings for desired new points. In GPs every new point will be a dimension of an infinite dimensional multivariate gaussian distribution (usually mean zero) in which the covariance between points is defined by the kernel function.
In other words, in GPs we can have an infinite number of random variables, and any finite subset of them will be jointly gaussian as well. A GP can then be called a distribution over functions where the value of each function at a certain point will be a gaussian RV.
By fitting a GP to our (n) training points (observations) we can get them nearly back with a single sample from an n-dimensional GP.
Train set points:
$$\mathbf{x} = \begin{bmatrix}
x_1 & x_2 & \cdots & x_n
\end{bmatrix}$$
Test set points:
$$\mathbf{x_*} = \begin{bmatrix}
x_{*1} & x_{*2} & \cdots & x_{*m}
\end{bmatrix}$$
Kernel function with integrated reading noise:
$$ k(x,x') = \sigma_f^2 e^{\frac{-(x-x')^2}{2l^2}} + \sigma_n^2\delta(x,x')$$
and then our GP kernel will read:
\begin{equation}
\begin{bmatrix}
\mathbf{y}\\
\mathbf{y_*}
\end{bmatrix}
\sim N\Bigl(
0,\begin{bmatrix}
\mathbf{K} & \mathbf{K_*}^T \\
\mathbf{K_*} & \mathbf{K_{**}} \\
\end{bmatrix}
\Bigr)
\end{equation}
where
$$\mathbf{K} = \begin{bmatrix}
k(x_1,x_1) & k(x_1,x_2) & \cdots & k(x_1,x_n) \\
k(x_2,x_1) & k(x_2,x_2) & \cdots & k(x_2,x_n) \\
\vdots & \vdots & \ddots & \vdots \\
k(x_n,x_1) & k(x_n,x_2) & \cdots & k(x_n,x_n) \\ \end{bmatrix}$$
$$\mathbf{K_*} = \begin{bmatrix}
k(x_{*1},x_1) & k(x_{*1},x_2) & \cdots & k(x_{*1},x_n) \\
k(x_{*2},x_1) & k(x_{*2},x_2) & \cdots & k(x_{*2},x_n) \\
\vdots & \vdots & \ddots & \vdots \\
k(x_{*m},x_1) & k(x_{*m},x_2) & \cdots & k(x_{*m},x_n) \\ \end{bmatrix}$$
$$\mathbf{K_{**}} = \begin{bmatrix}
k(x_{*1},x_{*1}) & k(x_{*1},x_{*2}) & \cdots & k(x_{*1},x_{*m}) \\
k(x_{*2},x_{*1}) & k(x_{*2},x_{*2}) & \cdots & k(x_{*2},x_{*m}) \\
\vdots & \vdots & \ddots & \vdots \\
k(x_{*m},x_{*1}) & k(x_{*m},x_{*2}) & \cdots & k(x_{*m},x_{*m}) \\ \end{bmatrix}$$
Next, for prediction we are interested in the conditional probability of $y_*$ given the data, which will also follow a gaussian distribution according to the equation below:
$$y_* \vert y \sim N\bigl(K_* K^{-1}y,\; K_{**}-K_* K^{-1}K_*^T\bigr)$$
the mean will be our best estimate and the variance will indicate our uncertainty.
We will define the kernel function here:
End of explanation
"""
n_pts = 1
x = np.array([-1.2, -1., -0.8, -0.6, -.4, -0.2, 0.0, 0.2, 0.4, 0.6],ndmin=2).T
y = np.array([-2, -1, -0.5, -0.25, 0.5, 0.4, 0.0, 1.2, 1.7, 1.4],ndmin=2).T
x_predict = np.array([0.8,]).reshape(n_pts,1)
sigman = 0.1; # noise of the reading
sigmaf = 1.1; # parameters of the GP - next to be computed by optimization
l = 0.2; #length-scale of our GP with squared exponential kernel
K = get_kernel(x, x, sigmaf, l, sigman) #+ np.finfo(float).eps*np.identity(x.size) # numerically stable
K_s = get_kernel(x_predict, x, sigmaf, l, 0)
K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)
y_predict_mean = np.dot(np.dot(K_s,np.linalg.inv(K)),y).reshape(n_pts,1)
y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(np.linalg.inv(K),K_s.T)))).reshape(n_pts,1)
plt.errorbar(x[:,0], y[:,0], sigman*np.ones_like(y),linestyle='None',marker = '.')
plt.errorbar(x_predict[:,0], y_predict_mean[:,0], y_predict_var[:,0], linestyle='None',marker = '.')
plt.xlabel('x');plt.ylabel('y');plt.title('single point prediction')
plt.show()
y_predict_var.shape
"""
Explanation: Single point prediction with GPR
We won't use optimized kernel hyperparameters here; we just guess some values and predict the target for a single point.
End of explanation
"""
n_pts = 100
sigmaf=.1; l=0.5;
x_predict = np.linspace(-1.7,1,n_pts).reshape(n_pts,1)
K = get_kernel(x, x, sigmaf, l, sigman) #+ np.finfo(float).eps*np.identity(x.size) # numerically stable
K_s = get_kernel(x_predict, x, sigmaf, l, 0)
K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)
y_predict_mean = np.dot(np.dot(K_s,np.linalg.inv(K)),y).reshape(n_pts,1)
y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(np.linalg.inv(K),K_s.T)))).reshape(n_pts,1)
plt.errorbar(x[:,0], y[:,0], sigman*np.ones_like(y),linestyle='None',marker = '.')
plt.errorbar(x_predict[:,0], y_predict_mean[:,0], y_predict_var[:,0], linestyle='None',marker = '.')
plt.xlabel('x');plt.ylabel('y');plt.title('multiple prediction with non-optimized hyperparamters');
plt.show()
"""
Explanation: Next we will predict 100 points:
End of explanation
"""
p = [1, 0.1]; # inital start point
fun = lambda p: 0.5*(np.dot(y.T,np.dot(np.linalg.inv(get_kernel(x,x,p[0],p[1],sigman)),y)) + np.log(np.linalg.det(get_kernel(x,x,p[0],p[1],sigman))) + x.shape[0]*np.log(2*np.pi));
p = fmin(func=fun, x0=p)
sigmaf, l = p;
print(sigmaf, l)
"""
Explanation: GPR parameter optimization
Our last model doesn't fit the data that well, and that is because the kernel parameters aren't adjusted to the data.
To address this problem we will determine the best hyperparameters of our GP. We will use the MAP estimate:
$$\log p(y \vert x,\theta) = -\frac{1}{2}\left(y^TK^{-1}y + \log(\det(K)) + n\log 2\pi\right) $$
End of explanation
"""
K = get_kernel(x, x, sigmaf, l, sigman) #+ np.finfo(float).eps*np.identity(x.size) # numerically stable
K_s = get_kernel(x_predict, x, sigmaf, l, 0)
K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)
y_predict_mean = np.dot(np.dot(K_s,np.linalg.inv(K)),y).reshape(n_pts,1)
y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(np.linalg.inv(K),K_s.T)))).reshape(n_pts,1)
plt.errorbar(x[:,0], y[:,0], sigman*np.ones_like(y),linestyle='None',marker = '.')
plt.errorbar(x_predict[:,0], y_predict_mean[:,0], y_predict_var[:,0], linestyle='None',marker = '.')
plt.xlabel('x');plt.ylabel('y');plt.title('multiple prediction with MAP hyperparamters')
plt.show()
"""
Explanation: With the optimized parameters we will check our GP again:
End of explanation
"""
|
google/eng-edu
|
ml/cc/exercises/numpy_ultraquick_tutorial.ipynb
|
apache-2.0
|
import numpy as np
"""
Explanation: NumPy UltraQuick Tutorial
NumPy is a Python library for creating and manipulating vectors and matrices. This Colab is not an exhaustive tutorial on NumPy. Rather, this Colab teaches you just enough to use NumPy in the Colab exercises of Machine Learning Crash Course.
About Colabs
Machine Learning Crash Course uses Colaboratories (Colabs) for all programming exercises. Colab is Google's implementation of Jupyter Notebook. Like all Jupyter Notebooks, a Colab consists of two kinds of components:
Text cells, which contain explanations. You are currently reading a text cell.
Code cells, which contain Python code for you to run. Code cells have a light gray background.
You read the text cells and run the code cells.
Running code cells
You must run code cells in order. In other words, you may only run a code cell once all the code cells preceding it have already been run.
To run a code cell:
Place the cursor anywhere inside the [ ] area at the top left of a code cell. The area inside the [ ] will display an arrow.
Click the arrow.
Alternatively, you may invoke Runtime->Run all. Note, though, that some of the code cells will fail because not all the coding is complete. (You'll complete the coding as part of the exercise.)
If you see errors...
The most common reasons for seeing code cell errors are as follows:
You didn't run all of the code cells preceding the current code cell.
If the code cell is labeled as a Task, then:
You haven't yet written the code that implements the task.
You did write the code, but the code contained errors.
Import NumPy module
Run the following code cell to import the NumPy module:
End of explanation
"""
one_dimensional_array = np.array([1.2, 2.4, 3.5, 4.7, 6.1, 7.2, 8.3, 9.5])
print(one_dimensional_array)
"""
Explanation: Populate arrays with specific numbers
Call np.array to create a NumPy matrix with your own hand-picked values. For example, the following call to np.array creates an 8-element vector:
End of explanation
"""
two_dimensional_array = np.array([[6, 5], [11, 7], [4, 8]])
print(two_dimensional_array)
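# Quick illustration (an added sketch, not in the original Colab) of the
# zero/one helpers mentioned in the text: np.zeros and np.ones take the
# desired shape and fill the array accordingly.
print(np.zeros(3))
print(np.ones((2, 3)))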
"""
Explanation: You can also use np.array to create a two-dimensional matrix. To create a two-dimensional matrix, specify an extra layer of square brackets. For example, the following call creates a 3x2 matrix:
End of explanation
"""
sequence_of_integers = np.arange(5, 12)
print(sequence_of_integers)
"""
Explanation: To populate a matrix with all zeroes, call np.zeros. To populate a matrix with all ones, call np.ones.
Populate arrays with sequences of numbers
You can populate an array with a sequence of numbers:
End of explanation
"""
random_integers_between_50_and_100 = np.random.randint(low=50, high=101, size=(6))
print(random_integers_between_50_and_100)
"""
Explanation: Notice that np.arange generates a sequence that includes the lower bound (5) but not the upper bound (12).
Populate arrays with random numbers
NumPy provides various functions to populate matrices with random numbers across certain ranges. For example, np.random.randint generates random integers between a low and high value. The following call populates a 6-element vector with random integers between 50 and 100.
End of explanation
"""
random_floats_between_0_and_1 = np.random.random([6])
print(random_floats_between_0_and_1)
"""
Explanation: Note that the highest integer generated by np.random.randint is one less than the high argument.
To create random floating-point values between 0.0 and 1.0, call np.random.random. For example:
End of explanation
"""
random_floats_between_2_and_3 = random_floats_between_0_and_1 + 2.0
print(random_floats_between_2_and_3)
"""
Explanation: Mathematical Operations on NumPy Operands
If you want to add or subtract two vectors or matrices, linear algebra requires that the two operands have the same dimensions. Furthermore, if you want to multiply two vectors or matrices, linear algebra imposes strict rules on the dimensional compatibility of operands. Fortunately, NumPy uses a trick called broadcasting to virtually expand the smaller operand to dimensions compatible for linear algebra. For example, the following operation uses broadcasting to add 2.0 to the value of every item in the vector created in the previous code cell:
End of explanation
"""
random_integers_between_150_and_300 = random_integers_between_50_and_100 * 3
print(random_integers_between_150_and_300)
"""
Explanation: The following operation also relies on broadcasting to multiply each cell in a vector by 3:
End of explanation
"""
feature = ? # write your code here
print(feature)
label = ? # write your code here
print(label)
#@title Double-click to see a possible solution to Task 1.
feature = np.arange(6, 21)
print(feature)
label = (feature * 3) + 4
print(label)
"""
Explanation: Task 1: Create a Linear Dataset
Your goal is to create a simple dataset consisting of a single feature and a label as follows:
Assign a sequence of integers from 6 to 20 (inclusive) to a NumPy array named feature.
Assign 15 values to a NumPy array named label such that:
label = (3)(feature) + 4
For example, the first value for label should be:
label = (3)(6) + 4 = 22
End of explanation
"""
noise = ? # write your code here
print(noise)
label = ? # write your code here
print(label)
#@title Double-click to see a possible solution to Task 2.
noise = (np.random.random([15]) * 4) - 2
print(noise)
label = label + noise
print(label)
"""
Explanation: Task 2: Add Some Noise to the Dataset
To make your dataset a little more realistic, insert a little random noise into each element of the label array you already created. To be more precise, modify each value assigned to label by adding a different random floating-point value between -2 and +2.
Don't rely on broadcasting. Instead, create a noise array having the same dimension as label.
End of explanation
"""
|
OpenChemistry/mongochemserver
|
girder/notebooks/notebooks/notebooks/ChemML.ipynb
|
bsd-3-clause
|
import openchemistry as oc
"""
Explanation: Open Chemistry JupyterLab ChemML calculations
End of explanation
"""
mol = oc.find_structure('InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H')
mol.structure.show()
"""
Explanation: Start by finding structures using online databases (or cached local results). This uses an InChI for a known structure that will be added if not already present using Open Babel.
End of explanation
"""
image_name = 'openchemistry/chemml:0.6.0'
input_parameters = {}
"""
Explanation: Set up the calculation, by specifying the name of the Docker image that will be used, and by providing input parameters that are known to the specific image
End of explanation
"""
result = mol.calculate(image_name, input_parameters)
result.properties.show()
"""
Explanation: Predict Properties from ML Model
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/sandbox-2/aerosol.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-2', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
poldrack/fmri-analysis-vm
|
analysis/orthogonalization/orthogonalization.ipynb
|
mit
|
%pylab inline
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(precision=2)
npts=100
X = np.random.multivariate_normal([0,0],[[1,0.5],[0.5,1]],npts)
X = X-np.mean(X,0)
params = [1,2]
y_noise = 0.2
Y = np.dot(X,params) + y_noise*np.random.randn(npts)
Y = Y-np.mean(Y) # remove mean so we can skip ones in design mtx
"""
Explanation: In the analysis of neuroimaging data using general linear models (GLMs), it is common to find that regressors of interest
are correlated with one another. While this inflates the variance of the estimated parameters, the GLM ensures that the
estimated parameters only reflect the unique variance associated with the particular regressor; any shared variance
between regressors, while accounted for in the total model variance, is not reflected in the individual parameter
estimates. In general, this is as it should be; when it is not possible to uniquely attribute variance to any
particular regressor, then it should be left out.
Unfortunately, there is a tendency within the fMRI literature to overthrow this feature of the GLM by "orthogonalizing"
variables that are correlated. This, in effect, assigns the shared variance to one of the correlated variables based
on the experimenter's decision. While statistically valid, this raises serious conceptual concerns about the
interpretation of the resulting parameter estimates.
The first point to make is that, contrary to claims often seen in fMRI papers, the presence of correlated regressors
does not require the use of orthogonalization; in fact, in our opinion there are very few cases in which it is appropriate
to use orthogonalization, and its use will most often result in problematic conclusions.
What is orthogonalization?
As an example of how the GLM deals with correlated regressors and how this is affected by orthogonalization,
we first generate some synthetic data to work with.
End of explanation
"""
for i in range(2):
print('correlation(X[%d],Y)'%i, '= %4.3f' % np.corrcoef(X[:,i],Y)[0,1])
plt.subplot(1,2,i+1)
plt.scatter(X[:,i],Y)
"""
Explanation: Plot the relations between the two columns in X and the Y variable.
End of explanation
"""
params_est = np.linalg.lstsq(X,Y)[0]
print(params_est)
"""
Explanation: Now let's compute the parameters for the two columns in X using linear regression. They should come out very close
to the values specified for params above.
End of explanation
"""
x0_slope = np.linalg.lstsq(X[:,0].reshape((npts,1)), X[:,1].reshape((npts,1)))[0]
X_orth = X.copy()
X_orth[:,1] = X[:,1] - X[:,0]*x0_slope
print('Correlation matrix for original design matrix')
print(np.corrcoef(X.T))
print('Correlation matrix for orthogonalized design matrix')
print(np.corrcoef(X_orth.T))
"""
Explanation: Now let's orthogonalize the second regressor (X[1]) with respect to the first (X[0]) and create a new orthogonalized
design matrix X_orth. One way to do this is to fit a regression and then take the residuals.
End of explanation
"""
params_est_orth = np.linalg.lstsq(X_orth, Y)[0]
print(params_est_orth)
"""
Explanation: As intended, the correlation between the two regressors is effectively zero after orthogonalization. Now
let's estimate the model parameters using the orthogonalized design matrix:
End of explanation
"""
# Make X nptsx10
X = np.random.normal(0,1,(npts,10))
X = X - X.mean(axis=0)
X0 = X[:,:2]
X1 = X[:,2:]
# Orthogonalizing X0 with respect to X1:
X0_orthog_wrt_X1 = X0 - np.dot(X1,np.linalg.pinv(X1)).dot(X0)
# reconstruct the new X matrix : Xorth
Xorth = np.hstack((X0_orthog_wrt_X1, X1))
# check that the correlation of the first two regressors with the others is 0
# look at the first 5 regressors
print(np.corrcoef(Xorth.T)[:5,:5])
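# Quick sanity check (added for illustration; not part of the original notebook).
# With a synthetic response built from X, the estimates for the orthogonalized block
# X0 (the first two columns) are identical whether we fit the original design X or the
# orthogonalized design Xorth; only the estimates for X1 absorb the shared variance.
Y_demo = X.dot(np.arange(1, 11)) + 0.1 * np.random.randn(npts)
print(np.linalg.lstsq(X, Y_demo)[0][:2])      # X0 estimates from the original design
print(np.linalg.lstsq(Xorth, Y_demo)[0][:2])  # X0 estimates from the orthogonalized design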
"""
Explanation: Note that the parameter estimate for the orthogonalized regressor is exactly the same as it was in the original model;
it is only the estimate for the other (orthogonalized-against) regressor that changes after orthogonalization. That's
because shared variance between the two regressors has been assigned to X[0], whereas previously it was unassigned.
Note also that testing the second regressor will yield exactly the same test value. Testing for the first regressor, on the contrary, will yield a much smaller p value as the variance explained by this regressor contains the shared variance of both regressors.
More generally, orthogonalizing the first two regressors $X_0$ of the design matrix $X$ will look like:
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.16/_downloads/plot_dics.ipynb
|
bsd-3-clause
|
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD (3-clause)
"""
Explanation: DICS for power mapping
In this tutorial, we're going to simulate two signals originating from two
locations on the cortex. These signals will be sine waves, so we'll be looking
at oscillatory activity (as opposed to evoked activity).
We'll be using dynamic imaging of coherent sources (DICS) [1]_ to map out
spectral power along the cortex. Let's see if we can find our two simulated
sources.
End of explanation
"""
import os.path as op
import numpy as np
from scipy.signal import welch, coherence
from mayavi import mlab
from matplotlib import pyplot as plt
import mne
from mne.simulation import simulate_raw
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
# Suppress irrelevant output
mne.set_log_level('ERROR')
# We use the MEG and MRI setup from the MNE-sample dataset
data_path = sample.data_path(download=False)
subjects_dir = op.join(data_path, 'subjects')
mri_path = op.join(subjects_dir, 'sample')
# Filenames for various files we'll be using
meg_path = op.join(data_path, 'MEG', 'sample')
raw_fname = op.join(meg_path, 'sample_audvis_raw.fif')
trans_fname = op.join(meg_path, 'sample_audvis_raw-trans.fif')
src_fname = op.join(mri_path, 'bem/sample-oct-6-src.fif')
bem_fname = op.join(mri_path, 'bem/sample-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
cov_fname = op.join(meg_path, 'sample_audvis-cov.fif')
# Seed for the random number generator
rand = np.random.RandomState(42)
"""
Explanation: Setup
We first import the required packages to run this tutorial and define a list
of filenames for various things we'll be using.
End of explanation
"""
sfreq = 50. # Sampling frequency of the generated signal
times = np.arange(10. * sfreq) / sfreq # 10 seconds of signal
n_times = len(times)
def coh_signal_gen():
"""Generate an oscillating signal.
Returns
-------
signal : ndarray
The generated signal.
"""
t_rand = 0.001 # Variation in the instantaneous frequency of the signal
std = 0.1 # Std-dev of the random fluctuations added to the signal
base_freq = 10. # Base frequency of the oscillators in Hertz
n_times = len(times)
# Generate an oscillator with varying frequency and phase lag.
signal = np.sin(2.0 * np.pi *
(base_freq * np.arange(n_times) / sfreq +
np.cumsum(t_rand * rand.randn(n_times))))
# Add some random fluctuations to the signal.
signal += std * rand.randn(n_times)
# Scale the signal to be in the right order of magnitude (~100 nAm)
# for MEG data.
signal *= 100e-9
return signal
"""
Explanation: Data simulation
The following function generates a timeseries that contains an oscillator,
whose frequency fluctuates a little over time, but stays close to 10 Hz.
We'll use this function to generate our two signals.
End of explanation
"""
signal1 = coh_signal_gen()
signal2 = coh_signal_gen()
fig, axes = plt.subplots(2, 2, figsize=(8, 4))
# Plot the timeseries
ax = axes[0][0]
ax.plot(times, 1e9 * signal1, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)',
title='Signal 1')
ax = axes[0][1]
ax.plot(times, 1e9 * signal2, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2')
# Power spectrum of the first timeseries
f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)
ax = axes[1][0]
# Only plot the first 100 frequencies
ax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]],
ylabel='Power (dB)', title='Power spectrum of signal 1')
# Compute the coherence between the two timeseries
f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)
ax = axes[1][1]
ax.plot(f[:50], coh[:50], lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence',
title='Coherence between the timeseries')
fig.tight_layout()
"""
Explanation: Let's simulate two timeseries and plot some basic information about them.
End of explanation
"""
# The locations on the cortex where the signal will originate from. These
# locations are indicated as vertex numbers.
source_vert1 = 146374
source_vert2 = 33830
# The timeseries at each vertex: one part signal, one part silence
timeseries1 = np.hstack([signal1, np.zeros_like(signal1)])
timeseries2 = np.hstack([signal2, np.zeros_like(signal2)])
# Construct a SourceEstimate object that describes the signal at the cortical
# level.
stc = mne.SourceEstimate(
np.vstack((timeseries1, timeseries2)), # The two timeseries
vertices=[[source_vert1], [source_vert2]], # Their locations
tmin=0,
tstep=1. / sfreq,
subject='sample', # We use the brain model of the MNE-Sample dataset
)
"""
Explanation: Now we put the signals at two locations on the cortex. We construct a
:class:mne.SourceEstimate object to store them in.
The timeseries will have a part where the signal is active and a part where
it is not. The techniques we'll be using in this tutorial depend on being
able to contrast data that contains the signal of interest versus data that
does not (i.e. it contains only noise).
End of explanation
"""
snr = 1. # Signal-to-noise ratio. Decrease to add more noise.
"""
Explanation: Before we simulate the sensor-level data, let's define a signal-to-noise
ratio. You are encouraged to play with this parameter and see the effect of
noise on our results.
End of explanation
"""
# Read the info from the sample dataset. This defines the location of the
# sensors and such.
info = mne.io.read_info(raw_fname)
info.update(sfreq=sfreq, bads=[])
# Only use gradiometers
picks = mne.pick_types(info, meg='grad', stim=True, exclude=())
mne.pick_info(info, picks, copy=False)
# This is the raw object that will be used as a template for the simulation.
raw = mne.io.RawArray(np.zeros((info['nchan'], len(stc.times))), info)
# Define a covariance matrix for the simulated noise. In this tutorial, we use
# a simple diagonal matrix.
cov = mne.cov.make_ad_hoc_cov(info)
cov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR
# Simulate the raw data, with a lowpass filter on the noise
raw = simulate_raw(raw, stc, trans_fname, src_fname, bem_fname, cov=cov,
random_state=rand, iir_filter=[4, -4, 0.8])
"""
Explanation: Now we run the signal through the forward model to obtain simulated sensor
data. To save computation time, we'll only simulate gradiometer data. You can
try simulating other types of sensors as well.
Some noise is added based on the baseline noise covariance matrix from the
sample dataset, scaled to implement the desired SNR.
End of explanation
"""
t0 = raw.first_samp # First sample in the data
t1 = t0 + n_times - 1 # Sample just before the second trial
epochs = mne.Epochs(
raw,
events=np.array([[t0, 0, 1], [t1, 0, 2]]),
event_id=dict(signal=1, noise=2),
tmin=0, tmax=10,
preload=True,
)
# Plot some of the channels of the simulated data that are situated above one
# of our simulated sources.
picks = mne.pick_channels(epochs.ch_names, mne.read_selection('Left-frontal'))
epochs.plot(picks=picks)
"""
Explanation: We create an :class:mne.Epochs object containing two trials: one with
both noise and signal and one with just noise
End of explanation
"""
# Compute the inverse operator
fwd = mne.read_forward_solution(fwd_fname)
inv = make_inverse_operator(epochs.info, fwd, cov)
# Apply the inverse model to the trial that also contains the signal.
s = apply_inverse(epochs['signal'].average(), inv)
# Take the root-mean square along the time dimension and plot the result.
s_rms = np.sqrt((s ** 2).mean())
brain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1,
size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('MNE-dSPM inverse (RMS)', height=0.9)
"""
Explanation: Power mapping
With our simulated dataset ready, we can now pretend to be researchers that
have just recorded this from a real subject and are going to study what parts
of the brain communicate with each other.
First, we'll create a source estimate of the MEG data. We'll use both a
straightforward MNE-dSPM inverse solution for this, and the DICS beamformer
which is specifically designed to work with oscillatory data.
Computing the inverse using MNE-dSPM:
End of explanation
"""
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])
# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_fwd=True,
inversion='single', weight_norm=None)
filters_approach2 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_fwd=False,
inversion='matrix', weight_norm='unit-noise-gain')
# Compute the DICS power map by applying the spatial filters to the CSD matrix.
power_approach1, f = apply_dics_csd(csd_signal, filters_approach1)
power_approach2, f = apply_dics_csd(csd_signal, filters_approach2)
# Plot the DICS power maps for both approaches.
for approach, power in enumerate([power_approach1, power_approach2], 1):
brain = power.plot('sample', subjects_dir=subjects_dir, hemi='both',
figure=approach + 1, size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('DICS power map, approach %d' % approach, height=0.9)
"""
Explanation: We will now compute the cortical power map at 10 Hz using a DICS beamformer.
A beamformer will construct for each vertex a spatial filter that aims to
pass activity originating from the vertex, while dampening activity from
other sources as much as possible.
The :func:make_dics function has many switches that offer precise control
over the way the filter weights are computed. Currently, there is no clear
consensus regarding the best approach. This is why we will demonstrate two
approaches here:
1. The approach as described in [2]_, which first normalizes the forward solution and computes a vector beamformer.
2. The scalar beamforming approach based on [3]_, which uses weight normalization instead of normalizing the forward solution.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/feature_engineering/labs/4_keras_adv_feat_eng-lab.ipynb
|
apache-2.0
|
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import datetime
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from tensorflow.keras import models
# set TF error log verbosity
logging.getLogger("tensorflow").setLevel(logging.ERROR)
print(tf.version.VERSION)
"""
Explanation: Advanced Feature Engineering in Keras
Learning Objectives
Process temporal feature columns in Keras.
Use Lambda layers to perform feature engineering on geolocation features.
Create bucketized and crossed feature columns.
Introduction
In this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the Solution Notebook for reference.
Set up environment variables and load necessary libraries
We will start by importing the necessary libraries for this lab.
End of explanation
"""
if not os.path.isdir("../data"):
os.makedirs("../data")
# The `gsutil cp` command allows you to copy data between the bucket and current directory.
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/taxi-train1_toy.csv ../data
!gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/taxi-valid1_toy.csv ../data
"""
Explanation: Load taxifare dataset
The Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict.
First, let's download the .csv data by copying the data from a cloud storage bucket.
End of explanation
"""
!ls -l ../data/*.csv
!head ../data/*.csv
"""
Explanation: Let's check that the files were copied correctly and look like we expect them to.
End of explanation
"""
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
# A function to define features and labels
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# A utility method to create a tf.data dataset from a Pandas Dataframe
def load_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(pattern,
batch_size,
CSV_COLUMNS,
DEFAULTS)
dataset = dataset.map(features_and_labels) # features, label
if mode == 'train':
dataset = dataset.shuffle(1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
"""
Explanation: Create an input pipeline
Typically, you will use a two step process to build the pipeline. Step one is to define the columns of data; i.e., which column we're predicting for, and the default values. Step 2 is to define two functions - a function to define the features and label you want to use and a function to load the training data. Also, note that pickup_datetime is a string and we will need to handle this in our feature engineered model.
End of explanation
"""
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred): # Root mean square error
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
# feature_columns
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Constructor for DenseFeatures takes a list of numeric columns
dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8], just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
"""
Explanation: Create a Baseline DNN Model in Keras
Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a baseline regression model with no feature engineering; this baseline serves as the reference point for the feature-engineered model we build later.
End of explanation
"""
model = build_dnn_model()
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
"""
Explanation: We'll build our DNN model and inspect the model architecture.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 59621 * 5
NUM_EVALS = 5
NUM_EVAL_EXAMPLES = 14906
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
'train')
evalds = load_dataset('../data/taxi-valid*',
1000,
'eval').take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Train the model
To train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
We start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model.
End of explanation
"""
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
"""
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
End of explanation
"""
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
"""
Explanation: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. Next we note the fare price at this geolocation and pickup_datetime.
End of explanation
"""
# TODO 1a - Your code here
# TODO 1b - Your code here
# TODO 1c - Your code here
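# Illustrative sketch only (not the official solution): one way to start is to pull the
# hour of day out of the timestamp string with tf.strings ops, which can later be wrapped
# in a Lambda layer and a categorical/indicator feature column. The tensor below is a
# hypothetical example input.
example_ts = tf.constant(['2010-02-08 09:17:00 UTC'])
hour_of_day = tf.strings.to_number(tf.strings.substr(example_ts, 11, 2), out_type=tf.int32)
print(hour_of_day)  # -> [9]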
"""
Explanation: Improve Model Performance Using Feature Engineering
We now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation.
Temporal Feature Columns
Lab Task #1: Processing temporal feature columns in Keras
We incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature.
End of explanation
"""
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
"""
Explanation: Geolocation/Coordinate Feature Columns
The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allow us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
Computing Euclidean distance
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
End of explanation
"""
def scale_longitude(lon_column):
return (lon_column + 78)/8.
"""
Explanation: Scaling latitude and longitude
It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation features. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.
First, we create a function named 'scale_longitude', which takes the longitude values and adds 78 to each. The longitudes we scale lie roughly between -78 and -70, a span of 8 degrees, so adding 78 shifts each value into the range [0, 8]; dividing by 8 then returns a value scaled to [0, 1].
End of explanation
"""
def scale_latitude(lat_column):
return (lat_column - 37)/8.
"""
Explanation: Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that our scaling latitude ranges from -37 to -45. Thus, the value 37 is the minimal latitudinal value. The delta or difference between -37 and -45 is 8. We subtract 37 from each latitudinal value and then divide by 8 to return a scaled value.
End of explanation
"""
def transform(inputs, numeric_cols, string_cols, nbuckets):
print("Inputs before features transformation: {}".format(inputs.keys()))
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
# TODO 2a
# TODO -- Your code here.
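# One possible sketch (hypothetical, not the official solution): wrap the
# scale_longitude helper defined above in Lambda layers, e.g.
#   transformed['pickup_longitude'] = layers.Lambda(
#       scale_longitude, name='scale_pickup_lon')(inputs['pickup_longitude'])
#   transformed['dropoff_longitude'] = layers.Lambda(
#       scale_longitude, name='scale_dropoff_lon')(inputs['dropoff_longitude'])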
# Scaling latitude from range [37, 45] to [0, 1]
# TODO 2b
# TODO -- Your code here.
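# One possible sketch (hypothetical, not the official solution): same idea with
# the scale_latitude helper, e.g.
#   transformed['pickup_latitude'] = layers.Lambda(
#       scale_latitude, name='scale_pickup_lat')(inputs['pickup_latitude'])
#   transformed['dropoff_latitude'] = layers.Lambda(
#       scale_latitude, name='scale_dropoff_lat')(inputs['dropoff_latitude'])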
# add Euclidean distance
transformed['euclidean'] = layers.Lambda(
euclidean,
name='euclidean')([inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# TODO 3a
# TODO -- Your code here.
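# One possible sketch (hypothetical): bucketize the scaled coordinates into nbuckets bins, e.g.
#   buckets = np.linspace(0.0, 1.0, nbuckets).tolist()
#   b_plat = fc.bucketized_column(feature_columns['pickup_latitude'], buckets)
#   b_dlat = fc.bucketized_column(feature_columns['dropoff_latitude'], buckets)
#   b_plon = fc.bucketized_column(feature_columns['pickup_longitude'], buckets)
#   b_dlon = fc.bucketized_column(feature_columns['dropoff_longitude'], buckets)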
# TODO 3b
# TODO -- Your code here.
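# One possible sketch (hypothetical): cross the bucketized pickup and dropoff cells and
# pair them; `pd_pair` is what the embedding column below expects.
#   ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
#   dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
#   pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)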
# create embedding columns
feature_columns['pickup_and_dropoff'] = fc.embedding_column(pd_pair, 100)
print("Transformed features: {}".format(transformed.keys()))
print("Feature columns: {}".format(feature_columns.keys()))
return transformed, feature_columns
"""
Explanation: Putting it all together
We will create a function called "euclidean" to initialize our geolocation parameters. We then create a function called transform. The transform function passes our numerical and string column features as inputs to the model, scales geolocation features, then creates the Euclidean distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features.
Lab Task #2: We will use Lambda layers to create two new "geo" functions for our model.
Lab Task #3: Creating the bucketized and crossed feature columns
End of explanation
"""
NBUCKETS = 10
# DNN MODEL
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(inputs,
numeric_cols=NUMERIC_COLS,
string_cols=STRING_COLS,
nbuckets=NBUCKETS)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
# two hidden layers of [32, 8], just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# Compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
"""
Explanation: Next, we'll create our DNN model now with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.
End of explanation
"""
tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR')
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
'train')
evalds = load_dataset('../data/taxi-valid*',
1000,
'eval').take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS+3,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Let's see how our model architecture has changed now.
End of explanation
"""
plot_curves(history, ['loss', 'mse'])
"""
Explanation: As before, let's visualize the model's training and validation loss curves.
End of explanation
"""
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
"""
Explanation: Let's make a prediction with this new model with engineered features on the example we used above.
End of explanation
"""
|
solgaardlab/dphox
|
doc/source/01_fundamentals.ipynb
|
mit
|
import dphox as dp
import numpy as np
import holoviews as hv
hv.extension('bokeh')
"""
Explanation: Fundamentals: patterns and curves
A Pattern in dphox is analogous to shapely's MultiPolygon, and contains a set of polygons represented by a list of $2 \times N$ numpy arrays.
A Pattern can be treated pretty much like a shapely geometry in many respects. While we wrap boolean operations in Pattern using shapely's API, we do not use shapely's transform operations. This is because those are not vectorized efficiently over all of the geometries as we do in Pattern.
A Curve in dphox is analogous to shapely's MultiLineString, and consists of a list of $2 \times N$ numpy arrays like Pattern, except we do not assume the first and last points are connected.
A path is a Pattern built from a Curve by giving it a thickness or width, which may vary along the curve.
dphox supports any curve or path that can be represented by piecewise parametric function(s): straight lines, circular and elliptical turns, Euler and Archimedean spiral turns, Manhattan routes, and much more. These are very useful for photonic and metal routing.
In dphox (and similar libraries such as gdspy), a parametric function is generally defined in terms of a variable $t \in [0, 1]$. This can be used to define both the curve and the varying widths along the curve.
The resolution, or number of evaluations of the path, is generally defined for any curve that isn't "straight", and we typically use 100 as the default here, though that can vary.
End of explanation
"""
pi = dp.text(r"$\pi$")
pi.port['p'] = dp.Port(3, 1)
pi.hvplot().opts(title='pi')
"""
Explanation: Patterns
A very important philosophy in dphox is to only implement things not already implemented in shapely unless there is a much more efficient batch implementation (e.g. vectorized transforms using numpy arrays). To this end, we will present many functionalities below that are very simple extensions of shapely transformations, owing to the seamless translation between Pattern and shapely geometries.
Text rendering
Using the dp.text function it is possible to render any text using LaTeX assuming you've installed the fonts in your computer or using default fonts (as is the case in a default Colab kernel). Behind the scenes, we leverage matplotlib's LaTeX path patches. Below, we will generate the symbol $\pi$ and manipulate it in the further examples.
We also add a port $p$ of width $1$ at some location $(3, 1)$ with angle $0$, and you should note how it transforms along with the overall geometry.
End of explanation
"""
pi1 = pi.copy.translate() # no translation
pi2 = pi.copy.translate(10) # translation by 10
pi3 = pi.copy.translate(10, 10) # translation by (10, 10)
b = dp.Pattern(pi1, pi2, pi3).bounds
(pi1.hvplot() * pi2.hvplot('blue') * pi3.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='translation')
"""
Explanation: translate
First let's experiment with translating the pattern. Note that I need to make a copy of the original pattern each time (this is a deepcopy) because I do not want to apply the transformations sequentially. The transformations also return the object itself.
End of explanation
"""
pi1 = pi.copy.rotate(45) # rotate by 45 degrees about the origin
pi2 = pi.copy.rotate(90) # rotate by 90 degrees about the origin
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot() * pi1.hvplot('blue') * pi2.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='rotation')
"""
Explanation: rotate
Now, let's rotate and see what happens, again using the copy trick.
End of explanation
"""
pi1 = pi.copy.rotate(45, pi.center) # rotate by 45 degrees about the center of the pattern
pi2 = pi.copy.rotate(90, pi.center) # rotate by 90 degrees about the center of the pattern
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot() * pi1.hvplot('blue') * pi2.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='rotation')
"""
Explanation: We can choose any point of rotation so let's also do this about the center.
End of explanation
"""
pi1 = pi.copy.scale(4, origin=pi.center) # scale by a factor of 4 about the center of the pattern
pi2 = pi.copy.scale(2, 2, pi.center) # scale by 2 in x and y about the center of the pattern
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot() * pi1.hvplot('blue') * pi2.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='scale')
"""
Explanation: scale
We can rescale our $\pi$ geometry in the x and/or y dimensions as follows.
End of explanation
"""
pi1 = pi.copy.skew(0.5, origin=pi.center) # skew in x about the center of the pattern
pi2 = pi.copy.skew(0, -0.5, pi.center) # skew in y about the center of the pattern
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot().opts(title='no skew') + pi1.hvplot('blue').opts(title='xskew') + pi2.hvplot('red').opts(title='yskew'))
"""
Explanation: skew
We can skew our $\pi$ geometry in the x and/or y dimensions as follows.
End of explanation
"""
circle = dp.Circle(5)
circle.align(pi)
b = dp.Pattern(circle, pi).bounds
(pi.hvplot() * circle.hvplot('green')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='align')
"""
Explanation: align
Sometimes, it might be easier to just align and/or stack designs next to each other, especially in cases where no port reference points / orientations are defined. In such a case, we may use the align, halign, valign functions. Inspiration for this feature comes from phidl.
End of explanation
"""
box = dp.Box((3, 3)) # centered at (0, 0) by default.
aligned_boxes = {
'default': box.copy.halign(circle),
'opposite=True': box.copy.halign(circle, opposite=True),
'left=False': box.copy.halign(circle, left=False),
'left=False,opposite=True': box.copy.halign(circle, left=False, opposite=True),
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(circle, bx, pi).bounds
plots.append(
(pi.hvplot() * circle.hvplot('green') * bx.hvplot('blue', plot_ports=False)).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
"""
Explanation: halign
Here we now align another smaller box to the edge of the circle using halign and valign.
End of explanation
"""
box.halign(circle, opposite=True) # to create a wider plot
aligned_boxes = {
'default': box.copy.valign(circle),
'opposite=True': box.copy.valign(circle, opposite=True),
'bottom=False': box.copy.valign(circle, bottom=False),
'bottom=False,opposite=True': box.copy.valign(circle, bottom=False, opposite=True),
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(circle, bx, pi).bounds
plots.append(
(pi.hvplot() * circle.hvplot('green') * bx.hvplot('blue', plot_ports=False)).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
"""
Explanation: valign
End of explanation
"""
box = dp.Box((3, 3))
box.port = {'n': dp.Port(a=45)} # 45 degree reference port.
aligned_boxes = {
'to n from origin': pi.copy.to(box.port['n']),
'to n from p': pi.copy.to(box.port['n'], from_port='p')
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(bx, box).bounds
plots.append(
(box.hvplot() * bx.hvplot('blue')).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
aligned_boxes = {
'to p from origin': box.copy.to(pi.port['p']),
'to p from n': box.copy.to(pi.port['p'], from_port='n')
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(bx, pi).bounds
plots.append(
(bx.hvplot() * pi.hvplot('blue')).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
"""
Explanation: to
The to command allows ports in different devices to be aligned to each other. If a from_port is not specified, the port is assumed to be at the origin $(0, 0)$ with an angle of $180^\circ$ in the reference plane of the pattern.
End of explanation
"""
straight_curve = dp.straight(3) # A straight segment of length 3.
straight_path = dp.straight(3).path(1) # A straight path of length 3 and width 1
straight_curve.hvplot().opts(title='straight curve', ylim=(-2, 2)) + straight_path.hvplot().opts(title='straight path', ylim=(-2, 2))
hv.DynamicMap(lambda width, length: dp.straight(length).path(width).hvplot().opts(
xlim=(0, 5), ylim=(-2, 2)),
kdims=['width', 'length']).redim.range(
width=(0.1, 0.5), length=(1, 5)).opts(framewise=True)
"""
Explanation: Curves and paths
Before we discuss the link, offset, symmetrize, loopify, and turn_connect operations, we will discuss the various fundamental building blocks or elements for curves and paths.
straight
A straight path or waveguide can be defined based on a width $w$ and length $\ell$ and simply consists of two points.
End of explanation
"""
turn_curve = dp.turn(5, 90) # A turn of radius 5.
turn_path = dp.turn(5, 90).path(1) # A turn of radius 5 and width 1
turn_curve.hvplot().opts(title='turn curve') + turn_path.hvplot().opts(title='turn path')
dmap = hv.DynamicMap(lambda width, radius, angle, euler: dp.turn(radius, angle, euler).path(width).hvplot().opts(
xlim=(-10, 10), ylim=(-10, 10)),
kdims=['width', 'radius', 'angle', 'euler'])
dmap.redim.range(width=(0.3, 0.7), radius=(3., 5.), angle=(-180, 180), euler=(0, 0.5)).redim.step(radius=0.1, euler=0.05).redim.default(angle=90, width=0.5, radius=5)
"""
Explanation: turn
A smooth turn can be defined based on a width $w$ or taper function $w(t)$, a radius $r$, and an Euler fraction $e$ (which linearly ramps the curvature to reduce photonic bend loss).
Note that the Euler parameter increases the length of the bend but takes up roughly the same bounding box area.
End of explanation
"""
cubic = dp.taper(5).path(dp.cubic_taper_fn(1, 0.5))
quad = dp.taper(5).path(dp.quad_taper_fn(1, 0.5))
linear = dp.taper(5).path(dp.linear_taper_fn(1, 0.5))
linear_plot = linear.hvplot().opts(title='linear taper (1 to 0.5)', ylim=(-2, 2))
quad_plot = quad.hvplot().opts(title='quadratic taper (1 to 0.5)', ylim=(-2, 2))
cubic_plot = cubic.hvplot().opts(title='cubic taper (1 to 0.5)', ylim=(-2, 2))
linear_plot + quad_plot + cubic_plot
def taper_plot(length, init_w, final_w):
cubic = dp.taper(length).path(dp.cubic_taper_fn(init_w, final_w))
quad = dp.taper(length).path(dp.quad_taper_fn(init_w, final_w))
linear = dp.taper(length).path(dp.linear_taper_fn(init_w, final_w))
linear_plot = linear.hvplot().opts(title=f'linear taper ({init_w} to {final_w})', xlim=(0, 10), ylim=(-5, 5))
quad_plot = quad.hvplot().opts(title=f'quadratic taper ({init_w} to {final_w})', xlim=(0, 10), ylim=(-5, 5))
cubic_plot = cubic.hvplot().opts(title=f'cubic taper ({init_w} to {final_w})', xlim=(0, 10), ylim=(-5, 5))
return linear_plot + quad_plot + cubic_plot
dmap = hv.DynamicMap(lambda length, init_w, final_w: taper_plot(length, init_w, final_w), kdims=['length', 'init_w', 'final_w'])
dmap.redim.range(length=(5., 10.), init_w=(3., 5.), final_w=(2., 6.)).redim.default(length=10)
"""
Explanation: taper
A taper follows the polynomial width function $w(t)$.
We typically define this based on a polynomial function $w(t) = w_0 + w_1 t + w_2 t^2 + w_3 t^3 + \cdots$, and dphox defines these polynomial taper functions explicitly. The nice thing about this form of $w(t)$ is that the sum of the coefficients gives the overall width at the end ($t = 1$), while $w_0$ gives the initial width.
Why use a nonlinear taper?
- A quadratic taper is the minimal polynomial that allows for a $C_2$ smooth tapering transition at one end.
- A cubic taper is the minimal polynomial that allows for $C_2$ smooth tapering transitions at both ends.
End of explanation
"""
curve = dp.arc(120, 5)
path = curve.path(1)
path_taper = curve.path(dp.cubic_taper_fn(0.5, 2))
arc_curve_plot = curve.hvplot().opts(xlim=(0, 6), ylim=(-5, 5), title='arc curve')
arc_path_plot = path.hvplot().opts(xlim=(0, 6), ylim=(-5, 5), title='arc path')
arc_path_taper_plot = path_taper.hvplot().opts(xlim=(0, 6), ylim=(-5, 5), title='arc path, cubic taper')
arc_curve_plot + arc_path_plot + arc_path_taper_plot
"""
Explanation: arc
An arc of specified angle $\alpha$ and radius $r$, similar to a circular bend except that now the center is at the origin.
End of explanation
"""
curve = dp.bezier_sbend(bend_x=15, bend_y=10)
path = dp.bezier_sbend(15, 10).path(1)
path_taper = dp.bezier_sbend(15, 10).path(dp.cubic_taper_fn(0.5, 2))
curve.hvplot().opts(title='bezier curve') + path.hvplot().opts(title='bezier path') + path_taper.hvplot().opts(title='bezier path, cubic taper')
"""
Explanation: bezier_sbend
An sbend following a classic cubic, 4-pole bezier structure defined based on a width $w$ or taper function $w(t)$, a bend width displacement $\delta x$, and a bend height displacement $\delta y$. The poles are placed at $(0, 0), (\delta x / 2, 0), (\delta x / 2, \delta y), (\delta x, \delta y)$.
End of explanation
"""
curve = dp.turn_sbend(height=5, radius=5)
path = dp.turn_sbend(5, 5).path(1)
path_taper = dp.turn_sbend(5, 5).interpolated.path(dp.cubic_taper_fn(0.5, 2))
curve.hvplot().opts(title='turn_sbend curve') + path.hvplot().opts(title='turn_sbend path') + path_taper.hvplot().opts(title='turn_sbend path, cubic taper')
"""
Explanation: turn_sbend
An sbend based on circular/Euler turns rather than bezier curves, which involves bending up and then down by the same angle (assumed to be at most 90 degrees). The input parameters are an effective radius $r$ and a bend height $\delta y$ for the sbend. If twice the radius is smaller than the bend height, we use 90 degree turns and add a straight segment to cover the remaining bend height.
End of explanation
"""
def racetrack(radius: float, length: float):
return dp.link(dp.left_uturn(radius), length, dp.left_uturn(radius), length)
def trombone(radius: float, length: float):
return dp.link(dp.left_turn(radius), length, dp.right_uturn(radius), length, dp.left_turn(radius))
racetrack_curve = racetrack(5, 10)
trombone_curve = trombone(5, 10)
racetrack_plot = racetrack_curve.path(1).hvplot(alpha=0.2) * racetrack_curve.hvplot(alternate_color='green', line_width=4)
trombone_plot = trombone_curve.path(2).hvplot(alpha=0.2) * trombone_curve.hvplot(alternate_color='green', line_width=4)
(racetrack_plot.opts(title='racetrack') + trombone_plot.opts(title='trombone')).opts(shared_axes=False)
"""
Explanation: Operations
link
The link operation is your friend. It allows you to compose elements into a full path. Think of link like building a road. As an example of link consider the trombone and racetrack functions below (also defined with more options in dphox).
End of explanation
"""
racetrack_segments = racetrack_curve.segments
xmin, ymin, xmax, ymax = racetrack_curve.bounds
hv.Overlay([segment.hvplot() for segment in racetrack_segments]).opts(xlim=(xmin - 2, xmax + 2), ylim=(ymin - 2, ymax + 2))
"""
Explanation: segments
We can also visualize the individual elements of link by plotting all of the geometries in the racetrack curve, which we refer to here as segments.
End of explanation
"""
taper = dp.taper(5).path(dp.cubic_taper_fn(1, 0.5))
reverse_taper = dp.taper(5).reverse().path(dp.cubic_taper_fn(1, 0.5))
(taper.hvplot().opts(title='forward') + reverse_taper.hvplot().opts(title='backward')).opts(shared_axes=False).cols(1)
"""
Explanation: reverse
The reverse() operation simply reverses the curve to move in the opposite direction and flips the ports.
End of explanation
"""
path_taper = dp.turn_sbend(20, 5).path(dp.cubic_taper_fn(0.5, 2))
path_taper_interp = dp.turn_sbend(20, 5).interpolated.path(dp.cubic_taper_fn(0.5, 2))
path_taper.hvplot().opts(title='noninterpolated', fontsize=10) + path_taper_interp.hvplot().opts(title='interpolated', fontsize=10)
"""
Explanation: interpolated
Interpolation of a curve is important in cases where there are multiple segments of a curve with varying resolution. Interpolation allows for tapering of geometries with equal segment lengths along the curve, and can be invoked using .interpolated. Below is an example for when twice the radius of a turn_sbend is smaller than the bend height; as you can see, the taper is more evenly distributed in the interpolated case.
End of explanation
"""
trombone_taper = path_taper_interp.symmetrized()
trombone_taper.hvplot(alpha=0.5) * trombone_taper.curve.hvplot(alternate_color='red', line_width=6)
"""
Explanation: symmetrized
The symmetrization of a curve or path mirrors any curve or path at its endpoint.
End of explanation
"""
path1 = dp.link(dp.turn(5, -45).path(0.5), trombone_taper, dp.turn(5, -45).path(0.5)).symmetrized().symmetrized()
path2 = dp.link(dp.turn(5, -45).path(0.5), trombone_taper.symmetrized(), dp.turn(5, -45).path(0.5)).symmetrized().symmetrized()
(path1.hvplot() * path1.curve.hvplot(alternate_color='red') + path2.hvplot() * path2.curve.hvplot(alternate_color='red')).opts(shared_axes=False)
"""
Explanation: We can apply the symmetrization many times to build funky ring structures.
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp
|
skew_detection/03_covertype_drift_detection_tfdv.ipynb
|
apache-2.0
|
!pip install -U -q tensorflow
!pip install -U -q tensorflow_data_validation
!pip install -U -q pandas
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Drift detection with TensorFlow Data Validation
This tutorial shows how to use TensorFlow Data Validation (TFDV) to identify and analyze different data skews in request-response serving data logged by AI Platform Prediction in BigQuery.
The tutorial has three parts:
Part 1: Produce baseline statistics and a reference schema
Download training data.
Compute baseline statistics from the training data.
Generate a reference schema using the baseline statistics.
Part 2: Detect data skews
Generate baseline statistics and a reference schema from training data using TFDV.
Read request-response serving data from BigQuery and save it to CSV files.
Compute statistics from the serving data.
Validate serving statistics against the reference schema and baseline statistics to detect anomalies (if any).
Part 3: Analyze statistics and anomalies
Use TFDV to visualize and display the statistics and anomalies.
Analyze how statistics change over time.
We use the covertype dataset from the UCI Machine Learning Repository.
The dataset has been preprocessed, split, and uploaded to a public Cloud Storage location:
gs://workshop-datasets/covertype
The notebook code uses this version of the preprocessed dataset. For more information, see Cover Type Dataset on GitHub.
In this notebook, you use the training data split to generate a reference schema and to gather statistics for validating the serving data.
Setup
Install packages and dependencies
End of explanation
"""
PROJECT_ID = "sa-data-validation"
BUCKET = "sa-data-validation"
BQ_DATASET_NAME = 'prediction_logs'
BQ_VIEW_NAME = 'vw_covertype_classifier_logs_v1'
MODEL_NAME = 'covertype_classifier'
MODEL_VERSION = 'v1'
!gcloud config set project $PROJECT_ID
"""
Explanation: Configure Google Cloud environment settings
End of explanation
"""
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
"""
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab
End of explanation
"""
import os
import tensorflow as tf
import tensorflow_data_validation as tfdv
from tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2, anomalies_pb2
import apache_beam as beam
import pandas as pd
from datetime import datetime
import json
import numpy as np
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
print("TF version: {}".format(tf.__version__))
print("TFDV version: {}".format(tfdv.__version__))
print("Beam version: {}".format(beam.__version__))
"""
Explanation: Import libraries
End of explanation
"""
WORKSPACE = './workspace'
DATA_DIR = os.path.join(WORKSPACE, 'data')
TRAIN_DATA = os.path.join(DATA_DIR, 'train.csv')
if tf.io.gfile.exists(WORKSPACE):
print("Removing previous workspace artifacts...")
tf.io.gfile.rmtree(WORKSPACE)
print("Creating a new workspace...")
tf.io.gfile.makedirs(WORKSPACE)
tf.io.gfile.makedirs(DATA_DIR)
"""
Explanation: Create a local workspace
End of explanation
"""
!gsutil cp gs://workshop-datasets/covertype/data_validation/training/dataset.csv {TRAIN_DATA}
!wc -l {TRAIN_DATA}
sample = pd.read_csv(TRAIN_DATA).head()
sample.T
"""
Explanation: Part 1: Generate Baseline Statistics and Reference Schema
We use TFDV to generate baseline statistics based on the training data, as well as a reference schema to validate the serving data against.
1. Download data
End of explanation
"""
baseline_stats = tfdv.generate_statistics_from_csv(
data_location=TRAIN_DATA,
stats_options = tfdv.StatsOptions(
sample_count=10000
)
)
"""
Explanation: 2. Compute baseline statistics
End of explanation
"""
reference_schema = tfdv.infer_schema(baseline_stats)
# Set Soil_Type to be categorical
tfdv.set_domain(reference_schema, 'Soil_Type', schema_pb2.IntDomain(
name='Soil_Type', is_categorical=True))
# Set Cover_Type to be categorical
tfdv.set_domain(reference_schema, 'Cover_Type', schema_pb2.IntDomain(
name='Cover_Type', is_categorical=True))
baseline_stats = tfdv.generate_statistics_from_csv(
data_location=TRAIN_DATA,
stats_options=tfdv.StatsOptions(
schema=reference_schema,
sample_count=10000
)
)
reference_schema = tfdv.infer_schema(baseline_stats)
# Set Soil_Type to be categorical
tfdv.set_domain(reference_schema, 'Soil_Type', schema_pb2.IntDomain(
name='Soil_Type', is_categorical=True))
# Set Cover_Type to be categorical
tfdv.set_domain(reference_schema, 'Cover_Type', schema_pb2.IntDomain(
name='Cover_Type', is_categorical=True))
# Set max and min values for Elevation
tfdv.set_domain(reference_schema,
'Elevation',
tfdv.utils.schema_util.schema_pb2.IntDomain(
min=1000,
max=5000))
# Allow no missing values
tfdv.get_feature(reference_schema,
'Slope').presence.min_fraction = 1.0
# Set distribution skew detector for Wilderness_Area
tfdv.get_feature(reference_schema,
'Wilderness_Area').skew_comparator.infinity_norm.threshold = 0.05
"""
Explanation: 3. Generate reference schema
End of explanation
"""
tfdv.display_schema(
schema=reference_schema)
"""
Explanation: Display the reference schema
End of explanation
"""
tfdv.visualize_statistics(baseline_stats)
"""
Explanation: Visualize baseline statistics
End of explanation
"""
TARGET_FEATURE_NAME = 'Cover_Type'
FEATURE_NAMES = [feature.name for feature in reference_schema.feature
if feature.name != TARGET_FEATURE_NAME]
"""
Explanation: Part 2: Detecting Serving Data Skews
2. Export Serving Data from BigQuery
Although TFDV provides a utility function to calculate statistics on a Pandas dataframe - tfdv.generate_statistics_from_dataframe - that would simplify interactive analysis, the function does not support slicing. Since we need slicing for calculating statistics over different time windows, we will use tfdv.generate_statistics_from_csv instead.
Thus, we read the request-response serving logs from BigQuery and save the results to CSV files, in order to use tfdv.generate_statistics_from_csv.
End of explanation
"""
def generate_query(source, features, target, start_time, end_time):
query = """
SELECT
FORMAT_TIMESTAMP('%Y-%m-%d', time) AS time,
{},
predicted_class AS {}
FROM `{}`
WHERE time BETWEEN '{}' AND '{}'
;
""".format(features, target, source, start_time, end_time)
return query
start_time = '2020-05-01 00:00:00 UTC'
end_time = '2020-07-01 00:50:00 UTC'
source = "{}.{}".format(BQ_DATASET_NAME, BQ_VIEW_NAME)
features = ', '.join(FEATURE_NAMES)
query = generate_query(source, features, TARGET_FEATURE_NAME, start_time, end_time)
serving_data = pd.io.gbq.read_gbq(
query, project_id=PROJECT_ID)
print(len(serving_data.index))
serving_data.head(5).T
"""
Explanation: 2.1. Read serving data from BigQuery
End of explanation
"""
serving_data_file = os.path.join(DATA_DIR, 'serving.csv')
serving_data.to_csv(serving_data_file, index=False)
"""
Explanation: 2.2. Save serving data to CSV
End of explanation
"""
slice_fn = tfdv.get_feature_value_slicer(features={'time': None})
serving_stats_list = tfdv.generate_statistics_from_csv(
data_location=serving_data_file,
stats_options=tfdv.StatsOptions(
slice_functions=[slice_fn],
schema=reference_schema
)
)
slice_keys = sorted([dataset.name for dataset in serving_stats_list.datasets])
slice_keys
"""
Explanation: 3. Compute Statistics from Serving Data
In addition to calculating statistics for the full dataset, we also configure TFDV to calculate statistics for each time window.
End of explanation
"""
anomalies_list = []
for slice_key in slice_keys[1:]:
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
anomalies = tfdv.validate_statistics(
serving_stats,
schema=reference_schema,
previous_statistics=baseline_stats
)
anomalies_list.append(anomalies)
"""
Explanation: 4. Validate Serving Statistics
End of explanation
"""
slice_key = slice_keys[1]
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
tfdv.visualize_statistics(
baseline_stats, serving_stats, 'baseline', 'current')
"""
Explanation: Part 3: Analyzing Serving Data Statistics and Anomalies
1. Visualize Statistics
Visualize statistics for a time window with normal data points
End of explanation
"""
slice_key = slice_keys[-1]
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
tfdv.visualize_statistics(
baseline_stats, serving_stats, 'baseline', 'current')
"""
Explanation: Visualize statistics for a time window with skewed data points
End of explanation
"""
for i, anomalies in enumerate(anomalies_list):
tfdv.utils.anomalies_util.remove_anomaly_types(
anomalies, [anomalies_pb2.AnomalyInfo.SCHEMA_NEW_COLUMN])
print("Anomalies for {}".format(slice_keys[i+1]), )
tfdv.display_anomalies(anomalies)
"""
Explanation: 2. Display Anomalies
End of explanation
"""
categorical_features = [
feature.steps()[0]
for feature in tfdv.utils.schema_util.get_categorical_features(
reference_schema)
]
"""
Explanation: 3. Analyze Statistics Change Over time
3.1. Numerical feature means over time
End of explanation
"""
baseline_means = dict()
for feature in baseline_stats.datasets[0].features:
if feature.path.step[0] == 'time': continue
if feature.path.step[0] not in categorical_features:
mean = feature.num_stats.mean
baseline_means[feature.path.step[0]] = mean
from collections import defaultdict
feature_means = defaultdict(list)
for slice_key in slice_keys[1:]:
ds = tfdv.get_slice_stats(serving_stats_list, slice_key).datasets[0]
for feature in ds.features:
if feature.path.step[0] == 'time': continue
if feature.path.step[0] not in categorical_features:
mean = feature.num_stats.mean
feature_means[feature.path.step[0]].append(mean)
import matplotlib.pyplot as plt
dataframe = pd.DataFrame(feature_means, index=slice_keys[1:])
num_features = len(feature_means)
ncolumns = 3
nrows = int(num_features // ncolumns) + 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(25, 25))
for i, col in enumerate(dataframe.columns[:num_features]):
r = i // ncolumns
c = i % ncolumns
p = dataframe[col].plot.line(ax=axes[r][c], title=col, rot=10)
p.hlines(baseline_means[col], xmin=0, xmax=len(dataframe.index), color='red')
p.text(0, baseline_means[col], 'baseline mean', fontsize=15)
"""
Explanation: Get mean values from baseline statistics
End of explanation
"""
categorical_feature_stats = dict()
for feature_name in categorical_features:
categorical_feature_stats[feature_name] = dict()
for slice_key in slice_keys[1:]:
categorical_feature_stats[feature_name][slice_key] = dict()
ds = tfdv.get_slice_stats(serving_stats_list, slice_key).datasets[0]
for feature in ds.features:
if feature.path.step[0] == feature_name:
val_freq = list(feature.string_stats.top_values)
for item in val_freq:
categorical_feature_stats[feature_name][slice_key][item.value] = item.frequency
break
num_features = len(categorical_features)
ncolumns = 2
nrows = int(num_features // ncolumns) + 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(25, 15))
for i, feature_name in enumerate(categorical_features):
dataframe = pd.DataFrame(
categorical_feature_stats[feature_name]).T
r = i // ncolumns
c = i % ncolumns
dataframe.plot.bar(ax=axes[r][c], stacked=True, rot=10)
"""
Explanation: 3.3. Categorical feature distribution over time
End of explanation
"""
|
clarka34/exploring-ship-logbooks
|
scripts/second_dataset.ipynb
|
mit
|
import exploringShipLogbooks
import zipfile
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
import os.path as op
import pandas as pd
import exploringShipLogbooks.wordcount as wc
from exploringShipLogbooks.basic_utils import clean_data
from exploringShipLogbooks.basic_utils import remove_undesired_columns
"""
Explanation: Second dataset - http://www.slavevoyages.org
http://www.slavevoyages.org/documents/download/Codebook2013_5-3_final.pdf
Import necessary packages
End of explanation
"""
import rpy2
from rpy2.robjects import pandas2ri
pandas2ri.activate()
"""
Explanation: Import packages related to R
End of explanation
"""
data_path = op.join(exploringShipLogbooks.__path__[0], 'data')
filename = data_path + '/tastdb-exp-2010.sav'
data = rpy2.robjects.r('foreign::read.spss("%s", to.data.frame=TRUE)' % filename)
"""
Explanation: Import data using R
End of explanation
"""
data.to_pickle(data_path + '/tastdb-exp-2010')
store = pd.HDFStore(data_path + '/tastdb-exp-2010.h5')
store['df'] = data
"""
Explanation: Save the data loaded from R for use on a Windows computer
End of explanation
"""
store = pd.HDFStore(data_path + '/tastdb-exp-2010.h5')
data = store['df']
data.head()
data.columns.values
desired_columns=['portdep', 'portret', 'shipname', 'rig', 'national', 'yeardep']
undesired_columns = remove_undesired_columns(data, desired_columns)
data = data.drop(undesired_columns, axis=1)
data.columns = ['ShipName', 'Nationality', 'ShipType', 'VoyageFrom', 'VoyageTo', 'Year']
logbook_data = clean_data(data)
logbook_data.head()
"""
Explanation: Import data (h5)
http://stackoverflow.com/questions/17098654/how-to-store-data-frame-using-pandas-python
End of explanation
"""
from exploringShipLogbooks.basic_utils import encode_data_df
encoded_data_df = encode_data_df(logbook_data, 'Naive Bayes')
classification_array = np.array(encoded_data_df)
encoded_data_df.head()
"""
Explanation: One hot encoding
End of explanation
"""
|
tschijnmo/drudge
|
docs/examples/ccsd.ipynb
|
mit
|
from pyspark import SparkContext
ctx = SparkContext('local[*]', 'ccsd')
"""
Explanation: Automatic derivation of CCSD theory
This notebook serves as an example of interactive usage of drudge for complex symbolic manipulations in Jupyter notebooks. Here we can see how the classical CCSD theory can be derived automatically.
Preparatory work
First, we need to set up the Spark environment. Here we just use parallelization on the local machine.
End of explanation
"""
from dummy_spark import SparkContext
ctx = SparkContext()
"""
Explanation: Or we can also use the dummy spark to emulate the Spark environment in a purely serial way. Note that we need just one Spark context, so these two cells should not both be evaluated.
End of explanation
"""
from sympy import *
from drudge import *
dr = PartHoleDrudge(ctx)
dr.full_simplify = False
p = dr.names
c_ = p.c_
c_dag = p.c_dag
a, b = p.V_dumms[:2]
i, j = p.O_dumms[:2]
"""
Explanation: With the Spark context, we can construct the drudge specific for this problem. Then we can define some names that are going to be used frequently.
End of explanation
"""
t = IndexedBase('t')
clusters = dr.einst(
t[a, i] * c_dag[a] * c_[i] +
t[a, b, i, j] * c_dag[a] * c_dag[b] * c_[j] * c_[i] / 4
)
"""
Explanation: Cluster excitation operator
Here, by using the Einstein summation convention tensor creator, we can define the cluster operator in a way very similar to how we would write it down on paper.
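In explicit notation, the expression in the code above is just the singles and doubles excitation operator (with implicit summation over the virtual indices $a, b$ and the occupied indices $i, j$),
$$\hat{T} = t^{a}_{i}\, \hat{c}^\dagger_a \hat{c}_i + \frac{1}{4} t^{ab}_{ij}\, \hat{c}^\dagger_a \hat{c}^\dagger_b \hat{c}_j \hat{c}_i$$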
End of explanation
"""
clusters.display()
"""
Explanation: We can have a peek at the cluster operator.
End of explanation
"""
dr.set_dbbar_base(t, 2)
"""
Explanation: Now we need to tell the system about the symmetry on $t^2$, so that it can be used in simplification.
End of explanation
"""
%%time
curr = dr.ham
h_bar = dr.ham
for order in range(0, 4):
curr = (curr | clusters).simplify() / (order + 1)
curr.cache()
h_bar += curr
h_bar.repartition(cache=True)
"""
Explanation: Similarity transform of the Hamiltonian
Here we can use a loop to nest the commutation conveniently, and IPython magic can be used to time the operation. Note that after the simplification, we explicitly redistribute the terms in the transformed Hamiltonian for better parallel performance in later operations. Also note that drudge does not automatically cache the result of tensor computations; the cache method should be called explicitly when a tensor is going to be used multiple times.
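For reference, the loop accumulates the Baker-Campbell-Hausdorff expansion of the similarity-transformed Hamiltonian, which terminates at fourth order for a two-body Hamiltonian,
$$\bar{H} = e^{-\hat{T}} \hat{H} e^{\hat{T}} = \hat{H} + [\hat{H}, \hat{T}] + \frac{1}{2!}[[\hat{H}, \hat{T}], \hat{T}] + \frac{1}{3!}[[[\hat{H}, \hat{T}], \hat{T}], \hat{T}] + \frac{1}{4!}[[[[\hat{H}, \hat{T}], \hat{T}], \hat{T}], \hat{T}],$$
with the division by order + 1 in each iteration supplying the $1/n!$ factors.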
End of explanation
"""
h_bar.n_terms
"""
Explanation: The transformed Hamiltonian can be very complex. Instead of reading its terms, we can just have a peek by getting a count of the number of terms it contains.
End of explanation
"""
en_eqn = h_bar.eval_fermi_vev().simplify()
"""
Explanation: Working equation derivation
With the similarity transformed Hamiltonian, we are now ready to derive the actual working equations. First, the energy equation can be derived by taking the vacuum expectation value of the transformed Hamiltonian.
End of explanation
"""
en_eqn.display()
"""
Explanation: We can have a look at its contents to see if it is what we would expect.
End of explanation
"""
proj = c_dag[i] * c_[a]
t1_eqn = (proj * h_bar).eval_fermi_vev().simplify()
"""
Explanation: Next, we can create a projector to derive the working equation for the singles amplitude.
End of explanation
"""
t1_eqn.display()
"""
Explanation: In the same way, we can display its content.
End of explanation
"""
%%time
proj = c_dag[i] * c_dag[j] * c_[b] * c_[a]
t2_eqn = (proj * h_bar).eval_fermi_vev().simplify()
"""
Explanation: The working equation for the doubles amplitude can be derived in the same way; it is just slower.
End of explanation
"""
t2_eqn = t2_eqn.sort()
t2_eqn.display()
"""
Explanation: Since the equation can be slightly complex, we can roughly sort the terms in increasing complexity before displaying them.
End of explanation
"""
from gristmill import *
working_eqn = [
dr.define(Symbol('e'), en_eqn),
dr.define(t[a, i], t1_eqn),
dr.define(t[a, b, i, j], t2_eqn)
]
"""
Explanation: Working equation optimization
Evaluating the working equations takes a lot of effort. Outside drudge, a sister package named gristmill is available for the optimization and automatic code generation of tensor contractions. To start with, we need to put the working equations into tensor definitions with external indices and import the gristmill package.
End of explanation
"""
orig_cost = get_flop_cost(working_eqn, leading=True)
init_printing()
orig_cost
"""
Explanation: We can have an estimation of the FLOP cost without any optimization.
End of explanation
"""
%%time
eval_seq = optimize(
working_eqn, substs={p.nv: 5000, p.no: 1000},
contr_strat=ContrStrat.EXHAUST
)
"""
Explanation: Since we normally have far more virtual orbitals than occupied orbitals, we base the optimization on this assumption.
End of explanation
"""
len(eval_seq)
opt_cost = get_flop_cost(eval_seq, leading=True)
opt_cost
"""
Explanation: Now we can inspect the evaluation sequence.
End of explanation
"""
verify_eval_seq(eval_seq, working_eqn, simplify=True)
"""
Explanation: Significant optimization can be seen. Finally, we can verify the correctness of the evaluation sequence. This step can be very slow, but it is advised for mission-critical tasks.
End of explanation
"""
for eqn in eval_seq:
eqn.display(False)
"""
Explanation: Finally, we can have a peek at the details of the intermediates.
End of explanation
"""
|
mdpiper/topoflow-notebooks
|
Meteorology-P-TimeSeries.ipynb
|
mit
|
mps_to_mmph = 1000 * 3600
"""
Explanation: Precipitation in the Meteorology component
Goal: In this example, I give the Meteorology component a time series of precipitation values and check whether it produces output when the model state is updated.
Define a helpful constant:
End of explanation
"""
import numpy as np
n_steps = 10 # can get from cfg file
precip_rates = np.linspace(5, 20, num=n_steps, endpoint=False)
precip_rates
"""
Explanation: Programmatically create a file holding the precipitation rate time series. This will mimic what I'll need to do in WMT, where I'll have access to the model time step and run duration. Start by defining the precipitation rate values:
End of explanation
"""
np.savetxt('./input/precip_rates.txt', precip_rates, fmt='%6.2f')
"""
Explanation: Next, write the values to a file to the input directory, where it's expected by the cfg file:
End of explanation
"""
cat input/precip_rates.txt
"""
Explanation: Check the file:
End of explanation
"""
from topoflow.components.met_base import met_component
m = met_component()
"""
Explanation: BMI component
Import the BMI Meteorology component and create an instance:
End of explanation
"""
m.initialize('./input/meteorology-1.cfg')
m.h_snow = 0.0 # Needed for update
"""
Explanation: Initialize the model. A value of snow depth h_snow is needed for the model to update.
End of explanation
"""
precip = m.get_value('atmosphere_water__precipitation_leq-volume_flux') # `P` internally
print type(precip)
print precip.size
precip * mps_to_mmph
"""
Explanation: Unlike when P is a scalar, the initial model precipitation volume flux is the first value from precip_rates.txt:
End of explanation
"""
m.update()
print '\nCurrent time: {} s'.format(m.get_current_time())
"""
Explanation: Advance the model by one time step:
End of explanation
"""
print precip * mps_to_mmph # note that this is a reference, so it'll take the current value of `P`
"""
Explanation: Unlike the scalar case, there's an output volume flux of precipitation:
End of explanation
"""
time = [m.get_current_time().copy()]
flux = [precip.copy() * mps_to_mmph]
while m.get_current_time() < m.get_end_time():
m.update()
time.append(m.get_current_time().copy())
flux.append(m.get_value('atmosphere_water__precipitation_leq-volume_flux').copy() * mps_to_mmph)
"""
Explanation: Advance the model to the end, saving the model time and output P values (converted back to mm/hr for convenience) at each step:
End of explanation
"""
time
flux
"""
Explanation: Check the time and flux values:
End of explanation
"""
from cmt.components import Meteorology
met = Meteorology()
"""
Explanation: Result: Fails. Input precipitation rates do not match the output precipitation volume flux because of changes we made to the TopoFlow source.
Babel-wrapped component
Import the Babel-wrapped Meteorology component and create an instance:
End of explanation
"""
%cd input
met.initialize('meteorology-1.cfg')
"""
Explanation: Initialize the model.
End of explanation
"""
bprecip = met.get_value('atmosphere_water__precipitation_leq-volume_flux')
print type(bprecip)
print bprecip.size
print bprecip.shape
bprecip * mps_to_mmph
"""
Explanation: The initial model precipitation volume flux is the first value from precip_rates.txt:
End of explanation
"""
time = [met.get_current_time()]
flux = [bprecip.max() * mps_to_mmph]
count = 1
while met.get_current_time() < met.get_end_time():
met.update(met.get_time_step()*count)
time.append(met.get_current_time())
flux.append(met.get_value('atmosphere_water__precipitation_leq-volume_flux').max() * mps_to_mmph)
count += 1
"""
Explanation: Advance the model to the end, saving the model time and output P values (converted back to mm/hr for convenience) at each step:
End of explanation
"""
time
flux
"""
Explanation: Check the time and flux values (noting that I've included the time = 0.0 value here):
End of explanation
"""
|
NervanaSystems/coach
|
tutorials/1. Implementing an Algorithm.ipynb
|
apache-2.0
|
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import tensorflow as tf
from rl_coach.architectures.tensorflow_components.heads.head import Head
from rl_coach.architectures.head_parameters import HeadParameters
from rl_coach.base_parameters import AgentParameters
from rl_coach.core_types import QActionStateValue
from rl_coach.spaces import SpacesDefinition
"""
Explanation: Implementing an Algorithm
In this tutorial we'll build a new agent that implements the Categorical Deep Q Network (C51) algorithm (https://arxiv.org/pdf/1707.06887.pdf), and a preset that runs the agent on the 'Breakout' game of the Atari environment.
Implementing an algorithm typically consists of 3 main parts:
Implementing the agent object
Implementing the network head (optional)
Implementing a preset to run the agent on some environment
The entire agent can be defined outside of the Coach framework, but in Coach you can find multiple predefined agents under the agents directory, network heads under the architectures/tensorflow_components/heads directory, and presets under the presets directory, for you to reuse.
For more information, we recommend going over the following page in the documentation: https://nervanasystems.github.io/coach/contributing/add_agent/
The Network Head
We'll start by defining a new head for the neural network used by this algorithm - CategoricalQHead.
A head is the final part of the network. It takes the embedding from the middleware embedder and passes it through a neural network to produce the output of the network. There can be multiple heads in a network, and each one has an assigned loss function. The heads are algorithm dependent.
The rest of the network can be reused from the predefined parts, and the input embedder and middleware structure can also be modified, but we won't go into that in this tutorial.
The head will typically be defined in a new file - architectures/tensorflow_components/heads/categorical_dqn_head.py.
First - some imports.
End of explanation
"""
class CategoricalQHeadParameters(HeadParameters):
def __init__(self, activation_function: str ='relu', name: str='categorical_q_head_params'):
super().__init__(parameterized_class=CategoricalQHead, activation_function=activation_function, name=name)
class CategoricalQHead(Head):
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str ='relu'):
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function)
self.name = 'categorical_dqn_head'
self.num_actions = len(self.spaces.action.actions)
self.num_atoms = agent_parameters.algorithm.atoms
self.return_type = QActionStateValue
def _build_module(self, input_layer):
self.actions = tf.placeholder(tf.int32, [None], name="actions")
self.input = [self.actions]
values_distribution = tf.layers.dense(input_layer, self.num_actions * self.num_atoms, name='output')
values_distribution = tf.reshape(values_distribution, (tf.shape(values_distribution)[0], self.num_actions,
self.num_atoms))
# softmax on atoms dimension
self.output = tf.nn.softmax(values_distribution)
# calculate cross entropy loss
self.distributions = tf.placeholder(tf.float32, shape=(None, self.num_actions, self.num_atoms),
name="distributions")
self.target = self.distributions
self.loss = tf.nn.softmax_cross_entropy_with_logits(labels=self.target, logits=values_distribution)
tf.losses.add_loss(self.loss)
"""
Explanation: Now let's define the CategoricalQHead class. Each class in Coach has a complementary parameters class which defines its constructor parameters, so we will additionally define the CategoricalQHeadParameters class. The network structure should be defined in the _build_module function, which gets the previous layer's output as an argument. In this function there are several variables that should be defined:
* self.input - (optional) a list of any additional input to the head
* self.output - the output of the head, which is also one of the outputs of the network
* self.target - a placeholder for the targets that will be used to train the network
* self.regularizations - (optional) any additional regularization losses that will be applied to the network
* self.loss - the loss that will be used to train the network
Categorical DQN uses the same network as DQN, and only changes the last layer to output #actions x #atoms elements with a softmax function. Additionally, we update the loss function to cross entropy.
End of explanation
"""
from rl_coach.agents.dqn_agent import DQNNetworkParameters
class CategoricalDQNNetworkParameters(DQNNetworkParameters):
def __init__(self):
super().__init__()
self.heads_parameters = [CategoricalQHeadParameters()]
"""
Explanation: The Agent
The agent will implement the Categorical DQN algorithm. Each agent has a complementary AgentParameters class, which allows selecting the parameters of the agent sub modules:
* the algorithm
* the exploration policy
* the memory
* the networks
Now let's go ahead and define the network parameters - they will reuse the DQN network parameters, but the head parameters will be our CategoricalQHeadParameters. The network parameters allow selecting any number of heads for the network by defining them in a list, but in this case we only have a single head, so we will point to its parameters class.
End of explanation
"""
from rl_coach.agents.dqn_agent import DQNAlgorithmParameters
from rl_coach.exploration_policies.e_greedy import EGreedyParameters
from rl_coach.schedules import LinearSchedule
class CategoricalDQNAlgorithmParameters(DQNAlgorithmParameters):
def __init__(self):
super().__init__()
self.v_min = -10.0
self.v_max = 10.0
self.atoms = 51
class CategoricalDQNExplorationParameters(EGreedyParameters):
def __init__(self):
super().__init__()
self.epsilon_schedule = LinearSchedule(1, 0.01, 1000000)
self.evaluation_epsilon = 0.001
"""
Explanation: Next we'll define the algorithm parameters, which are the same as the DQN algorithm parameters, with the addition of the Categorical DQN specific v_min, v_max and number of atoms.
We'll also define the parameters of the exploration policy, which is epsilon greedy with epsilon starting at a value of 1.0 and decaying to 0.01 throughout 1,000,000 steps.
End of explanation
"""
from rl_coach.agents.value_optimization_agent import ValueOptimizationAgent
from rl_coach.base_parameters import AgentParameters
from rl_coach.core_types import StateType
from rl_coach.memories.non_episodic.experience_replay import ExperienceReplayParameters
class CategoricalDQNAgentParameters(AgentParameters):
def __init__(self):
super().__init__(algorithm=CategoricalDQNAlgorithmParameters(),
exploration=CategoricalDQNExplorationParameters(),
memory=ExperienceReplayParameters(),
networks={"main": CategoricalDQNNetworkParameters()})
@property
def path(self):
return 'agents.categorical_dqn_agent:CategoricalDQNAgent'
"""
Explanation: Now let's define the agent parameters class which contains all the parameters to be used by the agent - the network, algorithm and exploration parameters that we defined above, and also the parameters of the memory module to be used, which is the default experience replay buffer in this case.
Notice that the networks are defined as a dictionary, where the key is the name of the network and the value is the network parameters. This will allow us to later access each of the networks through self.networks[network_name].
The path property connects the parameters class to its corresponding class that is parameterized. In this case, it is the CategoricalDQNAgent class that we'll define in a moment.
End of explanation
"""
from typing import Union
# Categorical Deep Q Network - https://arxiv.org/pdf/1707.06887.pdf
class CategoricalDQNAgent(ValueOptimizationAgent):
def __init__(self, agent_parameters, parent: Union['LevelManager', 'CompositeAgent']=None):
super().__init__(agent_parameters, parent)
self.z_values = np.linspace(self.ap.algorithm.v_min, self.ap.algorithm.v_max, self.ap.algorithm.atoms)
def distribution_prediction_to_q_values(self, prediction):
return np.dot(prediction, self.z_values)
# prediction's format is (batch,actions,atoms)
def get_all_q_values_for_states(self, states: StateType):
prediction = self.get_prediction(states)
return self.distribution_prediction_to_q_values(prediction)
def learn_from_batch(self, batch):
network_keys = self.ap.network_wrappers['main'].input_embedders_parameters.keys()
# for the action we actually took, the error is calculated by the atoms distribution
# for all other actions, the error is 0
distributed_q_st_plus_1, TD_targets = self.networks['main'].parallel_prediction([
(self.networks['main'].target_network, batch.next_states(network_keys)),
(self.networks['main'].online_network, batch.states(network_keys))
])
# only update the action that we have actually done in this transition
target_actions = np.argmax(self.distribution_prediction_to_q_values(distributed_q_st_plus_1), axis=1)
m = np.zeros((self.ap.network_wrappers['main'].batch_size, self.z_values.size))
batches = np.arange(self.ap.network_wrappers['main'].batch_size)
for j in range(self.z_values.size):
tzj = np.fmax(np.fmin(batch.rewards() +
(1.0 - batch.game_overs()) * self.ap.algorithm.discount * self.z_values[j],
self.z_values[self.z_values.size - 1]),
self.z_values[0])
bj = (tzj - self.z_values[0])/(self.z_values[1] - self.z_values[0])
u = (np.ceil(bj)).astype(int)
l = (np.floor(bj)).astype(int)
m[batches, l] = m[batches, l] + (distributed_q_st_plus_1[batches, target_actions, j] * (u - bj))
m[batches, u] = m[batches, u] + (distributed_q_st_plus_1[batches, target_actions, j] * (bj - l))
# total_loss = cross entropy between actual result above and predicted result for the given action
TD_targets[batches, batch.actions()] = m
result = self.networks['main'].train_and_sync_networks(batch.states(network_keys), TD_targets)
total_loss, losses, unclipped_grads = result[:3]
return total_loss, losses, unclipped_grads
"""
Explanation: The last step is to define the agent itself - CategoricalDQNAgent - which is a type of value optimization agent, so it will inherit from the ValueOptimizationAgent class. It could also have inherited from DQNAgent, which would result in the same functionality. Our agent will implement the learn_from_batch function, which updates the agent's networks according to an input batch of transitions.
Agents typically need to implement the training function - learn_from_batch, and a function that defines which actions to select given a state - choose_action. In our case, we will reuse the choose_action function implemented by the generic ValueOptimizationAgent, and just update the internal function for fetching q values for each of the actions - get_all_q_values_for_states.
This code may look intimidating at first glance, but basically it is just following the algorithm description in the Distributional DQN paper:
<img src="files/categorical_dqn.png" width=400>
End of explanation
"""
from rl_coach.agents.categorical_dqn_agent import CategoricalDQNAgentParameters
agent_params = CategoricalDQNAgentParameters()
agent_params.network_wrappers['main'].learning_rate = 0.00025
"""
Explanation: Some important things to notice here:
* self.networks['main'] is a NetworkWrapper object. It holds all the copies of the 'main' network:
- a global network which is shared between all the workers in distributed training
- an online network which is a local copy of the network intended to keep the weights static between training steps
- a target network which is a local slow updating copy of the network, and is intended to keep the targets of the training process more stable
In this case, we have the online network and the target network. The global network will only be created if we run the algorithm with multiple workers. The A3C agent would be one kind of example.
* There are two network prediction functions available - predict and parallel_prediction. predict is quite straightforward - get some inputs, forward them through the network and return the output. parallel_prediction is an optimized variant of predict, which allows running a prediction on the online and target network in parallel, instead of running them sequentially.
* The network train_and_sync_networks function makes a single training step - running a forward pass of the online network, calculating the losses, running a backward pass to calculate the gradients and applying the gradients to the network weights. If multiple workers are used, instead of applying the gradients to the online network weights, they are applied to the global (shared) network weights, and then the weights are copied back to the online network.
The Preset
The final part is the preset, which will run our agent on some existing environment with any custom parameters.
The new preset will typically be defined in a new file - presets/atari_categorical_dqn.py.
First - let's select the agent parameters we defined above.
It is possible to modify internal parameters such as the learning rate.
End of explanation
"""
from rl_coach.environments.gym_environment import Atari, atari_deterministic_v4
env_params = Atari(level='BreakoutDeterministic-v4')
"""
Explanation: Now, let's define the environment parameters. We will use the default Atari parameters (frame skip of 4, taking the max over subsequent frames, etc.), and we will select the 'Breakout' game level.
End of explanation
"""
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.base_parameters import VisualizationParameters
from rl_coach.environments.gym_environment import atari_schedule
graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params,
schedule_params=atari_schedule, vis_params=VisualizationParameters())
graph_manager.visualization_parameters.render = True
"""
Explanation: Connecting all the dots together - we'll define a graph manager with the Categorical DQN agent parameters, the Atari environment parameters, and the scheduling and visualization parameters
End of explanation
"""
# let the adventure begin
graph_manager.improve()
"""
Explanation: Running the Preset
(this is normally done from command line by running coach -p Atari_C51 -lvl breakout)
End of explanation
"""
|
nicoguaro/FEM_resources
|
elements/Lumped mass FEM.ipynb
|
mit
|
from sympy import *
init_session()
"""
Explanation: Mass matrix diagonalization (lumping)
End of explanation
"""
def mass_tet4():
"""Mass matrix for a 4 node tetrahedron"""
r, s, t = symbols("r s t")
N = Matrix([1 - r - s - t, r, s, t])
return (N * N.T).integrate((t, 0, 1 - r - s), (s, 0, 1 - r), (r, 0, 1))
def mass_quad8():
"""Mass matrix for a 8 node quadrilateral"""
r, s = symbols("r s")
Haux = Matrix([
(1 - r**2)*(1 + s),
(1 - s**2)*(1 - r),
(1 - r**2)*(1 - s),
(1 - s**2)*(1 + r)])
N = S(1)/4*Matrix([
(1 + r)*(1 + s) - Haux[0] - Haux[3],
(1 - r)*(1 + s) - Haux[0] - Haux[1],
(1 - r)*(1 - s) - Haux[1] - Haux[2],
(1 + r)*(1 - s) - Haux[2] - Haux[3],
2*Haux[0], 2*Haux[1], 2*Haux[2], 2*Haux[3]])
return (N * N.T).integrate((s, -1, 1), (r, -1, 1))
"""
Explanation: Elemental mass matrices
End of explanation
"""
mass_tet4()
mass_quad8()
"""
Explanation: The elemental mass matrices look like
End of explanation
"""
def row_lump(mass_mat):
"""Matrix lumping by row summing"""
return diag(*[sum(mass_mat[i, :]) for i in range(mass_mat.shape[0])])
"""
Explanation: Lumping
One method for lumping is to sum each row of the matrix, i.e.
$$M^\text{(lumped)}_{ii} = \sum_{j} M_{ij}$$
End of explanation
"""
def diag_scaling_lump(mass_mat):
"""Matrix lumping by diagonal scaling method"""
mass = sum(mass_mat)
trace = mass_mat.trace()
c = mass/trace
return diag(*[c*mass_mat[i, i] for i in range(mass_mat.shape[0])])
def min_dist_lump(mass_mat):
"""
Matrix lumping by minimizing the Frobenius norm subject
to a constraint of conservation of mass.
"""
num = mass_mat.shape[0]
mass = sum(mass_mat)
lamda = symbols("lambda")
Ms = symbols('M0:%d'%num)
var = list(Ms)
mass_diag = diag(*var)
C = mass_mat - mass_diag
fun = (C.T*C).trace() + lamda*(mass - sum(mass_diag))
var.append(lamda)
grad = [diff(fun, x) for x in var]
sol = solve(grad, var)
return diag(*list(sol.values())[:-1])
"""
Explanation: Another method for lumping is diagonal scaling, i.e.
$$M^\text{(lumped)}_{ii} = c\, M_{ii}$$
with $c$ adjusted to satisfy $\sum_j M^\text{(lumped)}_{jj} = \int_\Omega \rho \,\mathrm{d}\Omega$. In particular, we can choose $c = \mathrm{Tr}(M)/M_\text{total}$. A third option, implemented in min_dist_lump above, is to minimize the Frobenius norm of the difference between the consistent and lumped matrices, subject to conservation of the total mass.
End of explanation
"""
row_lump(mass_tet4())
diag_scaling_lump(mass_tet4())
min_dist_lump(mass_tet4())
"""
Explanation: We can compare the methods for the tetrahedron
End of explanation
"""
row_lump(mass_quad8())
diag_scaling_lump(mass_quad8())
min_dist_lump(mass_quad8())
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
"""
Explanation: We can compare the methods for the serendipity quadrilaterals. For this type
of element we can't use the row lumping method since it leads to negative
masses.
End of explanation
"""
|
pastas/pastas
|
examples/notebooks/03_diagnostic_checking.ipynb
|
mit
|
import numpy as np
import pandas as pd
import pastas as ps
from scipy import stats
import matplotlib.pyplot as plt
ps.set_log_level("ERROR")
ps.show_versions(numba=True)
"""
Explanation: Model Diagnostic Checking
R.A. Collenteur, University of Graz, July 2020.
This notebook provides an overview of the different methods that are available for diagnostic checking of the models residuals in Pastas. Readers who want to get a quick overview of how to perform diagnostic checks on Pastas models are referred to section 2, while sections 3 to 6 are recommended for readers looking for in-depth discussions of the individual methods.
Introduction
Diagnostic Checking in Pastas in Practice
Checking for autocorrelation
Checking for Homoscedasticity
Checking for Normality
References
Introduction
Diagnostic checking is a common step in the time series modeling process, subjecting a calibrated model to various statistical tests to ensure that the model adequately describes the observed time series (Hipel & McLeod, 2005). Diagnostic checks are performed on the residual or noise series of a model, depending on whether or not a noise model was applied in the modeling process. We will refer to the series that was minimized during parameter estimation as the "residuals". In practice, for Pastas models these can come from ml.noise() or ml.residuals(). Regardless of this, the diagnostic tests that may be performed remain the same.
Why to check: reasons to diagnose
Before we start the discussion of what to check, let's briefly discuss why we would want to perform diagnostic checks at all. In general, diagnostic checks should be performed when you want to make inferences with a model, in particular when the estimated standard errors of the parameters are used to make such inferences. For example, if you want to draw the confidence interval of the estimated step response for a variable, you will use the standard errors of the parameters to do so. This assumes that the standard errors are estimated accurately, which may assumed if the minimized residual series agree with a number of assumptions on the characteristics of the model residuals.
<div class="alert alert-info">
<b>Rule-of-thumb:</b>
when the standard errors of the parameters are used, the model residuals should be diagnostically checked.
</div>
What to check: assumptions of white noise
The methods used for the estimation of these standard errors assume that the model residuals behave as white noise with a mean of zero and noise values that are independent from each other (i.e., no significant autocorrelation). Additionally, it is often assumed that the residuals are homoscedastic (i.e., have a constant variance) and follow a normal distribution. The first two assumptions are the most important, having the largest impact on the estimated standard errors of the parameters (Hipel & McLeod, 2005). Additionally to these four assumptions, the model residuals should be uncorrelated with any of the input time series. If the residuals are found to behave as white noise, we may assume that the standard errors of the parameters have been accurately estimated and we may use them for inferential analyses.
How to check: visualization & hypothesis testing
The assumptions outlined above may be checked through different types of visualization and hypothesis testing of the model residuals. For the latter, statistical tests are used to test the hypothesis that the residuals are e.g., independent, homoscedastic, or normally distributed. These tests typically test a hypothesis with some version of the following Null hypothesis ($H_0$) and the Alternative hypothesis ($H_A$):
$H_0$: The residuals are independent, homoscedastic, or normally distributed
$H_A$: The residuals are not independent, homoscedastic, or normally distributed
All hypothesis tests compute a certain test statistic (e.g., $Q_{test}$), which is compared to a theoretical value according to a certain distribution (e.g., $\chi^2_{\alpha, h}$) that depends on the level of significance (e.g., $\alpha=0.05$) and sometimes the degrees of freedom $h$. The result of a hypothesis test either rejects the Null hypothesis or fails to reject the Null hypothesis, but can never be used to accept the Alternative hypothesis. For example, if an hypothesis test for autocorrelation fails to reject the hypothesis we may conclude that there is no significant autocorrelation in the residuals, but cannot conclude that there is no autocorrelation.
End of explanation
"""
# Import groundwater, rainfall and evaporation time series
head = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
rain = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
evap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
ml = ps.Model(head)
sm = ps.RechargeModel(rain, evap, rfunc=ps.Exponential, name="rch")
ml.add_stressmodel(sm)
ml.solve(report=False)
axes = ml.plots.results(figsize=(10,5))
"""
Explanation: Create and calibrate a pastas Model
To illustrate how to perform diagnostic checking of a Pastas model, a simple model using precipitation and evaporation to simulate the groundwater levels is created. The model is calibrated using a noise model with one parameter. Finally, a plot is created using ml.plots.results() that shows the simulated groundwater levels, the model residuals and noise, and the calibrated parameter values with their estimated standard errors.
End of explanation
"""
alpha = 0.05
ml.plots.diagnostics();
"""
Explanation: Diagnostics checking of Pastas models
Let's say we want to plot the 95% confidence intervals of the simulated groundwater levels that results from uncertainties in the calibrated parameters. Such an analysis would clearly use the standard errors of the parameters, and before we proceed to compute any confidence intervals we should check if the modeled noise agrees with the assumptions of white noise. A noise model was used during calibrations and therefore the noise returned by the ml.noise() method should be tested on these assumptions.
ml.plots.diagnostics
To quickly diagnose the noise series on the different assumptions of white noise, the noise series may be visualized using the ml.plots.diagnostics() method. This method visualizes the noise series in different ways to test the different assumptions. The method will internally check if a noise model was used during parameter calibration, and select the appropriate residual series from ml.residuals() or ml.noise().
End of explanation
"""
ml.stats.diagnostics(alpha=0.05)
"""
Explanation: The top-left plot shows the noise time series, which should look more or less random without a clear (seasonal) trend. The title of this plot also includes the number of observations $n$ and the mean value of the modeled noise $\mu$, which should be around zero. The bottom-left plot shows the autocorrelations for lags up to one year and the 95% confidence interval. Approximately 95% of the autocorrelation values should fall between these boundaries. The upper-right plot shows a histogram of the noise series along with a normal distribution fitted to the data. This plot may be used to assess how well the noise series resembles a normal distribution. The bottom-right plot may also be used to assess the normality of the noise series, using a probability plot of the data.
ml.stats.diagnostics
The visual interpretation of the noise series is (clearly) subjective, but still provides a powerful tool to test the noise series and quickly identify any violations of the assumption of white noise. For a more objective evaluation of the model assumptions, hypothesis tests may be used. To perform multiple hypothesis tests on the noise series at once, Pastas provides the ml.stats.diagnostics() method as follows.
End of explanation
"""
random_seed = np.random.RandomState(12345)
index = pd.to_datetime( np.arange(3650), unit="D")
noise = pd.Series(random_seed.normal(0, 1, len(index)), index=index)
noise.plot(figsize=(12,2))
"""
Explanation: The ml.stats.diagnostics method returns a Pandas DataFrame with an overview of the results of the different hypothesis tests. The first column ("Checks") reports what assumption is tested by a certain test and the second column ("Statistic") reports the test statistic that is computed for that test. The probability of each test statistic is reported in the third column ("P-value") and the fourth column ("Reject H0") reports the result of the hypothesis test. Recall that the Null-hypotheses assume that the data resembles white noise. This means that if $H_0$ is rejected (or Reject H0 = True), that test concludes that the data does not agree with one of the assumptions of white noise. The following table provides an overview of the different hypothesis tests that are reported by ml.stats.diagnostics().
| Name | Checks | Pastas/Scipy method | Description | Non-equidistant |
|:-----|:-----|:--------------|:-----------------------------------------|----------------:|
| Shapiro-Wilk | Normality | scipy.stats.shapiro| The Shapiro-Wilk test tests the null hypothesis that the data was drawn from a normal distribution. | Unknown |
| D'Agostino | Normality | scipy.stats.normaltest| This test checks if the noise series comes from a normal distribution (H0 hypothesis). | Unknown |
| Ljung-Box test| Autocorrelation | ps.stats.ljung_box| This test checks whether the autocorrelations of a time series are significantly different from zero.| No |
| Durbin-Watson test | Autocorrelation | ps.stats.durbin_watson | This tests diagnoses for autocorrelation at a lag of one time step. | No |
| Stoffer-Toloi test | Autocorrelation | ps.stats.stoffer_toloi| This test is similar to the Ljung-Box test, but is adapted for missing values | Yes |
| Runs test | Autocorrelation | ps.stats.runs_test | This test checks whether the values of a time series are random without assuming any probability distribution. | Yes |
Some of the tests are known to be appropriate for time series with non-equidistant time steps while others are not. The method ml.stats.diagnostics() will internally select different tests depending on whether or not the time series has non-equidistant time steps. All tests are also available as separate methods and may be used to test time series that are not obtained from a Pastas model.
A closer look at the hypothesis tests
While the results of ml.stats.diagnostics may look straightforward, their interpretation is unfortunately not, because the results are highly dependent on the input data. To correctly interpret the hypothesis tests it is particularly important to know whether or not the noise time series has equidistant time steps and how many observations the time series contains. For example, some of the tests are only valid when used on equidistant time series. Other tests are sensitive to too few observations (e.g., Ljung-Box) or too many observations (e.g., Shapiro-Wilk). In the following sections each of these hypothesis tests is discussed in more detail. To show the functioning of the different hypothesis tests, a synthetic time series is created by randomly drawing values from a normal distribution.
End of explanation
"""
ax = ps.plots.acf(noise, acf_options=dict(bin_width=0.5), figsize=(10,3), alpha=0.01)
"""
Explanation: Checking for autocorrelation
The first thing we check is whether the values of the residual series are independent from each other, or in other words, are not correlated. The correlation of a time series with a lagged version of itself is also referred to as autocorrelation, and we often say that we want to check that there is no significant autocorrelation in the residual time series. The following methods to test for autocorrelation are available in Pastas:
| Name | Pastas method | Description | Non-equidistant |
|:-----|:--------------|:-----------------------------------------|----------------:|
| Visualization | ps.stats.plot_acf | Visualization of the autocorrelation and its confidence intervals. | Yes |
| Ljung-Box test| ps.stats.ljung_box| This test checks whether the autocorrelations of a time series are significantly different from zero.| No |
| Durbin-Watson test | ps.stats.durbin_watson | This tests diagnoses for autocorrelation at a lag of one time step. | No |
| Stoffer-Toloi test |ps.stats.stoffer_toloi| This test is similar to the Ljung-Box test, but is adapted for missing values | Yes |
| Runs test |ps.stats.runs_test | This test checks whether the values of a time series are random without assuming any probability distribution. | Yes |
Whereas many time series models have equidistant time steps, the residuals of Pastas models may have non-equidistant time steps. To deal with this property, functions have been implemented in Pastas that can deal with non-equidistant time steps (Rehfeld et al., 2011). We therefore recommend to use the statistical methods supplied in Pastas, unless the modeler is sure he/she is dealing with equidistant time steps. See the additional Notebook on the autocorrelation function for more details and a proof of concept.
Visual interpretation of the autocorrelation
To diagnose the model residuals for autocorrelation we first plot the autocorrelation function (ACF) using the ps.stats.plot_acf method and perform a visual interpretation of the model residuals. The created plot shows the autocorrelation function up to a time lag of 250 days. The blue-shaded area denotes the 95\% confidence interval (1-$\alpha$). If 95\% of the autocorrelations fall within this confidence interval (that is, roughly 0.95 $\cdot$ 250 $\approx$ 238 of them), we may conclude that there is no significant autocorrelation in the residuals.
End of explanation
"""
stat, p = ps.stats.ljung_box(noise, lags=15)
if p > alpha:
print("Failed to reject the Null-hypothesis, no significant autocorrelation. p =", p.round(2))
else:
print("Reject the Null-hypothesis. p =", p.round(2))
dw_stat = ps.stats.durbin_watson(noise)
print(dw_stat)
stat, p = ps.stats.stoffer_toloi(noise, lags=15, freq="D")
if p > alpha:
print("Failed to reject the Null-hypothesis, no significant autocorrelation. p =", p.round(2))
else:
print("Reject the Null-hypothesis")
stat, p = ps.stats.runs_test(noise)
if p > alpha:
print("Failed to reject the Null-hypothesis, no significant autocorrelation. p =", p.round(2))
else:
print("Reject the Null-hypothesis")
"""
Explanation: The number of time lags to check for autocorrelation has to be chosen by the modeler, for example based on knowledge of the hydrological processes. For example, evaporation shows a clear yearly cycle and we may expect autocorrelation at lags up to one year as a result of this. We therefore recommend testing for autocorrelation for all lags up to $k_{max}=365$ days here. It is noted that the number of lags $k$ [-] to calculate the autocorrelation for may depend on the time step of the residuals ($\Delta t$). For example, if daily residuals are available ($\Delta t = 1$ day), the autocorrelation has to be computed for $k=365$ [-] lags.
Tests for autocorrelation
End of explanation
"""
stat, p = stats.shapiro(noise)
if p > alpha:
print("Failed to reject the Null-hypothesis, residuals may come from Normal distribution. p =", np.round(p, 2))
else:
print("Reject the Null-hypothesisp =", np.round(p, 2))
"""
Explanation: Checking for Normality
A common assumption is that the residuals follow a normal distribution, although in principle it is also possible that the residuals come from another distribution. Testing whether or not a time series may come from a normal distribution is notoriously difficult, especially for larger sample sizes (e.g., more groundwater level observations). It may therefore not always be easy to objectively determine whether or not the residuals follow a normal distribution. A good initial method to assess the normality of the residuals is to plot a histogram of the residuals and compare that to the theoretical normal distribution, along with a probability plot. The following methods may be used to check the normality of the residual series:
| Name | Scipy method | Description | Non-equidistant Time series |
|:-----|:--------------|:------------|----------------:|
| Histogram plot | numpy.histogram | Plot a histogram of the residuals time series and compare to a normal distribution. | Unknown |
| Probability plot | scipy.stats.probplot| Plot the ordered values of the residuals against the theoretical quantiles of a normal distribution. | Unknown |
| Shapiro-Wilk |scipy.stats.shapiro| The Shapiro-Wilk test tests the null hypothesis that the data was drawn from a normal distribution. | Unknown |
| D'Agostino |scipy.stats.normaltest| This test checks if the noise series comes from a normal distribution (H0 hypothesis). | Unknown |
Shapiro and Wilk (1965) developed a test to test if a time series may come from a normal distribution. Implemented in Scipy as scipy.stats.shapiro.
End of explanation
"""
stat, p = stats.normaltest(noise)
if p > alpha:
print("Failed to reject the Null-hypothesis, residuals may come from Normal distribution. p =", p.round(2))
else:
print("Reject the Null-hypothesis. p =", p.round(2))
"""
Explanation: D'Agostino and Pearson (1973) developed a test to detect non-normality of a time series. This test is implemented in Scipy as scipy.stats.normaltest.
End of explanation
"""
plt.plot(ml.observations(), ml.noise(), marker="o", linestyle=" ")
plt.xlabel("Simulated Groundwater level [m]")
plt.ylabel("Model residual [m]");
"""
Explanation: As the p-value is larger than $\alpha=0.05$ it is possible that the noise series comes from a normal distribution, so the Null hypothesis (series comes from a normal distribution) is not rejected.
Checking for homoscedasticity
The second assumption we check is whether the residuals are so-called homoscedastic, which means that the variance of the residuals is constant and does not depend on the observed groundwater levels.
The following tests for homoscedasticity are available:
| Name | Pastas method | Description | Non-equidistant |
|:-----|:--------------|:----------------------------------|----------------:|
|Visualization | | Visualization of residuals| Unknown|
|Engle test| Unavailable | |Unknown|
|Breusch-Pagan test| Unavailable | |Unknown|
End of explanation
"""
random_seed = np.random.RandomState(12345)
index = pd.to_datetime( np.arange(4.5 * 3650), unit="D")
noise_long = pd.Series(random_seed.normal(0, 1, len(index)), index=index).loc["1990":]
index = pd.read_csv("../data/test_index.csv", parse_dates=True, index_col=0).index.round("D").drop_duplicates()
noise_irregular = noise_long.reindex(index).dropna()
noise_long.plot(figsize=(12,2), label="equidistant time steps")
noise_irregular.plot(label="non-equidistant time steps")
plt.legend(ncol=2);
"""
Explanation: Testing on non-equidistant residuals time series
A time series with non-equidistant time steps is created from the synthetic time series. The original time series is resampled using the indices from an observed groundwater level time series with different observation frequencies.
End of explanation
"""
ps.stats.diagnostics(noise_long, nparam=0)
ps.stats.diagnostics(noise_irregular, nparam=0)
"""
Explanation: Let's run ps.stats.diagnostics on both of these time series and look at the differences in the outcomes:
End of explanation
"""
# import statsmodels.api as sm
# print("Pastas:", ps.stats.ljung_box(noise_long, lags=15))
# print("Pastas StofferToloi:", ps.stats.stoffer_toloi(noise_long, lags=15))
# print("Statsmodels:", sm.stats.acorr_ljungbox(noise_long, lags=[15], return_df=False))
# acf = sm.tsa.acf(noise_long, unbiased=True, fft=True, nlags=15)[1:]
# q, p = sm.tsa.q_stat(acf, noise.size)
# print("Statsmodels:", q[-1], p[-1])
# print("Pastas:", ps.stats.durbin_watson(noise)[0].round(2))
# print("Statsmodels:", sm.stats.durbin_watson(noise).round(2))
# print("Pastas:", ps.stats.runs_test(noise))
# print("Statsmodels:", sm.stats.runstest_1samp(noise))
"""
Explanation: Diagnostic vs. hydrological checking
The diagnostic checks presented in this Notebook are only part of the checks that could be performed before using a model for a particular purpose; they belong to a larger range of checks that may be performed on a Pastas model. We also highly recommend checking the model results using hydrological insight and expert judgment. An additional notebook showing these kinds of checks will be added in the future.
Open Questions
How well do the tests for normality and homoscedasticity work for time series with non-equidistant time steps?
Could we use the ACF for irregular time steps in combination with Ljung-Box?
References
Hipel, K. W., & McLeod, A. I. (1994). Time series modelling of water resources and environmental systems, Chapter 7: Diagnostic Checking. Elsevier.
Ljung, G. and Box, G. (1978). On a Measure of Lack of Fit in Time Series Models, Biometrika, 65, 297-303.
Stoffer, D. S., & Toloi, C. M. (1992). A note on the Ljung—Box—Pierce portmanteau statistic with missing data. Statistics & probability letters, 13(5), 391-396.
Durbin, J., & Watson, G. S. (1951). Testing for serial correlation in least squares regression. II. Biometrika, 38(1/2), 159-177.
Wald, A., & Wolfowitz, J. (1943). An exact test for randomness in the non-parametric case based on serial correlation. The Annals of Mathematical Statistics, 14(4), 378-388.
D'Agostino, R. and Pearson, E. S. (1973). Tests for departure from normality, Biometrika, 60, 613-622.
Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3/4), 591-611.
Rehfeld, K., Marwan, N., Heitzig, J., & Kurths, J. (2011). Comparison of correlation analysis techniques for irregularly sampled time series. Nonlinear Processes in Geophysics, 18(3), 389-404.
Benchmarking built-in Pastas Methods to Statsmodels methods
The following code blocks may be used to verify the output from Pastas methods to Statsmodels methods.
End of explanation
"""
|
unnati-xyz/intro-python-data-science
|
kaggle/santander/notebook/kaggle-santander.ipynb
|
mit
|
import numpy as np
import pandas as pd
#Read train, test and sample submission datasets
train = pd.read_csv("../data/train.csv")
test = pd.read_csv("../data/test.csv")
samplesub = pd.read_csv("../data/sample_submission.csv")
"""
Explanation: Santander Customer Satisfaction
Step 1: Frame
From frontline support teams to C-suites, customer satisfaction is a key measure of success. Unhappy customers don't stick around. What's more, unhappy customers rarely voice their dissatisfaction before leaving.
Santander Bank is asking Kagglers to help them identify dissatisfied customers early in their relationship. Doing so would allow Santander to take proactive steps to improve a customer's happiness before it's too late.
In this competition, you'll work with hundreds of anonymized features to predict if a customer is satisfied or dissatisfied with their banking experience.
Predict the probability of each customer to be unsatisfied
<img style="float:center" src="img/unhappy_customer.jpg" width=300/>
Step 2: Acquire
The competition is hosted on Kaggle
<img style="float:center" src="img/kaggle.jpg" width=800/>
<br>
<br>
The data section has three files:
1. train.csv Training dataset to create the model. It has the target column - indicating whether the customer was happy or not
2. test.csv Test dataset for which the predictions are to be made
3. sample_submission.csv Format for submitting the predictions on Kaggle's website
The datasets are downloaded and are available at the data folder
Step 3: Explore
Read the datasets
End of explanation
"""
#Create the labels
labels=train.iloc[:,-1]
#Find number of unsatisfied customers using `labels`
"""
Explanation: Exercise 1 Find column types for train and test.
Exercise 2 Find unique column types for train and test
Exercise 3 Find number of rows and columns in train and test
Exercise 4 Find the columns that have missing values
Hint: look up at the pandas function isnull
Exercise 5 Find number of unsatisfied customers in the train dataset
End of explanation
"""
# Step 1: Find the standard deviation of each column
train_std = train.std()
# Step 2: Find columns that have a standard deviation of 0 (one possible completion of this exercise)
columns_with_0_variance = train_std[train_std == 0]
#train.columns.values in columns_with_0_variance.index
train_columns = train.columns.values
columns_with_0_variance_columns = columns_with_0_variance.index.values
#Need to subset columns that are present in train but not in the dataset with 0 variance
selected_columns = np.in1d(train_columns, columns_with_0_variance_columns)
len(selected_columns)
#Create train and test
train_updated = train.iloc[:,~selected_columns[1:len(selected_columns)-1]]
test_updated = test.iloc[:,~selected_columns[1:len(selected_columns)-1]]
#Check if the number of columns in both the datasets are the same
print train_updated.shape, test_updated.shape
#Check if column names in train and test are the same
train_updated.columns.values in test_updated.columns.values
"""
Explanation: Step 4: Refine
Exercise 5 Find features that show no variance
Question: Why is this important?
End of explanation
"""
from sklearn import preprocessing
from sklearn import linear_model
from sklearn import cross_validation
y = np.array(labels)
#Why do we need scaling?
scaler = preprocessing.StandardScaler()
scaler = scaler.fit(train_updated)
train_scaled = scaler.transform(train_updated)
#Remember - need to use the same scaler function on test
test_scaled = scaler.transform(test_updated)
#lr = linear_model.LogisticRegression()
logReg = linear_model.LogisticRegression(tol=0.1, n_jobs=6)
%timeit -n 1 -r 1 logReg.fit(train_scaled, y)
logRegPrediction = logReg.predict(test_scaled)
"""
Explanation: Step 5: Model
We will cover the following
Model 1: Logistic Regression (L1/L2)
Model 4: Decision Tree
Visualizing decision tree
Cross-validation
Error Metrics
Regularization
Regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this your models may be too complex and overfit or too simple and underfit, either way giving poor predictions.
Logistic Regression(L1/L2)
End of explanation
"""
from sklearn import tree
decisionTreeModel = tree.DecisionTreeClassifier()
decisionTreeModel.fit(train_updated, y)
"""
Explanation: Exercise 6 Predict probability of each customer to be unsatisfied in the test dataset
Exercise 7 Fit L1 Regularization model. Evaluate the results
Exercise 8 Add the prediction to sample sub. Save it as csv. Submit solution to kaggle
Decision Trees
End of explanation
"""
|
dvkonst/ml_mipt
|
task_2/Decision_tree.ipynb
|
gpl-3.0
|
X, y = boston_data.iloc[:, :-1], boston_data.iloc[:, -1]
train_len = int(0.75 * len(X))
X_train, X_test, y_train, y_test = X.iloc[:train_len], X.iloc[train_len:], y.iloc[:train_len], y.iloc[train_len:]
# print(list(map(lambda x: x.shape, (X_train, X_test, y_train, y_test))))
"""
Explanation: Split the dataset into train and test sets
End of explanation
"""
model = DecisionTree(maxdepth=3)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('maxdepth = 3, Error =', metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: With a limit on the tree depth
End of explanation
"""
model = DecisionTree()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Error =', metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: Without a depth limit. Overfitting occurs.
End of explanation
"""
model = tree.DecisionTreeRegressor(max_depth=3)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('max_depth = 3, Error =', metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: Comparison with the decision tree from the sklearn library
End of explanation
"""
args = (-20, 20)
x_tr = np.arange(*args, 2)
y_tr = 1/(1 + np.exp(-10*x_tr)) + 0.1*np.random.rand(x_tr.size)
plt.plot(x_tr, y_tr, 'ro')
DT = DecisionTree(maxdepth=5, maxelemsleaf=3)
DT.fit(x_tr.reshape((len(x_tr), 1)), y_tr)
x_test = np.arange(*args)
xt = x_test.reshape((len(x_test), 1))
y_pr = DT.predict(xt)
plt.step(x_test, y_pr, 'b')
plt.show()
"""
Explanation: Sanity check of the algorithm on the regression of a function of one variable
End of explanation
"""
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25)
"""
Explanation: Let's take a randomized split into training and test sets.
End of explanation
"""
model = DecisionTree(maxdepth=3)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('maxdepth = 3, Error =', metrics.mean_squared_error(y_test, y_pred))
model = DecisionTree()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Error =', metrics.mean_squared_error(y_test, y_pred))
model = tree.DecisionTreeRegressor(max_depth=3)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('max_depth = 3, Error =', metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: Let's again check the results of the algorithms with and without the depth limit
End of explanation
"""
|
goodwordalchemy/thinkstats_notes_and_exercises
|
code/chap03_Pmfs_notes.ipynb
|
gpl-3.0
|
import thinkstats2
pmf = thinkstats2.Pmf([1,2,2,3,5])
#getting pmf values
print pmf.Items()
print pmf.Values()
print pmf.Prob(2)
print pmf[2]
#modifying pmf values
pmf.Incr(2, 0.2)
print pmf.Prob(2)
pmf.Mult(2, 0.5)
print pmf.Prob(2)
#if you modify, probabilities may no longer add up to 1
#to check:
print pmf.Total()
print pmf.Normalize()
print pmf.Total()
#Copy method is also available
"""
Explanation: probability mass function - maps each value to its probability. Allows you to compare two distributions independently of sample size.
probability - frequency expressed as a fraction of the sample size, n.
normalization - dividing frequencies by n.
given a Hist, we can make a dictionary that maps each value to its probability:
n = hist.Total()
d = {}
for x, freq in hist.Items():
d[x] = freq/n
End of explanation
"""
from probability import *
live, firsts, others = first.MakeFrames()
first_pmf = thinkstats2.Pmf(firsts.prglngth, label="firsts")
other_pmf = thinkstats2.Pmf(others.prglngth, label="others")
width = 0.45
#cols option makes grid of figures.
thinkplot.PrePlot(2, cols=2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='weeks',
ylabel='probability',
axis=[27,46,0,0.6])
#second call to preplot resets the color generator
thinkplot.PrePlot(2)
thinkplot.SubPlot(2)
thinkplot.Pmfs([first_pmf, other_pmf])
thinkplot.Config(xlabel='weeks',
ylabel='probability',
axis=[27,46,0,0.6])
thinkplot.Show()
"""
Explanation: To plot a PMF:
* bargraph using thinkplot.Hist
* as step function: thinkplot.Pmf--for use when large number of smooth values.
End of explanation
"""
weeks = range(35, 46)
diffs = []
for week in weeks:
p1 = first_pmf.Prob(week)
p2 = other_pmf.Prob(week)
#diff between two points in percentage points
diff = 100 * (p1 - p2)
diffs.append(diff)
thinkplot.Bar(weeks, diffs)
thinkplot.Config(title="Difference in PMFs",
xlabel="weeks",
ylabel="percentage points")
thinkplot.Show()
"""
Explanation: Good idea to zoom in on the mode, where the biggest differences occur:
End of explanation
"""
d = {7:8, 12:8, 17:14, 22:4, 27:6,
32:12, 37:8, 42:3, 47:2}
pmf = thinkstats2.Pmf(d, label='actual')
print ('mean', pmf.Mean())
"""
Explanation: Class Size Paradox
End of explanation
"""
def BiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
thinkplot.PrePlot(2)
biased_pmf = BiasPmf(pmf, label="observed")
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(root='class_size1',
xlabel='class size',
ylabel='PMF',
axis=[0, 52, 0, 0.27])
# thinkplot.Show()
print "actual mean", pmf.Mean()
print "biased mean", biased_pmf.Mean()
"""
Explanation: For each class size, x, in the following function, we multiply the probability by x, the number of students who observe that class size. This gives a biased distribution.
End of explanation
"""
def UnbiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, 1.0 / x)
new_pmf.Normalize()
return new_pmf
print 'unbiased mean:', UnbiasPmf(biased_pmf, "unbiased").Mean()
"""
Explanation: Conclusion: the students' view is biased because large classes contain many students, so a randomly sampled student is more likely to be in a large class, which shifts the average class size they report above the actual average.
Think of it this way: if you had one class of each size from 1 to 10, the average class size would be 5.5, but far more students would report being in a large class than in a small one.
This can be corrected, however...
End of explanation
"""
import numpy as np
import pandas
array = np.random.randn(4,2)
df = pandas.DataFrame(array)
df
columns = ['A','B']
df = pandas.DataFrame(array, columns=columns)
df
index = ['a','b','c','d']
df = pandas.DataFrame(array, columns=columns, index=index)
df
#to select a row by label, use loc,
#which returns a series
df.loc['a']
#iloc finds a row by integer position of the row
df.iloc[0]
#loc can also take a list of labels
#in this case it returns a df
indices = ['a','c']
df.loc[indices]
#slicing
#NOTE: first slice method selects inclusively
print df['a':'c']
df[0:2]
"""
Explanation: DataFrame indexing:
End of explanation
"""
def PmfMean(pmf):
mean = 0
for key, prob in pmf.Items():
mean += key * prob
return mean
def PmfVar(pmf):
mean = PmfMean(pmf)
var = 0
for key, prob in pmf.Items():
var += prob * (key - mean) ** 2
return var
print "my Mean:", PmfMean(pmf)
print "answer mean:", pmf.Mean()
print "my Variance:", PmfVar(pmf)
print "answer variance:", pmf.Var()
"""
Explanation: Exercise 3.2
PMFs can be used to calculate probability:
$$
\bar{x} = \sum_{i}p_ix_i
$$
where $x_i$ are the unique values in the PMF and $p_i = PMF(x_i)$
Variance can also be calculated:
$$
S^2 = \sum_{i}p_i(x_i -\bar{x})^2
$$
Write functions PmfMean and PmfVar that take a Pmf object and compute the mean and variance.
End of explanation
"""
df = nsfg.ReadFemPreg()
pregMap = nsfg.MakePregMap(df[df.outcome==1])
lengthDiffs = []
for caseid, pregList in pregMap.iteritems():
first = df[df.index==pregList[0]].prglngth
first = int(first)
for idx in pregList[1:]:
other = df[df.index==idx].prglngth
other = int(other)
diff = first - other
lengthDiffs.append(diff)
diffHist = thinkstats2.Hist(lengthDiffs)
print diffHist
diffPmf = thinkstats2.Pmf(lengthDiffs)
thinkplot.PrePlot(2, cols=2)
thinkplot.SubPlot(1)
thinkplot.Hist(diffHist, label='')
thinkplot.Config(title="Differences (weeks) between first baby and other babies \n born to same mother",
xlabel = 'first_preg_lngth - other_preg_lngth (weeks)',
ylabel = 'freq')
thinkplot.SubPlot(2)
thinkplot.Hist(diffPmf, label='')
thinkplot.Config(title="Differences (weeks) between first baby and other babies \n born to same mother",
xlabel = 'first_preg_lngth - other_preg_lngth (weeks)',
ylabel = 'freq')
thinkplot.Show()
"""
Explanation: Exercise 3.3
End of explanation
"""
pwDiff = defaultdict(list)
for caseid, pregList in pregMap.iteritems():
first = df[df.index==pregList[0]].prglngth
first = int(first)
for i,idx in enumerate(pregList[1:]):
other = df[df.index==idx].prglngth
other = int(other)
diff = first - other
pwDiff[i + 1].append(diff)
pmf_s = []
for i in range(1,6):
diff_pmf = thinkstats2.Pmf(pwDiff[i + 1], label='diff to kid num %d' % i)
pmf_s.append(diff_pmf)
thinkplot.Pmfs(pmf_s)
thinkplot.Config(axis=[-10,10,0,1])
thinkplot.Show()
"""
Explanation: Exercise 3.4
End of explanation
"""
import relay
def ObservedPmf(pmf, runnerSpeed, label):
new_pmf = pmf.Copy(label=label)
for x,p in pmf.Items():
diff = abs(runnerSpeed - x)
#if runner speed is very large wrt x, likely to pass that runner
#else likely to be passed by that runnner
#not likely to see those in between.
new_pmf.Mult(x, diff)
new_pmf.Normalize()
return new_pmf
results = relay.ReadResults()
speeds = relay.GetSpeeds(results)
speeds = relay.BinData(speeds, 3, 12, 100)
pmf = thinkstats2.Pmf(speeds, 'unbiased speeds')
thinkplot.PrePlot(2)
thinkplot.Pmf(pmf)
biased_pmf = ObservedPmf(pmf, 7.5, 'biased at 7.5 mph')
thinkplot.Pmf(biased_pmf)
thinkplot.Config(title='PMF of running speed',
xlabel='speed (mph)',
ylabel='probability')
thinkplot.Show()
"""
Explanation: Exercise 3.4
Write a function called ObservedPmf that takes a Pmf representing the actual distribution of runners' speeds and the speed of the running observer and returns a new PMF representing the distribution of runner's speeds as seen by the observer.
End of explanation
"""
|
pxcandeias/py-notebooks
|
FRF_plots.ipynb
|
mit
|
from __future__ import division, print_function
import sys
import numpy as np
import scipy as sp
import matplotlib as mpl
print('System: {}'.format(sys.version))
print('numpy version: {}'.format(np.__version__))
print('scipy version: {}'.format(sp.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
"""
Explanation: <a id='top'></a>
Frequency Response Functions (FRFs) plots
This notebook is about frequency response functions (FRFs) and the various ways they can be plotted.
Table of contents
Preamble
Dynamic system setup
Frequency response function
Nyquist plot
Bode plot
Nichols plot
Odds and ends
Preamble
We will start by setting up the computational environment for this notebook. Since it was created with Python 2.7, we will import a few things from the "future". Furthermore, we will need numpy and scipy for the numerical simulations and matplotlib for the plots:
End of explanation
"""
from numpy import linalg as LA
from scipy import signal
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: We will also need some specific modules and a little "IPython magic" to show the plots:
End of explanation
"""
MM = np.asmatrix(np.diag([1., 2.]))
print(MM)
KK = np.asmatrix([[20., -10.],[-10., 10.]])
print(KK)
C1 = 0.1*MM+0.02*KK
print(C1)
"""
Explanation: Back to top
Dynamic system setup
In this example we will simulate a two degree of freedom system (2DOF) as an LTI system. For that purpose, we will define a mass matrix and a stiffness matrix and use proportional damping:
End of explanation
"""
A = np.bmat([[np.zeros_like(MM), np.identity(MM.shape[0])], [LA.solve(-MM,KK), LA.solve(-MM,C1)]])
print(A)
Bf = KK*np.asmatrix(np.ones((2, 1)))
B = np.bmat([[np.zeros_like(Bf)],[LA.solve(MM,Bf)]])
print(B)
Cd = np.matrix((1,0))
Cv = np.asmatrix(np.zeros((1,MM.shape[1])))
Ca = np.asmatrix(np.zeros((1,MM.shape[1])))
C = np.bmat([Cd-Ca*LA.solve(MM,KK),Cv-Ca*LA.solve(MM,C1)])
print(C)
D = Ca*LA.solve(MM,Bf)
print(D)
"""
Explanation: For the LTI system we will use a state space formulation. For that we will need the four matrices describing the system (A), the input (B), the output (C) and the feedthrough (D):
End of explanation
"""
system = signal.lti(A, B, C, D)
"""
Explanation: The LTI system is simply defined as:
End of explanation
"""
w1, v1 = LA.eig(A)
ix = np.argsort(np.absolute(w1)) # order of ascending eigenvalues
w1 = w1[ix] # sorted eigenvalues
v1 = v1[:,ix] # sorted eigenvectors
zw = -w1.real # damping coefficient times angular frequency
wD = w1.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
"""
Explanation: To check the results presented ahead we will need the angular frequencies and damping coefficients of this system. The eigenanalysis of the system matrix yields them after some computations:
End of explanation
"""
w, H = system.freqresp()
fig, ax = plt.subplots(2, 1)
fig.suptitle('Real and imaginary plots')
# Real part plot
ax[0].plot(w, H.real, label='FRF')
ax[0].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[0].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[0].set_ylabel('Real [-]')
ax[0].grid(True)
ax[0].legend()
# Imaginary part plot
ax[1].plot(w, H.imag, label='FRF')
ax[1].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[1].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[1].set_ylabel('Imaginary [-]')
ax[1].set_xlabel('Frequency [rad/s]')
ax[1].grid(True)
ax[1].legend()
plt.show()
"""
Explanation: Back to top
Frequency response function
A frequency response function is a complex-valued function of frequency. Let us see how it looks when we plot the real and imaginary parts separately:
End of explanation
"""
plt.figure()
plt.title('Nyquist plot')
plt.plot(H.real, H.imag, 'b')
plt.plot(H.real, -H.imag, 'r')
plt.xlabel('Real [-]')
plt.ylabel('Imaginary[-]')
plt.grid(True)
plt.axis('equal')
plt.show()
"""
Explanation: Back to top
Nyquist plot
A Nyquist plot represents the real and imaginary parts of the complex FRF in a single plot:
End of explanation
"""
w, mag, phase = system.bode()
fig, ax = plt.subplots(2, 1)
fig.suptitle('Bode plot')
# Magnitude plot
ax[0].plot(w, mag, label='FRF')
ax[0].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[0].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[0].set_ylabel('Magnitude [dB]')
ax[0].grid(True)
ax[0].legend()
# Phase plot
ax[1].plot(w, phase*np.pi/180., label='FRF')
ax[1].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[1].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[1].set_ylabel('Phase [rad]')
ax[1].set_xlabel('Frequency [rad/s]')
ax[1].grid(True)
ax[1].legend()
plt.show()
"""
Explanation: Back to top
Bode plot
A Bode plot represents the complex FRF in magnitude-phase versus frequency:
End of explanation
"""
plt.figure()
plt.title('Nichols plot')
plt.plot(phase*np.pi/180., mag)
plt.xlabel('Phase [rad/s]')
plt.ylabel('Magnitude [dB]')
plt.grid(True)
plt.show()
"""
Explanation: Back to top
Nichols plot
A Nichols plot combines the Bode plot in a single plot of magnitude versus phase:
End of explanation
"""
|
google/trax
|
trax/models/research/examples/hourglass_enwik8.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 Google LLC.
End of explanation
"""
TRAX_GITHUB_URL = 'git+https://github.com/google/trax.git'
!pip install -q --upgrade jax==0.2.21
!pip install -q --upgrade jaxlib==0.1.71+cuda111 -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install -q $TRAX_GITHUB_URL
!pip install -q pickle5
!pip install -q neptune-client
!pip install -q gin
# Execute this for a proper TPU setup!
# Make sure the Colab Runtime is set to Accelerator: TPU.
import jax
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20200416'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
jax.devices()
"""
Explanation: Hourglass: enwik8 evaluation
This notebook was designed to run on TPU.
To use TPUs in Colab, click "Runtime" on the main menu bar and select Change runtime type. Set "TPU" as the hardware accelerator.
Install dependencies
End of explanation
"""
!wget --continue http://mattmahoney.net/dc/enwik8.zip
!wget https://raw.githubusercontent.com/salesforce/awd-lstm-lm/master/data/enwik8/prep_enwik8.py
!python3 prep_enwik8.py
# The checkpoint was trained with python3.8 which uses pickle5, hence this hack.
layers_base_path = '/usr/local/lib/python3.7/dist-packages/trax/layers/base.py'
with open(layers_base_path, 'r') as f:
lines = f.readlines()
idx = lines.index('import pickle\n')
lines[idx] = 'import pickle5 as pickle\n'
with open(layers_base_path, 'w') as f:
f.writelines(lines)
import tensorflow.compat.v1 as tf
from trax.fastmath import numpy as jnp
def raw_ds_to_tensor(raw_file_path):
with tf.io.gfile.GFile(raw_file_path, mode='rb') as f:
raw_data = f.read()
print(f'Bytes in {raw_file_path}:', len(raw_data))
return jnp.array(list(raw_data))
testset_tensor, validset_tensor = map(raw_ds_to_tensor, [
'/content/test.txt.raw',
'/content/valid.txt.raw',
])
"""
Explanation: Download enwik8 dataset and load data
A standard script for enwik8 preprocessing is used.
End of explanation
"""
!gdown https://drive.google.com/uc?id=18wrzKZLBtLuFOHwzuF-7i_p-rD2miE_6
!tar -zxvf enwik8_checkpoint.tar.gz
import gin
import trax
MODEL_DIR = 'enwik8_checkpoint'
gin.parse_config_file(f'./{MODEL_DIR}/config.gin')
model = trax.models.HourglassLM(mode='eval')
model.init_from_file(
f'./{MODEL_DIR}/model.pkl.gz',
weights_only=True
)
loss_fn = trax.layers.WeightedCategoryCrossEntropy()
model_eval = trax.layers.Accelerate(trax.layers.Serial(
model,
loss_fn
))
"""
Explanation: Download and load the trained checkpoint
End of explanation
"""
from trax import fastmath
from trax.fastmath import numpy as jnp
from tqdm import tqdm
def batched_inputs(data_gen, batch_size):
inp_stack, mask_stack = [], []
for input_example, mask in data_gen:
inp_stack.append(input_example)
mask_stack.append(mask)
if len(inp_stack) % batch_size == 0:
if len(set(len(example) for example in inp_stack)) > 1:
for x, m in zip(inp_stack, mask_stack):
yield x, m
else:
input_batch = jnp.stack(inp_stack)
mask_batch = jnp.stack(mask_stack)
yield input_batch, mask_batch
inp_stack, mask_stack = [], []
if len(inp_stack) > 0:
for x, m in zip(inp_stack, mask_stack):
yield x, m
def run_full_evaluation(accelerated_model_with_loss, examples_data_gen,
batch_size, pad_to_len=None):
# Important: we assume batch size per device = 1
assert batch_size % fastmath.local_device_count() == 0
assert fastmath.local_device_count() == 1 or \
batch_size == fastmath.local_device_count()
loss_sum, n_tokens = 0.0, 0
def pad_right(inp_tensor):
if pad_to_len:
return jnp.pad(inp_tensor,
[[0, 0], [0, max(0, pad_to_len - inp_tensor.shape[1])]])
else:
return inp_tensor
batch_gen = batched_inputs(examples_data_gen, batch_size)
def batch_leftover_example(input_example, example_mask):
def extend_shape_to_batch_size(tensor):
return jnp.repeat(tensor, repeats=batch_size, axis=0)
return map(extend_shape_to_batch_size,
(input_example[None, ...], example_mask[None, ...]))
for i, (inp, mask) in tqdm(enumerate(batch_gen)):
leftover_batch = False
# For leftover examples, we yield rank 1 tensors (unbatched) instead of
# rank 2 batches from our `batched_inputs` function. This convention allows
# a special behaviour for the leftover batches that have to be processed
# one by one.
if len(inp.shape) == 1:
inp, mask = batch_leftover_example(inp, mask)
leftover_batch = True
inp, mask = map(pad_right, [inp, mask])
example_losses = accelerated_model_with_loss((inp, inp, mask))
if leftover_batch:
example_losses = example_losses[:1]
mask = mask[:1]
example_lengths = mask.sum(axis=-1)
loss_sum += (example_lengths * example_losses).sum()
n_tokens += mask.sum()
if i % 200 == 0:
print(f'Batches: {i}, current loss: {loss_sum / float(n_tokens)}')
return loss_sum / float(n_tokens)
"""
Explanation: Evaluate on the test set
End of explanation
"""
# Prepare the input generator: it should yield (input, mask) tuples
def contextful_eval_data(bytes_tensor, CHUNK_LEN, N_CHUNKS_BEFORE):
for start in range(0, len(bytes_tensor), CHUNK_LEN):
shifted_chunk = bytes_tensor[max(0, start - (N_CHUNKS_BEFORE * CHUNK_LEN)):
start+CHUNK_LEN]
mask = jnp.zeros_like(shifted_chunk)
masked_len = min(CHUNK_LEN, len(bytes_tensor) - start)
mask = fastmath.index_update(mask, np.s_[-masked_len:], 1)
shifted_chunk = trax.data.inputs._pad_to_multiple_of(shifted_chunk,
CHUNK_LEN, axis=0)
mask = trax.data.inputs._pad_to_multiple_of(mask, CHUNK_LEN, axis=0)
yield shifted_chunk, mask
# Split the input into chunks of 6912
PAD_TO_LEN = 6912 # We need to pad because shorten factor 3 is used.
CHUNK_LEN = 128 #
N_CHUNKS_BEFORE = 53
BATCH_SIZE = 8
test_data_gen = contextful_eval_data(testset_tensor, CHUNK_LEN, N_CHUNKS_BEFORE)
loss = run_full_evaluation(model_eval, test_data_gen, BATCH_SIZE, PAD_TO_LEN)
print(f'Final loss (nats/byte): {loss}, final bpd: {loss / jnp.log(2)}')
"""
Explanation: We evaluate chunks of length $128$ bytes, preceded by a context of $128 \cdot 53$ bytes (total context length is $6912$)
End of explanation
"""
import numpy as np
from tqdm import tqdm
def autoregressive_sample(model, temp=1.0, batch_size=8, l=3072, vocab_size=256):
model = trax.layers.Accelerate(model)
x = np.zeros((batch_size, l), dtype=np.int32)
logits_prev = np.zeros((batch_size, l, vocab_size), dtype=np.float32)
for i in tqdm(range(l)):
logits = model(x)
np.testing.assert_array_almost_equal(logits_prev[:, :i], logits[:, :i])
logits_prev = logits
sample = trax.layers.logsoftmax_sample(logits[:, i, :], temperature=temp)
x[:, i] = sample
return x
samples = autoregressive_sample(model, l=1026)
"""
Explanation: Generate text from the model
End of explanation
"""
bytes((samples[0]).tolist()).decode()
"""
Explanation: Text sample generated by the model (unconditional generation - without any prompts):
End of explanation
"""
|
castelao/CoTeDe
|
docs/notebooks/Configuration.ipynb
|
bsd-3-clause
|
# A different version of CoTeDe might give slightly different outputs.
# Please let me know if you see something that I should update.
import cotede
print("CoTeDe version: {}".format(cotede.__version__))
"""
Explanation: QC Configuration
Objective:
Show different ways to configure a quality control (QC) procedure - explicit inline or calling a pre-set configuration.
For CoTeDe, the most important component is the human operator, hence it should be easy to control which tests to apply and the specific parameters of each test. Since 2011, CoTeDe has been built around the principle of a single engine for multiple applications, using a dictionary to describe the QC procedure to be applied.
End of explanation
"""
from cotede.utils import load_cfg
"""
Explanation: load_cfg(), just for demonstration
Here we will import the load_cfg() function to illustrate different procedures. This is typically not necessary since ProfileQC does that for us. The cfgname argument for load_cfg is the same for ProfileQC, thus when we call
ProfileQC(dataset, cfgname='argo')
the procedure applied to dataset is the same shown by
load_cfg(cfgname='argo')
We will take advantage of that and simplify this notebook by inspecting only the configuration without actually applying it.
End of explanation
"""
cfg = load_cfg('gtspp_realtime')
print(list(cfg.keys()))
"""
Explanation: Built-in tests
The easiest way to configure a QC procedure is by using one of the built-in tests, for example the GTSPP procedure for realtime data, here named 'gtspp_realtime'.
End of explanation
"""
cfg['revision']
print(list(cfg['common'].keys()))
"""
Explanation: The output cfg is a dictionary type of object, more specifically it is an ordered dictionary. The configuration has:
A revision to help to determine how to handle this configuration.
A common item with the common tests for the whole dataset, i.e. the tests that are valid for all variables. For instance, a valid date and time is the same if we are evaluating temperature, salinity, or chlorophyll fluorescence.
A variables item, with a list of the variables to evaluate.
Let's check each item:
End of explanation
"""
print(list(cfg['variables'].keys()))
"""
Explanation: So, for the GTSPP realtime assessment, all variables must be associated with a valid time and a valid location that is at sea.
End of explanation
"""
print(list(cfg['variables']['sea_water_temperature'].keys()))
"""
Explanation: GTSPP evaluates temperature and salinity. Here we use CF standard names, so temperature is sea_water_temperature. But which tests are applied to temperature measurements?
End of explanation
"""
print(cfg['variables']['sea_water_temperature']['spike'])
"""
Explanation: Let's inspect the spike test.
End of explanation
"""
print(list(cfg['variables']['sea_water_temperature']['global_range']))
"""
Explanation: There is one single item, the threshold, here defined as 2, so that any measured temperature with a spike greater than this threshold will fail on this spike test.
Let's check the global range test.
End of explanation
"""
my_config = {"sea_water_temperature":
{"spike": {
"threshold": 1
}
}
}
cfg = load_cfg(my_config)
print(cfg)
"""
Explanation: Here there are two limit values, the minimum acceptable value and the maximum one. Anything beyond these limits will fail this test.
Check CoTeDe's manual to see what each test does and the possible parameters for each one.
Explicit inline
A QC procedure can also be explicitly defined with a dictionary. For instance, let's consider that we want to evaluate the temperature of a dataset with a single test, the spike test, using a threshold equal to one,
End of explanation
"""
my_config = {"inherit": "gtspp_realtime",
"sea_water_temperature":
{"woa_normbias": {
"threshold": 3
}
}
}
cfg = load_cfg(my_config)
print(cfg.keys())
"""
Explanation: Note that load_cfg took care of formatting it according to the 0.21 standard for us, adding the revision and variables. If a revision is not defined, the configuration is assumed to be pre-0.21.
Compound procedure
Many of the recommended QC procedures share several tests in common. One way to simplify a QC procedure definition is by using inheritance to define a QC procedure to be used as a template. For example, let's create a new QC procedure that is based on GTSPP realtime and add a new test to that, the World Ocean Atlas Climatology comparison for temperature, with a threshold of 3 standard deviations.
End of explanation
"""
print(cfg['inherit'])
"""
Explanation: There is a new item, inherit
End of explanation
"""
print(cfg['variables']['sea_water_temperature'].keys())
"""
Explanation: And now sea_water_temperature has all the GTSPP realtime tests plus the WOA comparison,
End of explanation
"""
cfg = load_cfg('gtspp')
print(cfg['inherit'])
"""
Explanation: This new definition is actually the GTSPP recommended procedure for non-realtime data, i.e. the delayed mode. The built-in GTSPP procedure is actually written by inheriting the GTSPP realtime.
End of explanation
"""
my_config = {"inherit": "gtspp_realtime",
"sea_water_temperature":
{"spike": {
"threshold": 1
}
}
}
cfg = load_cfg(my_config)
print(cfg['variables']['sea_water_temperature']['spike'])
"""
Explanation: The inheritance can also be used to modify any parameter from the parent template procedure. For example, let's inherit the GTSPP realtime procedure but use a stricter spike threshold, equal to 1,
End of explanation
"""
|
rflamary/POT
|
notebooks/plot_barycenter_fgw.ipynb
|
mit
|
# Author: Titouan Vayer <titouan.vayer@irisa.fr>
#
# License: MIT License
#%% load libraries
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import math
from scipy.sparse.csgraph import shortest_path
import matplotlib.colors as mcol
from matplotlib import cm
from ot.gromov import fgw_barycenters
#%% Graph functions
def find_thresh(C, inf=0.5, sup=3, step=10):
""" Trick to find the adequate thresholds from where value of the C matrix are considered close enough to say that nodes are connected
Tthe threshold is found by a linesearch between values "inf" and "sup" with "step" thresholds tested.
The optimal threshold is the one which minimizes the reconstruction error between the shortest_path matrix coming from the thresholded adjency matrix
and the original matrix.
Parameters
----------
C : ndarray, shape (n_nodes,n_nodes)
The structure matrix to threshold
inf : float
The beginning of the linesearch
sup : float
The end of the linesearch
step : integer
Number of thresholds tested
"""
dist = []
search = np.linspace(inf, sup, step)
for thresh in search:
Cprime = sp_to_adjency(C, 0, thresh)
SC = shortest_path(Cprime, method='D')
SC[SC == float('inf')] = 100
dist.append(np.linalg.norm(SC - C))
return search[np.argmin(dist)], dist
def sp_to_adjency(C, threshinf=0.2, threshsup=1.8):
""" Thresholds the structure matrix in order to compute an adjency matrix.
All values between threshinf and threshsup are considered representing connected nodes and set to 1. Else are set to 0
Parameters
----------
C : ndarray, shape (n_nodes,n_nodes)
The structure matrix to threshold
threshinf : float
The minimum value of distance from which the new value is set to 1
threshsup : float
The maximum value of distance from which the new value is set to 1
Returns
-------
C : ndarray, shape (n_nodes,n_nodes)
The threshold matrix. Each element is in {0,1}
"""
H = np.zeros_like(C)
np.fill_diagonal(H, np.diagonal(C))
C = C - H
C = np.minimum(np.maximum(C, threshinf), threshsup)
C[C == threshsup] = 0
C[C != 0] = 1
return C
def build_noisy_circular_graph(N=20, mu=0, sigma=0.3, with_noise=False, structure_noise=False, p=None):
""" Create a noisy circular graph
"""
g = nx.Graph()
g.add_nodes_from(list(range(N)))
for i in range(N):
noise = float(np.random.normal(mu, sigma, 1))
if with_noise:
g.add_node(i, attr_name=math.sin((2 * i * math.pi / N)) + noise)
else:
g.add_node(i, attr_name=math.sin(2 * i * math.pi / N))
g.add_edge(i, i + 1)
if structure_noise:
randomint = np.random.randint(0, p)
if randomint == 0:
if i <= N - 3:
g.add_edge(i, i + 2)
if i == N - 2:
g.add_edge(i, 0)
if i == N - 1:
g.add_edge(i, 1)
g.add_edge(N, 0)
noise = float(np.random.normal(mu, sigma, 1))
if with_noise:
g.add_node(N, attr_name=math.sin((2 * N * math.pi / N)) + noise)
else:
g.add_node(N, attr_name=math.sin(2 * N * math.pi / N))
return g
def graph_colors(nx_graph, vmin=0, vmax=7):
cnorm = mcol.Normalize(vmin=vmin, vmax=vmax)
cpick = cm.ScalarMappable(norm=cnorm, cmap='viridis')
cpick.set_array([])
val_map = {}
for k, v in nx.get_node_attributes(nx_graph, 'attr_name').items():
val_map[k] = cpick.to_rgba(v)
colors = []
for node in nx_graph.nodes():
colors.append(val_map[node])
return colors
"""
Explanation: =================================
Plot graphs' barycenter using FGW
=================================
This example illustrates the computation of the barycenter of labeled graphs using FGW
Requires networkx >=2
.. [18] Vayer Titouan, Chapel Laetitia, Flamary R{'e}mi, Tavenard Romain
and Courty Nicolas
"Optimal Transport for structured data with application on graphs"
International Conference on Machine Learning (ICML). 2019.
End of explanation
"""
#%% circular dataset
# We build a dataset of noisy circular graphs.
# Noise is added on the structures by random connections and on the features by gaussian noise.
np.random.seed(30)
X0 = []
for k in range(9):
X0.append(build_noisy_circular_graph(np.random.randint(15, 25), with_noise=True, structure_noise=True, p=3))
"""
Explanation: Generate data
End of explanation
"""
#%% Plot graphs
plt.figure(figsize=(8, 10))
for i in range(len(X0)):
plt.subplot(3, 3, i + 1)
g = X0[i]
pos = nx.kamada_kawai_layout(g)
nx.draw(g, pos=pos, node_color=graph_colors(g, vmin=-1, vmax=1), with_labels=False, node_size=100)
plt.suptitle('Dataset of noisy graphs. Color indicates the label', fontsize=20)
plt.show()
"""
Explanation: Plot data
End of explanation
"""
#%% We compute the barycenter using FGW. Structure matrices are computed using the shortest_path distance in the graph
# Features distances are the euclidean distances
Cs = [shortest_path(nx.adjacency_matrix(x)) for x in X0]
ps = [np.ones(len(x.nodes())) / len(x.nodes()) for x in X0]
Ys = [np.array([v for (k, v) in nx.get_node_attributes(x, 'attr_name').items()]).reshape(-1, 1) for x in X0]
lambdas = np.array([np.ones(len(Ys)) / len(Ys)]).ravel()
sizebary = 15 # we choose a barycenter with 15 nodes
A, C, log = fgw_barycenters(sizebary, Ys, Cs, ps, lambdas, alpha=0.95, log=True)
"""
Explanation: Barycenter computation
End of explanation
"""
#%% Create the barycenter
bary = nx.from_numpy_matrix(sp_to_adjency(C, threshinf=0, threshsup=find_thresh(C, sup=100, step=100)[0]))
for i, v in enumerate(A.ravel()):
bary.add_node(i, attr_name=v)
#%%
pos = nx.kamada_kawai_layout(bary)
nx.draw(bary, pos=pos, node_color=graph_colors(bary, vmin=-1, vmax=1), with_labels=False)
plt.suptitle('Barycenter', fontsize=20)
plt.show()
"""
Explanation: Plot Barycenter
End of explanation
"""
|
wcchin/colouringmap
|
example/drawing points (part 1).ipynb
|
mit
|
import geopandas as gpd # read and manage attribute table data
import matplotlib.pyplot as plt # prepare the figure
import colouringmap.mapping_point as mpoint # for drawing points
import colouringmap.mapping_polygon as mpoly # for mapping background polygon
import colouringmap.markerset as ms # getting more marker icons
from random import random # just for creating a random colour for demonstration
# the projection of the map, the data is in wgs84(epsg:4326), so need a proj dict for conversion
proj = {u'lon_0': 138, u'ellps': u'WGS84', u'y_0': 0, u'no_defs': True, u'proj': u'eqdc', u'x_0': 0, u'units': u'm', u'lat_2': 40, u'lat_1': 34, u'lat_0': 0}
## magic line for matplotlib
%matplotlib inline
"""
Explanation: Drawing points part 1: major and basic functions
In this tutorial, I will show you how to draw point maps from point shapefile data.
The data is a set of major stations in Tokyo.
An administrative boundary (polygon) shapefile will be used as the background.
The most basic and most frequently used functions for mapping a point shapefile are covered in this tutorial.
mpoint.prepare_map: just the same as mpoly.prepare_map
mpoint.map_scatter: drawing all points with one simple marker (default='.'); the marker can also be customized using the markerset, or any other matplotlib marker ('s' for square, etc.)
markerset.(list_icon_sets, list_icon_names, show_icon, get_marker): these functions are provided for choosing/showing/getting the markers before drawing them on the map
drawing points with the markers obtained from the markerset
mpoint.map_category: drawing points according to a column of category (and use marker_order, colour_order, size_order to assign the marker design)
mpoint.map_colour: map points according to a column of colour
mpoint.map_size: map points according to a column of size, with a size_scale to set the size scale
start mapping
preparation: import modules, reading files, projections, prepare map
End of explanation
"""
stations = gpd.read_file('data/tweets_hotspot_station.shp')
stations.head()
stations.crs
"""
Explanation: The file contains some major railway stations.
Read the shapefile and take a look.
End of explanation
"""
stations = stations.to_crs(proj)
"""
Explanation: The projection of the file is epsg:4326, which is latitude and longitude. So, let's convert it to a projected CRS.
End of explanation
"""
borders = gpd.read_file('data/tokyo_special_ward.shp')
borders.head()
borders = borders.to_crs(proj) # convert the borders projection to the same as the stations
print borders.crs==stations.crs # now check again that the two shapefiles have the same projection
"""
Explanation: And now read the borders file, and do the projection.
End of explanation
"""
fig,ax = plt.subplots(figsize=(7,7))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
"""
Explanation: Now both layers are projected and share the same CRS.
Let's prepare the map with the borders file as a background, using mpoly.map_shape.
End of explanation
"""
fig,ax = plt.subplots(figsize=(7,7))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_scatter(stations, ax, extend_context=False)
"""
Explanation: And, first try: map them using the default settings of mpoint.map_scatter, which defaults to a dot marker (marker='.').
End of explanation
"""
fig,ax = plt.subplots(figsize=(7,7))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_scatter(stations, ax, extend_context=False,
marker='o', size=36, facecolor='red', alpha=.7)
"""
Explanation: Try changing the marker to a circle (marker='o') and the colour to red.
End of explanation
"""
fig,ax = plt.subplots(figsize=(7,7))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_scatter(stations, ax, extend_context=False,
marker='s', size=36, facecolor='red', alpha=.7)
"""
Explanation: And change the marker to a square.
End of explanation
"""
print ms.list_icon_sets()
"""
Explanation: Get a special icon to use as the marker, bundled with colouringmap
The colouringmap package includes some font icons, such as maki (by Mapbox) and others.
To get a list of the icon sets:
End of explanation
"""
print ms.list_icon_names('maki')
print ms.list_icon_names('linecons')
"""
Explanation: To get a list of the icons in an icon set:
End of explanation
"""
rail_icon = ms.get_marker('maki', 'rail')
shop_icon = ms.get_marker('linecons', 'shop')
"""
Explanation: To get an icon from a set by its name:
End of explanation
"""
ms.show_icon(shop_icon, size=36, face_colour='green')
ms.show_icon(rail_icon, size=48)
"""
Explanation: The above xxx_icon will be used as the marker for mapping.
You can also use ms.show_icon() to take a look at the chosen icon.
End of explanation
"""
fig,ax = plt.subplots(figsize=(12,12))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_scatter(stations, ax, extend_context=False,
marker=shop_icon, size=12, facecolor='red', alpha=.9)
fig,ax = plt.subplots(figsize=(12,12))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_scatter(stations, ax, extend_context=False,
marker=rail_icon, size=24, facecolor='#4b0101', alpha=.9)
"""
Explanation: Map the points using the special icons
This is just like the previous map_scatter calls, but with the marker changed to shop_icon/rail_icon.
End of explanation
"""
fig,ax = plt.subplots(figsize=(12,12))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_category(stations,'Company', ax, size=48, extend_context=False)
"""
Explanation: Map the points according to a column of categories
Sometimes a shapefile contains different types of points that you want to draw with different marker shapes in order to differentiate them.
One way to do this is to create a temporary GeoDataFrame for each distinct category and map each one with a couple of lines of code (a sketch of this manual approach follows below).
colouringmap provides another function named mpoint.map_category, which does the above procedure automatically for you. What you need to do is provide a list of the categories that you want to map (cat_order) and their styles (marker_order, size_order, colour_order) in the same sequence.
End of explanation
"""
fig,ax = plt.subplots(figsize=(12,12))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_category(stations,'Company', ax, size=28, extend_context=False,
cat_order=['Tokyo Metro', 'Toei'], # category order
marker_order=[shop_icon, rail_icon],
size_order=[24,30],
colour_order=['r', 'g'])
"""
Explanation: By default, the map_category function will use some default markers for the different categories.
The following shows how to use a different set of styles for each category.
End of explanation
"""
col_list = []
for i in range(len(stations)):
r = random()
g = random()
b = random()
col_list.append((r,g,b))
stations['color'] = col_list
"""
Explanation: Colouring the points using different colours
The following demonstrates how to use map_colour to change the colour of the points according to a column that contains the colour info.
In this tutorial, I will just create a random colour for each point. A matplotlib RGB colour is a tuple of three floats between 0.0 and 1.0.
End of explanation
"""
fig,ax = plt.subplots(figsize=(12,12))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_colour(stations, 'color', ax, extend_context=False,
marker=rail_icon, size=24,alpha=.9)
"""
Explanation: Now, map the points using the 'color' column.
End of explanation
"""
stemp = stations['DistanceBe'].tolist()
stemp2 = [ float(s)*20 for s in stemp ]
stations['size2'] = stemp2
stations['DistanceBe2'] = [ float(s) for s in stemp ]
stations.head()
"""
Explanation: Varying the sizes of the points according to a column of size values
The following shows how to specify the size for each point using a column.
The numbers in the column are used directly as sizes, so they can also serve as sizes proportional to some variable (a more compact pandas-based conversion is sketched below).
To make a map with point sizes that represent break levels, see another tutorial (part 2).
Let's say we want to change the size of the points according to a column named 'DistanceBe'.
End of explanation
"""
fig,ax = plt.subplots(figsize=(12,12))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_size(stations, 'size2', ax, extend_context=False,
marker=rail_icon, facecolor='green', alpha=.9)
"""
Explanation: Now change the size according to the column "size2"
End of explanation
"""
fig,ax = plt.subplots(figsize=(12,12))
ax = mpoint.prepare_map(ax, map_context=borders, background_colour='grey')
ax = mpoly.map_shape(borders, ax, lw=.1, alpha=.7, fc='#c1c6fc')
ax = mpoint.map_size(stations, 'DistanceBe2', ax, extend_context=False, size_scale=20.,
marker=rail_icon, facecolor='green', alpha=.9)
"""
Explanation: Actually, this can also be done by using size_scale=20. (the default is 1.).
But note that the column used here is 'DistanceBe2', which holds numeric values, unlike the original column, which holds strings.
End of explanation
"""
|
arnavd96/Cinemiezer
|
Api_Script.ipynb
|
mit
|
import requests, json
api_key = 'razswfzzubnqy49ry2km9ce9'
sample_request = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=2016-08-13&zip=98056&radius=10&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
#startDate = required (set to today's date), zip/radius can be set optionally based on the user (units is just for if you want miles or kilometers)
data = json.loads(requests.get(sample_request).text)
#one way of getting the json
data[1] #sample entry.
"""
Explanation: Link to documentation: http://developer.tmsapi.com/docs/read/data_v1_1/movies/Movie_showtimes
End of explanation
"""
data[1]['title']
data[1]['showtimes'][1]
data[1]['showtimes'][1]['dateTime']
data[1]['showtimes'][1]['theatre']['name']
data[1]['showtimes'][1]['ticketURI']
data[1]['shortDescription']
data[1]['longDescription']
"""
Explanation: Relevant fields --> longDescription/shortDescription (strings), showtimes (a list of dictionaries with 'dateTime', 'theatre', and 'ticketURI' (for Fandango)), and title (string)
End of explanation
"""
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=2016-08-20&zip=98059&radius=7&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
data = json.loads(requests.get(url).text)
"""
Explanation: Processing the Data
Scenario: I want a list of movies and showtimes in the zipcode '98059', with a max radius of 7 miles, for movies that will play next week (2016-08-20).
End of explanation
"""
result = list()
test_entry = data.pop()
title = test_entry['title']
test_entry['showtimes'][0:1]
result = list()
showtime_theatre = list()
for showtime in test_entry['showtimes']:
new_theatre = showtime['theatre']['name']
new_showtime = showtime['dateTime'][11:]
new_showtime_theatre = {'showtime': new_showtime, 'theatre': new_theatre}
showtime_theatre.append(new_showtime_theatre)
showtime_theatre
result.append({'title': title, 'showtime_theatre': showtime_theatre})
result
"""
Explanation: Exploring
End of explanation
"""
from itertools import groupby
things = [("animal", "bear"), ("animal", "duck"), ("plant", "cactus"), ("vehicle", "speed boat"), ("vehicle", "school bus")]
for key, group in groupby(things, lambda x: x[0]):
for thing in group:
print("A %s is a %s." % (thing[1], key))
print(" ")
"""
Explanation: Testing 'groupby'
Option 1:
End of explanation
"""
test_showtime_theatre = true_result[0]['showtime_theatre']
test_showtime_theatre
from collections import defaultdict  # needed for the grouping below
showtimes_by_theatre = defaultdict(list)
for showtime in test_showtime_theatre:
showtimes_by_theatre[showtime['theatre']].append(showtime['showtime'])
showtimes_by_theatre['Regal The Landing Stadium 14 & RPX']
"""
Explanation: Option 2
End of explanation
"""
import requests, json
from collections import defaultdict
true_result = list()
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
true_data = json.loads(requests.get(url).text)
true_result = list()
for movie in true_data:
title = movie['title']
description = movie['shortDescription']
showtime_theatre = list()
for showtime in movie['showtimes']:
new_theatre = showtime['theatre']['name']
new_showtime = showtime['dateTime'][11:]
new_showtime_theatre = {'showtime': new_showtime, 'theatre': new_theatre}
showtime_theatre.append(new_showtime_theatre)
showtimes_by_theatre = defaultdict(list)
for Showtime in showtime_theatre:
showtimes_by_theatre[Showtime['theatre']].append(Showtime['showtime'])
new_movie = {'title': title, 'description': description, 'showtimes_by_theatre': showtimes_by_theatre}
true_result.append(new_movie)
return_list = list()
movie_title = true_result[0]['title']
description = true_result[0]['description']
theatre = 'Cinebarre Issaquah 8'
show_time = true_result[0]['showtimes_by_theatre']['Cinebarre Issaquah 8'][0] + ':00'
show_time
def get_movie(start_date, zip, radius):
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
true_data = json.loads(requests.get(url).text)
true_result = list()
for movie in true_data:
title = movie['title']
description = movie['shortDescription']
showtime_theatre = list()
for showtime in movie['showtimes']:
new_theatre = showtime['theatre']['name']
new_showtime = showtime['dateTime'][11:]
new_showtime_theatre = {'showtime': new_showtime, 'theatre': new_theatre}
showtime_theatre.append(new_showtime_theatre)
showtimes_by_theatre = defaultdict(list)
new_movie = {'title': title, 'description': description, 'showtimes_theatre': showtime_theatre}
true_result.append(new_movie)
return true_result
start_date = '2016-08-14'
zip = '98023'
radius = '10'
result = get_movie(start_date, zip, radius)
result[2]
bad_moms = list(filter(lambda x: x['title'] == 'Bad Moms', result))
showtimes_theatre = bad_moms[0]['showtimes_theatre']
century_federal_way_and_xd = list(filter(lambda x: x['theatre'] == 'Century Federal Way and XD', showtimes_theatre))
showtimes_for_bad_moms_at_century_federal = list()
for showtime in century_federal_way_and_xd:
showtimes_for_bad_moms_at_century_federal.append(showtime['showtime'] + ':00')
showtimes_for_bad_moms_at_century_federal
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
true_data = json.loads(requests.get(url).text)
true_result = list()
title_choice_list = list()
for movie in true_data:
title = movie['title']
description = movie['shortDescription']
showtime_theatre = list()
for showtime in movie['showtimes']:
new_theatre = showtime['theatre']['name']
new_showtime = showtime['dateTime'][11:]
new_showtime_theatre = {'showtime': new_showtime, 'theatre': new_theatre}
showtime_theatre.append(new_showtime_theatre)
showtimes_by_theatre = defaultdict(list)
new_movie = {'title': title, 'description': description, 'showtimes_theatre': showtime_theatre}
true_result.append(new_movie)
title_choice_list = get_title_choices('98056','10','2016-08-15')
chosen_title = title_choice_list[1]
def get_title_choices(zip,radius,date):
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
true_data = json.loads(requests.get(url).text)
true_result = list()
title_choice_list = list()
for movie in true_data:
title = movie['title']
description = movie['shortDescription']
showtime_theatre = list()
for showtime in movie['showtimes']:
new_theatre = showtime['theatre']['name']
new_showtime = showtime['dateTime'][11:]
new_showtime_theatre = {'showtime': new_showtime, 'theatre': new_theatre}
showtime_theatre.append(new_showtime_theatre)
showtimes_by_theatre = defaultdict(list)
new_movie = {'title': title, 'description': description, 'showtimes_theatre': showtime_theatre}
true_result.append(new_movie)
for entry in true_result:
title_choice_list.append(entry['title'])
return title_choice_list
def get_theatre_choices(title, true_result):
title = list(filter(lambda x: x['title'] == title, true_result))
showtimes_theatre = title[0]['showtimes_theatre']
theatre_choices_list = list()
for theatre in showtimes_theatre:
if theatre['theatre'] not in theatre_choices_list:
theatre_choices_list.append(theatre['theatre'])
return theatre_choices_list
def get_showtime_choices(title, theatre, true_result):
title = list(filter(lambda x: x['title'] == title, true_result))
showtimes_theatre = title[0]['showtimes_theatre']
chosen_showtimes_theatre = list(filter(lambda x: x['theatre'] == theatre, showtimes_theatre))
showtime_choice_list = list()
for showtime in chosen_showtimes_theatre:
showtime_choice_list.append(showtime['showtime'] + ':00')
return showtime_choice_list
theatre_choices_list = get_theatre_choices(chosen_title, true_result)
chosen_theatre = theatre_choices_list[0]
chosen_showtime = get_showtime_choices(chosen_title, chosen_theatre, true_result)[0]
"""
Explanation: Implementing
End of explanation
"""
import requests,json
from collections import defaultdict
import datetime
def get_current_json(zip, radius, start_date):
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
true_data = json.loads(requests.get(url).text)
true_result = list()
return_list = list()
for movie in true_data:
        title = movie['title']
        description = movie['shortDescription']  # used below when building new_movie
        showtime_theatre = list()
for showtime in movie['showtimes']:
new_theatre = showtime['theatre']['name']
new_showtime = showtime['dateTime'][11:]
new_showtime_theatre = {'showtime': new_showtime, 'theatre': new_theatre}
showtime_theatre.append(new_showtime_theatre)
showtimes_by_theatre = defaultdict(list)
for Showtime in showtime_theatre:
showtimes_by_theatre[Showtime['theatre']].append(Showtime['showtime'])
new_movie = {'title': title, 'description': description, 'showtimes_by_theatre': showtimes_by_theatre}
true_result.append(new_movie)
return true_result
def get_title_choices(current_json):
title_choice_list =list()
for entry in current_json:
title_choice_list.append(entry['title'])
return title_choice_list
def get_theatre_choices(current_json, title):
    title = list(filter(lambda x: x['title'] == title, current_json))  # use the title argument, not the global chosen_title
showtimes_theatre = title[0]['showtimes_theatre']
theatre_choices_list = list()
for theatre in showtimes_theatre:
if theatre['theatre'] not in theatre_choices_list:
theatre_choices_list.append(theatre['theatre'])
return theatre_choices_list
def get_showtime_choices(current_json, title, theatre):
    title = list(filter(lambda x: x['title'] == title, current_json))  # use the title argument, not the global chosen_title
showtimes_theatre = title[0]['showtimes_theatre']
    chosen_showtimes_theatre = list(filter(lambda x: x['theatre'] == theatre, showtimes_theatre))  # use the theatre argument
showtime_choice_list = list()
for showtime in chosen_showtimes_theatre:
showtime_choice_list.append(showtime['showtime'] + ':00')
return showtime_choice_list
chosen_json = get_current_json('98056', '10', '2016-08-20')
title_choices_list = get_title_choices(chosen_json)
test_date = datetime.date.today()
test_date.strftime('%Y-%m-%d')
def get_current_json(zip, radius, start_date):
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=ywfnykbqh7mgmuuwt5rjxr56'
true_data = json.loads(requests.get(url).text)
true_result = list()
for movie in true_data:
title = movie['title']
# description = movie['shortDescription']
showtime_theatre = list()
for showtime in movie['showtimes']:
new_theatre = showtime['theatre']['name']
new_showtime = showtime['dateTime'][11:]
new_showtime_theatre = {'showtime': new_showtime, 'theatre': new_theatre}
showtime_theatre.append(new_showtime_theatre)
new_movie = {'title': title,'showtime_theatre': showtime_theatre}
true_result.append(new_movie)
return true_result
def get_title_choices(current_json):
title_choice_list = list()
for entry in current_json:
title_choice_list.append((entry['title'], entry['title']))
return title_choice_list
zip = '98056'
radius = '10'
date = '2016-08-16'
title_choices = get_title_choices(get_current_json(zip, radius, date))
title_choices[1][1]
zip = '98056'
radius = '10'
date = '2016-08-16'
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=razswfzzubnqy49ry2km9ce9'
data = get_current_json(zip,radius,date)
def get_theatre_choices(current_json, title):
title = list(filter(lambda x: x['title'] == title, current_json))
showtimes_theatre = title[0]['showtime_theatre']
theatre_choices_list = list()
for theatre in showtimes_theatre:
if theatre['theatre'] not in theatre_choices_list:
theatre_choices_list.append((theatre['theatre'], theatre['theatre']))
return theatre_choices_list
get_theatre_choices(data, title_choices[1][1])
test_string = "5%4%hihihi%"
test_string.split('%')
current_json = get_current_json('98056', '10', '2016-08-16')
zip = '98056'
radius = '10'
start_date = '2016-08-16'
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=ywfnykbqh7mgmuuwt5rjxr56'
true_data = json.loads(requests.get(url).text)
true_data
"""
Explanation: Testing dropDown logic
End of explanation
"""
zip = '98056'
radius = '10'
start_date = '2016-08-20'
def get_current_json():
url = 'http://data.tmsapi.com/v1.1/movies/showings?startDate=' + start_date + '&zip=' + zip + '&radius=' + radius + '&units=mi&api_key=ywfnykbqh7mgmuuwt5rjxr56'
true_data = json.loads(requests.get(url).text)
title_list = list()
current_filter_set = list()
for entry in true_data:
title_list.append((entry['title'], entry['title']))
title_list.sort()
chosen_title = title_list[2][0]
theatre_list = list()
for entry in true_data:
if entry['title'] == chosen_title:
current_filter_set.append(entry)
for showtime in entry['showtimes']:
theatre = showtime['theatre']['name']
if (theatre, theatre) not in theatre_list:
theatre_list.append((theatre, theatre))
theatre_list.sort()
chosen_theatre = theatre_list[0][0]
showtime_list = list()
for entry in current_filter_set:
for showtime in entry['showtimes']:
if showtime['theatre']['name'] == chosen_theatre:
showtime = showtime['dateTime'][11:]
if (showtime, showtime) not in showtime_list:
showtime_list.append((showtime, showtime))
showtime_list.sort()
showtime_list
"""
Explanation: Start from Scratch: The Real Deal
End of explanation
"""
from datetime import date, datetime
present = date.today()
# parse a date string (as used elsewhere in this notebook) and check whether it is already in the past
past = datetime.strptime('2016-08-16', '%Y-%m-%d').date()
past < present
"""
Explanation: Deal with Dates
End of explanation
"""
|
jerkos/cobrapy
|
documentation_builder/phenotype_phase_plane.ipynb
|
lgpl-2.1
|
%matplotlib inline
from time import time
import cobra.test
from cobra.flux_analysis import calculate_phenotype_phase_plane
model = cobra.test.create_test_model("textbook")
"""
Explanation: Phenotype Phase Plane
Phenotype phase planes show the distinct phases of optimal growth that arise as the uptake of two different substrates is varied. For more information, see Edwards et al.
Cobrapy supports calculating and plotting (using matplotlib) these phenotype phase planes. Here, we will make one for the "textbook" E. coli core model.
End of explanation
"""
data = calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e")
data.plot_matplotlib();
"""
Explanation: We want to make a phenotype phase plane to evaluate uptakes of Glucose and Oxygen.
End of explanation
"""
data.plot_matplotlib("Pastel1")
data.plot_matplotlib("Dark2");
"""
Explanation: If brewer2mpl is installed, other color schemes can be used as well
End of explanation
"""
calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e",
reaction1_npoints=20,
reaction2_npoints=20).plot_matplotlib();
"""
Explanation: The number of points which are plotted in each dimension can also be changed
End of explanation
"""
start_time = time()
calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e", n_processes=1,
reaction1_npoints=100, reaction2_npoints=100)
print("took %.2f seconds with 1 process" % (time() - start_time))
start_time = time()
calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e", n_processes=4,
reaction1_npoints=100, reaction2_npoints=100)
print("took %.2f seconds with 4 process" % (time() - start_time))
"""
Explanation: The code can also use multiple processes to speed up calculations
End of explanation
"""
|
eggie5/ipython-notebooks
|
iris/Iris.ipynb
|
mit
|
from sklearn.datasets import load_iris
iris = load_iris()
iris.feature_names
"""
Explanation: KNN Predictions on the Iris Dataset
This notebook is also hosted at:
http://www.eggie5.com/62-knn-predictions-on-the-iris-dataset
https://github.com/eggie5/ipython-notebooks/blob/master/iris/Iris.ipynb
These are my notes from the "Intro to Python and basic Sci-kit Learn" Seminar on 10/8/2015 @ UCSD
This Notebook will demonstrate the basics of using the k-Nearest Neighbors algorithm (KNN) to make predictions. The dataset is called Iris, and is a collection of flower measurements from which we can train our model to make predictions.
The Dataset
The iris dataset is included in the Sci-kit library. It is a collection of 4-dimensional vectors that map flower measurements to a flower species.
End of explanation
"""
iris.data
"""
Explanation: As you can see from the above output, the dataset is indeed 4-dimensional, with a length and a width measurement for both the flower's sepal and its petal. Let's preview the data for these measurements:
End of explanation
"""
iris.target
"""
Explanation: Now each of these vectors maps directly to a flower species, described in target:
End of explanation
"""
iris.target_names
"""
Explanation: The numbers in the output above are ids. The dataset provides a lookup table to map the ids to a string flower name:
End of explanation
"""
iris.target_names[2]
"""
Explanation: So, id 2 maps to:
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
X = iris.data
Y = iris.target
knn.fit(X,Y)
"""
Explanation: The Model
For this demonstration we will use the KNN algorithm to model a flower species prediction engine. I can't get too deep into the details of the theory behind KNN, but I can try to describe it intuitively below.
KNN Intuition
For simplicity, let's imagine our dataset is only 2-dimensional instead of 4 (remove the width measurements), e.g.:
['sepal length (cm)',
'petal length (cm)']
Now we have a collection of (sepal length, petal length) pairs that map to iris.target
If we scatter plot this we'd have something like the following (a quick code sketch of this plot is included after this section):
Where the x axis is sepal length (cm), the y axis is petal length (cm) and the color (red/green/blue) corresponds to the species id.
Now, from intuition, you could say that if you picked a point on the scatter plot and it is surrounded by a majority of blue dots, then it can be inferred from the data, with some degree of certainty, that the point is also of the blue species. This is in essence what KNN does: it builds a boundary around a cluster of similar points. See the image below:
Also, keep in mind that while the example above is presented with a 2D dataset and ours is 4D, the theory still holds.
Now let's test this out and see if we can make some predictions!
Experiment
End of explanation
"""
test_input = [3,5,4,2]
species_id = knn.predict(test_input)
print iris.target_names[species_id]
"""
Explanation: So what we are saying above is that we have an input matrix X with the corresponding outputs in vector Y; we now train the model w/ these inputs so that we have a boundary map, like the image above, that will allow us to make arbitrary predictions.
So for example, say I have a flower w/ these measurements:
sepal length (cm): 3
sepal width (cm): 5
petal length (cm): 4
petal width (cm): 2
If I convert that into a vector and feed it through the model knn, it should make a prediction about the species.
End of explanation
"""
from sklearn.cross_validation import train_test_split
"""
Explanation: So, we can say w/ a certain degree of certainty that this random sample test_input is of type virginica
Measuring Error
Now I keep using the language "with a certain degree of certainty", etc., by which I'm trying to convey that this and other machine learning/data mining models are only approximations, so a degree of error exists. Let's measure the error of our KNN.
Cross Validation w/ Test Train Split
One way we can measure the accuracy of our model is to hold back part of the data during training and then test the model against that held-out portion. Sci-kit has a convenient function to help us with that.
End of explanation
"""
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size=0.4, random_state=4)
actual = Y_test
"""
Explanation: We're basically feeding in input for which we KNOW the correct output to measure the error in our model.
In order to do this we will use the train_test_split function which will divide our dataset into 2 parts. The first part will be used to train the model and the second part will be used to test the model.
End of explanation
"""
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, Y_train)
expected = knn.predict(X_test) #predictions
"""
Explanation: Now train the model w/ the new data:
End of explanation
"""
from sklearn import metrics
score_1 = metrics.accuracy_score(actual, expected)
score_1
"""
Explanation: Now what we could do is to feed in all the test data in X_test, compare the results to the known answers in Y_test, and then measure the error, which is the difference between the expected answer and the actual answer. Then we could compute some type of error function, such as mean-squared-error, etc.
But even more conveniently, Sci-kit has a metrics library that will automate a lot of this:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
scores = []
k_range = range(1,26)
for k in k_range:
knn=KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, Y_train)
actual = knn.predict(X_test)
scores.append(metrics.accuracy_score(Y_test, actual))
plt.xlabel('Value of K for KNN')
plt.ylabel('Testing Accuracy')
plt.plot(k_range, scores)
"""
Explanation: Lower Error Rate
Our model has an accuracy of .94, where 1 is the max. I don't have anything to compare it to, but it seems pretty accurate. There are lots of settings in the KNN model, such as the K value, that we can adjust to get a higher accuracy. If you noticed, the first time we instantiated KNeighborsClassifier we chose a setting of n_neighbors=1 with no discussion. Let's iterate over K from 1 to 25 and see if we can't lower the error rate:
End of explanation
"""
knn = KNeighborsClassifier(n_neighbors=20)
knn.fit(X_train, Y_train)
expected = knn.predict(X_test)
score_2 = metrics.accuracy_score(Y_test, expected)  # compare the k=20 predictions against the true test labels
score_2
(score_1-score_2)/score_1
"""
Explanation: It looks like if we choose a K value of around 20 our model should be the most accurate. Let's see if, by adjusting our K, we can get a higher score than the last score, score_1, of 0.94999999999999996:
End of explanation
"""
|
alexandrnikitin/algorithm-sandbox
|
courses/DAT256x/Module04/04-01-Data and Visualization.ipynb
|
mit
|
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
df
"""
Explanation: Data and Data Visualization
Machine learning, and therefore a large part of AI, is based on statistical analysis of data. In this notebook, you'll examine some fundamental concepts related to data and data visualization.
Introduction to Data
Statistics are based on data, which consist of a collection of pieces of information about things you want to study. This information can take the form of descriptions, quantities, measurements, and other observations. Typically, we work with related data items in a dataset, which often consists of a collection of observations or cases. Most commonly, we think of this dataset as a table that consists of a row for each observation, and a column for each individual data point related to that observation - we variously call these data points attributes or features, and they each describe a specific characteristic of the thing we're observing.
Let's take a look at a real example. In 1886, Francis Galton conducted a study into the relationship between heights of parents and their (adult) children. Run the Python code below to view the data he collected (you can safely ignore a deprecation warning if it is displayed):
End of explanation
"""
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of gender counts
genderCounts = df['gender'].value_counts()
# Plot a bar chart
%matplotlib inline
from matplotlib import pyplot as plt
genderCounts.plot(kind='bar', title='Gender Counts')
plt.xlabel('Gender')
plt.ylabel('Number of Children')
plt.show()
"""
Explanation: Types of Data
Now, let's take a closer look at this data (you can click the left margin next to the dataset to toggle between full height and a scrollable pane). There are 933 observations, each one recording information pertaining to an individual child. The information recorded consists of the following features:
- family: An identifier for the family to which the child belongs.
- father: The height of the father.
- mother: The height of the mother.
- midparentHeight: The mid-point between the father and mother's heights (calculated as (father + 1.08 x mother) ÷ 2)
- children: The total number of children in the family.
- childNum: The number of the child to whom this observation pertains (Galton numbered the children in descending order of height, with male children listed before female children)
- gender: The gender of the child to whom this observation pertains.
- childHeight: The height of the child to whom this observation pertains.
It's worth noting that there are several distinct types of data recorded here. To begin with, there are some features that represent qualities, or characteristics of the child - for example, gender. Other features represent a quantity or measurement, such as the child's height. So broadly speaking, we can divide data into qualitative and quantitative data.
Qualitative Data
Let's take a look at qualitative data first. This type of data is categorical - it is used to categorize or identify the entity being observed. Sometimes you'll see features of this type described as factors.
Nominal Data
In his observations of children's height, Galton assigned an identifier to each family and he recorded the gender of each child. Note that even though the family identifier is a number, it is not a measurement or quantity. Family 002 is not "greater" than family 001, just as a gender value of "male" does not indicate a larger or smaller value than "female". These are simply named values for some characteristic of the child, and as such they're known as nominal data.
Ordinal Data
So what about the childNum feature? It's not a measurement or quantity - it's just a way to identify individual children within a family. However, the number assigned to each child has some additional meaning - the numbers are ordered. You can find similar data that is text-based; for example, data about training courses might include a "level" attribute that indicates the level of the course as "basic", "intermediate", or "advanced". This type of data, where the value is not itself a quantity or measurement, but it indicates some sort of inherent order or hierarchy, is known as ordinal data.
Quantitative Data
Now let's turn our attention to the features that indicate some kind of quantity or measurement.
Discrete Data
Galton's observations include the number of children in each family. This is a discrete quantitative data value - it's something we count rather than measure. You can't, for example, have 2.33 children!
Continuous Data
The data set also includes height values for father, mother, midparentHeight, and childHeight. These are measurements along a scale, and as such they're described as continuous quantitative data values that we measure rather than count.
Sample vs Population
Galton's dataset includes 933 observations. It's safe to assume that this does not account for every person in the world, or even just the UK, in 1886 when the data was collected. In other words, Galton's data represents a sample of a larger population. It's worth pausing to think about this for a few seconds, because there are some implications for any conclusions we might draw from Galton's observations.
Think about how many times you see a claim such as "one in four Americans enjoys watching football". How do the people who make this claim know that this is a fact? Have they asked everyone in the US about their football-watching habits? Well, that would be a bit impractical, so what usually happens is that a study is conducted on a subset of the population, and (assuming that this is a well-conducted study), that subset will be a representative sample of the population as a whole. If the survey was conducted at the stadium where the Superbowl is being played, then the results are likely to be skewed because of a bias in the study participants.
Similarly, we might look at Galton's data and assume that the heights of the people included in the study bears some relation to the heights of the general population in 1886; but if Galton specifically selected abnormally tall people for his study, then this assumption would be unfounded.
When we deal with statistics, we usually work with a sample of the data rather than a full population. As you'll see later, this affects the way we use notation to indicate statistical measures; and in some cases we calculate statistics from a sample differently than from a full population to account for bias in the sample.
Visualizing Data
Data visualization is one of the key ways in which we can examine data and get insights from it. If a picture is worth a thousand words, then a good graph or chart is worth any number of tables of data.
Let's examine some common kinds of data visualization:
Bar Charts
A bar chart is a good way to compare numeric quantities or counts across categories. For example, in the Galton dataset, you might want to compare the number of female and male children.
Here's some Python code to create a bar chart showing the number of children of each gender.
End of explanation
"""
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of child counts
# there's a row for each child, so we need to filter to one row per family to avoid over-counting
families = df[['family', 'children']].drop_duplicates()
# Now count number of rows for each 'children' value, and sort by the index (children)
childCounts = families['children'].value_counts().sort_index()
# Plot a bar chart
%matplotlib inline
from matplotlib import pyplot as plt
childCounts.plot(kind='bar', title='Family Size')
plt.xlabel('Number of Children')
plt.ylabel('Families')
plt.show()
"""
Explanation: From this chart, you can see that there are slightly more male children than female children; but the data is reasonably evenly split between the two genders.
Bar charts are typically used to compare categorical (qualitative) data values; but in some cases you might treat a discrete quantitative data value as a category. For example, in the Galton dataset the number of children in each family could be used as a way to categorize families. We might want to see how many familes have one child, compared to how many have two children, etc.
Here's some Python code to create a bar chart showing family counts based on the number of children in the family.
End of explanation
"""
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Plot a histogram of father heights
%matplotlib inline
from matplotlib import pyplot as plt
df['father'].plot.hist(title='Father Heights')
plt.xlabel('Height')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: Note that the code sorts the data so that the categories on the x axis are in order - attention to this sort of detail can make your charts easier to read. In this case, we can see that the most common number of children per family is 1, followed by 5 and 6. Comparatively fewer families have more than 8 children.
Histograms
Bar charts work well for comparing categorical or discrete numeric values. When you need to compare continuous quantitative values, you can use a similar style of chart called a histogram. Histograms differ from bar charts in that they group the continuous values into ranges or bins - so the chart doesn't show a bar for each individual value, but rather a bar for each range of binned values. Because these bins represent continuous data rather than discrete data, the bars aren't separated by a gap. Typically, a histogram is used to show the relative frequency of values in the dataset.
Here's some Python code to create a histogram of the father values in the Galton dataset, which record the father's height:
End of explanation
"""
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Plot a histogram of father heights
%matplotlib inline
from matplotlib import pyplot as plt
df['father'].plot.hist(title='Father Heights', bins=19)
plt.xlabel('Height')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: The histogram shows that the most frequently occurring heights tend to be in the mid-range. There are fewer extremely short or extremely tall fathers.
In the histogram above, the number of bins (and their corresponding ranges, or bin widths) was determined automatically by Python. In some cases you may want to explicitly control the number of bins, as this can help you see detail in the distribution of data values that otherwise you might miss. The following code creates a histogram for the same father's height values, but explicitly distributes them over 19 bins (bins=19 is passed to the plot call):
End of explanation
"""
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of gender counts
genderCounts = df['gender'].value_counts()
# Plot a pie chart
%matplotlib inline
from matplotlib import pyplot as plt
genderCounts.plot(kind='pie', title='Gender Counts', figsize=(6,6))
plt.legend()
plt.show()
"""
Explanation: We can still see that the most common heights are in the middle, but there's a notable drop in the number of fathers with a height between 67.5 and 70.
Pie Charts
Pie charts are another way to compare relative quantities of categories. They're not commonly used by data scientists, but they can be useful in many business contexts with manageable numbers of categories because they not only make it easy to compare relative quantities by categories; they also show those quantities as a proportion of the whole set of data.
Here's some Python to show the gender counts as a pie chart:
End of explanation
"""
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of heights (father vs child)
parentHeights = df[['midparentHeight', 'childHeight']]
# Plot a scatter plot chart
%matplotlib inline
from matplotlib import pyplot as plt
parentHeights.plot(kind='scatter', title='Parent vs Child Heights', x='midparentHeight', y='childHeight')
plt.xlabel('Avg Parent Height')
plt.ylabel('Child Height')
plt.show()
"""
Explanation: Note that the chart includes a legend to make it clear what category each colored area in the pie chart represents. From this chart, you can see that males make up slightly more than half of the overall number of children; with females accounting for the rest.
Scatter Plots
Often you'll want to compare quantitative values. This can be especially useful in data science scenarios where you are exploring data prior to building a machine learning model, as it can help identify apparent relationships between numeric features. Scatter plots can also help identify potential outliers - values that are significantly outside of the normal range of values.
The following Python code creates a scatter plot that plots the intersection points for midparentHeight on the x axis, and childHeight on the y axis:
End of explanation
"""
import statsmodels.api as sm
df = sm.datasets.elnino.load_pandas().data
df['AVGSEATEMP'] = df.mean(1)
# Plot a line chart
%matplotlib inline
from matplotlib import pyplot as plt
df.plot(title='Average Sea Temperature', x='YEAR', y='AVGSEATEMP')
plt.xlabel('Year')
plt.ylabel('Average Sea Temp')
plt.show()
"""
Explanation: In a scatter plot, each dot marks the intersection point of the two values being plotted. In this chart, most of the heights are clustered around the center; which indicates that most parents and children tend to have a height that is somewhere in the middle of the range of heights observed. At the bottom left, there's a small cluster of dots that show some parents from the shorter end of the range who have children that are also shorter than their peers. At the top right, there are a few extremely tall parents who have extremely tall children. It's also interesting to note that the top left and bottom right of the chart are empty - there aren't any cases of extremely short parents with extremely tall children or vice-versa.
Line Charts
Line charts are a great way to see changes in values along a series - usually (but not always) based on a time period. The Galton dataset doesn't include any data of this type, so we'll use a different dataset that includes observations of sea surface temperature between 1950 and 2010 for this example:
End of explanation
"""
|
climberwb/pycon-pandas-tutorial
|
Exercises-5.ipynb
|
mit
|
r_d = release_dates[(release_dates.title.str.contains("Christmas")) & (release_dates.country == "USA")]
r_d.date.dt.month.value_counts().sort_index().plot(kind="bar")
"""
Explanation: Make a bar plot of the months in which movies with "Christmas" in their title tend to be released in the USA.
End of explanation
"""
r_d = release_dates[(release_dates.title.str.contains("The Hobbit")) & (release_dates.country == "USA")]
r_d
r_d.date.dt.month.value_counts().sort_index().plot(kind="bar")
"""
Explanation: Make a bar plot of the months in which movies whose titles start with "The Hobbit" are released in the USA.
End of explanation
"""
r_d = release_dates[(release_dates.title.str.contains("Romance")) ]
r_d
r_d.date.dt.dayofweek.value_counts().sort_index().plot(kind="bar")
"""
Explanation: Make a bar plot of the day of the week on which movies with "Romance" in their title tend to be released in the USA.
End of explanation
"""
r_d = release_dates[(release_dates.title.str.contains("Action")) ]
r_d
r_d.date.dt.dayofweek.value_counts().sort_index().plot(kind="bar")
"""
Explanation: Make a bar plot of the day of the week on which movies with "Action" in their title tend to be released in the USA.
End of explanation
"""
usa = release_dates[release_dates.country == 'USA']
c = cast
c = c[c.name == 'Judi Dench']
c = c[c.year // 10 * 10 == 1990]
c.merge(usa).sort('date')
"""
Explanation: On which date was each Judi Dench movie from the 1990s released in the USA?
End of explanation
"""
c = cast
c = c[c.name == 'Judi Dench']
m = c.merge(usa).sort('date')
m.date.dt.month.value_counts().sort_index().plot(kind='bar')
c = cast
c = c[c.name == 'Tom Cruise']
m = c.merge(usa).sort('date')
m.date.dt.month.value_counts().sort_index().plot(kind='bar')
"""
Explanation: In which months do films with Judi Dench tend to be released in the USA?
End of explanation
"""
|
rsignell-usgs/notebook
|
CSW/CSW_ServiceType_query.ipynb
|
mit
|
from owslib.csw import CatalogueServiceWeb
from owslib import fes
import numpy as np
endpoint = 'http://geoport.whoi.edu/csw'
#endpoint = 'http://catalog.data.gov/csw-all'
#endpoint = 'http://www.ngdc.noaa.gov/geoportal/csw'
#endpoint = 'http://www.nodc.noaa.gov/geoportal/csw'
csw = CatalogueServiceWeb(endpoint,timeout=60)
print csw.version
csw.get_operation_by_name('GetRecords').constraints
try:
csw.get_operation_by_name('GetDomain')
csw.getdomain('apiso:ServiceType', 'property')
print(csw.results['values'])
except:
print('GetDomain not supported')
"""
Explanation: Query CSW to find all COAWST WMS services
Find all the COAWST (ocean model) datasets that have WMS services by using the CSW queryables apiso:anyText and apiso:ServiceType on different CSW endpoints.
End of explanation
"""
val = 'COAWST'
filter1 = fes.PropertyIsLike(propertyname='apiso:AnyText',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?')
filter_list = [ filter1 ]
csw.getrecords2(constraints=filter_list,maxrecords=100,esn='full')
print len(csw.records.keys())
for rec in list(csw.records.keys()):
print csw.records[rec].title
choice=np.random.choice(list(csw.records.keys()))
print(csw.records[choice].title)
csw.records[choice].references
"""
Explanation: Query for all COAWST datasets
End of explanation
"""
val = 'COAWST'
filter1 = fes.PropertyIsLike(propertyname='apiso:AnyText',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?',matchCase=True)
val = 'WMS'
filter2 = fes.PropertyIsLike(propertyname='apiso:ServiceType',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?',matchCase=False)
filter_list = [ [filter1, filter2] ]
csw.getrecords2(constraints=filter_list, maxrecords=1000)
print(len(csw.records.keys()))
for rec in list(csw.records.keys()):
print('title:'+csw.records[rec].title)
print('identifier:'+csw.records[rec].identifier)
print('modified:'+csw.records[rec].modified)
val = 'wms'
filter2 = fes.PropertyIsLike(propertyname='apiso:ServiceType',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?',matchCase=True)
filter_list = [ filter2 ]
csw.getrecords2(constraints=filter_list, maxrecords=1000)
print(len(csw.records.keys()))
"""
Explanation: Query for all COAWST datsets that also contain WMS endpoints
Since all COAWST datasets contain WMS endpoints, this should return the same number of dataset records
End of explanation
"""
|
isb-cgc/examples-Python
|
notebooks/Somatic Mutations.ipynb
|
apache-2.0
|
import gcp.bigquery as bq
somatic_mutations_BQtable = bq.Table('isb-cgc:tcga_201607_beta.Somatic_Mutation_calls')
"""
Explanation: Somatic Mutations
The goal of this notebook is to introduce you to the Somatic Mutations BigQuery table.
This table is based on the open-access somatic mutation calls available in MAF files at the DCC. In addition to uploading all current MAF files from the DCC, the mutations were also annotated using Oncotator. A subset of the columns in the underlying MAF files and a subset of the Oncotator outputs were then assembled in this table.
In addition, the ETL process includes several data-cleaning steps because many tumor types actually have multiple current MAF files and therefore potentially duplicate mutation calls. In some cases, a tumor sample may have had mutations called relative to both a blood-normal and an adjacent-tissue sample, and in other cases MAF files may contain mutations called on more than one aliquot from the same sample. Every effort was made to include all of the available data at the DCC while avoiding having multiple rows in the mutation table describing the same somatic mutation. Note, however, that if the same mutation was called by multiple centers and appeared in different MAF files, it may be described on multiple rows (as you will see later in this notebook). Furthermore, in some cases, the underlying MAF file may have been based on a multi-center mutation-calling exercise, in which case you may see a list of centers in the Center field, eg "bcgsc.ca;broad.mit.edu;hgsc.bcm.edu;mdanderson.org;ucsc.edu".
In conclusion, if you are counting up the number of mutations observed in a sample or a patient or a tumor-type, be sure to include the necessary GROUP BY clause(s) in order to avoid double-counting!
As usual, in order to work with BigQuery, you need to import the bigquery module (gcp.bigquery) and you need to know the name(s) of the table(s) you are going to be working with:
End of explanation
"""
%bigquery schema --table $somatic_mutations_BQtable
"""
Explanation: Let's start by taking a look at the table schema:
End of explanation
"""
%%sql --module count_unique
SELECT COUNT(DISTINCT $f,25000) AS n
FROM $t
fieldList = ['ParticipantBarcode', 'Tumor_SampleBarcode', 'Normal_SampleBarcode' ]
for aField in fieldList:
field = somatic_mutations_BQtable.schema[aField]
rdf = bq.Query(count_unique,t=somatic_mutations_BQtable,f=field).to_dataframe()
print " There are %6d unique values in the field %s. " % ( rdf.iloc[0]['n'], aField)
"""
Explanation: That's a lot of fields! Let's dig in a bit further to see what is included in this table. For example let's count up the number of unique patients, tumor-samples, and normal-samples based on barcode identifiers:
End of explanation
"""
%%sql --module top_5_values
SELECT $f, COUNT(*) AS n
FROM $t
WHERE ( $f IS NOT NULL )
GROUP BY $f
ORDER BY n DESC
LIMIT 5
"""
Explanation: Now let's look at a few key fields and find the top-5 most frequent values in each field:
End of explanation
"""
bq.Query(top_5_values,t=somatic_mutations_BQtable,f=somatic_mutations_BQtable.schema['Hugo_Symbol']).results().to_dataframe()
bq.Query(top_5_values,t=somatic_mutations_BQtable,f=somatic_mutations_BQtable.schema['Center']).results().to_dataframe()
bq.Query(top_5_values,t=somatic_mutations_BQtable,f=somatic_mutations_BQtable.schema['Mutation_Status']).results().to_dataframe()
bq.Query(top_5_values,t=somatic_mutations_BQtable,f=somatic_mutations_BQtable.schema['Protein_Change']).results().to_dataframe()
"""
Explanation: You can use the parameterized query defined above to find the top-5 most frequently occurring values for any field of interest, for example:
End of explanation
"""
%%sql --module find_BRAF_V600E
SELECT
Tumor_SampleBarcode,
Study,
Hugo_Symbol,
Genome_Change,
Protein_Change
FROM
$t
WHERE
( Hugo_Symbol="BRAF"
AND Protein_Change="p.V600E" )
GROUP BY
Tumor_SampleBarcode,
Study,
Hugo_Symbol,
Genome_Change,
Protein_Change
ORDER BY
Study,
Tumor_SampleBarcode
r = bq.Query(find_BRAF_V600E,t=somatic_mutations_BQtable).results()
r.to_dataframe()
"""
Explanation: Everyone probably recognizes the V600E mutation in the previous result, so let's use that well-known BRAF mutation as a way to explore what other information is available in this table.
End of explanation
"""
%%sql
SELECT Study, COUNT(*) AS n
FROM $r
GROUP BY Study
HAVING n > 1
ORDER BY n DESC
"""
Explanation: Let's count these mutations up by study (tumor-type):
End of explanation
"""
%%sql --module find_BRAF_V600E_by_patient
SELECT
ParticipantBarcode,
Study,
Hugo_Symbol,
Genome_Change,
Protein_Change
FROM
$t
WHERE
( Hugo_Symbol="BRAF"
AND Protein_Change="p.V600E" )
GROUP BY
ParticipantBarcode,
Study,
Hugo_Symbol,
Genome_Change,
Protein_Change
ORDER BY
Study,
ParticipantBarcode
r = bq.Query(find_BRAF_V600E_by_patient,t=somatic_mutations_BQtable).results()
%%sql
SELECT Study, COUNT(*) AS n
FROM $r
GROUP BY Study
HAVING n > 1
ORDER BY n DESC
"""
Explanation: You may have noticed that in our earlier query, we did a GROUP BY to make sure that we didn't count the same mutation called on the same sample more than once. We might want to GROUP BY patient instead to see if that changes our counts -- we may have multiple samples from some patients.
End of explanation
"""
%%sql
SELECT
ParticipantBarcode,
COUNT(*) AS m
FROM (
SELECT
ParticipantBarcode,
Tumor_SampleBarcode,
COUNT(*) AS n
FROM
$somatic_mutations_BQtable
WHERE
( Hugo_Symbol="BRAF"
AND Protein_Change="p.V600E"
AND Study="THCA" )
GROUP BY
ParticipantBarcode,
Tumor_SampleBarcode,
)
GROUP BY
ParticipantBarcode
HAVING
m > 1
ORDER BY
m DESC
"""
Explanation: When we counted the number of mutated samples, we found 261 THCA samples, but when we counted the number of patients, we found 258 THCA patients, so let's see what's going on there.
End of explanation
"""
%%sql
SELECT
ParticipantBarcode,
Tumor_SampleBarcode,
Tumor_SampleTypeLetterCode,
Normal_SampleBarcode,
Normal_SampleTypeLetterCode,
Center,
FROM
$somatic_mutations_BQtable
WHERE
( Hugo_Symbol="BRAF"
AND Protein_Change="p.V600E"
AND Study="THCA"
AND ParticipantBarcode="TCGA-EM-A2P1" )
ORDER BY
Tumor_SampleBarcode,
Normal_SampleBarcode,
Center
"""
Explanation: Sure enough, we see that the same mutation is reported twice for each of these three patients. Let's look at why:
End of explanation
"""
%%sql
SELECT
ParticipantBarcode,
Tumor_SampleTypeLetterCode,
Normal_SampleTypeLetterCode,
Study,
Center,
Variant_Type,
Variant_Classification,
Genome_Change,
cDNA_Change,
Protein_Change,
UniProt_Region,
COSMIC_Total_Alterations_In_Gene,
DrugBank
FROM
$somatic_mutations_BQtable
WHERE
( Hugo_Symbol="BRAF"
AND Protein_Change="p.V600E"
AND Study="THCA"
AND ParticipantBarcode="TCGA-EM-A2P1"
AND Tumor_SampleTypeLetterCode="TP"
AND Center="broad.mit.edu" )
"""
Explanation: Aha! not only did this patient provide both a primary tumor (TP) and a metastatic (TM) sample, but we have mutation calls from three different centers.
Finally, let's pick out one of these mutations and see what some of the other fields in this table can tell us:
End of explanation
"""
%%sql --module BRAF_TUTE
SELECT
Chr,
Start,
Func,
Gene,
AA,
Polyphen2_HDIV_score,
Polyphen2_HVAR_score,
MutationAssessor_score,
TUTE
FROM
[silver-wall-555:TuteTable.hg19]
WHERE
( Gene="BRAF"
AND Func="exonic" )
ORDER BY
Start ASC
tuteBRAFscores = bq.Query(BRAF_TUTE).results().to_dataframe()
tuteBRAFscores.describe()
"""
Explanation: When working with variants or mutations, there is another public BigQuery table that you might find useful. Developed by Tute Genomics, this comprehensive, publicly-available database of over 8.5 billion known variants was announced earlier this year. This table includes several types of annotations and scores, such as Polyphen2 and MutationAssessor, and a proprietary "Tute score" which estimates whether a SNP or indel is likely to be associated with Mendelian phenotypes.
For example, you can look up all exonic BRAF mutations in the TuteTable in less than 20 seconds:
End of explanation
"""
%%sql --module TCGA_BRAF
SELECT
Hugo_Symbol,
Protein_Change,
MutationAssessor_score,
TUTE
FROM (
SELECT
Hugo_Symbol,
Protein_Change
FROM
$t
WHERE
( Hugo_Symbol="BRAF" )
GROUP BY
Hugo_Symbol,
Protein_Change ) AS tcga
JOIN (
SELECT
Gene,
AA,
MutationAssessor_score,
TUTE
FROM
[silver-wall-555:TuteTable.hg19]
WHERE
( Gene="BRAF" ) ) AS tute
ON
tcga.Hugo_Symbol=tute.Gene
AND tcga.Protein_Change=tute.AA
tcgaBRAFscores = bq.Query(TCGA_BRAF,t=somatic_mutations_BQtable).results().to_dataframe()
tcgaBRAFscores.describe()
import numpy as np
import matplotlib.pyplot as plt
plt.hist(tuteBRAFscores['TUTE'],bins=50,normed=True,color='red',alpha=0.6,label='all variants');
plt.hist(tcgaBRAFscores['TUTE'],bins=50,normed=True,color='blue',alpha=0.4,label='TCGA somatic mutations');
plt.legend(loc='upper right');
plt.xlabel('TUTE score');
plt.ylabel('probability');
plt.hist(tuteBRAFscores['MutationAssessor_score'],bins=45,range=[-4,5],normed=True,color='red',alpha=0.6,label='all variants');
plt.hist(tcgaBRAFscores['MutationAssessor_score'],bins=45,range=[-4,5],normed=True,color='blue',alpha=0.4,label='TCGA somatic mutations');
plt.legend(loc='upper right');
plt.xlabel('MutationAssessor score');
plt.ylabel('probability');
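# Optionally, a quick way to put a number on the visual comparison above
# (a sketch, assuming scipy is installed): the two-sample Kolmogorov-Smirnov
# test measures how different the two TUTE score distributions are.
from scipy.stats import ks_2samp
ks_stat, p_value = ks_2samp(tuteBRAFscores['TUTE'].dropna(),
                            tcgaBRAFscores['TUTE'].dropna())
print('KS statistic = {:.3f}, p-value = {:.3g}'.format(ks_stat, p_value))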
"""
Explanation: Let's go back to the TCGA somatic mutations table, pull out all BRAF mutations, and then join them with the matching mutations in the Tute Table so that we can compare the distribution of scores (e.g. MutationAssessor and TUTE) between the somatic mutations seen in TCGA and the larger set of variants contained in the Tute Table.
End of explanation
"""
|
craigrshenton/home
|
notebooks/notebook7.ipynb
|
mit
|
# code written in py_3.0
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
"""
Explanation: Load data from http://media.wiley.com/product_ancillary/6X/11186614/DOWNLOAD/ch08.zip, SwordForecasting.xlsx
End of explanation
"""
# find path to your SwordForecasting.xlsx
df_sales = pd.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch08/SwordForecasting.xlsm','rb'), sheetname=0)
df_sales = df_sales.iloc[0:36, 0:2]
df_sales.rename(columns={'t':'Time'}, inplace=True)
df_sales.head()
df_sales.Time = pd.date_range('2010-1', periods=len(df_sales.Time), freq='M') # 'Time' is now in time-series format
df_sales = df_sales.set_index('Time') # set Time as Series index
sns.set(style="darkgrid", context="notebook", font_scale=0.9, rc={"lines.linewidth": 1.5}) # make plots look nice
fig, ax = plt.subplots(1)
ax.plot(df_sales)
plt.ylabel('Demand')
plt.xlabel('Date')
# rotate and align the tick labels so they look better
fig.autofmt_xdate()
# use a more precise date string for the x axis locations
ax.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
plt.show()
"""
Explanation: Load sales time-series data
End of explanation
"""
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeSeries):
fig, ax = plt.subplots(1)
ax.plot(timeSeries, '-.', label='raw data')
ax.plot(timeSeries.rolling(12).mean(), label='moving average (year)')
ax.plot(timeSeries.expanding().mean(), label='expanding')
ax.plot(timeSeries.ewm(alpha=0.03).mean(), label='EWMA ($\\alpha=.03$)')
# rotate and align the tick labels so they look better
fig.autofmt_xdate()
plt.ylabel('Demand')
plt.xlabel('Date')
plt.legend(bbox_to_anchor=(1.35, .5))
plt.show()
# perform Dickey-Fuller test:
print('Results of Dickey-Fuller Test:')
dftest = adfuller(timeSeries.ix[:,0], autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
test_stationarity(df_sales)
def tsplot(y, lags=None, figsize=(10, 8)):
fig = plt.figure(figsize=figsize)
layout = (2, 2)
ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
y.plot(ax=ts_ax)
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)
[ax.set_xlim(1.5) for ax in [acf_ax, pacf_ax]]
sns.despine()
plt.tight_layout()
return ts_ax, acf_ax, pacf_ax
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import statsmodels.api as sm
mod = smt.ARIMA(df_sales, order=(1, 1, 1))
res = mod.fit()
pred_dy = res.get_prediction(start=min(df_sales.index), dynamic=min(df_sales.index))
pred_dy_ci = pred_dy.conf_int()
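# A minimal visualisation sketch for the fit above (assuming the results
# object exposes predicted_mean / conf_int as used here): plot the
# one-step-ahead predictions together with their confidence band.
fig, ax = plt.subplots(1)
ax.plot(df_sales, label='observed')
ax.plot(pred_dy.predicted_mean, label='predicted')
ax.fill_between(pred_dy_ci.index, pred_dy_ci.iloc[:, 0], pred_dy_ci.iloc[:, 1],
                color='k', alpha=0.15, label='confidence interval')
fig.autofmt_xdate()
plt.ylabel('Demand')
plt.xlabel('Date')
plt.legend()
plt.show()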
"""
Explanation: Following Aarshay Jain over at Analytics Vidhya (see here) we implement a Rolling Mean, Standard Deviation + Dickey-Fuller test
End of explanation
"""
|
ricklupton/sankeyview
|
docs/tutorials/colour-scales.ipynb
|
mit
|
import pandas as pd
import numpy as np
from floweaver import *
df1 = pd.read_csv('holiday_data.csv')
"""
Explanation: Colour-intensity scales
In this tutorial we will look at how to use colours in the Sankey diagram. We have already seen how to use a palette, but in this tutorial we will also create a Sankey where the intensity of the colour is proportional to a numerical value.
First step is to import all the required packages and data:
End of explanation
"""
dataset = Dataset(df1)
df1
"""
Explanation: Now take a look at the dataset we are using. This is a very insightful [made-up] dataset about how different types of people lose weight while on holiday enjoying themselves.
End of explanation
"""
partition_job = Partition.Simple('Employment Job', np.unique(df1['Employment Job']))
partition_activity = Partition.Simple('Activity', np.unique(df1['Activity']))
"""
Explanation: We now define the partitions of the data. Rather than listing the categories by hand, we use np.unique to pick out a list of the unique values that occur in the dataset.
End of explanation
"""
# these statements or the ones above do the same thing
partition_job = dataset.partition('Employment Job')
partition_activity = dataset.partition('Activity')
"""
Explanation: In fact, this is pretty common so there is a built-in function to do this:
End of explanation
"""
nodes = {
'Activity': ProcessGroup(['Activity'], partition_activity),
'Job': ProcessGroup(['Employment Job'], partition_job),
}
bundles = [
Bundle('Activity', 'Job'),
]
ordering = [
['Activity'],
['Job'],
]
"""
Explanation: We then go on to define the structure of our Sankey. We define nodes, bundles and the ordering. In this case it's pretty straightforward:
End of explanation
"""
# These are the same each time, so just write them here once
size_options = dict(width=500, height=400,
margins=dict(left=100, right=100))
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, dataset, measures='Calories Burnt').to_widget(**size_options)
"""
Explanation: Now we will plot a Sankey that shows the share of time dedicated to each activity by each type of person.
End of explanation
"""
sdd = SankeyDefinition(nodes, bundles, ordering, flow_partition=partition_job)
weave(sdd, dataset, palette='Set2_8', measures='Calories Burnt').to_widget(**size_options)
"""
Explanation: We can start using colour by specifying that we want to partition the flows according to type of person. Notice that this time we are using a pre-determined palette.
You can find all sorts of palettes listed here.
End of explanation
"""
weave(sdd, dataset, link_color=QuantitativeScale('Calories Burnt'), measures='Calories Burnt').to_widget(**size_options)
"""
Explanation: Now, what if we want the colour of the flow to be proportional to a numerical value? Pass a QuantitativeScale as the link_color argument, naming the variable you want to display in colour. To start off, let's use "Calories Burnt", which is also what sets the width of the lines: wider lines will be shown in a darker colour.
End of explanation
"""
weave(sdd, dataset, measures={'Calories Burnt': 'sum', 'Enjoyment': 'mean'}, link_width='Calories Burnt',
link_color=QuantitativeScale('Enjoyment')).to_widget(**size_options)
weave(sdd, dataset, measures={'Calories Burnt': 'sum', 'Enjoyment': 'mean'}, link_width='Calories Burnt',
link_color=QuantitativeScale('Enjoyment', intensity='Calories Burnt')).to_widget(**size_options)
"""
Explanation: It's more interesting to use colour to show a different attribute from the flow table. But because a line in the Sankey diagram is an aggregation of multiple flows in the original data, we need to specify how the new dimension will be aggregated. For example, we'll use the mean of the flows within each Sankey link to set the colour. In this case we will use the colour to show how much each type of person enjoys each activity. We might be interested in either the cumulative enjoyment or the mean enjoyment: try both!
Aggregation is specified with the measures parameter, which should be set to a dictionary mapping dimension names to aggregation functions ('mean', 'sum' etc).
End of explanation
"""
scale = QuantitativeScale('Enjoyment', palette='Blues_9')
weave(sdd, dataset,
measures={'Calories Burnt': 'sum', 'Enjoyment': 'mean'},
link_width='Calories Burnt',
link_color=scale) \
.to_widget(**size_options)
scale.domain
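# A rough workaround sketch for showing the colour scale as a legend
# (assuming matplotlib is available and that scale.domain holds the
# (min, max) of the Enjoyment values; 'Blues' stands in for Blues_9 above).
import matplotlib as mpl
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 0.4))
norm = mpl.colors.Normalize(vmin=scale.domain[0], vmax=scale.domain[1])
cbar = mpl.colorbar.ColorbarBase(ax, cmap=plt.get_cmap('Blues'), norm=norm,
                                 orientation='horizontal')
cbar.set_label('Mean Enjoyment')
plt.show()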
"""
Explanation: You can change the colour palette using the palette attribute. The palette names are different from before, because those were categorical (or qualitative) scales, and this is now a sequential scale. The palette names are listed here.
End of explanation
"""
class MyScale(QuantitativeScale):
def get_palette(self, link):
# Choose colour scheme based on link type (here, Employment Job)
name = 'Greens_9' if link.type == 'Student' else 'Blues_9'
return self.lookup_palette_name(name)
def get_color(self, link, value):
palette = self.get_palette(link)
return palette(0.2 + 0.8*value)
my_scale = MyScale('Enjoyment', palette='Blues_9')
weave(sdd, dataset,
measures={'Calories Burnt': 'sum', 'Enjoyment': 'mean'},
link_width='Calories Burnt',
link_color=my_scale) \
.to_widget(**size_options)
"""
Explanation: It is possible to create a colorbar / scale to show the range of intensity values, but it's not currently as easy as it should be. This should be improved in future.
More customisation
You can subclass the QuantitativeScale class to get more control over the colour scale.
End of explanation
"""
|
arnoldlu/lisa
|
ipynb/examples/android/workloads/Android_Gmaps.ipynb
|
apache-2.0
|
from conf import LisaLogging
LisaLogging.setup()
%pylab inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Import support for Android devices
from android import Screen, Workload
# Support for trace events analysis
from trace import Trace
# Suport for FTrace events parsing and visualization
import trappy
import pandas as pd
import sqlite3
from IPython.display import display
def experiment():
# Configure governor
target.cpufreq.set_all_governors('sched')
# Get workload
wload = Workload.getInstance(te, 'GMaps')
# Run GMaps
wload.run(out_dir=te.res_dir,
collect="ftrace",
location_search="London British Museum",
swipe_count=10)
# Dump platform descriptor
te.platform_dump(te.res_dir)
"""
Explanation: GMaps on Android
The goal of this experiment is to test out GMaps on a Pixel device running Android and collect results.
End of explanation
"""
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'android',
"board" : 'pixel',
# Device serial ID
# Not required if there is only one device connected to your computer
"device" : "HT67M0300128",
# Android home
# Not required if already exported in your .bashrc
"ANDROID_HOME" : "/home/vagrant/lisa/tools/",
# Folder where all the results will be collected
"results_dir" : "Gmaps_example",
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_load_waking_task",
"cpu_capacity",
"cpu_frequency",
"cpu_idle",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff"
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'taskset'],
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False)
target = te.target
"""
Explanation: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell that defines where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.
If more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_conf. Run adb devices on your host to get the ID.
End of explanation
"""
results = experiment()
"""
Explanation: Workload execution
End of explanation
"""
# Load traces in memory (can take several minutes)
platform_file = os.path.join(te.res_dir, 'platform.json')
with open(platform_file, 'r') as fh:
platform = json.load(fh)
trace_file = os.path.join(te.res_dir, 'trace.dat')
trace = Trace(platform, trace_file, events=my_conf['ftrace']['events'], normalize_time=False)
# Find exact task name & PID
for pid, name in trace.getTasks().iteritems():
if "GLRunner" in name:
glrunner = {"pid" : pid, "name" : name}
print("name=\"" + glrunner["name"] + "\"" + " pid=" + str(glrunner["pid"]))
"""
Explanation: Trace analysis
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
End of explanation
"""
# Helper functions to pinpoint issues
def find_prev_cpu(trace, taskname, time):
sdf = trace.data_frame.trace_event('sched_switch')
sdf = sdf[sdf.prev_comm == taskname]
sdf = sdf[sdf.index <= time]
sdf = sdf.tail(1)
wdf = trace.data_frame.trace_event('sched_wakeup')
wdf = wdf[wdf.comm == taskname]
wdf = wdf[wdf.index <= time]
# We're looking for the previous wake event,
# not the one related to the wake latency
wdf = wdf.tail(2)
stime = sdf.index[0]
wtime = wdf.index[1]
if stime > wtime:
res = wdf["target_cpu"].values[0]
else:
res = sdf["__cpu"].values[0]
return res
def find_next_cpu(trace, taskname, time):
wdf = trace.data_frame.trace_event('sched_wakeup')
wdf = wdf[wdf.comm == taskname]
wdf = wdf[wdf.index <= time].tail(1)
return wdf["target_cpu"].values[0]
def trunc(value, precision):
offset = pow(10, precision)
res = int(value * offset)
return float(res) / offset
# Look for latencies > 1 ms
df = trace.data_frame.latency_wakeup_df(glrunner["pid"])
df = df[df.wakeup_latency > 0.001]
# Load times at which system was overutilized (EAS disabled)
ou_df = trace.data_frame.overutilized()
# Find which wakeup latencies were induced by EAS
# Times to look at will be saved in a times.txt file
eas_latencies = []
times_file = te.res_dir + "/times.txt"
!touch {times_file}
for time, cols in df.iterrows():
# Check if cpu was over-utilized (EAS disabled)
ou_row = ou_df[:time].tail(1)
if ou_row.empty:
continue
was_eas = ou_row.iloc[0, 1] < 1.0
if (was_eas):
toprint = "{:.1f}ms @ {}".format(cols[0] * 1000, trunc(time, 5))
next_cpu = find_next_cpu(trace, glrunner["name"], time)
prev_cpu = find_prev_cpu(trace, glrunner["name"], time)
if (next_cpu != prev_cpu):
toprint += " [CPU SWITCH]"
print toprint
eas_latencies.append([time, cols[0]])
!echo {toprint} >> {times_file}
"""
Explanation: EAS-induced wakeup latencies
In this example we are looking at a specific task : GLRunner. GLRunner is a very CPU-heavy task, and is also boosted (member of the top-app group) in EAS, which makes it an interesting task to study.
To study the behaviour of GLRunner, we'll be looking at the wakeup decisions of the scheduler. We'll be looking for times at which the task took "too long" to wake up, i.e. it was runnable and had to wait some time to actually be run. In our case that latency threshold is (arbitrarily) set to 1ms.
We're also on the lookout for when the task has been moved from one CPU to another. Depending on several parameters (kernel version, boost values, etc), the task could erroneously be switched to another CPU which could induce wakeup latencies.
Finally, we're only interested in scheduling decisions made by EAS, so we'll only be looking at wakeup latencies that occured when the system was not overutilized, i.e EAS was enabled.
End of explanation
"""
# Plot each EAS-induced latency (blue cross)
# If the background is red, system was over-utilized and the latency wasn't caused by EAS
for start, latency in eas_latencies:
trace.setXTimeRange(start - 0.002, start + 0.002)
trace.analysis.latency.plotLatency(task=glrunner["pid"], tag=str(start))
trace.analysis.cpus.plotCPU(cpus=[2,3])
"""
Explanation: Traces visualisation
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
Here each latency is plotted in order to double-check that it was truly induced by an EAS decision. In LISA, the latency plots have a red background when the system is overutilized, which shouldn't be the case here.
End of explanation
"""
# Plots all of the latencies over the duration of the experiment
trace.setXTimeRange(trace.window[0] + 1, trace.window[1])
trace.analysis.latency.plotLatency(task=glrunner["pid"])
"""
Explanation: Overall latencies
This plot displays the whole duration of the experiment, it can be used to see how often the system was overutilized or how much latency was involved.
End of explanation
"""
!kernelshark {trace_file} 2>/dev/null
"""
Explanation: Kernelshark analysis
End of explanation
"""
|
rahlk/learnPy
|
Lecture4-Main.ipynb
|
mit
|
def foo():
return 1
foo()
"""
Explanation: CSX91: Python Tutorial
1. Functions
Functions in Python are created using the keyword def
They can return values with return
Let's create a simple function:
End of explanation
"""
aString = 'Global var'
def foo():
a = 'Local var'
print locals()
foo()
print globals()
"""
Explanation: Q. What happens if there is no return?
2. Scope
In Python, functions have their own scope (namespace).
Python looks at the function's namespace first before looking at the global namespace.
Let's use locals() and globals() to see what happens:
End of explanation
"""
def foo():
x = 10
foo()
print x
"""
Explanation: 2.1 Variable lifetime
Variables within functions exist only within their namespaces. Once the function stops, all the variables inside it get destroyed. For instance, the following won't work:
End of explanation
"""
aString = 'Global var'
def foo():
print aString
foo()
"""
Explanation: 3. Variable Resolution
Python looks at the function's namespace first before looking at the global namespace.
End of explanation
"""
aString = 'Global var'
def foo():
aString = 'Local var'
print aString
foo()
"""
Explanation: If you try and reassign a global variable inside a function, like so:
End of explanation
"""
aString = 'Global var'
def foo():
global aString # <------ Declared here
aString = 'Local var'
print aString
def bar():
print aString
foo()
bar()
"""
Explanation: Q. What would be the value of aString if I print it?
As we can see, global variables can be accessed (even changed if they are mutable data types) but not (by default) assigned to.
Global variables are very dangerous. So, python wants you to be sure of what you're doing.
If you MUST reassign it, declare it as global, like so:
End of explanation
"""
def foo(x):
print locals()
foo(1)
"""
Explanation: 4. Function Arguments: args and kwargs
Python allows us to pass function arguments (duh..)
The arguments are local to the function. For instance:
End of explanation
"""
"Args"
def foo(x,y):
print x+y
"kwargs"
def bar(x=5, y=8):
print x-y
"Both"
def foobar(x,y=100):
print x*y
"Calling with args"
foo(5,12)
"Calling with kwargs"
bar()
"Calling both"
foobar(10)
"""
Explanation: Arguments in functions can be classified as:
Args
kwargs (keyword args)
When calling a function, args are mandatory. kwargs are optional.
End of explanation
"""
"Args"
def foo(x,y):
print x+y
"kwargs"
def bar(x=5, y=8):
print x-y
"Both"
def foobar(x,y=100):
print x*y
"kwargs"
bar(5,8) # kwargs as args (default: x=5, y=8)
bar(5,y=8) # x=5, y=8
"Change the order of kwargs if you want"
bar(y=8, x=5)
"args as kwargs will also work"
foo(x=5, y=12)
"""
Explanation: Other ways of calling:
All the following are legit:
End of explanation
"""
"Args"
def foo(x,y):
print x+y
"kwargs"
def bar(x=5, y=8):
print x-y
"Both"
def foobar(x,y=100):
print x*y
bar(x=9, 7) #1
foo(x=5, 6) #2
"""
Explanation: Q. will these two work?
End of explanation
"""
def outer():
x=1
def inner():
print x
inner()
outer()
"""
Explanation: Never call args after kwargs
5. Nesting functions
You can nest functions.
Class nesting is somewhat uncommon, but can be done.
End of explanation
"""
def outer():
x = 1
def inner():
x = 2
print 'Inner x=%d'%(x)
inner()
return x
print 'Outer x=%d'%outer()
"""
Explanation: All the namespace conventions apply here.
What would happen if I changed x inside inner()?
End of explanation
"""
x = 4
def outer():
global x
x = 1
def inner():
global x
x = 2
print 'Inner x=%d'%(x)
inner()
return x
print 'Outer x=%d'%outer()
print 'Global x=%d'%x
"""
Explanation: What about global variables?
End of explanation
"""
class foo():
def __init__(i, arg1): # self can br replaced by anything.
i.arg1 = arg1
def bar(i, arg2): # Always use self as the first argument
print i.arg1, arg2
FOO = foo(7)
FOO.bar(5)
print FOO.arg1
"""
Explanation: Declare global every time the global x needs changing
6. Classes
Define classes with the class keyword
Here's a simple class
End of explanation
"""
class foo():
def __init__(i, num):
i.num = num
d = foo(2)
d()
"""
Explanation: All arg and kwarg conventions apply here
6.1 Overriding class methods
Lets try:
End of explanation
"""
class foo():
def __init__(i, num):
i.num = num
def __call__(i):
return i.num
d = foo(2)
d()
"""
Explanation: We know that calling d() raises an exception, since foo does not define __call__. Python lets us define it:
End of explanation
"""
class foo():
def __init__(i, num):
i.num = num
FOO = foo(5)
FOO += 1
"""
Explanation: There are many such redefinitions permitted by python. See Python Docs
6.2 Emulating numeric types
A very useful feature in python is the ability to emulate numeric types.
Would this work?
End of explanation
"""
class foo():
def __init__(i, num):
i.num = num
def __add__(i, new):
i.num += new
return i
def __sub__(i, new):
i.num -= new
return i
FOO = foo(5)
FOO += 1
print FOO.num
FOO -= 4
print FOO.num
"""
Explanation: Let's rewrite this:
End of explanation
"""
class foo():
"Me is foo"
def __init__(i, num):
i.num = num
def __add__(i, new):
i.num += new
return i
def __sub__(i, new):
i.num -= new
return i
def __repr__(i):
return i.__doc__
def __getitem__(i, num):
print "Nothing @ %d"%(num)
FOO = foo(4)
FOO[2]
"""
Explanation: Aside: __repr__, __call__,__getitem__,... are all awesome.
End of explanation
"""
issubclass(int, object)
"""
Explanation: 7. Functions and Classes are Objects
Functions and classes are objects, just like anything else in Python.
All classes inherit from a base object class in Python.
For instance,
End of explanation
"""
a = 9
dir(a)
"""
Explanation: It follows that the variable a here is an object (an instance of int) with its own attributes and methods.
End of explanation
"""
from pdb import set_trace # pdb is quite useful
def add(x,y): return x+y
def sub(x,y): return x-y
def foo(x,y,func=add):
set_trace()
return func(x,y)
foo(7,4,sub)
"""
Explanation: This means:
Functions and Classes can be passed as arguments.
Functions can return other functions/classes.
End of explanation
"""
def foo():
x=1
foo()
print x
"""
Explanation: 8. Closures
Remember this example?
End of explanation
"""
def foo():
x='Outer String'
def bar():
print x
return bar
test = foo()
test()
"""
Explanation: Obviously, this fails. Why? As per variable lifetime rules (see 2.1), foo() has ceased execution, x is destroyed.
So how about this?
End of explanation
"""
def foo(x,y): return x**y
bar = lambda x,y: x**y # <--- Notice no return statements
print foo(4,2)
print bar(4,2)
"""
Explanation: This works. But it shouldn't, because x is local to foo(): once foo() has ceased execution, x should be destroyed. Right?
Turns out, Python supports a feature called function closure. This enables nested inner functions to keep track of their namespaces.
8.1 Aside: lambda functions and sorted
Anonymous functions in python can be defined using the lambda keyword.
The following two are the same:
End of explanation
"""
foo = lambda x: lambda y: x+y
print foo(3)(5)
"""
Explanation: Nested lambda is permitted (idk why you'd use them, still, worth a mention)
End of explanation
"""
student_tuples = [ #(Name, height(cms), weight(kg))
('john', 180, 85),
('doe', 177, 99),
('jane', 169, 69),
]
# Sort based on height
print 'Height: ', sorted(student_tuples, key=lambda stud: stud[1])
# Sort based on Name
print 'Name: ', sorted(student_tuples, key=lambda stud: stud[0])
# Sort based on BMI (weight in kg divided by height in m, squared)
print 'BMI: ', sorted(student_tuples, key=lambda stud: stud[2]/(stud[1]/100.0)**2)
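# An alternative to a lambda for simple keys (not in the original lecture):
from operator import itemgetter
print 'Height: ', sorted(student_tuples, key=itemgetter(1))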
"""
Explanation: 8.1.1 Sorted
Python's sorted function can sort based on a key argument, key is a lambda function that deterimes how the data is sorted.
End of explanation
"""
def outer(func):
def inner(*args):
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
return inner
def foo():
"I'm foo"
return 1
print foo()
decorated_foo = outer(foo)
print decorated_foo()
"""
Explanation: 9. Decorators!
Decorators are callables that take a function as argument, and return a replacement function (with additional functionalities)
End of explanation
"""
def outer(func):
def inner(*args):
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
print inner.__doc__, inner
return inner
def foo():
"I'm foo"
return 1
print foo.__name__, foo
decorated_foo = outer(foo)
print decorated_foo.__name__, decorated_foo
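# Extra note (not in the original lecture): functools.wraps copies the
# wrapped function's metadata (__name__, __doc__) onto inner, so decorated
# functions keep their identity.
import functools
def outer_wrapped(func):
    @functools.wraps(func)
    def inner(*args):
        "Inner"
        return func() + 1
    return inner
wrapped_foo = outer_wrapped(foo)
print wrapped_foo.__name__, wrapped_foo.__doc__   # foo I'm foo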
"""
Explanation: Let's look at the names and memory locations of the functions.
End of explanation
"""
def outer(func):
def inner():
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
return inner
def foo():
"I'm foo"
return 1
print foo()
foo = outer(foo)
print foo()
"""
Explanation: A common practice is to replace the original function with the decorated function
End of explanation
"""
def outer(func):
def inner():
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
return inner
@outer
def foo():
"I'm foo"
return 1
print foo()
"""
Explanation: Python uses @ to represent foo = outer(foo). The above code can be rewritten as follows:
End of explanation
"""
import time
from pdb import set_trace
def logger(func):
def inner(*args, **kwargs):
print "Arguments were: %s, %s"%(args, kwargs)
return func(*args, **kwargs)
return inner
def timer(func):
def inner(*args, **kwargs):
tb=time.time()
result = func(*args, **kwargs)
ta=time.time()
print "Time taken: %f sec"%(ta-tb)
return result
return inner
@logger
@timer
def foo(a=5, b=2):
return a+b
@logger
@timer
def bar(a=10, b=1):
time.sleep(0.1)
return a-b
if __name__=='__main__': ## <----- Note
foo(2,3)
bar(5,7)
"""
Explanation: 9.1 Logging and timing a function
Decorators can be stacked (as below), they can be classes, and they can take input arguments/keyword args.
Let's build decorators that log and time another function
End of explanation
"""
|
gcgruen/homework
|
data-databases-homework/Homework_4_Gruen.ipynb
|
mit
|
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
"""
Explanation: Graded =11/11
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
"""
numbers = [int(i) for i in numbers_str.split(",")]
max(numbers)
"""
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
"""
sorted(numbers)[-10:]
"""
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
"""
sorted([number for number in numbers if number%3 == 0])
"""
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
"""
from math import sqrt
[sqrt(number) for number in numbers if number < 100]
"""
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
"""
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
"""
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
"""
[planet['name'] for planet in planets if planet['diameter']/2 > (planets[2]['diameter'] / 2 * 4)]
"""
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
"""
sum([planet['mass'] for planet in planets])
"""
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
"""
[planet['name'] for planet in planets if 'giant' in planet['type']]
"""
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
"""
[planet['name'] for planet in sorted(planets, key = lambda planet: planet['moons'])]
# Useful reads:
# http://stackoverflow.com/questions/8966538/syntax-behind-sortedkey-lambda
# https://docs.python.org/3.5/howto/sorting.html#sortinghowto
"""
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
End of explanation
"""
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
"""
Explanation: Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
"""
[line for line in poem_lines if re.search(r"\b\w{4}\b\s\b\w{4}\b", line)]
"""
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
"""
#character class that matches non-alphanumeric characters = \W
#in analogy to \s and \S
[line for line in poem_lines if re.search(r"\b\w{5}(?:$|\W$)", line)]
"""
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
"""
all_lines = " ".join(poem_lines)
"""
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
"""
[line[2:] for line in re.findall(r"\bI\b\s\b\w{1,}\b", all_lines)]
"""
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
"""
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
"""
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
"""
#test cell to try code bits
[item.split("$") for item in entrees if re.search(r"(?:\d\d|\d).\d\d", item)]
#TA-Stephan: Careful - price should be int.
menu = []
for item in entrees:
dish ={}
dish['name'] = re.search(r"(.*)\s\$", item).group(1)
dish['price'] = re.search(r"\d{1,2}\.\d{2}", item).group()
dish['vegetarian'] = bool(re.search(r"\s-\sv", item))
menu.append(dish)
menu
"""
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation
"""
|
ocefpaf/secoora
|
notebooks/timeSeries/sss/01-skill_score.ipynb
|
mit
|
import os
try:
import cPickle as pickle
except ImportError:
import pickle
run_name = '2014-07-07'
fname = os.path.join(run_name, 'config.pkl')
with open(fname, 'rb') as f:
config = pickle.load(f)
import numpy as np
from pandas import DataFrame, read_csv
from utilities import (load_secoora_ncs, to_html,
save_html, apply_skill)
fname = '{}-all_obs.csv'.format(run_name)
all_obs = read_csv(os.path.join(run_name, fname), index_col='name')
"""
Explanation: <img style='float: left' width="150px" src="http://secoora.org/sites/default/files/secoora_logo.png">
<br><br>
SECOORA Notebook 2
Sea Surface Salinity time-series model skill
This notebook calculates several skill scores for the
SECOORA models weekly time-series saved by 00-fetch_data.ipynb.
Load configuration
End of explanation
"""
from utilities import mean_bias
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
#df = rename_cols(df)
skill_score = dict(mean_bias=df.copy())
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'mean_bias.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 1: Model Bias (or Mean Bias)
The bias skill compares the model mean salinity against the observations.
It is possible to introduce a Mean Bias in the model due to a mismatch of the
boundary forcing and the model interior.
$$ \text{MB} = \mathbf{\overline{m}} - \mathbf{\overline{o}}$$
End of explanation
"""
from utilities import rmse
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
skill_score['rmse'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'rmse.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 2: Central Root Mean Squared Error
Root Mean Squared Error of the deviations from the mean.
$$ \text{CRMS} = \sqrt{\overline{\left(\mathbf{m'} - \mathbf{o'}\right)^2}}$$
where: $\mathbf{m'} = \mathbf{m} - \mathbf{\overline{m}}$ and $\mathbf{o'} = \mathbf{o} - \mathbf{\overline{o}}$
End of explanation
"""
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
skill_score['r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'r2.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 3: R$^2$
https://en.wikipedia.org/wiki/Coefficient_of_determination
End of explanation
"""
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=True)
skill_score['low_pass_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_r2.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 4: Low passed R$^2$
http://dx.doi.org/10.1175/1520-0450(1979)018%3C1016:LFIOAT%3E2.0.CO;2
https://github.com/ioos/secoora/issues/188
End of explanation
"""
from utilities import r2
dfs = load_secoora_ncs(run_name)
# SABGOM dt = 3 hours.
dfs = dfs.swapaxes('items', 'major').resample('3H').swapaxes('items', 'major')
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
skill_score['low_pass_resampled_3H_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_resampled_3H_r2.html'.format(run_name))
save_html(fname, html)
html
"""
Explanation: Skill 5: Low passed and re-sampled (3H) R$^2$
https://github.com/ioos/secoora/issues/183
End of explanation
"""
fname = os.path.join(run_name, 'skill_score.pkl')
with open(fname,'wb') as f:
pickle.dump(skill_score, f)
"""
Explanation: Save scores
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from utilities.taylor_diagram import TaylorDiagram
def make_taylor(samples):
fig = plt.figure(figsize=(9, 9))
dia = TaylorDiagram(samples['std']['OBS_DATA'],
fig=fig,
label="Observation")
colors = plt.matplotlib.cm.jet(np.linspace(0, 1, len(samples)))
# Add samples to Taylor diagram.
samples.drop('OBS_DATA', inplace=True)
for model, row in samples.iterrows():
dia.add_sample(row['std'], row['corr'], marker='s', ls='',
label=model)
# Add RMS contours, and label them.
contours = dia.add_contours(colors='0.5')
plt.clabel(contours, inline=1, fontsize=10)
# Add a figure legend.
kw = dict(prop=dict(size='small'), loc='upper right')
leg = fig.legend(dia.samplePoints,
[p.get_label() for p in dia.samplePoints],
numpoints=1, **kw)
return fig
dfs = load_secoora_ncs(run_name)
# Bin and interpolate all series to 1 hour.
freq = '3H'
for station, df in list(dfs.iteritems()):
df = df.resample(freq).interpolate().dropna(axis=1)
if 'OBS_DATA' in df:
samples = DataFrame.from_dict(dict(std=df.std(),
corr=df.corr()['OBS_DATA']))
else:
continue
samples[samples < 0] = np.NaN
samples.dropna(inplace=True)
if len(samples) <= 2: # 1 obs 1 model.
continue
fig = make_taylor(samples)
fig.savefig(os.path.join(run_name, '{}.png'.format(station)))
plt.close(fig)
"""
Explanation: Normalized Taylor diagrams
The radius is the model standard deviation divided by the observation standard deviation, the
azimuth is the arc-cosine of the correlation coefficient (R), and the distance to the point (1, 0) on the
abscissa is the centred RMS difference.
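These quantities are tied together by the law-of-cosines identity behind Taylor diagrams (Taylor, 2001): writing $\hat{\sigma} = \sigma_m / \sigma_o$ for the normalised model standard deviation and $R$ for the correlation coefficient, the normalised centred RMS difference satisfies
$$ \widehat{\text{CRMS}}^2 = 1 + \hat{\sigma}^2 - 2\hat{\sigma}R$$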
End of explanation
"""
|
GoogleCloudPlatform/ml-on-gcp
|
tutorials/sklearn/hpsearch/gke_bayes_search.ipynb
|
apache-2.0
|
from sklearn.datasets import fetch_mldata
from sklearn.utils import shuffle
mnist = fetch_mldata('MNIST original', data_home='./mnist_data')
X, y = shuffle(mnist.data[:60000], mnist.target[:60000])
X_small = X[:100]
y_small = y[:100]
# Note: using only 10% of the training data
X_large = X[:6000]
y_large = y[:6000]
"""
Explanation: Train locally
Import training data
For illustration purposes we will use the MNIST dataset. The following code downloads the dataset and puts it in ./mnist_data.
The first 60000 images and targets are the original training set, while the last 10000 are the testing set. The training set is ordered by the labels so we shuffle them since we will use a very small portion of the data to shorten training time.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
from skopt import BayesSearchCV
from skopt.space import Integer, Real
rfc = RandomForestClassifier(n_jobs=-1)
search_spaces = {
'max_features': Real(0.5, 1.0),
'n_estimators': Integer(10, 200),
'max_depth': Integer(5, 45),
'min_samples_split': Real(0.01, 0.1)
}
search = BayesSearchCV(estimator=rfc, search_spaces=search_spaces, n_jobs=-1, verbose=3, n_iter=100)
"""
Explanation: Instantiate the estimator and the SearchCV objects
For illustration purposes we will use the RandomForestClassifier with scikit-optimize's BayesSearchCV:
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
https://scikit-optimize.github.io/#skopt.BayesSearchCV
End of explanation
"""
%time search.fit(X_small, y_small)
print(search.best_score_, search.best_params_)
"""
Explanation: Fit the BayesSearchCV object locally
After fitting we can examine the best score (accuracy) and the best parameters that achieve that score.
End of explanation
"""
project_id = 'YOUR-PROJECT-ID'
"""
Explanation: Everything up to this point is what you would do when training locally. With a larger amount of data it would take much longer.
Train on Google Container Engine
Set up for training on Google Container Engine
Before we can start training on the Container Engine we need to:
Build the Docker image which will be handling the workloads.
Create a cluster.
For these we will first set up some configuration variables.
Your Google Cloud Platform project id.
End of explanation
"""
bucket_name = 'YOUR-BUCKET-NAME'
"""
Explanation: A Google Cloud Storage bucket belonging to your project created through either:
- gsutil mb gs://YOUR-BUCKET-NAME; or
- https://console.cloud.google.com/storage
This bucket will be used for storing temporary data during Docker image building, for storing training data, and for storing trained models.
This can be an existing bucket, but we recommend you create a new one.
End of explanation
"""
cluster_id = 'YOUR-CLUSTER-ID'
"""
Explanation: Pick a cluster id for the cluster on Google Container Engine we will create. Preferably not an existing cluster to avoid affecting its workload.
End of explanation
"""
image_name = 'YOUR-IMAGE-NAME'
"""
Explanation: Choose a name for the image that will be running on the container.
End of explanation
"""
zone = 'us-central1-b'
"""
Explanation: Choose a zone to host the cluster.
List of zones: https://cloud.google.com/compute/docs/regions-zones/
End of explanation
"""
source_dir = 'source'
"""
Explanation: Change this only if you have customized the source.
End of explanation
"""
from helpers.cloudbuild_helper import build
build(project_id, source_dir, bucket_name, image_name)
"""
Explanation: Build a Docker image
This step builds a Docker image using the content in the source/ folder. The image will be tagged with the provided image_name so the workers can pull it. The main script source/worker.py would retrieve a pickled BayesSearchCV object from Cloud Storage and fit it to data on GCS.
Note: This step only needs to be run once the first time you follow these steps,
and each time you modify the codes in source/. If you have not modified source/ then
you can just re-use the same image.
Note: This could take a couple minutes.
To monitor the build process: https://console.cloud.google.com/gcr/builds
End of explanation
"""
from helpers.gke_helper import create_cluster
create_cluster(project_id, zone, cluster_id, n_nodes=1, machine_type='n1-standard-64')
"""
Explanation: Create a cluster
This step creates a cluster on the Container Engine.
You can alternatively create the cluster with the gcloud command line tool or through the console, but
you must add the additional scope of write access to Google Cloud Storage: 'https://www.googleapis.com/auth/devstorage.read_write'
Note: This could take several minutes.
To monitor the cluster creation process: https://console.cloud.google.com/kubernetes/list
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
from skopt import BayesSearchCV
from skopt.space import Integer, Real
rfc = RandomForestClassifier(n_jobs=-1)
search_spaces = {
'max_features': Real(0.5, 1.0),
'n_estimators': Integer(10, 200),
'max_depth': Integer(5, 45),
'min_samples_split': Real(0.01, 0.1)
}
search = BayesSearchCV(estimator=rfc, search_spaces=search_spaces, n_jobs=-1, verbose=3, n_iter=100)
from gke_parallel import GKEParallel
gke_search = GKEParallel(search, project_id, zone, cluster_id, bucket_name, image_name)
"""
Explanation: For GCE instance pricing: https://cloud.google.com/compute/pricing
Instantiate the GKEParallel object
The GKEParallel class is a helper wrapper around a BayesSearchCV object that manages deploying fitting jobs to the Container Engine cluster created above.
We pass in the BayesSearchCV object, which will be pickled and stored on Cloud Storage with
uri of the form:
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/search.pkl
End of explanation
"""
! bash get_cluster_credentials.sh $cluster_id $zone
"""
Explanation: Refresh access token to the cluster
To make it easy to gain access to the cluster through the Kubernetes client library, included in this sample is a script that retrieves credentials for the cluster with gcloud
and refreshes access token with kubectl.
End of explanation
"""
gke_search.fit(X_large, y_large)
"""
Explanation: Deploy the fitting task
GKEParallel instances implement a similar (but different!) interface to BayesSearchCV.
Calling fit(X, y) first uploads the training data to Cloud Storage as:
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/X.pkl
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/y.pkl
This allows reusing the same uploaded datasets for future training tasks.
For instance, if you already have pickled data on Cloud Storage:
gs://DATA-BUCKET/X.pkl
gs://DATA-BUCKET/y.pkl
then you can deploy the fitting task with:
gke_search.fit(X='gs://DATA-BUCKET/X.pkl', y='gs://DATA-BUCKET/y.pkl')
Calling fit(X, y) also pickles the wrapped search and gke_search, stores them on Cloud Storage as:
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/search.pkl
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/gke_search.pkl
End of explanation
"""
gke_search.search_spaces
gke_search.task_name
"""
Explanation: Inspect the GKEParallel object
In the background, the GKEParallel instance splits the search_spaces into smaller search_spaces
Each smaller search_spaces is pickled and stored on GCS within each worker's workspace:
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/WORKER-ID/search_spaces.pkl
The search_spaces can be accessed as follows, showing how they are assigned to each worker.
The keys of this dictionary are the worker_ids.
End of explanation
"""
gke_search.job_names
"""
Explanation: Similarly, each job is given a job_name. The dictionary of job_names can be accessed as follows. Each worker pod handles one job processing one of the smaller search_spaces.
To monitor the jobs: https://console.cloud.google.com/kubernetes/workload
End of explanation
"""
#gke_search.cancel()
"""
Explanation: Cancel the task
To cancel the task, run cancel(). This will delete all the deployed worker pods and jobs,
but will NOT delete the cluster, nor delete any data already persisted to Cloud Storage.
End of explanation
"""
gke_search.done(), gke_search.dones
"""
Explanation: Monitor the progress
GKEParallel instances implement a similar (but different!) interface to Future instances.
Calling done() checks whether each worker has completed the job and persisted its outcome
on GCS with uri:
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/WORKER-ID/fitted_search.pkl
To monitor the jobs: https://console.cloud.google.com/kubernetes/workload
To access the persisted data directly: https://console.cloud.google.com/storage/browser/YOUR-BUCKET-NAME/
End of explanation
"""
result = gke_search.result(download=False)
"""
Explanation: When all the jobs are finished, the pods will stop running (but the cluster will remain), and we can retrieve the fitted model.
Calling result() will populate the gke_search.results attribute which is returned.
This attribute records all the fitted BayesSearchCV from the jobs. The fitted model is downloaded only if the download argument is set to True.
Calling result() also updates the pickled gke_search object on Cloud Storage:
gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/gke_search.pkl
End of explanation
"""
from helpers.kubernetes_helper import get_pod_logs
for pod_name, log in get_pod_logs().items():
print('=' * 20)
print('\t{}\n'.format(pod_name))
print(log)
"""
Explanation: You can also get the logs from the pods:
End of explanation
"""
from helpers.gke_helper import delete_cluster
#delete_cluster(project_id, zone, cluster_id)
"""
Explanation: Once the jobs are finished, the cluster can be deleted. All the fitted models are stored on GCS.
The cluster can also be deleted from the console: https://console.cloud.google.com/kubernetes/list
End of explanation
"""
import time
from helpers.gke_helper import delete_cluster
while not gke_search.done():
n_done = len([d for d in gke_search.dones.values() if d])
print('{}/{} finished'.format(n_done, len(gke_search.job_names)))
time.sleep(60)
delete_cluster(project_id, zone, cluster_id)
result = gke_search.result(download=True)
"""
Explanation: The next cell continues to poll the jobs until they are all finished, downloads the results, and deletes the cluster.
End of explanation
"""
from helpers.gcs_helper import download_uri_and_unpickle
gcs_uri = 'gs://YOUR-BUCKET-NAME/YOUR-CLUSTER-ID.YOUR-IMAGE-NAME.UNIX-TIME/gke_search.pkl'
gke_search_restored = download_uri_and_unpickle(gcs_uri)
"""
Explanation: Restore the GKEParallel object
To restore the fitted gke_search object (for example from a different notebook), you can use the helper function included in this sample.
End of explanation
"""
gke_search.best_score_, gke_search.best_params_, gke_search.best_estimator_
"""
Explanation: Inspect the result
GKEParallel also implements part of the interface of BayesSearchCV to allow easy access to best_score_, best_params_, and best_estimator_.
End of explanation
"""
predicted = gke_search.predict(mnist.data[60000:])
print(len([p for i, p in enumerate(predicted) if p == mnist.target[60000:][i]]))
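# Equivalently, a one-line sketch expressing the same result as a fraction:
import numpy as np
print(np.mean(predicted == mnist.target[60000:]))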
"""
Explanation: You can also call predict(), which delegates the call to the best_estimator_.
Below we calculate the accuracy on the 10000 test images.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cccma/cmip6/models/canesm5/seaice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'canesm5', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CCCMA
Source ID: CANESM5
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
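# For example (hypothetical placeholder values only -- replace with the real
# document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")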
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
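# For example (illustrative placeholders only -- the actual CanESM5 settings must
# come from the modelling group; for a 1.N property the pattern suggested by the
# template appears to be one DOC.set_value call per selected choice):
# DOC.set_value("Sea ice concentration")
# DOC.set_value("Sea ice thickness")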
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/td2a/ml_crypted_data_correction.ipynb
|
mit
|
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.ml - Machine Learning et données cryptées - correction
How can we do machine learning on encrypted data? This notebook illustrates one of the principles presented in CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy. Correction.
End of explanation
"""
def compose(x, a, n):
return (a * x) % n
def crypt(x):
return compose(x, 577, 10000)
crypt(5), crypt(6)
crypt(5+6), (crypt(5) + crypt(6)) % 10000
crypt(6-5), (crypt(6) - crypt(5)) % 10000
crypt(5-6), (crypt(5) - crypt(6)) % 10000
"""
Explanation: Principle
See the exercise statement.
Exercise 1: write an encryption function and a decryption function
We must choose $n$ and $a$ carefully to implement the encryption function:
$\varepsilon:\mathbb{N} \rightarrow \mathbb{Z}/n\mathbb{Z}$ with $\varepsilon(x) = (x * a) \mod n$. We then check that it preserves addition, modulo $n$.
End of explanation
"""
n = 10000
for k in range(2, n):
if (577*k) % n == 1:
ap = k
break
ap
def decrypt(x):
return compose(x, 2513, 10000)
decrypt(crypt(5)), decrypt(crypt(6))
decrypt(crypt(5)*67), decrypt(crypt(5*67))
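# A quick cross-check (sketch): Python 3.8+ can compute modular inverses
# directly with pow(); this should return the same inverse as the loop above,
# namely 2513, since 577 * 2513 = 1450001 = 145 * 10000 + 1.
a_inv = pow(577, -1, 10000)
a_inv, (577 * a_inv) % 10000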
"""
Explanation: With $a=577$, we look for $a'$ and $k$ such that $aa' - nk=1$.
End of explanation
"""
from sklearn.datasets import load_diabetes
data = load_diabetes()
X = data.data
Y = data.target
from sklearn.linear_model import LinearRegression
clr = LinearRegression()
clr.fit(X, Y)
clr.predict(X[:1]), Y[0]
from sklearn.metrics import r2_score
r2_score(Y, clr.predict(X))
"""
Explanation: Notes on the inverse of a
If $n$ is prime then $\mathbb{Z}/n\mathbb{Z}$ is a field. This implies that every number $a \neq 0$ has an inverse in $\mathbb{Z}/n\mathbb{Z}$: $\forall a \neq 0, \exists a'$ such that $aa'=1$. We first show that $\forall a \neq 0, \forall k \in \mathbb{N}^*, a^k \neq 0$. We argue by contradiction, assuming that $\exists k > 0$ such that $a^k=0$. This means there exists $v$ such that $a^k = vn$. Since $n$ is prime, $a$ divides $v$ and we can write $a^k = wan \Rightarrow a(a^{k-1} - wn)=0$. By induction, there exists $z$ such that $a = zn$, so $a$ is a multiple of $n$, which is impossible because $a$ and $n$ are coprime.
The set $A=\{a, a^2, a^3, \ldots\}$ takes its values in $\mathbb{Z}/n\mathbb{Z}$ and is finite, so two powers must coincide: there exist $i$ and $k > 0$ such that $a^i \equiv a^k \mod n$, i.e. there exists $u$ such that $a^i = a^k + un$. Suppose first that $i > k$; then $a^k(a^{i-k} - 1) = un$. Since $n$ is prime and does not divide $a^k$, $n$ divides $a^{i-k} - 1$, so there exists $w$ such that $a^{i-k} = wn + 1$ and hence $a^{i-k} \equiv 1 \mod n$. We write $a^{-1} = a^{i-k-1}$ for the inverse of $a$ in $\mathbb{Z}/n\mathbb{Z}$. If $k > i$, the same reasoning applies to $a^{k-i}$. If $i^* = \arg\min\{i \, | \, a^i \in A\}$, then $i^* \leqslant n-1$ because the set $A$ contains at most $n-1$ elements, and $i^*-k < n-1$. We now write $j^* = \arg\min\{j \, | \, a^j \equiv 1 \mod n\}$. In that case one can show that $A = \{1, a, \ldots, a^{j^*-1}\}$; $j^*$ is the [order](https://fr.wikipedia.org/wiki/Ordre_(th%C3%A9orie_des_groupes)) of the subgroup generated by $a$.
Lagrange's theorem tells us that this order divides $n-1$, the order of the multiplicative group $\mathbb{Z}/n\mathbb{Z} \setminus \{0\}$. We can therefore write $n-1=kj^*$ with $k \in \mathbb{N}$, and consequently $a^{n-1} \equiv 1 \mod n$. The theorem is proved by considering the equivalence classes (cosets), which form a partition of the whole group.
Exercise 2: train a linear regression
End of explanation
"""
from sklearn.preprocessing import MinMaxScaler
import numpy
X_norm = numpy.hstack([MinMaxScaler((0, 100)).fit_transform(X),
numpy.ones((X.shape[0], 1))])
Y_norm = MinMaxScaler((0, 100)).fit_transform(Y.reshape(len(Y), 1)).ravel()
Y_norm.min(), Y_norm.max()
clr_norm = LinearRegression(fit_intercept=False)
clr_norm.fit(X_norm, Y_norm)
clr_norm.predict(X_norm[:1]), Y_norm[0]
from sklearn.metrics import r2_score
r2_score(Y_norm, clr_norm.predict(X_norm))
"""
Explanation: We only consider the raw decision function, since it can be computed with additions and multiplications alone. For what follows, we will need a model that works on variables normalised with MinMaxScaler. We also remove the intercept and replace it with a constant column.
End of explanation
"""
def decision_linreg(xs, coef, bias):
s = bias
xs = xs.copy().ravel()
coef = coef.copy().ravel()
if xs.shape != coef.shape:
raise ValueError("Not the same dimension {0}!={1}".format(xs.shape, coef.shape))
for x, c in zip(xs, coef):
s += c * x
return s
list(X[0])[:5]
clr.predict(X[:1]), decision_linreg(X[:1], clr.coef_, clr.intercept_)
clr_norm.predict(X_norm[:1]), decision_linreg(X_norm[:1], clr_norm.coef_, clr_norm.intercept_)
"""
Explanation: Exercise 3: rewrite the prediction function for a linear regression
The function is a dot product.
End of explanation
"""
coef_int = [int(i) for i in clr_norm.coef_ * 100]
coef_int
inter_int = int(clr_norm.intercept_ * 10000)
inter_int
import numpy
def decision_linreg_int(xs, coef):
s = 0
for x, c in zip(xs, coef):
s += c * x
return s % 10000
def decision_crypt_decrypt_linreg(xs, coef_int):
    # Encrypt the inputs
int_xs = [int(x) for x in xs.ravel()]
crypt_xs = [crypt(i) for i in int_xs]
# On applique la prédiction.
pred = decision_linreg_int(crypt_xs, coef_int)
# On décrypte.
dec = decrypt(pred % 10000)
return dec / 100
(decision_linreg(X_norm[:1], clr_norm.coef_, clr_norm.intercept_),
decision_crypt_decrypt_linreg(X_norm[0], coef_int))
p1s = []
p2s = []
for i in range(0, X_norm.shape[0]):
p1 = decision_linreg(X_norm[i:i+1], clr_norm.coef_, clr_norm.intercept_)
p2 = decision_crypt_decrypt_linreg(X_norm[i], coef_int)
if i < 5:
print(i, p1, p2)
p1s.append(p1)
p2s.append(p2)
import matplotlib.pyplot as plt
plt.plot(p1s, p2s, '.')
"""
Explanation: Exercise 4: put everything together
Take one observation, encrypt it, predict, decrypt, and compare with the unencrypted version. Some care is needed because the encryption function applies to integers while the prediction model works on real numbers, so the values are scaled by powers of 10 and rounded to integers. Since the encryption we chose only preserves addition, the model itself is kept in the clear.
End of explanation
"""
from numpy.random import poisson
X = poisson(size=10000)
mx = X.max()+1
X.min(), mx
from matplotlib import pyplot as plt
plt.hist(X, bins=mx, rwidth=0.9);
def crypt(x):
return compose(x, 5794, 10000)
import numpy
Xcrypt = numpy.array([crypt(x) for x in X])
Xcrypt[:10]
plt.hist(Xcrypt, bins=mx, rwidth=0.9);
"""
Explanation: Notes
The coefficients are in the clear but the data is encrypted. To encrypt the model coefficients as well, we would have to make sure that both addition and multiplication are preserved by the encryption. That requires a different scheme such as Fully Homomorphic Encryption over the Integers. The encrypted integers live in the interval [0, 10000], which means it is preferable to encrypt integers from a comparable interval, otherwise decryption can no longer be done with certainty. This implies that the algorithm must keep its computations inside this interval. That is why the inputs and outputs take their values in the interval [0, 100], so that the product coefficient x input stays inside the allowed interval. To avoid this problem altogether, each integer would have to be decomposed into a sequence of integers between 0 and 100, and addition and multiplication would have to be rewritten accordingly.
Questions
The encryption chosen here is weaker than RSA encryption, which preserves multiplication. We would have to rewrite the model so that it uses multiplications rather than additions. If I told you that one of the variables is the age of a population, you could recover it; the same is true with RSA encryption, which maps one integer to another. One can encrypt pieces of those integers and recombine them in the encrypted world, which is what other encryption schemes propose. One can also perturb the data by adding random noise that barely changes the prediction but does change the encrypted value. In that case, the distribution of each variable will look uniform.
A model can be trained on encrypted data if addition and multiplication can be reproduced on the encrypted numbers. One option is the encryption scheme Fully Homomorphic Encryption over the Integers. This relies on the fact that any function can be approximated by a polynomial (see Taylor expansions), and the gradient of a polynomial is itself a polynomial. It is possible to compute the norm of the encrypted gradient, but not to compare it to another encrypted value.
For this reason decision trees do not lend themselves well to this kind of learning, since each node of the tree compares two values. One can still manage by forcing the decision tree learning algorithm to rely only on equalities, which requires more coefficients and the discretisation of continuous variables. One last thing remains to be checked: each node of a decision tree is chosen by maximising a quantity. How can we find the maximum of a set of encrypted values that we cannot compare? We use a property of norms:
$$\lim_{d \rightarrow \infty} (x^d + y^d)^{1/d} = \max(x, y)$$
Other options exist: Machine Learning Classification over Encrypted Data.
Adding noise to a column
The data can be encrypted, but its distribution is unchanged up to a permutation. To avoid that we add a little noise; let us see how to do it. Suppose we have a column of integers distributed according to a Poisson law.
End of explanation
"""
import random
Xbruit = numpy.array([100*x + random.randint(0,100) for x in X])
Xbruit[:10]
fix, ax = plt.subplots(1, 2, figsize=(12,4))
ax[0].hist(Xbruit, bins=mx, rwidth=0.9)
ax[1].hist(Xbruit, bins=mx*100);
Xbruitcrypt = numpy.array([crypt(x) for x in Xbruit])
fix, ax = plt.subplots(1, 2, figsize=(12,4))
ax[0].hist(Xbruitcrypt, bins=mx, rwidth=0.9)
ax[1].hist(Xbruitcrypt, bins=mx*100);
"""
Explanation: Same distribution, in a different order. To change this distribution, we add a small amount of noise that barely matters for the numerical variable in question but is encrypted in a completely different way.
End of explanation
"""
|
hadibakalim/deepLearning
|
01.neural_network/03.multiple_linear_regression/multiple_linear_regression.ipynb
|
mit
|
from sklearn.linear_model import LinearRegression
# here we just downloaded the data from the library
from sklearn.datasets import load_boston
"""
Explanation: Multiple Linear Regression
We just saw how we can predict life expectancy using BMI. Here, BMI was the predictor, also known as an independent variable. A predictor is a variable we're looking at in order to make predictions about other variables, while the values we are trying to predict are known as dependent variables. In this case, life expectancy was the dependent variable.
If the outcome we want to predict depends on more than one variable, we can make a more complicated model. As long as they're relevant to the situation, using more independent/predictor variables can help us get a better prediction.
When there's just one predictor, the linear regression model is a line, but as we add more predictor variables, we're adding more dimensions.
When we have one predictor variable, the equation of the line is
y=mx+b
and the plot might look something like what we saw before.
Adding a predictor variable to go to two predictor variables means that the predicting equation is:
$ y=m_1x_1 +m_2x_2 + b $
To represent this graphically, we'll need a three-dimensional plot, with the linear regression model represented as a plane.
Data discover
We'll be using the Boston house-prices dataset. The dataset consists of 13 features of 506 houses and their median value in $1000's. We'll fit a model on the 13 features to predict on the value of houses.
Load the libraries
End of explanation
"""
# Load the data from the the boston house-prices dataset
boston_data = load_boston()
x = boston_data['data']
y = boston_data['target']
"""
Explanation: Load the data
End of explanation
"""
# Make and fit the linear regression model
# Fit the model and Assign it to the model variable
model = LinearRegression()
model.fit(x,y)
# Make a prediction using the model
sample_house = [[2.29690000e-01, 0.00000000e+00, 1.05900000e+01, 0.00000000e+00, 4.89000000e-01,
6.32600000e+00, 5.25000000e+01, 4.35490000e+00, 4.00000000e+00, 2.77000000e+02,
1.86000000e+01, 3.94870000e+02, 1.09700000e+01]]
"""
Explanation: Linear Regression
End of explanation
"""
# Predict housing price for the sample_house
prediction = model.predict(sample_house)
print(prediction)
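# As an additional rough check (sketch), scikit-learn can report the R^2 score
# of the fitted model on the training data directly:
print(model.score(x, y))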
"""
Explanation: Prediction
End of explanation
"""
|
ekaakurniawan/iPyMacLern
|
PGM-W1/Factor.ipynb
|
gpl-3.0
|
# Display graph inline
%matplotlib inline
# Display graph in 'retina' format for Mac with retina display. Others, use PNG or SVG format.
%config InlineBackend.figure_format = 'retina'
#%config InlineBackend.figure_format = 'PNG'
#%config InlineBackend.figure_format = 'SVG'
"""
Explanation: Part of iPyMacLern project.
Copyright (C) 2016 by Eka A. Kurniawan
eka.a.kurniawan(ta)gmail(tod)com
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
Display Settings
End of explanation
"""
import sys
print("Python %s" % sys.version)
import numpy as np
print("NumPy %s" % np.__version__)
import sklearn
from sklearn.utils.extmath import cartesian
print("scikit-learn %s" % sklearn.__version__)
import pandas as pd
print("Pandas %s" % pd.__version__)
"""
Explanation: Tested On
End of explanation
"""
import math
"""
Explanation: Imports
End of explanation
"""
phi = {'variables': np.array(['X_2', 'X_1', 'X_3']),
'cardinalities': np.array([2, 2, 2]),
'values': np.ones(8)}
"""
Explanation: Table of Contents
Definition
Data Structure
Class
Basic Math Functions
Operations
Unit Testing
References
Definition
Factor $\phi$ is a function mapping values of a set of random variables $D$, $Val(D)$, to the real numbers $\mathbb{R}$.$^{[1,2]}$
$$\phi(X_1, \ldots, X_k):Val(X_1, \ldots, X_k) \rightarrow \mathbb{R}$$
Where $D$ is also called the scope of the factor.
$$D = Scope[\phi] = \{X_1, \ldots, X_k\}$$
Data Structure
The data structure for a factor $\phi$ over three binary variables $X_2$, $X_1$, and $X_3$, with all factor values set to 1, is written as follows.$^{[3]}$
End of explanation
"""
def convert_factor_ds_to_table(phi):
cart = cartesian([range(d) for d in phi['cardinalities']])
# Construct table
df = pd.DataFrame(cart)
df.columns = ['$%s$' % v for v in phi['variables']]
df['$\phi(%s)$' % ','.join([v for v in phi['variables']])] = phi['values']
return df
phi_table = convert_factor_ds_to_table(phi)
phi_table
"""
Explanation: The cardinalities key stores the cardinality of every variable. In the example, the cardinality of variable $X_3$, $|Val(X_3)|$, is 2.
The following function converts the factor data structure into table form for display purposes.
End of explanation
"""
def convert_assignment_to_index(A, D):
step = [np.prod(D[i+1:]) for i in range(len(D)-1)] + [0]
return np.sum(np.multiply(A, step)) + A[-1]
A = [1, 0, 1]
I = convert_assignment_to_index(A, D = phi['cardinalities'])
I
"""
Explanation: Assignment and Index
Convert from factor assignment $A$ to the value index $I$ based on the cardinalities $D$.
End of explanation
"""
def convert_index_to_assignment(I, D):
step = [np.prod(D[i+1:]) for i in range(len(D)-1)] + [0]
I_tmp = I
A = []
for i in range(len(D)-1):
a = int(I_tmp / step[i])
A.append(a)
I_tmp = I_tmp - (step[i] * a)
A.append(I_tmp)
return A
I = 5
A = convert_index_to_assignment(I, D = phi['cardinalities'])
A
"""
Explanation: Convert from factor value index $I$ to the assignment $A$ based on the cardinalities $D$.
End of explanation
"""
class factor:
def __init__(self, variables, cardinalities, values):
self.variables = np.array(variables, '<U255')
self.cardinalities = np.array(cardinalities)
self.values = np.array(values)
def get_table(self):
cart = cartesian([range(cardinality) for cardinality in self.cardinalities])
# Construct table
df = pd.DataFrame(cart)
df.columns = ['$%s$' % l for l in self.variables]
df['$\phi(%s)$' % ','.join([l for l in self.variables])] = self.values
return df
def get_index_from_assignment(self, A):
step = [np.prod(self.cardinalities[i+1:]) for i in range(len(self.cardinalities)-1)] + [0]
return np.sum(np.multiply(A, step)) + A[-1]
def get_assignment_from_index(self, I):
step = [np.prod(self.cardinalities[i+1:]) for i in range(len(self.cardinalities)-1)] + [0]
I_tmp = I
A = []
for i in range(len(self.cardinalities)-1):
a = int(I_tmp / step[i])
A.append(a)
I_tmp = I_tmp - (step[i] * a)
A.append(I_tmp)
return A
def get_value_of_assignment(self, A):
I = self.get_index_from_assignment(A)
return self.values[I]
def set_value_of_assignment(self, A, v):
I = self.get_index_from_assignment(A)
self.values[I] = v
def reduce(self, E):
new_values = self.values.copy()
for variable, value in E.items():
if variable not in self.variables:
continue
variable_index = np.where(self.variables == variable)[0][0]
cart = cartesian([range(cardinality) for cardinality in self.cardinalities])
reduce_index = np.where(cart[:,variable_index] != value)[0]
new_values[reduce_index] = 0
return factor(variables = self.variables,
cardinalities = self.cardinalities,
values = new_values)
phi = factor(variables = ['X_2', 'X_1', 'X_3'],
cardinalities = [2, 2, 2],
values = np.ones(8))
phi.variables
phi.cardinalities
phi.values
phi.get_index_from_assignment([1, 0, 1])
phi.get_assignment_from_index(5)
phi.get_value_of_assignment([1, 0, 1])
phi.set_value_of_assignment([1, 0, 1], 6)
phi.get_value_of_assignment([1, 0, 1])
"""
Explanation: Class
End of explanation
"""
def get_member_index(A, B):
C = []
for v in A:
idx = np.where(B == v)[0]
if len(idx):
C.append(idx[0])
return np.array(C)
A = np.array([1,2,3])
B = np.array([2,1,3])
get_member_index(A, B)
A = np.array([1,2,3])
B = np.array([1,4,3,5,2])
get_member_index(A, B)
"""
Explanation: Basic Math Functions
get_member_index
The get_member_index function returns, for each value of array $A$, the lowest index at which that value appears in array $B$.
End of explanation
"""
phi_X1 = factor(variables = ['X_1'],
cardinalities = [2],
values = [0.11, 0.89])
phi_X2 = factor(variables = ['X_1', 'X_2'],
cardinalities = [2, 2],
values = [0.59, 0.41, 0.22, 0.78])
phi_X3 = factor(variables = ['X_2', 'X_3'],
cardinalities = [2, 2],
values = [0.39, 0.61, 0.06, 0.94])
phi_X1.get_table()
phi_X2.get_table()
phi_X3.get_table()
"""
Explanation: Operations
Sample factors ($\phi(X_1)$, $\phi(X_2)$, and $\phi(X_3)$).$^{[3]}$
End of explanation
"""
def factor_product(A, B):
variables = np.union1d(A.variables, B.variables)
ttl_variables = len(variables)
mapA = get_member_index(A.variables, variables)
mapB = get_member_index(B.variables, variables)
cardinalities = np.zeros(ttl_variables, int)
cardinalities[mapA] = A.cardinalities
cardinalities[mapB] = B.cardinalities
ttl_values = np.prod(cardinalities)
values = np.zeros(ttl_values)
C = factor(variables = variables,
cardinalities = cardinalities,
values = values)
assignments = np.array([C.get_assignment_from_index(i) for i in range(ttl_values)])
indexA = [A.get_index_from_assignment(assignment) for assignment in assignments[:, mapA]];
indexB = [B.get_index_from_assignment(assignment) for assignment in assignments[:, mapB]];
values = np.multiply(A.values[indexA], B.values[indexB])
for i in range(len(C.values)):
C.set_value_of_assignment(assignments[i], values[i])
return C
phi_X1_X2 = factor_product(phi_X1, phi_X2)
phi_X1_X2.get_table()
"""
Explanation: Factor Product
The factor_product function returns the product of two factors $A$ and $B$ as a new factor $C$ defined over the union of their variables, i.e. $\phi_C = \phi_A \cdot \phi_B$ evaluated at every joint assignment.
End of explanation
"""
def factor_marginalization(A, V):
variables = np.setdiff1d(A.variables, V)
ttl_variables = len(variables)
mapB = get_member_index(variables, A.variables)
cardinalities = np.zeros(ttl_variables, int)
cardinalities = A.cardinalities[mapB]
ttl_values = np.prod(cardinalities)
values = np.zeros(ttl_values)
B = factor(variables = variables,
cardinalities = cardinalities,
values = values)
assignments = np.array([A.get_assignment_from_index(i) for i in range(len(A.values))])
indics = np.array([B.get_index_from_assignment(assignment[mapB]) for assignment in assignments])
for i in range(len(B.values)):
value = np.sum(A.values[np.where(indics == i)])
B.values[i] = value
return B
phi_X2_marginalized = factor_marginalization(phi_X2, ['X_2'])
phi_X2_marginalized.get_table()
"""
Explanation: Factor Marginalization
The factor_marginalization function returns a new factor $B$ obtained from factor $A$ by summing out the variables listed in $V$.
End of explanation
"""
evidence = {
'X_2': 0,
'X_3': 1,
}
phi_X1.reduce(evidence).get_table()
phi_X2.reduce(evidence).get_table()
phi_X3.reduce(evidence).get_table()
"""
Explanation: Factor Reduction
End of explanation
"""
def calculate_joint_distribution(F):
joint = F[0]
for i in range(1, len(F)):
joint = factor_product(joint, F[i])
return joint
F = [phi_X1, phi_X2, phi_X3]
phi_X1_X2_X3 = calculate_joint_distribution(F)
phi_X1_X2_X3.get_table()
"""
Explanation: Joint Distribution
End of explanation
"""
def compare_factors(A, B):
ttl_variables_A = len(A.variables)
map_variables = get_member_index(A.variables, B.variables)
# Compare variables
if np.array_equal(A.variables, B.variables[map_variables]):
print("%-58s%12s" % ("Variables:", "same"))
else:
print("%-58s%12s" % ("Variables:", "DIFFERENT"))
print(" A: ", A.variables)
print(" B: ", B.variables[map_variables])
# Compare cardinalities
if np.array_equal(A.cardinalities, B.cardinalities[map_variables]):
print("%-58s%12s" % ("Cardinalities:", "same"))
else:
print("%-58s%12s" % ("Cardinalities:", "DIFFERENT"))
print(" A: ", A.cardinalities)
print(" B: ", B.cardinalities[map_variables])
# Compare values
ttl_values = np.prod(A.cardinalities)
cart = cartesian([range(cardinality) for cardinality in A.cardinalities])
for i in range(ttl_values):
variables_assignments = ["%s=%s" % (A.variables[j], cart[i][j]) for j in range(ttl_variables_A)]
if math.isclose(A.get_value_of_assignment(cart[i]),
B.get_value_of_assignment(cart[i][map_variables]),
rel_tol=1e-15, abs_tol=0.0):
print("%-58s%12s" % ("Values(" + ",".join(variables_assignments) + "):", "same"))
else:
print("%-58s%12s" % ("Values(" + ",".join(variables_assignments) + "):", "DIFFERENT"))
print(" A: ", A.get_value_of_assignment(cart[i]))
print(" B: ", B.get_value_of_assignment(cart[i][map_variables]))
"""
Explanation: Unit Testing
End of explanation
"""
phi_A_B = factor(variables = ['A', 'B'],
cardinalities = [3, 2],
values = [0.5, 0.8, 0.1, 0.0, 0.3, 0.9])
phi_B_C = factor(variables = ['B', 'C'],
cardinalities = [2, 2],
values = [0.5, 0.7, 0.1, 0.2])
phi_A_B_C_expected1 = factor(variables = ['A', 'B', 'C'],
cardinalities = [3, 2, 2],
values = [0.25, 0.35, 0.08, 0.16, 0.05, 0.07, 0.00, 0.00, 0.15, 0.21, 0.09, 0.18])
phi_A_B_C_expected2 = factor(variables = ['B', 'A', 'C'],
cardinalities = [2, 3, 2],
values = [0.25, 0.35, 0.05, 0.07, 0.15, 0.21, 0.08, 0.16, 0.00, 0.00, 0.09, 0.18])
phi_A_B_C = factor_product(phi_A_B, phi_B_C)
compare_factors(phi_A_B_C, phi_A_B_C_expected1)
compare_factors(phi_A_B_C, phi_A_B_C_expected2)
"""
Explanation: Factor Product
End of explanation
"""
phi_A_B_C = factor(variables = ['A', 'B', 'C'],
cardinalities = [3, 2, 2],
values = [0.25, 0.35, 0.08, 0.16, 0.05, 0.07, 0.00, 0.00, 0.15, 0.21, 0.09, 0.18])
phi_A_C_expected = factor(variables = ['A', 'C'],
cardinalities = [3, 2],
values = [0.33, 0.51, 0.05, 0.07, 0.24, 0.39])
phi_A_C = factor_marginalization(phi_A_B_C, ['B'])
compare_factors(phi_A_C, phi_A_C_expected)
"""
Explanation: Factor Marginalization
End of explanation
"""
evidence = {
'C': 0,
}
phi_A_B_C = factor(variables = ['A', 'B', 'C'],
cardinalities = [3, 2, 2],
values = [0.25, 0.35, 0.08, 0.16, 0.05, 0.07, 0.00, 0.00, 0.15, 0.21, 0.09, 0.18])
phi_A_B_C0_expected = factor(variables = ['A', 'B', 'C'],
cardinalities = [3, 2, 2],
values = [0.25, 0.0, 0.08, 0.0, 0.05, 0.0, 0.00, 0.0, 0.15, 0.0, 0.09, 0.0])
phi_A_B_C0 = phi_A_B_C.reduce(evidence)
compare_factors(phi_A_B_C0, phi_A_B_C0_expected)
"""
Explanation: Factor Reduction
End of explanation
"""
phi_A_B = factor(variables = ['A', 'B'],
cardinalities = [3, 2],
values = [0.5, 0.8, 0.1, 0.0, 0.3, 0.9])
phi_B_C = factor(variables = ['B', 'C'],
cardinalities = [2, 2],
values = [0.5, 0.7, 0.1, 0.2])
phi_A_B_C_expected = factor(variables = ['A', 'B', 'C'],
cardinalities = [3, 2, 2],
values = [0.25, 0.35, 0.08, 0.16, 0.05, 0.07, 0.00, 0.00, 0.15, 0.21, 0.09, 0.18])
F = [phi_A_B]
phi_A_B_joint = calculate_joint_distribution(F)
compare_factors(phi_A_B_joint, phi_A_B)
F = [phi_A_B, phi_B_C]
phi_A_B_C = calculate_joint_distribution(F)
compare_factors(phi_A_B_C, phi_A_B_C_expected)
"""
Explanation: Joint Distribution
End of explanation
"""
|
fastai/fastai
|
dev_nbs/course/lesson7-wgan.ipynb
|
apache-2.0
|
path = untar_data(URLs.LSUN_BEDROOMS)
"""
Explanation: LSUN bedroom data
For this lesson, we'll be using the bedrooms from the LSUN dataset. The full dataset is a bit too large, so we'll use a sample from Kaggle.
End of explanation
"""
dblock = DataBlock(blocks = (TransformBlock, ImageBlock),
get_x = generate_noise,
get_items = get_image_files,
splitter = IndexSplitter([]))
def get_dls(bs, size):
dblock = DataBlock(blocks = (TransformBlock, ImageBlock),
get_x = generate_noise,
get_items = get_image_files,
splitter = IndexSplitter([]),
item_tfms=Resize(size, method=ResizeMethod.Crop),
batch_tfms = Normalize.from_stats(torch.tensor([0.5,0.5,0.5]), torch.tensor([0.5,0.5,0.5])))
return dblock.dataloaders(path, path=path, bs=bs)
"""
Explanation: We then grab all the images in the folder with the data block API. We don't create a validation set here, for reasons we'll explain later. The inputs consist of random noise vectors of size 100 by default (this can be changed by replacing generate_noise with partial(generate_noise, size=...)) and the targets are the bedroom images.
End of explanation
"""
dls = get_dls(128, 64)
dls.show_batch(max_n=16)
"""
Explanation: We'll begin with a small size since GANs take a lot of time to train.
End of explanation
"""
generator = basic_generator(64, n_channels=3, n_extra_layers=1)
critic = basic_critic (64, n_channels=3, n_extra_layers=1, act_cls=partial(nn.LeakyReLU, negative_slope=0.2))
learn = GANLearner.wgan(dls, generator, critic, opt_func = partial(Adam, mom=0.))
learn.recorder.train_metrics=True
learn.recorder.valid_metrics=False
learn.fit(30, 2e-4, wd=0)
#learn.gan_trainer.switch(gen_mode=True)
learn.show_results(max_n=16, figsize=(8,8), ds_idx=0)
"""
Explanation: Models
GANs (Generative Adversarial Networks) were invented by Ian Goodfellow. The idea is to train two models at the same time: a generator and a critic. The generator tries to make new images similar to the ones in our dataset, while the critic tries to tell real images apart from the ones the generator produces. The generator returns images, the critic a single number (usually 0. for fake images and 1. for real ones).
We train them against each other in the sense that at each step (more or less), we:
1. Freeze the generator and train the critic for one step by:
- getting one batch of real images (let's call that real)
- generating one batch of fake images (let's call that fake)
- having the critic evaluate each batch and compute a loss from that; the important part is that it rewards the detection of real images and penalizes the fake ones
- updating the weights of the critic with the gradients of this loss
2. Freeze the critic and train the generator for one step by:
- generating one batch of fake images
- evaluating the critic on it
- returning a loss that rewards the critic thinking those fakes are real images
- updating the weights of the generator with the gradients of this loss
A minimal sketch of one such training iteration is given in the code cell below.
Here, we use the Wasserstein GAN.
We create a generator and a critic that we pass to GANLearner. The noise_size is the size of the random vector from which our generator creates images.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cnrm-cerfacs/cmip6/models/sandbox-2/seaice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-2', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
pbcquoc/pbcquoc.github.io
|
images/vinid.ipynb
|
mit
|
!pip install hyperas
# Basic compuational libaries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline
np.random.seed(2)
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
from sklearn.model_selection import KFold
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.layers import Dense, Dropout, Conv2D, GlobalAveragePooling2D, Flatten, GlobalMaxPooling2D
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization
from keras.models import Sequential
from keras.optimizers import RMSprop, Adam, SGD, Nadam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from keras import regularizers
# Import hyperopt for tunning hyper params
from hyperopt import hp, tpe, fmin
from hyperopt import space_eval
sns.set(style='white', context='notebook', palette='deep')
# Set the random seed
random_seed = 2
"""
Explanation: 1. Introduction
In this notebook, I present my solution to the VinID recruitment challenge. A CNN model is used to classify the 10 handwritten digits of the MNIST dataset. The notebook consists of the following parts:
1. Introduction
2. Data preprocessing
2.1 Load the data
2.2 Check for missing values
2.3 Normalization
2.5 Label encoding
2.6 Build the train/test sets
3. Data augmentation
4. Model building
4.1 Build the model
5. Hyperparameter tuning
5.1 Declare the hyperparameter search space
5.2 Search for the best hyperparameters
6. Comparing optimizers and loss functions
6.1 Compare optimizers
6.2 Compare loss functions
7. Model evaluation
7.1 Confusion matrix
8. Prediction
8.1 Predict and submit results
We install the hyperas library (which brings in hyperopt) to support hyperparameter tuning; the hyperopt APIs make it convenient to run the training and track the model's accuracy for each set of hyperparameters.
End of explanation
"""
def data():
# Load the data
train = pd.read_csv("../input/digit-recognizer/train.csv")
test = pd.read_csv("../input/digit-recognizer/test.csv")
Y_train = train["label"]
# Drop 'label' column
X_train = train.drop(labels = ["label"],axis = 1)
# Normalize the data
X_train = X_train / 255.0
test = test / 255.0
# Reshape image in 3 dimensions (height = 28px, width = 28px , canal = 1)
X_train = X_train.values.reshape(-1,28,28,1)
test = test.values.reshape(-1,28,28,1)
# Encode labels to one hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
Y_train = to_categorical(Y_train, num_classes = 10)
return X_train, Y_train, test
X, Y, X_test = data()
"""
Explanation: 2. Preprocessing
We read the MNIST dataset, split it into train/validation/test sets, and normalize the pixel values to the range [0, 1] to speed up convergence. The validation set will consist of 20% of the training data.
End of explanation
"""
g = sns.countplot(np.argmax(Y, axis=1))
"""
Explanation: Check the label distribution
We can see that the number of samples per label is roughly the same.
End of explanation
"""
for i in range(0, 9):
plt.subplot(330 + (i+1))
plt.imshow(X[i][:,:,0], cmap=plt.get_cmap('gray'))
plt.title(np.argmax(Y[i]));
plt.axis('off')
plt.tight_layout()
"""
Explanation: Let's look at a few samples from the training set. Most of the images are sharp and relatively easy to recognize.
End of explanation
"""
epochs = 30 # 30 epochs gives about 0.9967 accuracy
batch_size = 64
"""
Explanation: Define the number of training epochs and the batch size.
End of explanation
"""
# With data augmentation to prevent overfitting (accuracy 0.99286)
train_aug = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
)
test_aug = ImageDataGenerator()
"""
Explanation: 3. Data Augmentation
Data augmentation is used to generate additional training samples by applying image-processing transformations to the existing images. These small transformations must not change the label of the image.
Some common data augmentation techniques are:
* Rotation: rotate by a small angle
* Translation: shift the image
* Brightness, Saturation: change the brightness and contrast
* Zoom: zoom the image in or out
* Elastic Distortion: warp the image
* Flip: flip left/right/up/down.
Below, we rotate by a random angle between 0 and 10 degrees, zoom by up to 0.1, and shift by up to 0.1 of the image size in each direction.
End of explanation
"""
# Set the CNN model
def train_model(train_generator, valid_generator, params):
model = Sequential()
model.add(Conv2D(filters = params['conv1'], kernel_size = params['kernel_size_1'], padding = 'Same',
activation ='relu', input_shape = (28,28,1)))
model.add(BatchNormalization())
model.add(Conv2D(filters = params['conv2'], kernel_size = params['kernel_size_2'], padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size = params['pooling_size_1']))
model.add(Dropout(params['dropout1']))
model.add(BatchNormalization())
model.add(Conv2D(filters = params['conv3'], kernel_size = params['kernel_size_3'], padding = 'Same',
activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = params['conv4'], kernel_size = params['kernel_size_4'], padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size = params['pooling_size_2'], strides=(2,2)))  # second pooling layer uses its own tuned size
model.add(Dropout(params['dropout2']))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(params['dense1'], activation = "relu"))
model.add(Dropout(params['dropout3']))
model.add(Dense(10, activation = "softmax"))
if params['opt'] == 'rmsprop':
opt = RMSprop()
elif params['opt'] == 'sgd':
opt = SGD()
elif params['opt'] == 'nadam':
opt = Nadam()
else:
opt = Adam()
model.compile(loss=params['loss'], optimizer=opt, metrics=['acc'])
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=2, mode='auto', cooldown=2, min_lr=1e-7)
early = EarlyStopping(monitor='val_loss', patience=3)
callbacks_list = [reduce_lr, early]
history = model.fit_generator(train_generator,
validation_data=valid_generator,
steps_per_epoch=len(train_generator),
validation_steps=len(valid_generator),
callbacks=callbacks_list, epochs = epochs,
verbose=2)
score, acc = model.evaluate_generator(valid_generator, steps=len(valid_generator), verbose=0)
return acc, model, history
"""
Explanation: 4. Building the CNN model
A CNN is built from a set of basic layers: convolution + nonlinearity, pooling, and fully connected layers, connected in a particular order. Typically an image is passed through a first convolution + nonlinearity block, the resulting values go through a pooling layer, and this convolution + nonlinearity + pooling triple can be repeated several times in the network. The output is then passed through fully connected layers and a softmax to compute the probability of each class for the image.
Defining the model
We use the Keras Sequential API to define the model; layers can be added easily and quite flexibly.
First we apply a Conv2D layer to the input image. Conv2D holds a set of filters that must be learned; each filter slides over the whole image to detect features in it.
The pooling layer is an important layer that usually follows a convolution layer. It reduces the dimensionality of the preceding feature maps. With max-pooling, the layer simply keeps the maximum value within each region of size pooling_size x pooling_size (usually 2x2). Pooling reduces the computational cost and helps limit overfitting.
Dropout is also used to limit overfitting. Dropout randomly drops neurons by multiplying them with a zero mask, which pushes the model to learn robust, useful features. In most cases dropout improves accuracy and reduces overfitting.
In the final layers we flatten the feature maps into a vector and use fully connected layers to classify the image into the given classes.
To help the model converge close to the global minimum we use an annealed learning rate: the learning rate is reduced whenever the loss has stopped improving for a certain number of steps. To save computation time we can start with a large learning rate and then decrease it so the model converges faster.
In addition, we use early stopping to limit overfitting: training stops if the validation loss keeps increasing while the training loss keeps decreasing.
Using hyperas/hyperopt to tune the hyperparameters
While defining the model we embed the hooks needed to search over the hyperparameter space defined above. We search over parameters such as the filter sizes, number of filters, pooling sizes, dropout rates, and dense layer size, and we also try tuning the model's optimizer.
End of explanation
"""
#This is the space of hyperparameters that we will search
space = {
'opt':hp.choice('opt', ['adam', 'sgd', 'rmsprop']),
'conv1':hp.choice('conv1', [16, 32, 64, 128]),
'conv2':hp.choice('conv2', [16, 32, 64, 128]),
'kernel_size_1': hp.choice('kernel_size_1', [3, 5]),
'kernel_size_2': hp.choice('kernel_size_2', [3, 5]),
'dropout1': hp.choice('dropout1', [0, 0.25, 0.5]),
'pooling_size_1': hp.choice('pooling_size_1', [2, 3]),
'conv3':hp.choice('conv3', [32, 64, 128, 256, 512]),
'conv4':hp.choice('conv4', [32, 64, 128, 256, 512]),
'kernel_size_3': hp.choice('kernel_size_3', [3, 5]),
'kernel_size_4': hp.choice('kernel_size_4', [3, 5]),
'dropout2':hp.choice('dropout2', [0, 0.25, 0.5]),
'pooling_size_2': hp.choice('pooling_size_2', [2, 3]),
'dense1':hp.choice('dense1', [128, 256, 512, 1024]),
'dropout3':hp.choice('dropout3', [0, 0.25, 0.5]),
'loss': hp.choice('loss', ['categorical_crossentropy', 'kullback_leibler_divergence']),
}
"""
Explanation: 5. Hyperparameter tuning
We use hyperopt to tune the hyperparameters: it generates candidate parameter sets from the search space declared above, trains the model, and evaluates it on the validation set. The parameter set with the highest validation accuracy is recorded.
5.1 Declaring the hyperparameter search space
There are many hyperparameters that could be tuned: the network architecture, the number of filters, the size of each filter, the pooling size, the weight initialization, the activation function, the dropout rate, and so on. In this part we focus on parameters such as the filter size, the number of filters, and the pooling size.
First we declare the hyperparameters so that hyperopt can search within that space. For each convolution layer we tune the number of filters and the filter size; for the pooling layers, the pooling size; and for the Dropout layers, the dropout rate. The number of filters in a convolution layer is usually between 16 and 1024, the most commonly used filter sizes are 3 and 5, and the dropout rate lies in the range 0-1.
End of explanation
"""
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size = 0.2, random_state=random_seed)
# only apply data augmentation with train data
train_gen = train_aug.flow(X_train, Y_train, batch_size=batch_size)
valid_gen = test_aug.flow(X_val, Y_val, batch_size=batch_size)
def optimize(params):
acc, model, history = train_model(train_gen, valid_gen, params)
return -acc
"""
Explanation: 5.2 Optimize to find the best parameter set
Hyperas generates parameter sets based on the search space we defined beforehand. The library then makes searching over these parameters straightforward through a few ready-made APIs.
End of explanation
"""
best = fmin(fn = optimize, space = space,
algo = tpe.suggest, max_evals = 50) # increase max_evals to search more
best_params = space_eval(space, best)
print('best hyper params: \n', best_params)
"""
Explanation: Run the hyperparameter search. The best set of hyperparameters is recorded so that we can use it in the final model.
End of explanation
"""
acc, model, history = train_model(train_gen, valid_gen, best_params)
print("validation accuracy: {}".format(acc))
"""
Explanation: Retrain the model with the best parameter set found above.
End of explanation
"""
optimizers = ['rmsprop', 'sgd', 'adam']
hists = []
params = best_params
for optimizer in optimizers:
params['opt'] = optimizer
print("Train with optimizer: {}".format(optimizer))
_, _, history = train_model(train_gen, valid_gen, params)
hists.append((optimizer, history))
"""
Explanation: The result on the validation set is quite good, with acc > 99%
6. Comparing optimizers and losses
6.1 Comparing optimizers
The goal of training an ML model is to minimize the loss function, which measures the difference between the model's predictions and the true values. To achieve this we usually use gradient descent, which updates the model weights in the direction opposite to the gradient in order to reduce the loss.
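In symbols, a single gradient-descent step with learning rate $\eta$ is (standard notation, stated here for reference):
$$\theta_{t+1} = \theta_t - \eta \, \nabla_\theta L(\theta_t)$$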
We usually rely on the following three popular optimizers to update the model weights: adam, sgd and rmsprop.
Stochastic Gradient Descent is a variant of Gradient Descent that requires the data to be shuffled before training. RMSProp and Adam, on the other hand, are optimizers that adapt the learning rate automatically as training progresses.
RMSProp (Root Mean Square Propagation) was introduced by Geoffrey Hinton. RMSProp fixes Adagrad's ever-shrinking learning rate by normalizing the update with only recent gradients: the learning rate is divided by a decaying sum of squared gradients.
Adam is the most popular optimizer at the moment. Like RMSProp and Adagrad, Adam also computes a separate learning rate for each parameter, normalizing each parameter's update by the first and second moments of the gradient.
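For reference, the usual update rules are (standard formulations, not taken from this notebook; $g_t$ is the gradient, $\hat{m}_t$ and $\hat{v}_t$ are bias-corrected moment estimates):
$$\text{RMSProp:}\quad v_t = \rho\, v_{t-1} + (1-\rho)\, g_t^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{v_t} + \epsilon}\, g_t$$
$$\text{Adam:}\quad m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \quad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t$$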
End of explanation
"""
for name, history in hists:
plt.plot(history.history['val_acc'], label=name)
plt.legend(loc='best', shadow=True)
plt.tight_layout()
"""
Explanation: Plot the training curves for the three different optimizers.
End of explanation
"""
loss_functions = ['categorical_crossentropy', 'kullback_leibler_divergence']
hists = []
params = best_params
for loss_funct in loss_functions:
params['loss'] = loss_funct
print("Train with loss function : {}".format(loss_funct))
_, _, history = train_model(train_gen, valid_gen, params)
hists.append((loss_funct, history))
"""
Explanation: 6.2 Comparing loss functions
In multi-class classification we usually use the following two loss functions:
* Cross entropy
* Kullback Leibler Divergence Loss
Cross entropy is the most commonly used loss for our problem. It is grounded in maximum likelihood and is computed from the difference between the predicted values and the true values of the data. The best possible cross entropy error is 0.
KL loss (Kullback Leibler Divergence Loss) expresses the difference between two probability distributions. A KL loss of 0 means the two distributions are identical.
For multi-class classification, cross entropy and KL loss are mathematically almost the same thing, so in our problem the two losses can be treated as one.
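Concretely, with true distribution $p$ and predicted distribution $q$ (standard definitions, stated here for reference):
$$H(p, q) = -\sum_i p_i \log q_i = H(p) + D_{KL}(p \,\|\, q)$$
With one-hot labels $H(p) = 0$, so minimizing cross entropy and minimizing KL divergence lead to the same optimum and the same gradients.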
End of explanation
"""
for name, history in hists:
plt.plot(history.history['val_acc'], label=name)
plt.legend(loc='best', shadow=True)
plt.tight_layout()
"""
Explanation: Plot the training curves for the two different loss functions.
End of explanation
"""
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Predict the values from the validation dataset
Y_pred = model.predict(X_val)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred,axis = 1)
# Convert validation observations to one hot vectors
Y_true = np.argmax(Y_val,axis = 1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes = range(10))
"""
Explanation: We see no clear difference in convergence speed between the two loss functions, cross-entropy and KL loss, in our problem.
7. Model evaluation.
We will look at some of the errors made by the trained model. Many of them are easy to spot with a confusion matrix, which shows the number/fraction of images that get misclassified as another class.
End of explanation
"""
# Display some error results
# Errors are difference between predicted labels and true labels
errors = (Y_pred_classes - Y_true != 0)
Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = Y_pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]
def display_errors(errors_index,img_errors,pred_errors, obs_errors):
""" This function shows 6 images with their predicted and real labels"""
n = 0
nrows = 2
ncols = 3
fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True)
for row in range(nrows):
for col in range(ncols):
error = errors_index[n]
ax[row,col].imshow((img_errors[error]).reshape((28,28)))
ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error]))
n += 1
fig.tight_layout()
# Probabilities of the wrong predicted numbers
Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1)
# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors
# Sorted list of the delta prob errors
sorted_dela_errors = np.argsort(delta_pred_true_errors)
# Top 6 errors
most_important_errors = sorted_dela_errors[-6:]
# Show the top 6 errors
display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors)
"""
Explanation: The values on the diagonal are very high, which shows that our model is very accurate.
Looking at the confusion matrix above, we can make a few observations:
* 4 is often confused with 9, because the handwritten strokes of these two digits are quite similar
* 3 and 8 suffer from the same issue.
Displaying some misclassified samples
To get a clearer picture of the misclassified samples, let's look at the top samples whose predicted value differs the most from the true label.
End of explanation
"""
kf = KFold(n_splits=5)
preds = []
for train_index, valid_index in kf.split(X):
X_train, Y_train, X_val, Y_val = X[train_index], Y[train_index], X[valid_index], Y[valid_index]
train_gen = train_aug.flow(X_train, Y_train, batch_size=batch_size)
valid_gen = test_aug.flow(X_val, Y_val, batch_size=batch_size)
acc, model, history = train_model(train_gen, valid_gen, best_params)
pred = model.predict(X_test)
preds.append(pred)
# predict results
results = np.mean(preds, axis=0)
# select the indix with the maximum probability
results = np.argmax(results,axis = 1)
results = pd.Series(results,name="Label")
submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)
submission.to_csv("cnn_mnist_datagen.csv",index=False)
"""
Explanation: Looking at the misclassified images, we can see that these samples are genuinely hard to recognize and are easily confused with other classes, for example 9 vs 4 or 3 vs 8.
8. KFold, predict and submit the results
We retrain the model using k-fold cross validation; the final prediction is the average of the predictions of the models trained on each fold.
We pick as the label the class that the model predicts with the highest probability.
End of explanation
"""
|
FeitengLab/EmotionMap
|
2StockEmotion/3. 主成份分析(PCA)(曼哈顿).ipynb
|
mit
|
import numpy as np
from sklearn.decomposition import PCA
import pandas as pd
df = pd.read_csv('Manhattan.txt', sep='\s+')
df.drop('id', axis=1, inplace=True)
df.tail()
"""
Explanation: Here I will use scikit-learn to perform PCA in a Jupyter Notebook.
First, I need an example to get familiar with it.
Load our data and analyze it
End of explanation
"""
tdf = df.iloc[:, 0:-3]
tdf.tail()
"""
Explanation: How to index a given part of a DataFrame has been a problem for me.
Refer to pandas/html/10min.html#selection-by-position to keep in mind (links to files outside this dir do not work well)
file:///C:/work/python/%E6%96%87%E6%A1%A3/pandas/html/10min.html#selection-by-position
End of explanation
"""
pca = PCA(n_components=8)
pca.fit(tdf)
np.set_printoptions(precision=6, suppress=True)
print('Explained variance ratio of each principal component:', end=' ')
print(pca.explained_variance_ratio_)
emotion_score = pd.DataFrame(pca.transform(tdf))
emotion_score.rename(columns={'0': 'emotion_score'}, inplace=True)  # note: the columns are integers, so this rename with a string key is a no-op; the rename is done properly below
# the first principal component
pd.concat([df, emotion_score.loc[:, 0]], axis=1, join='inner').rename(index=str, columns={0: 'emotion_score'}).to_csv('Manhattan_score_raw.txt', index=None, sep='\t')
"""
Explanation: Keep one principal component; it explains 0.917864 of the variance.
End of explanation
"""
|
tensorflow/docs-l10n
|
site/en-snapshot/guide/distributed_training.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
"""
Explanation: Distributed training with TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/distributed_training"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
tf.distribute.Strategy is a TensorFlow API to distribute training across multiple GPUs, multiple machines, or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.
tf.distribute.Strategy has been designed with these key goals in mind:
Easy to use and support multiple user segments, including researchers, machine learning engineers, etc.
Provide good performance out of the box.
Easy switching between strategies.
You can distribute training using tf.distribute.Strategy with a high-level API like Keras Model.fit, as well as custom training loops (and, in general, any computation using TensorFlow).
In TensorFlow 2.x, you can execute your programs eagerly, or in a graph using tf.function. tf.distribute.Strategy intends to support both these modes of execution, but works best with tf.function. Eager mode is only recommended for debugging purposes and not supported for tf.distribute.TPUStrategy. Although training is the focus of this guide, this API can also be used for distributing evaluation and prediction on different platforms.
You can use tf.distribute.Strategy with very few changes to your code, because the underlying components of TensorFlow have been changed to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, you will learn about various types of strategies and how you can use them in different situations. To learn how to debug performance issues, check out the Optimize TensorFlow GPU performance guide.
Note: For a deeper understanding of the concepts, watch the deep-dive presentation—Inside TensorFlow: tf.distribute.Strategy. This is especially recommended if you plan to write your own training loop.
Set up TensorFlow
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
"""
Explanation: Types of strategies
tf.distribute.Strategy intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and gradients are aggregated at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async training via a parameter server architecture.
Hardware platform: You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, TensorFlow has MirroredStrategy, TPUStrategy, MultiWorkerMirroredStrategy, ParameterServerStrategy, CentralStorageStrategy, as well as other strategies available. The next section explains which of these are supported in which scenarios in TensorFlow. Here is a quick overview:
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
| :----------------------- | :----------------- | :------------ | :---------------------------- | :----------------------- | :------------------------ |
| Keras Model.fit | Supported | Supported | Supported | Experimental support | Experimental support |
| Custom training loop | Supported | Supported | Supported | Experimental support | Experimental support |
| Estimator API | Limited Support | Not supported | Limited Support | Limited Support | Limited Support |
Note: Experimental support means the APIs are not covered by any compatibilities guarantees.
Warning: Estimator support is limited. Basic training and evaluation are experimental, and advanced features—such as scaffold—are not implemented. You should be using Keras or custom training loops if a use case is not covered. Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. Go to the migration guide for details.
MirroredStrategy
tf.distribute.MirroredStrategy supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called MirroredVariable. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices. All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device. It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses the NVIDIA Collective Communication Library (NCCL) as the all-reduce implementation. You can choose from a few other options or write your own.
Here is the simplest way of creating MirroredStrategy:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
"""
Explanation: This will create a MirroredStrategy instance, which will use all the GPUs that are visible to TensorFlow, and NCCL—as the cross-device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
"""
Explanation: If you wish to override the cross device communication, you can do so using the cross_device_ops argument by supplying an instance of tf.distribute.CrossDeviceOps. Currently, tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice are two options other than tf.distribute.NcclAllReduce, which is the default.
End of explanation
"""
strategy = tf.distribute.MultiWorkerMirroredStrategy()
"""
Explanation: TPUStrategy
tf.distribute.TPUStrategy lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the TPU Research Cloud, and Cloud TPU.
In terms of distributed training architecture, TPUStrategy is the same as MirroredStrategy—it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in TPUStrategy.
Here is how you would instantiate TPUStrategy:
Note: To run any TPU code in Colab, you should select TPU as the Colab runtime. Refer to the Use TPUs guide for a complete example.
python
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=tpu_address)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.TPUStrategy(cluster_resolver)
The TPUClusterResolver instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it.
If you want to use this for Cloud TPUs:
You must specify the name of your TPU resource in the tpu argument.
You must initialize the TPU system explicitly at the start of the program. This is required before TPUs can be used for computation. Initializing the TPU system also wipes out the TPU memory, so it's important to complete this step first in order to avoid losing state.
MultiWorkerMirroredStrategy
tf.distribute.MultiWorkerMirroredStrategy is very similar to MirroredStrategy. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it creates copies of all variables in the model on each device across all workers.
Here is the simplest way of creating MultiWorkerMirroredStrategy:
End of explanation
"""
communication_options = tf.distribute.experimental.CommunicationOptions(
implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
communication_options=communication_options)
"""
Explanation: MultiWorkerMirroredStrategy has two implementations for cross-device communications. CommunicationImplementation.RING is RPC-based and supports both CPUs and GPUs. CommunicationImplementation.NCCL uses NCCL and provides state-of-the-art performance on GPUs but it doesn't support CPUs. CommunicationImplementation.AUTO defers the choice to TensorFlow. You can specify them in the following way:
End of explanation
"""
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
"""
Explanation: One of the key differences to get multi worker training going, as compared to multi-GPU training, is the multi-worker setup. The 'TF_CONFIG' environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. Learn more in the setting up TF_CONFIG section of this document.
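For illustration, a minimal 'TF_CONFIG' for a two-worker cluster could look like the sketch below (the hostnames, ports, and the use of json/os here are placeholder assumptions, not values from this guide):
python
import json, os

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["worker0.example.com:12345", "worker1.example.com:23456"]},
    "task": {"type": "worker", "index": 0}  # this process is worker 0
})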
For more details about MultiWorkerMirroredStrategy, consider the following tutorials:
Multi-worker training with Keras Model.fit
Multi-worker training with a custom training loop
ParameterServerStrategy
Parameter server training is a common data-parallel method to scale up model training on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step. Check out the Parameter server training tutorial for details.
In TensorFlow 2, parameter server training uses a central coordinator-based architecture via the tf.distribute.experimental.coordinator.ClusterCoordinator class.
In this implementation, the worker and parameter server tasks run tf.distribute.Servers that listen for tasks from the coordinator. The coordinator creates resources, dispatches training tasks, writes checkpoints, and deals with task failures.
In the program running on the coordinator, you will use a ParameterServerStrategy object to define a training step and use a ClusterCoordinator to dispatch training steps to remote workers. Here is the simplest way to create them:
python
strategy = tf.distribute.experimental.ParameterServerStrategy(
tf.distribute.cluster_resolver.TFConfigClusterResolver(),
variable_partitioner=variable_partitioner)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
strategy)
To learn more about ParameterServerStrategy, check out the Parameter server training with Keras Model.fit and a custom training loop tutorial.
Note: You will need to configure the 'TF_CONFIG' environment variable if you use TFConfigClusterResolver. It is similar to 'TF_CONFIG' in MultiWorkerMirroredStrategy but has additional caveats.
In TensorFlow 1, ParameterServerStrategy is available only with an Estimator via tf.compat.v1.distribute.experimental.ParameterServerStrategy symbol.
Note: This strategy is experimental as it is currently under active development.
CentralStorageStrategy
tf.distribute.experimental.CentralStorageStrategy does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
Create an instance of CentralStorageStrategy by:
End of explanation
"""
default_strategy = tf.distribute.get_strategy()
"""
Explanation: This will create a CentralStorageStrategy instance which will use all visible GPUs and the CPU. Updates to variables on replicas will be aggregated before being applied to variables.
Note: This strategy is experimental, as it is currently a work in progress.
Other strategies
In addition to the above strategies, there are two other strategies which might be useful for prototyping and debugging when using tf.distribute APIs.
Default Strategy
The Default Strategy is a distribution strategy which is present when no explicit distribution strategy is in scope. It implements the tf.distribute.Strategy interface but is a pass-through and provides no actual distribution. For instance, Strategy.run(fn) will simply call fn. Code written using this strategy should behave exactly as code written without any strategy. You can think of it as a "no-op" strategy.
The Default Strategy is a singleton—and one cannot create more instances of it. It can be obtained using tf.distribute.get_strategy outside any explicit strategy's scope (the same API that can be used to get the current strategy inside an explicit strategy's scope).
End of explanation
"""
# In optimizer or other library code
# Get currently active strategy
strategy = tf.distribute.get_strategy()
strategy.reduce("SUM", 1., axis=None) # reduce some values
"""
Explanation: This strategy serves two main purposes:
It allows writing distribution-aware library code unconditionally. For example, in tf.optimizers you can use tf.distribute.get_strategy and use that strategy for reducing gradients—it will always return a strategy object on which you can call the Strategy.reduce API.
End of explanation
"""
if tf.config.list_physical_devices('GPU'):
strategy = tf.distribute.MirroredStrategy()
else: # Use the Default Strategy
strategy = tf.distribute.get_strategy()
with strategy.scope():
# Do something interesting
print(tf.Variable(1.))
"""
Explanation: Similar to library code, it can be used to write end users' programs to work with and without distribution strategy, without requiring conditional logic. Here's a sample code snippet illustrating this:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
"""
Explanation: OneDeviceStrategy
tf.distribute.OneDeviceStrategy is a strategy to place all variables and computation on a single specified device.
python
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
This strategy is distinct from the Default Strategy in a number of ways. In the Default Strategy, the variable placement logic remains unchanged when compared to running TensorFlow without any distribution strategy. But when using OneDeviceStrategy, all variables created in its scope are explicitly placed on the specified device. Moreover, any functions called via OneDeviceStrategy.run will also be placed on the specified device.
Input distributed through this strategy will be prefetched to the specified device. In the Default Strategy, there is no input distribution.
Similar to the Default Strategy, this strategy could also be used to test your code before switching to other strategies which actually distribute to multiple devices/machines. This will exercise the distribution strategy machinery somewhat more than the Default Strategy, but not to the full extent of using, for example, MirroredStrategy or TPUStrategy. If you want code that behaves as if there is no strategy, then use the Default Strategy.
So far you've learned about different strategies and how you can instantiate them. The next few sections show the different ways in which you can use them to distribute your training.
Use tf.distribute.Strategy with Keras Model.fit
tf.distribute.Strategy is integrated into tf.keras, which is TensorFlow's implementation of the Keras API specification. tf.keras is a high-level API to build and train models. By integrating into the tf.keras backend, it's seamless for you to distribute your training written in the Keras training framework using Model.fit.
Here's what you need to change in your code:
Create an instance of the appropriate tf.distribute.Strategy.
Move the creation of Keras model, optimizer and metrics inside strategy.scope. Thus the code in the model's call(), train_step(), and test_step() methods will all be distributed and executed on the accelerator(s).
TensorFlow distribution strategies support all types of Keras models—Sequential, Functional, and subclassed.
Here is a snippet of code to do this for a very simple Keras model with one Dense layer:
End of explanation
"""
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
"""
Explanation: This example uses MirroredStrategy, so you can run this on a machine with multiple GPUs. strategy.scope() indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows you to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you would normally. MirroredStrategy takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.
End of explanation
"""
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
"""
Explanation: Here a tf.data.Dataset provides the training and eval input. You can also use NumPy arrays:
End of explanation
"""
mirrored_strategy.num_replicas_in_sync
# Compute a global batch size using a number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15, 20:0.175}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
"""
Explanation: In both cases—with Dataset or NumPy—each batch of the given input is divided equally among the multiple replicas. For instance, if you are using the MirroredStrategy with 2 GPUs, each batch of size 10 will be divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators, so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use strategy.num_replicas_in_sync to get the number of replicas.
End of explanation
"""
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
"""
Explanation: What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | ParameterServerStrategy | CentralStorageStrategy |
| ----------------- | ------------------ | ------------- | ----------------------------- | ------------------------- | ------------------------ |
| Keras Model.fit | Supported | Supported | Supported | Experimental support | Experimental support |
Examples and tutorials
Here is a list of tutorials and examples that illustrate the above integration end-to-end with Keras Model.fit:
Tutorial: Training with Model.fit and MirroredStrategy.
Tutorial: Training with Model.fit and MultiWorkerMirroredStrategy.
Guide: Contains an example of using Model.fit and TPUStrategy.
Tutorial: Parameter server training with Model.fit and ParameterServerStrategy.
Tutorial: Fine-tuning BERT for many tasks from the GLUE benchmark with Model.fit and TPUStrategy.
TensorFlow Model Garden repository containing collections of state-of-the-art models implemented using various strategies.
Use tf.distribute.Strategy with custom training loops
As demonstrated above, using tf.distribute.Strategy with Keras Model.fit requires changing only a couple lines of your code. With a little more effort, you can also use tf.distribute.Strategy with custom training loops.
If you need more flexibility and control over your training loops than is possible with Estimator or Keras, you can write custom training loops. For instance, when using a GAN, you may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training.
The tf.distribute.Strategy classes provide a core set of methods to support custom training loops. Using these may require minor restructuring of the code initially, but once that is done, you should be able to switch between GPUs, TPUs, and multiple machines simply by changing the strategy instance.
Below is a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
First, create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
End of explanation
"""
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
"""
Explanation: Next, create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset to distribute the dataset based on the strategy.
End of explanation
"""
loss_object = tf.keras.losses.BinaryCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)
def compute_loss(labels, predictions):
per_example_loss = loss_object(labels, predictions)
return tf.nn.compute_average_loss(per_example_loss, global_batch_size=global_batch_size)
def train_step(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
predictions = model(features, training=True)
loss = compute_loss(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
@tf.function
def distributed_train_step(dist_inputs):
per_replica_losses = mirrored_strategy.run(train_step, args=(dist_inputs,))
return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
axis=None)
"""
Explanation: Then, define one step of the training. Use tf.GradientTape to compute gradients and optimizer to apply those gradients to update your model's variables. To distribute this training step, put it in a function train_step and pass it to tf.distribute.Strategy.run along with the dataset inputs you got from the dist_dataset created before:
End of explanation
"""
for dist_inputs in dist_dataset:
print(distributed_train_step(dist_inputs))
"""
Explanation: A few other things to note in the code above:
You used tf.nn.compute_average_loss to compute the loss. tf.nn.compute_average_loss sums the per example loss and divides the sum by the global_batch_size. This is important because later after the gradients are calculated on each replica, they are aggregated across the replicas by summing them.
You also used the tf.distribute.Strategy.reduce API to aggregate the results returned by tf.distribute.Strategy.run. tf.distribute.Strategy.run returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can reduce them to get an aggregated value. You can also do tf.distribute.Strategy.experimental_local_results to get the list of values contained in the result, one per local replica.
When you call apply_gradients within a distribution strategy scope, its behavior is modified. Specifically, before applying gradients on each parallel instance during synchronous training, it performs a sum-over-all-replicas of the gradients.
Finally, once you have defined the training step, you can iterate over dist_dataset and run the training in a loop:
End of explanation
"""
iterator = iter(dist_dataset)
for _ in range(10):
print(distributed_train_step(next(iterator)))
"""
Explanation: In the example above, you iterated over the dist_dataset to provide input to your training. You are also provided with the tf.distribute.Strategy.experimental_make_numpy_dataset to support NumPy inputs. You can use this API to create a dataset before calling tf.distribute.Strategy.experimental_distribute_dataset.
Another way of iterating over your data is to explicitly use iterators. You may want to do this when you want to run for a given number of steps as opposed to iterating over the entire dataset. The above iteration would now be modified to first create an iterator and then explicitly call next on it to get the input data.
End of explanation
"""
|
FordyceLab/AcqPack
|
notebooks/Experiment_Arjun20170606.ipynb
|
mit
|
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
from config import utils as ut
%matplotlib inline
"""
Explanation: SETUP
End of explanation
"""
# config directory must have "__init__.py" file
# from the 'config' directory, import the following classes:
from config import Motor, ASI_Controller, Autosipper
autosipper = Autosipper(Motor('config/motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
autosipper.coord_frames
from config import gui
gui.stage_control(autosipper.XY, autosipper.Z)
platemap = ut.generate_position_table((8,12),(9,9),95.0)
platemap['x'] = -platemap['x'] - 1.8792
platemap['y'] = platemap['y'] + 32.45
platemap.loc[platemap.shape[0]] = [96, 99, 99, 99, 'W01', -8.2492, 1.1709, 68.3999]
platemap.loc[platemap.shape[0]] = [97, 99, 99, 99, 'W02', -36.9737, 1.1709, 68.3999]
platemap['contents'] = ["" for i in range(len(platemap['name']))]
for i in range(10):
platemap['contents'].iloc[36+i] = "conc"+str(i)
autosipper.coord_frames.hardware.position_table = platemap
platemap
autosipper.go_to('hardware', 'name', 'B01')
"""
Explanation: Autosipper
End of explanation
"""
from config import Manifold
from config.gui import manifold_control
manifold = Manifold('192.168.1.3', 'config/valvemaps/valvemap.csv', 512)
manifold.valvemap[manifold.valvemap.name>0]
manifold_control(manifold)
def valve_states():
tmp = []
for i in [2,0,14,8]:
status = 'x'
if manifold.read_valve(i):
status = 'o'
tmp.append([status, manifold.valvemap.name.iloc[i]])
return pd.DataFrame(tmp)
tmp = []
for i in range(16):
status = 'x'
if manifold.read_valve(i):
status = 'o'
name = manifold.valvemap.name.iloc[i]
tmp.append([status, name])
pd.DataFrame(tmp).replace(np.nan, '')
name = 'inlet_in'
v = manifold.valvemap['valve'][manifold.valvemap.name==name]
v=14
manifold.depressurize(v)
manifold.pressurize(v)
manifold.exit()
"""
Explanation: Manifold
End of explanation
"""
# !!!! Also must have MM folder on system PATH
# mm_version = 'C:\Micro-Manager-1.4'
# cfg = 'C:\Micro-Manager-1.4\SetupNumber2_01282017.cfg'
mm_version = 'C:\Program Files\Micro-Manager-2.0beta'
cfg = 'C:\Program Files\Micro-Manager-2.0beta\Setup2_20170413.cfg'
import sys
sys.path.insert(0, mm_version) # make it so python can find MMCorePy
import MMCorePy
from PIL import Image
core = MMCorePy.CMMCore()
core.loadSystemConfiguration(cfg)
core.setProperty("Spectra", "White_Enable", "1")
core.waitForDevice("Spectra")
core.setProperty("Cam Andor_Zyla4.2", "Sensitivity/DynamicRange", "16-bit (low noise & high well capacity)") # NEED TO SET CAMERA TO 16 BIT (ceiling 12 BIT = 4096)
"""
Explanation: Micromanager
End of explanation
"""
core.setProperty(core.getCameraDevice(), "Exposure", 125)
core.setConfig('Channel','4_eGFP')
core.setProperty(core.getCameraDevice(), "Binning", "3x3")
position_list = ut.load_mm_positionlist("Z:/Data/Setup 2/Arjun/170609_FlippedMITOMI/170609_mwm.pos")
position_list
# ONE ACQUISITION / SCAN
def scan(channel, exposure, washtype, plate_n):
core.setConfig('Channel', channel)
core.setProperty(core.getCameraDevice(), "Exposure", exposure)
time.sleep(.2)
timestamp = time.strftime("%Y%m%d-%H%M%S", time.localtime())
rootdirectory = "Z:/Data/Setup 2/Arjun/170609_FlippedMITOMI/"
solution = autosipper.coord_frames.hardware.position_table.contents.iloc[plate_n]
scandirectory = '{}_{}_{}_{}_{}'.format(timestamp, solution, washtype, channel, exposure)
os.makedirs(rootdirectory+scandirectory)
for i in xrange(len(position_list)):
si = str(i)
x,y = position_list[['x','y']].iloc[i]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
core.snapImage()
img = core.getImage()
image = Image.fromarray(img)
timestamp = time.strftime("%Y%m%d-%H%M%S", time.localtime())
positionname = position_list['name'].iloc[i]
image.save('{}/{}_{}.tif'.format(rootdirectory+scandirectory, timestamp, positionname))
x,y = position_list[['x','y']].iloc[0]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
def get_valve(name):
return ut.lookup(manifold.valvemap,'name',name,'valve',0)[0]
scan('4_eGFP', 125, 'pre', 36)  # NOTE: plate_n argument added so this call matches scan()'s signature; 36 (the first input well) is an assumed value
# PUT PINS IN ALL INPUTS
# in0 WASTE LINE
# in1 WASH LINE, 4.5 PSI
# in2 PEEK LINE, 4.5 PSI
#----------------------------------
# initialize valve states
## all inputs close
## sandwich
# prime inlet tree (wash)
manifold.depressurize(get_valve('bBSA_2')) ## wash open
manifold.depressurize(get_valve('Waste_2')) ## waste open
time.sleep(60*0.5) ## wait 0.5 min
manifold.pressurize(get_valve('Waste_2')) ## waste close
# backflush tubing (wash)
autosipper.go_to('hardware', 'name', 'W02', zh_travel=40) ## W02 move
manifold.depressurize(get_valve('NA_2')) ## inlet open
time.sleep(60*11) ## wait 11 min
manifold.pressurize(get_valve('NA_2')) ## inlet close
# fill chip (wash)
## chip_in open
## chip_out open
## wait 10 min
## chip_out close
## wait 10 min
# prime tubing (1st input)
manifold.pressurize(get_valve('Out_2')) # chip_out close
manifold.pressurize(get_valve('In_2')) # chip_in close
autosipper.go_to('hardware', 'name', 'W01', zh_travel=40)
autosipper.go_to('hardware', 'n', 36, zh_travel=40)
manifold.depressurize(get_valve('NA_2')) # inlet open
manifold.depressurize(get_valve('Waste_2')) # waste open
time.sleep(60*11) # filling inlet, 11 min...
manifold.pressurize(get_valve('Waste_2')) # waste close
"""
Explanation: Preset: 1_PBP
ConfigGroup,Channel,1_PBP,TIFilterBlock1,Label,1-PBP
Preset: 2_BF
ConfigGroup,Channel,2_BF,TIFilterBlock1,Label,2-BF
Preset: 3_DAPI
ConfigGroup,Channel,3_DAPI,TIFilterBlock1,Label,3-DAPI
Preset: 4_eGFP
ConfigGroup,Channel,4_eGFP,TIFilterBlock1,Label,4-GFP
Preset: 5_Cy5
ConfigGroup,Channel,5_Cy5,TIFilterBlock1,Label,5-Cy5
Preset: 6_AttoPhos
ConfigGroup,Channel,6_AttoPhos,TIFilterBlock1,Label,6-AttoPhos
ACQUISITION
End of explanation
"""
# prime inlet tree (wash)
manifold.depressurize(get_valve('bBSA_2')) ## wash open
manifold.depressurize(get_valve('Waste_2')) ## waste open
time.sleep(60*0.5) ## wait 0.5 min
manifold.pressurize(get_valve('Waste_2')) ## waste close
# backflush tubing (wash)
autosipper.go_to('hardware', 'name', 'W02', zh_travel=40) ## W02 move
manifold.depressurize(get_valve('NA_2')) ## inlet open
time.sleep(60*15) ## wait 15 min
manifold.pressurize(get_valve('NA_2')) ## inlet close
manifold.pressurize(get_valve('bBSA_2')) ## wash close
# prime tubing (1st input)
manifold.pressurize(get_valve('Out_2')) # chip_out close
manifold.pressurize(get_valve('In_2')) # chip_in close
autosipper.go_to('hardware', 'name', 'W01', zh_travel=40)
autosipper.go_to('hardware', 'n', 36, zh_travel=40)
manifold.depressurize(get_valve('NA_2')) # inlet open
manifold.depressurize(get_valve('Waste_2')) # waste open
time.sleep(60*15) # filling inlet, 15 min...
manifold.pressurize(get_valve('Waste_2')) # waste close
# 16,Waste_2
# 17,bBSA_2
# 18,NA_2
# 19,antibody_2
# 20,Extra1_2
# 21,Extra2_2
# 22,Protein_2
# 23,Wash_2
exposures = [1250,1250,1250,1250,1000,600,300,180,70,30]
for i,exposure in enumerate(exposures):
plate_n = 36+i
# flow on chip
manifold.depressurize(get_valve('NA_2')) # inlet open
manifold.depressurize(get_valve('In_2')) # chip_in open
manifold.depressurize(get_valve('Out_2')) # chip_out open
time.sleep(60*10) # filling chip, 10 min...
# CONCURRENTLY:
incubate_time = 60*30 # 30 min
# a) incubate DNA with protein
manifold.pressurize(get_valve('Sandwich1_2')) # sandwhiches close
manifold.pressurize(get_valve('Sandwich2_2')) # "
time.sleep(1) # pause
manifold.depressurize(get_valve('Button1_2')) # buttons open
manifold.depressurize(get_valve('Button2_2')) # "
incubate_start = time.time()
# b1) prime inlet tube with next INPUT
manifold.pressurize(get_valve('Out_2')) # chip_out close
manifold.pressurize(get_valve('In_2')) # chip_in close
manifold.pressurize(get_valve('NA_2')) # inlet close
autosipper.go_to('hardware', 'name', 'W01', zh_travel=40)
autosipper.go_to('hardware', 'n', plate_n+1, zh_travel=40)
manifold.depressurize(get_valve('NA_2')) # inlet open
manifold.depressurize(get_valve('Waste_2')) # waste open
time.sleep(60*15) # filling inlet, 15 min...
manifold.pressurize(get_valve('NA_2')) # inlet close
# b2) prime inlet tree with wash
manifold.depressurize(get_valve('bBSA_2')) # wash open
manifold.pressurize(get_valve('Waste_2')) # waste close
for v in [19,20,21,22,23]:
manifold.depressurize(v)
time.sleep(.2)
manifold.pressurize(v)
manifold.depressurize(get_valve('Waste_2')) # waste open
time.sleep(60*1)
manifold.pressurize(get_valve('Waste_2')) # waste close
remaining_time = incubate_time - (time.time() - incubate_start)
time.sleep(remaining_time)
# prewash Cy5
scan('5_Cy5', exposure, 'pre', plate_n)
# wash
manifold.pressurize(get_valve('Button1_2')) # buttons close
manifold.pressurize(get_valve('Button2_2')) # "
time.sleep(1) # pause
manifold.depressurize(get_valve('Sandwich1_2')) # sandwhiches open
manifold.depressurize(get_valve('Sandwich2_2')) # "
manifold.depressurize(get_valve('In_2')) # chip_in open
manifold.depressurize(get_valve('Out_2')) # chip_out open
time.sleep(60*10) # washing chip, 10 min...
manifold.pressurize(get_valve('Out_2')) # chip_out close
manifold.pressurize(get_valve('In_2')) # chip_in close
manifold.pressurize(get_valve('bBSA_2')) # wash close
# postwash eGFP and postwash Cy5
scan('4_eGFP', 125, 'post', plate_n)
scan('5_Cy5', 1250, 'post', plate_n)
autosipper.go_to('hardware', 'n', 47, zh_travel=40)
# prewash Cy5
scan('5_Cy5', exposure, 'pre', plate_n)
# wash
manifold.pressurize(get_valve('Button1_2')) # buttons close
manifold.pressurize(get_valve('Button2_2')) # "
time.sleep(1) # pause
manifold.depressurize(get_valve('Sandwich1_2')) # sandwhiches open
manifold.depressurize(get_valve('Sandwich2_2')) # "
manifold.depressurize(get_valve('In_2')) # chip_in open
manifold.depressurize(get_valve('Out_2')) # chip_out open
time.sleep(60*10) # washing chip, 10 min...
manifold.pressurize(get_valve('Out_2')) # chip_out close
manifold.pressurize(get_valve('In_2')) # chip_in close
manifold.pressurize(get_valve('bBSA_2')) # wash close
# postwash eGFP and postwash Cy5
scan('4_eGFP', 125, 'post', plate_n)
scan('5_Cy5', 1250, 'post', plate_n)
exposures = [1250,1250,1250,1250,1000,600,300,180,70,30]
for i, exposure in enumerate(exposures):
if i==0:
continue
plate_n = 36+i
# flow on chip
manifold.depressurize(get_valve('NA_2')) # inlet open
manifold.depressurize(get_valve('In_2')) # chip_in open
manifold.depressurize(get_valve('Out_2')) # chip_out open
time.sleep(60*10) # filling chip, 10 min...
# CONCURRENTLY:
incubate_time = 60*30 # 30 min
# a) incubate DNA with protein
manifold.pressurize(get_valve('Sandwich1_2')) # sandwhiches close
manifold.pressurize(get_valve('Sandwich2_2')) # "
time.sleep(1) # pause
manifold.depressurize(get_valve('Button1_2')) # buttons open
manifold.depressurize(get_valve('Button2_2')) # "
incubate_start = time.time()
# b1) prime inlet tube with next INPUT
manifold.pressurize(get_valve('Out_2')) # chip_out close
manifold.pressurize(get_valve('In_2')) # chip_in close
manifold.pressurize(get_valve('NA_2')) # inlet close
autosipper.go_to('hardware', 'name', 'W01', zh_travel=40)
autosipper.go_to('hardware', 'n', plate_n+1, zh_travel=40)
manifold.depressurize(get_valve('NA_2')) # inlet open
manifold.depressurize(get_valve('Waste_2')) # waste open
time.sleep(60*15) # filling inlet, 15 min...
manifold.pressurize(get_valve('NA_2')) # inlet close
# b2) prime inlet tree with wash
manifold.depressurize(get_valve('bBSA_2')) # wash open
manifold.pressurize(get_valve('Waste_2')) # waste close
for v in [19,20,21,22,23]:
manifold.depressurize(v)
time.sleep(.2)
manifold.pressurize(v)
manifold.depressurize(get_valve('Waste_2')) # waste open
time.sleep(60*1)
manifold.pressurize(get_valve('Waste_2')) # waste close
remaining_time = incubate_time - (time.time() - incubate_start)
time.sleep(remaining_time)
# prewash Cy5
scan('5_Cy5', exposure, 'pre', plate_n)
# wash
manifold.pressurize(get_valve('Button1_2')) # buttons close
manifold.pressurize(get_valve('Button2_2')) # "
time.sleep(1) # pause
manifold.depressurize(get_valve('Sandwich1_2')) # sandwhiches open
manifold.depressurize(get_valve('Sandwich2_2')) # "
manifold.depressurize(get_valve('In_2')) # chip_in open
manifold.depressurize(get_valve('Out_2')) # chip_out open
time.sleep(60*10) # washing chip, 10 min...
manifold.pressurize(get_valve('Out_2')) # chip_out close
manifold.pressurize(get_valve('In_2')) # chip_in close
manifold.pressurize(get_valve('bBSA_2')) # wash close
# postwash eGFP and postwash Cy5
scan('4_eGFP', 125, 'post', plate_n)
scan('5_Cy5', 1250, 'post', plate_n)
"""
Explanation: first attempt above: forgot to close wash after backflush
End of explanation
"""
autosipper.exit()
manifold.exit()
core.unloadAllDevices()
core.reset()
print 'closed'
"""
Explanation: EXIT
End of explanation
"""
|
physion/ovation-python
|
examples/qc-activity-example.ipynb
|
gpl-3.0
|
import urllib
import ovation.lab.workflows as workflows
import ovation.session as session
"""
Explanation: Quality Check API Example
End of explanation
"""
s = session.connect(input('Email: '), api='https://lab-services.ovation.io')
"""
Explanation: Create a session. Note the api endpoint, lab-services.ovation.io for Ovation Service Lab.
End of explanation
"""
workflow_id = input('Workflow ID: ')
qc_activity_label = input('QC activity label: ')
"""
Explanation: Create a Quality Check (QC) activity
A QC activity determines the status of results for each Sample in a Workflow. Normally, QC activities are handled in the web application, but you can submit a new activity with the necessary information to complete the QC programmatically.
First, we'll need a workflow and the label of the QC activity WorkflowActivity:
End of explanation
"""
result_type = input('Result type: ')
workflow_sample_results = s.get(s.path('workflow_sample_results'), params={'workflow_id': workflow_id,
'result_type': result_type})
workflow_sample_results
"""
Explanation: Next, we'll get the WorkflowSampleResults for the batch. Each WorkflowSampleResult contains the parsed data for a single Sample within the batch. Each WorkflowSampleResult has a result_type that distinguishes each kind of data.
End of explanation
"""
import random
WSR_STATUS = ["accepted", "rejected", "repeat"]
ASSAY_STATUS = ["accepted", "rejected"]
qc_results = []
for wsr in workflow_sample_results:
assay_results = {}
for assay_name, assay in wsr.result.items():
assay_results[assay_name] = {"status": random.choice(ASSAY_STATUS)}
wsr_status = random.choice(WSR_STATUS)
result = {'id': wsr.id,
'result_type': wsr.result_type,
'status': wsr_status,
'routing': wsr_status,
'result': assay_results}
qc_results.append(result)
"""
Explanation: Within each WorkflowSampleResult you should see a result object containing records for each assay. In most cases, the results parser created a record for each line in an uploaded tabular (csv or tab-delimited) file. When that record has an entry identifying the sample and an entry identifying the assay, the parser places that record into the WorkflowSampleResult for the corresponding Workflow Sample, result type, and assay. If more than one record matches this Sample > Result type > Assay, it will be appended to the records for that sample, result type, and assay.
A QC activity updates the status of assays and entire Workflow Sample Results. Each assay may receive a status ("accepted", "rejected", or "repeat") indicating the QC outcome of that assay for a particular sample. In addition, the WorkflowSampleResult has a global status indicating the overall QC outcome for that sample and result type. Individual assay statuses may be used on repeat to determine which assays need to be repeated. The global status determines how the sample is routed following QC. In fact, there can be multiple routing options for each status (e.g. "Accept and process for workflow A" and "Accept and process for workflow B" options). Ovation internally uses a routing value to indicate (uniquely) which routing option to choose from the configuration. In many cases routing is the same as status (but not always).
WorkflowSampleResult and assay statuses are set (overriding any existing status) by creating a QC activity, passing the updated status for each workflow sample result and contained assay(s).
In this example, we'll randomly choose statuses for each of the workflow samples above:
End of explanation
"""
qc = workflows.create_activity(s, workflow_id, qc_activity_label,
activity={'workflow_sample_results': qc_results,
'custom_attributes': {} # Always an empty dictionary for QC activities
})
"""
Explanation: The activity data we POST will look like this:
{"workflow_sample_results": [{"id": WORKFLOW_SAMPLE_RESULT_ID,
"result_type": RESULT_TYPE,
"status":"accepted"|"rejected"|"repeat",
"routing":"accepted",
"result":{ASSAY:{"status":"accepted"|"rejected"}}},
...]}}
End of explanation
"""
|
sf-wind/caffe2
|
caffe2/python/tutorials/Getting_Caffe1_Models_for_Translation.ipynb
|
apache-2.0
|
import os
print("Required modules imported.")
"""
Explanation: Getting Caffe1 Models and Datasets
This tutorial will help you acquire a variety of models and datasets and put them into places that the other tutorials will expect. We will primarily utilize Caffe's pre-trained models and the scripts that come in that repo. If you don't already have it, then clone it like so:
git clone https://github.com/BVLC/caffe.git
Start by importing the required modules.
End of explanation
"""
# You should have checked out original Caffe
# git clone https://github.com/BVLC/caffe.git
# change the CAFFE_ROOT directory below accordingly
CAFFE_ROOT = os.path.expanduser('~/caffe')
if not os.path.exists(CAFFE_ROOT):
print("Houston, you may have a problem.")
print("Did you change CAFFE_ROOT to point to your local Caffe repo?")
print("Try running: git clone https://github.com/BVLC/caffe.git")
"""
Explanation: Now you can set up your root folder for Caffe below if you put it somewhere else. You should only change the path that's being set for CAFFE_ROOT.
End of explanation
"""
# Pick a model, and if you don't have it, it will be downloaded
# format below is the model's folder, model's dataset inside that folder
#MODEL = 'bvlc_alexnet', 'bvlc_alexnet.caffemodel'
#MODEL = 'bvlc_googlenet', 'bvlc_googlenet.caffemodel'
#MODEL = 'finetune_flickr_style', 'finetune_flickr_style.caffemodel'
#MODEL = 'bvlc_reference_caffenet', 'bvlc_reference_caffenet.caffemodel'
MODEL = 'bvlc_reference_rcnn_ilsvrc13', 'bvlc_reference_rcnn_ilsvrc13.caffemodel'
# scripts to download the models reside here (~/caffe/models)
# after downloading the data will exist with the script
CAFFE_MODELS = os.path.join(CAFFE_ROOT, 'models')
# this is like: ~/caffe/models/bvlc_alexnet/deploy.prototxt
CAFFE_MODEL_FILE = os.path.join(CAFFE_MODELS, MODEL[0], 'deploy.prototxt')
# this is like: ~/caffe/models/bvlc_alexnet/bvlc_alexnet.caffemodel
CAFFE_PRETRAINED = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[1])
# if the model folder doesn't have the goods, then download it
# this is usually a pretty big file with the .caffemodel extension
if not os.path.exists(CAFFE_PRETRAINED):
print(CAFFE_PRETRAINED + " not found. Attempting download. Be patient...")
os.system(
os.path.join(CAFFE_ROOT, 'scripts/download_model_binary.py') +
' ' +
os.path.join(CAFFE_ROOT, 'models', MODEL[0]))
else:
print("You already have " + CAFFE_PRETRAINED)
# if the .prototxt file was missing then you're in trouble; cannot continue
if not os.path.exists(CAFFE_MODEL_FILE):
print("Caffe model file, " + CAFFE_MODEL_FILE + " was not found!")
else:
print("Now we can test the model!")
"""
Explanation: Here's where you pick your model. There are several listed below such as AlexNet, GoogleNet, and Flickr Style. Uncomment the model you want to download.
End of explanation
"""
|
jorgemauricio/INIFAP_Course
|
ejercicios/Pandas/1_Series.ipynb
|
mit
|
# libraries
import numpy as np
import pandas as pd
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Series
The first data type we are going to learn about in pandas is the Series
A Series is very similar to a NumPy array; the difference is that a Series has axis labels, so we can index by label instead of just by a number position.
End of explanation
"""
labels = ['a','b','c']
my_list = [10,20,30]
arr = np.array([10,20,30])
d = {'a':10,'b':20,'c':30}
"""
Explanation: Creating a Series
Series can be created from lists, NumPy arrays, and dictionaries
End of explanation
"""
pd.Series(data=my_list)
pd.Series(data=my_list,index=labels)
pd.Series(my_list,labels)
"""
Explanation: Using lists
End of explanation
"""
pd.Series(arr)
pd.Series(arr,labels)
"""
Explanation: NumPy arrays
End of explanation
"""
pd.Series(d)
"""
Explanation: Dictionaries
End of explanation
"""
pd.Series(data=labels)
# Even functions work
pd.Series([sum,print,len])
"""
Explanation: Data held in a Series
A pandas Series can hold many types of objects
End of explanation
"""
ser1 = pd.Series([1,2,3,4],index = ['USA', 'Germany','USSR', 'Japan'])
ser1
ser2 = pd.Series([1,2,5,4],index = ['USA', 'Germany','Italy', 'Japan'])
ser2
ser1['USA']
"""
Explanation: Using indexes
The key to using a Series is understanding how its index is used, since pandas relies on the index for fast lookups of information
End of explanation
"""
ser1 + ser2
"""
Explanation: Operations are also performed based on the index:
End of explanation
"""
|
kkai/perception-aware
|
3.analysis/explore.ipynb
|
mit
|
%pylab inline
windows = [625, 480, 621, 633]
mac = [647, 503, 559, 586]
"""
Explanation: Exploration Example
Let's start by importing some plotting functions (don't worry about the warning ... we should use something else, but this is just easier for the time being).
End of explanation
"""
figure()
plot(windows)
plot(mac,'r')
"""
Explanation: Now let's try to calculate the mean ... you can just use mean() (a quick sketch follows below).
We can also plot the raw data.
End of explanation
"""
from scipy.stats import ttest_ind
from scipy.stats import ttest_rel
import scipy.stats as stats
# independent-samples t-test
ttest_ind(mac,windows)
# paired-samples t-test
ttest_rel(mac,windows)
"""
Explanation: apply a t-test to check for significance
End of explanation
"""
more_win = [625, 480, 621, 633,694,599,505,527,651,505]
more_mac = [647, 503, 559, 586, 458, 380, 477, 409, 589,472]
"""
Explanation: let's say we get more data
End of explanation
"""
more_bottom = [485,436, 512, 564, 560, 587, 391, 488, 555, 446]
"""
Explanation: What to do if we have more than two conditions? Use an ANOVA; in Python: stats.f_oneway() (a quick sketch follows below)
End of explanation
"""
import pandas as pd
aq=pd.read_csv('data/anscombesQuartet.csv')
aq
mean(aq['I_y'])
"""
Explanation: Anscombe's quartet
Let's take a look at some other data set (and actually import data from a file).
End of explanation
"""
|
jorisvandenbossche/DS-python-data-analysis
|
_solved/pandas_08_reshaping_data.ipynb
|
bsd-3-clause
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: <p><font size="6"><b>07 - Pandas: Tidy data and reshaping</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
"""
data = pd.DataFrame({
'WWTP': ['Destelbergen', 'Landegem', 'Dendermonde', 'Eeklo'],
'Treatment A': [8.0, 7.5, 8.3, 6.5],
'Treatment B': [6.3, 5.2, 6.2, 7.2]
})
data
"""
Explanation: Tidy data
melt can be used to make a dataframe longer, i.e. to make a tidy version of your data. In a tidy dataset (also sometimes called 'long-form' or 'denormalized' data) each observation is stored in its own row and each column contains a single variable:
Consider the following example with measurements in different Waste Water Treatment Plants (WWTP):
End of explanation
"""
pd.melt(data) #, id_vars=["WWTP"])
data_long = pd.melt(data, id_vars=["WWTP"],
value_name="pH", var_name="Treatment")
data_long
"""
Explanation: This data representation is not "tidy":
Each row contains two observations of pH (each from a different treatment)
'Treatment' (A or B) is a variable not in its own column, but used as column headers
Melt - from wide to long/tidy format
We can melt the data set to tidy the data:
End of explanation
"""
data_long.groupby("Treatment")["pH"].mean() # switch to `WWTP`
sns.catplot(data=data, x="WWTP", y="...", hue="...", kind="bar") # this doesn't work that easily
sns.catplot(data=data_long, x="WWTP", y="pH",
hue="Treatment", kind="bar") # switch `WWTP` and `Treatment`
"""
Explanation: The usage of the tidy data representation has some important benefits when working with groupby or data visualization libraries such as Seaborn:
End of explanation
"""
df = pd.read_excel("data/verbruiksgegevens-per-maand.xlsx")
df = df.drop(columns=["Regio"])
df
"""
Explanation: Exercise with energy consumption data
To practice the "melt" operation, we are going to use a dataset from Fluvius (who operates and manages the gas and elektricity networks in Flanders) about the monthly consumption of elektricity and gas in 2021 (https://www.fluvius.be/sites/fluvius/files/2021-10/verbruiksgegevens-per-maand.xlsx).
This data is available as an Excel file.
<div class="alert alert-success">
**EXERCISE**:
* Read the "verbruiksgegevens-per-maand.xlsx" file (in the "data/" directory) into a DataFrame `df`.
* Drop the "Regio" column (this column has a constant value "Regio 1" and thus is not that interesting).
<details><summary>Hints</summary>
- Reading Excel files can be done with the `pd.read_excel()` function, passing the path to the file as first argument.
- To drop a column, use the `columns` keyword in the `drop()` method.
</details>
</div>
End of explanation
"""
df_tidy = pd.melt(df, id_vars=["Hoofdgemeente", "Energie", "SLP"], var_name="time", value_name="consumption")
df_tidy
"""
Explanation: <div class="alert alert-success">
**EXERCISE**:
The actual data (consumption numbers) is spread over multiple columns: one column per month. Make a tidy version of this dataset with a single "consumption" column, and an additional "time" column.
Make sure to keep the "Hoofdgemeente", "Energie" and "SLP" columns in the data set. The "SLP" column contains additional categories about the type of electricity or gas consumption (e.g. household vs non-household consumption).
Use `pd.melt()` to create a long or tidy version of the dataset, and call the result `df_tidy`.
<details><summary>Hints</summary>
- If there are columns in the original dataset that you want to keep (with repeated values), pass those names to the `id_vars` keyword of `pd.melt()`.
- You can use the `var_name` and `value_name` keywords to directly specify the column names to use for the new variable and value columns.
</details>
</div>
End of explanation
"""
df_tidy["time"] = pd.to_datetime(df_tidy["time"], format="%Y%m")
"""
Explanation: <div class="alert alert-success">
**EXERCISE**:
Convert the "time" column to a column with a datetime data type using `pd.to_datetime`.
<details><summary>Hints</summary>
* When using `pd.to_datetime`, remember to specify a `format`.
</details>
</div>
End of explanation
"""
df_overall = df_tidy.groupby(["time", "Energie"]).sum() # or with .reset_index()
df_overall.head()
facet = sns.relplot(x="time", y="consumption", col="Energie",
data=df_overall, kind="line")
facet.set(ylim=(0, None))
"""
Explanation: <div class="alert alert-success">
**EXERCISE**:
* Calculate the total consumption of electricity and gas over all municipalities ("Hoofdgemeente") for each month. Assign the result to a dataframe called `df_overall`.
* Using `df_overall`, make a line plot of the consumption of electricity vs gas over time.
* Create a separate subplot for electricity and for gas, putting them next to each other.
* Ensure that the y-limit starts at 0 for both subplots.
<details><summary>Hints</summary>
* If we want to sum the consumption over all municipalities that means we should _not_ include this variable in the groupby keys. On the other hand, we want to calculate the sum *for each* month ("time") and *for each* category of elektricity/gas ("Energie").
* Creating a line plot with seaborn can be done with `sns.relplot(..., kind="line")`.
* If you want to split the plot into multiple subplots based on a variable, check the `row` or `col` keyword.
* The `sns.relplot` returns a "facet grid" object, and you can change an element of each of the subplots of this object using the `set()` method of this object. To set the y-limits, you can use the `ylim` keyword.
</details>
</div>
End of explanation
"""
excelample = pd.DataFrame({'Month': ["January", "January", "January", "January",
"February", "February", "February", "February",
"March", "March", "March", "March"],
'Category': ["Transportation", "Grocery", "Household", "Entertainment",
"Transportation", "Grocery", "Household", "Entertainment",
"Transportation", "Grocery", "Household", "Entertainment"],
'Amount': [74., 235., 175., 100., 115., 240., 225., 125., 90., 260., 200., 120.]})
excelample
excelample_pivot = excelample.pivot(index="Category", columns="Month", values="Amount")
excelample_pivot
"""
Explanation: Pivoting data
Cfr. excel
People who know Excel probably know its Pivot functionality:
The data of the table:
End of explanation
"""
# sum columns
excelample_pivot.sum(axis=1)
# sum rows
excelample_pivot.sum(axis=0)
"""
Explanation: Interested in Grand totals?
End of explanation
"""
df = pd.DataFrame({'Fare': [7.25, 71.2833, 51.8625, 30.0708, 7.8542, 13.0],
'Pclass': [3, 1, 1, 2, 3, 2],
'Sex': ['male', 'female', 'male', 'female', 'female', 'male'],
'Survived': [0, 1, 0, 1, 0, 1]})
df
df.pivot(index='Pclass', columns='Sex', values='Fare')
df.pivot(index='Pclass', columns='Sex', values='Survived')
"""
Explanation: Pivot is just reordering your data:
Small subsample of the titanic dataset:
End of explanation
"""
df = pd.read_csv("data/titanic.csv")
df.head()
"""
Explanation: So far, so good...
Let's now use the full titanic dataset:
End of explanation
"""
try:
df.pivot(index='Sex', columns='Pclass', values='Fare')
except Exception as e:
print("Exception!", e)
"""
Explanation: And try the same pivot (no worries about the try-except, this is here just used to catch a loooong error):
End of explanation
"""
df.loc[[1, 3], ["Sex", 'Pclass', 'Fare']]
"""
Explanation: This does not work, because we would end up with multiple values for one cell of the resulting frame, as the error says: duplicated values for the columns in the selection. As an example, consider the following rows of our three columns of interest:
End of explanation
"""
df = pd.read_csv("data/titanic.csv")
df.pivot_table(index='Sex', columns='Pclass', values='Fare')
"""
Explanation: Since pivot is just restructuring data, where would both values of Fare for the same combination of Sex and Pclass need to go?
Well, they need to be combined, according to an aggregation functionality, which is supported by the function pivot_table
<div class="alert alert-danger">
<b>NOTE</b>:
<ul>
<li><b>Pivot</b> is purely restructuring: a single value for each index/column combination is required.</li>
</ul>
</div>
Pivot tables - aggregating while pivoting
End of explanation
"""
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='max')
df.pivot_table(index='Sex', columns='Pclass',
values='Fare', aggfunc='count')
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
* By default, `pivot_table` takes the **mean** of all values that would end up into one cell. However, you can also specify other aggregation functions using the `aggfunc` keyword.
</div>
End of explanation
"""
pd.crosstab(index=df['Sex'], columns=df['Pclass'])
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>There is a shortcut function for a <code>pivot_table</code> with a <code>aggfunc='count'</code> as aggregation: <code>crosstab</code></li>
</ul>
</div>
End of explanation
"""
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
fig, ax1 = plt.subplots()
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean').plot(kind='bar',
rot=0,
ax=ax1)
ax1.set_ylabel('Survival ratio')
"""
Explanation: Exercises
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a pivot table with the survival rates for Pclass vs Sex.</li>
</ul>
</div>
End of explanation
"""
df['Underaged'] = df['Age'] <= 18
df.pivot_table(index='Underaged', columns='Sex',
values='Fare', aggfunc='median')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a table of the median Fare paid by adult/underaged passengers vs Sex.</li>
</ul>
</div>
End of explanation
"""
df_survival = df.groupby(["Pclass", "Sex"])["Survived"].mean().reset_index()
df_survival
df_survival.pivot(index="Pclass", columns="Sex", values="Survived")
"""
Explanation: <div class="alert alert-success">
**EXERCISE**:
A pivot table aggregates values for each combination of the new row index and column values. That reminds us of the "groupby" operation.
Can you mimic the pivot table of the first exercise (a pivot table with the survival rates for Pclass vs Sex) using `groupby()`?
</div>
End of explanation
"""
df = pd.DataFrame({'A':['one', 'one', 'two', 'two'],
'B':['a', 'b', 'a', 'b'],
'C':range(4)})
df
"""
Explanation: Reshaping with stack and unstack
The docs say:
Pivot a level of the (possibly hierarchical) column labels, returning a
DataFrame (or Series in the case of an object with a single level of
column labels) having a hierarchical index with a new inner-most level
of row labels.
Indeed...
<img src="../img/pandas/schema-stack.svg" width=50%>
Before we speak about hierarchical index, first check it in practice on the following dummy example:
End of explanation
"""
df = df.set_index(['A', 'B']) # Indeed, you can combine two indices
df
result = df['C'].unstack()
result
df = result.stack().reset_index(name='C')
df
"""
Explanation: To use stack/unstack, we need the values we want to shift from rows to columns or the other way around as the index:
End of explanation
"""
df = pd.read_csv("data/titanic.csv")
df.head()
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li><b>stack</b>: make your data <i>longer</i> and <i>smaller</i> </li>
<li><b>unstack</b>: make your data <i>shorter</i> and <i>wider</i> </li>
</ul>
</div>
Mimic pivot table
To better understand and reason about pivot tables, we can express this method as a combination of more basic steps. In short, the pivot is a convenient way of expressing the combination of a groupby and stack/unstack.
End of explanation
"""
df.pivot_table(index='Pclass', columns='Sex',
values='Survived', aggfunc='mean')
"""
Explanation: Exercises
End of explanation
"""
df.groupby(['Pclass', 'Sex'])['Survived'].mean().unstack()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Get the same result as above based on a combination of `groupby` and `unstack`</li>
<li>First use `groupby` to calculate the survival ratio for all groups</li>
<li>Then, use `unstack` to reshape the output of the groupby operation</li>
</ul>
</div>
End of explanation
"""
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
"""
Explanation: [OPTIONAL] Exercises: use the reshaping methods with the movie data
These exercises are based on the PyCon tutorial of Brandon Rhodes (so credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /notebooks/data folder.
End of explanation
"""
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type')
table.plot()
cast.pivot_table(index='year', columns='type', values="character", aggfunc='count').plot()
# for `values`, take a column with no NaN values in order to effectively count all rows -> at this stage: aha moment about the crosstab function(!)
pd.crosstab(index=cast['year'], columns=cast['type']).plot()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year over the whole period of available movie data.</li>
</ul>
</div>
End of explanation
"""
pd.crosstab(index=cast['year'], columns=cast['type']).plot(kind='area')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the number of actor roles each year and the number of actress roles each year. Use kind='area' as plot type</li>
</ul>
</div>
End of explanation
"""
grouped = cast.groupby(['year', 'type']).size()
table = grouped.unstack('type').fillna(0)
(table['actor'] / (table['actor'] + table['actress'])).plot(ylim=[0, 1])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the fraction of roles that have been 'actor' roles each year over the whole period of available movie data.</li>
</ul>
</div>
End of explanation
"""
c = cast
c = c[(c.character == 'Superman') | (c.character == 'Batman')]
c = c.groupby(['year', 'character']).size()
c = c.unstack()
c = c.fillna(0)
c.head()
d = c.Superman - c.Batman
print('Superman years:')
print(len(d[d > 0.0]))
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Define a year as a "Superman year" when films of that year feature more Superman characters than Batman characters. How many years in film history have been Superman years?</li>
</ul>
</div>
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/pandas_string_munging.ipynb
|
mit
|
import pandas as pd
import numpy as np
import re as re
"""
Explanation: Title: String Munging In Dataframe
Slug: pandas_string_munging
Summary: String Munging In Dataframe
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
import modules
End of explanation
"""
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
'email': ['jas203@gmail.com', 'momomolly@gmail.com', np.NAN, 'battler@milner.com', 'Ames1234@yahoo.com'],
'preTestScore': [4, 24, 31, 2, 3],
'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'email', 'preTestScore', 'postTestScore'])
df
"""
Explanation: Create dataframe
End of explanation
"""
df['email'].str.contains('gmail')
"""
Explanation: Which strings in the email column contain 'gmail'
End of explanation
"""
pattern = '([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\\.([A-Z]{2,4})'
"""
Explanation: Create a regular expression pattern that breaks apart emails
End of explanation
"""
df['email'].str.findall(pattern, flags=re.IGNORECASE)
"""
Explanation: Find everything in df.email that contains that pattern
End of explanation
"""
matches = df['email'].str.match(pattern, flags=re.IGNORECASE)
matches
"""
Explanation: Create a pandas series containing the email elements
End of explanation
"""
matches.str[1]
"""
Explanation: Select the domains of the df.email
End of explanation
"""
|
blua/deep-learning
|
language-translation/dlnd_language_translation_0420.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
x = [[source_vocab_to_int.get(word, 0) for word in sentence.split()] \
for sentence in source_text.split('\n')]
y = [[target_vocab_to_int.get(word, 0) for word in sentence.split()] \
for sentence in target_text.split('\n')]
source_id_text = []
target_id_text = []
"""
found in a forum post. necessary?
n1 = len(x[i])
n2 = len(y[i])
n = n1 if n1 < n2 else n2
if abs(n1 - n2) <= 0.3 * n:
if n1 <= 17 and n2 <= 17:
"""
for i in range(len(x)):
source_id_text.append(x[i])
target_id_text.append(y[i] + [target_vocab_to_int['<EOS>']])
return (source_id_text, target_id_text)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
input_text = tf.placeholder(tf.int32,[None, None], name="input")
target_text = tf.placeholder(tf.int32,[None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return input_text, target_text, learning_rate, keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
"""
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
enc_cell_drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
_, enc_state = tf.nn.dynamic_rnn(enc_cell_drop, rnn_inputs, dtype=tf.float32)
return enc_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
train_dec_fm = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
    train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_dec_fm, \
        dec_embed_input, sequence_length, scope=decoding_scope)
    # apply dropout to the decoder outputs before the output layer
    train_pred = tf.nn.dropout(train_pred, keep_prob)
    train_logits = output_fn(train_pred)
    return train_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of a decoded sequence
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
    # keep_prob is not applied here: dropout is disabled at inference time (keep_prob is fed as 1.0)
return inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
dec_cell_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size,\
None, scope=decoding_scope)
with tf.variable_scope("decoding") as decoding_scope:
train_logits = decoding_layer_train(encoder_state, dec_cell_drop, dec_embed_input,\
sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
infer_logits = decoding_layer_infer(encoder_state, dec_cell_drop, dec_embeddings,\
target_vocab_to_int['<GO>'],target_vocab_to_int['<EOS>'], sequence_length,\
vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encoder_state = encoding_layer(embed_input, rnn_size, num_layers, keep_prob)
processed_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, processed_target_data)
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,\
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 200
# Number of Layers
num_layers = 30
# Embedding Size
encoding_embedding_size = 64
decoding_embedding_size = 64
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0, 0), (0, max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0, 0), (0, max_seq - logits.shape[1]), (0, 0)],
            'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
    # TODO: Implement Function
    # Lowercase the sentence, split on whitespace, and map unknown words to <UNK>
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_stats_cluster_methods.ipynb
|
bsd-3-clause
|
# Authors: Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import numpy as np
from scipy import stats
from functools import partial
import matplotlib.pyplot as plt
# this changes hidden MPL vars:
from mpl_toolkits.mplot3d import Axes3D # noqa
from mne.stats import (spatio_temporal_cluster_1samp_test,
bonferroni_correction, ttest_1samp_no_p)
try:
from sklearn.feature_extraction.image import grid_to_graph
except ImportError:
from scikits.learn.feature_extraction.image import grid_to_graph
print(__doc__)
"""
Explanation: Permutation t-test on toy data with spatial clustering
Following the illustrative example of Ridgway et al. 2012,
this demonstrates some basic ideas behind both the "hat"
variance adjustment method, as well as threshold-free
cluster enhancement (TFCE) methods in mne-python.
This toy dataset consists of a 40 x 40 square with a "signal"
present in the center (at pixel [20, 20]) with white noise
added and a 5-pixel-SD normal smoothing kernel applied.
For more information, see:
Ridgway et al. 2012, "The problem of low variance voxels in
statistical parametric mapping; a new hat avoids a 'haircut'",
NeuroImage. 2012 Feb 1;59(3):2131-41.
Smith and Nichols 2009, "Threshold-free cluster enhancement:
addressing problems of smoothing, threshold dependence, and
localisation in cluster inference", NeuroImage 44 (2009) 83-98.
In the top row plot the T statistic over space, peaking toward the
center. Note that it has peaky edges. Second, with the "hat" variance
correction/regularization, the peak becomes correctly centered. Third,
the TFCE approach also corrects for these edge artifacts. Fourth, the
two methods combined provide a tighter estimate, for better or
worse.
Now considering multiple-comparisons corrected statistics on these
variables, note that a non-cluster test (e.g., FDR or Bonferroni) would
mis-localize the peak due to sharpness in the T statistic driven by
low-variance pixels toward the edge of the plateau. Standard clustering
(first plot in the second row) identifies the correct region, but the
whole area must be declared significant, so no peak analysis can be done.
Also, the peak is broad. In this method, all significances are
family-wise error rate (FWER) corrected, and the method is
non-parametric so assumptions of Gaussian data distributions (which do
actually hold for this example) don't need to be satisfied. Adding the
"hat" technique tightens the estimate of significant activity (second
plot). The TFCE approach (third plot) allows analyzing each significant
point independently, but still has a broadened estimate. Note that
this is also FWER corrected. Finally, combining the TFCE and "hat"
methods tightens the area declared significant (again FWER corrected),
and allows for evaluation of each point independently instead of as
a single, broad cluster.
Note that this example does quite a bit of processing, so even on a
fast machine it can take a few minutes to complete.
End of explanation
"""
width = 40
n_subjects = 10
signal_mean = 100
signal_sd = 100
noise_sd = 0.01
gaussian_sd = 5
sigma = 1e-3 # sigma for the "hat" method
threshold = -stats.distributions.t.ppf(0.05, n_subjects - 1)
threshold_tfce = dict(start=0, step=0.2)
n_permutations = 1024 # number of clustering permutations (1024 for exact)
"""
Explanation: Set parameters
End of explanation
"""
n_src = width * width
connectivity = grid_to_graph(width, width)
# For each "subject", make a smoothed noisy signal with a centered peak
rng = np.random.RandomState(42)
X = noise_sd * rng.randn(n_subjects, width, width)
# Add a signal at the dead center
X[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd
# Spatially smooth with a 2D Gaussian kernel
size = width // 2 - 1
gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd ** 2)))
for si in range(X.shape[0]):
for ri in range(X.shape[1]):
X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')
for ci in range(X.shape[2]):
X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')
"""
Explanation: Construct simulated data
Make the connectivity matrix just next-neighbor spatially
End of explanation
"""
X = X.reshape((n_subjects, 1, n_src))
"""
Explanation: Do some statistics
<div class="alert alert-info"><h4>Note</h4><p>X needs to be a multi-dimensional array of shape
samples (subjects) x time x space, so we permute dimensions:</p></div>
End of explanation
"""
T_obs, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold,
connectivity=connectivity,
tail=1, n_permutations=n_permutations)
# Let's put the cluster data in a readable format
ps = np.zeros(width * width)
for cl, p in zip(clusters, p_values):
ps[cl[1]] = -np.log10(p)
ps = ps.reshape((width, width))
T_obs = T_obs.reshape((width, width))
# To do a Bonferroni correction on these data is simple:
p = stats.distributions.t.sf(T_obs, n_subjects - 1)
p_bon = -np.log10(bonferroni_correction(p)[1])
# Now let's do some clustering using the standard method with "hat":
stat_fun = partial(ttest_1samp_no_p, sigma=sigma)
T_obs_hat, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold,
connectivity=connectivity,
tail=1, n_permutations=n_permutations,
stat_fun=stat_fun)
# Let's put the cluster data in a readable format
ps_hat = np.zeros(width * width)
for cl, p in zip(clusters, p_values):
ps_hat[cl[1]] = -np.log10(p)
ps_hat = ps_hat.reshape((width, width))
T_obs_hat = T_obs_hat.reshape((width, width))
# Now the threshold-free cluster enhancement method (TFCE):
T_obs_tfce, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold_tfce,
connectivity=connectivity,
tail=1, n_permutations=n_permutations)
T_obs_tfce = T_obs_tfce.reshape((width, width))
ps_tfce = -np.log10(p_values.reshape((width, width)))
# Now the TFCE with "hat" variance correction:
T_obs_tfce_hat, clusters, p_values, H0 = \
spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold_tfce,
connectivity=connectivity,
tail=1, n_permutations=n_permutations,
stat_fun=stat_fun)
T_obs_tfce_hat = T_obs_tfce_hat.reshape((width, width))
ps_tfce_hat = -np.log10(p_values.reshape((width, width)))
"""
Explanation: Now let's do some clustering using the standard method.
<div class="alert alert-info"><h4>Note</h4><p>Not specifying a connectivity matrix implies grid-like connectivity,
which we want here:</p></div>
End of explanation
"""
fig = plt.figure(facecolor='w')
x, y = np.mgrid[0:width, 0:width]
kwargs = dict(rstride=1, cstride=1, linewidth=0, cmap='Greens')
Ts = [T_obs, T_obs_hat, T_obs_tfce, T_obs_tfce_hat]
titles = ['T statistic', 'T with "hat"', 'TFCE statistic', 'TFCE w/"hat" stat']
for ii, (t, title) in enumerate(zip(Ts, titles)):
ax = fig.add_subplot(2, 4, ii + 1, projection='3d')
ax.plot_surface(x, y, t, **kwargs)
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(title)
p_lims = [1.3, -np.log10(1.0 / n_permutations)]
pvals = [ps, ps_hat, ps_tfce, ps_tfce_hat]
titles = ['Standard clustering', 'Clust. w/"hat"',
'Clust. w/TFCE', 'Clust. w/TFCE+"hat"']
axs = []
for ii, (p, title) in enumerate(zip(pvals, titles)):
ax = fig.add_subplot(2, 4, 5 + ii)
plt.imshow(p, cmap='Purples', vmin=p_lims[0], vmax=p_lims[1])
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(title)
axs.append(ax)
plt.tight_layout()
for ax in axs:
cbar = plt.colorbar(ax=ax, shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025)
cbar.set_label('-log10(p)')
cbar.set_ticks(p_lims)
cbar.set_ticklabels(['%0.1f' % p for p in p_lims])
plt.show()
"""
Explanation: Visualize results
End of explanation
"""
|
Heroes-Academy/OOP_Spring_2016
|
notebooks/giordani/Python_3_OOP_Part_5__Metaclasses.ipynb
|
mit
|
a = 5
print(type(a))
print(a.__class__)
print(a.__class__.__bases__)
print(object.__bases__)
"""
Explanation: The Type Brothers
The first step into the most intimate secrets of Python objects comes from two components we already met in the first post: class and object. These two things are the very fundamental elements of the Python OOP system, so it is worth spending some time to understand how they work and relate to each other.
First of all recall that in Python everything is an object, that is everything inherits from object. Thus, object seems to be the deepest thing you can find digging into Python variables. Let's check this
End of explanation
"""
print(type(a))
print(type(int))
print(type(float))
print(type(dict))
"""
Explanation: The variable a is an instance of the int class, and this latter inherits from object, which inherits from nothing. This demonstrates that object is at the top of the class hierarchy. However, as you can see, both int and object are called classes (<class 'int'>, <class 'object'>). Indeed, while a is an instance of the int class, int itself is an instance of another class, a class that is instanced to build classes
End of explanation
"""
print(type(object))
print(type.__bases__)
"""
Explanation: Since in Python everything is an object, everything is the instance of a class, even classes. Well, type is the class that is instanced to get classes. So remember this: object is the base of every object, type is the class of every type. Sounds puzzling? It is not your fault, don't worry. However, just to strike you with the finishing move, this is what Python is built on
End of explanation
"""
print(type(type))
"""
Explanation: If you are not about to faint at this point, chances are that you are Guido van Rossum or one of his friends down at the Python core development team (in this case let me thank you for your beautiful creation). You may get a cup of tea, if you need it.
Jokes aside, at the very base of the Python type system there are two things, object and type, which are inseparable. The previous code shows that object is an instance of type, and type inherits from object. Take your time to understand this subtle concept, as it is very important for the upcoming discussion about metaclasses.
When you think you grasped the type/object matter read this and start thinking again
End of explanation
"""
class MyType(type):
pass
class MySpecialClass(metaclass=MyType):
pass
msp = MySpecialClass()
print(type(msp))
print(type(MySpecialClass))
print(type(MyType))
"""
Explanation: The Metaclasses Take Python
You are now familiar with Python classes. You know that a class is used to create an instance, and that the structure of this latter is ruled by the source class and all its parent classes (until you reach object).
Since classes are objects too, you know that a class itself is an instance of a (super)class, and this class is type. That is, as already stated, type is the class that is used to build classes.
So for example you know that a class may be instanced, i.e. it can be called and by calling it you obtain another object that is linked with the class. What prepares the class for being called? What gives the class all its methods? In Python the class in charge of performing such tasks is called metaclass, and type is the default metaclass of all classes.
The point of exposing this structure of Python objects is that you may change the way classes are built. As you know, type is an object, so it can be subclassed just like any other class. Once you get a subclass of type you need to instruct your class to use it as the metaclass instead of type, and you can do this by passing it as the metaclass keyword argument in the class definition.
End of explanation
"""
class Singleton(type):
instance = None
def __call__(cls, *args, **kw):
if not cls.instance:
cls.instance = super(Singleton, cls).__call__(*args, **kw)
return cls.instance
"""
Explanation: Metaclasses 2: Singleton Day
Metaclasses are a very advanced topic in Python, but they have many practical uses. For example, by means of a custom metaclass you may log any time a class is instanced, which can be important for applications that must keep memory usage low or need to monitor it.
I am going to show here a very simple example of a metaclass, the Singleton. Singleton is a well known design pattern, and many descriptions of it may be found on the Internet. It has also been heavily criticized, mostly because of its bad behaviour when subclassed, but here I do not want to introduce it for its technological value, but for its simplicity (so please do not question the choice, it is just an example).
Singleton has one purpose: to return the same instance every time it is instanced, like a sort of object-oriented global variable. So we need to build a class that does not work like standard classes, which return a new instance every time they are called.
"Build a class"? This is a task for metaclasses. The following implementation comes from Python 3 Patterns, Recipes and Idioms.
End of explanation
"""
class ASingleton(metaclass=Singleton):
pass
a = ASingleton()
b = ASingleton()
a is b
hex(id(a))
hex(id(b))
"""
Explanation: We are defining a new type, which inherits from type to provide all the bells and whistles of Python classes. We override the __call__ method, that is the special method invoked when we call the class, i.e. when we instance it. The new method wraps the original method of type by calling it only when the instance attribute is not set, i.e. the first time the class is instanced; otherwise it just returns the recorded instance. As you can see this is a very basic cache class, the only trick being that it is applied to the creation of instances.
To test the new type we need to define a new class that uses it as its metaclass
End of explanation
"""
class MyClass():
def __new__(cls, *args, **kwds):
obj = super().__new__(cls, *args, **kwds)
# put your code here
return obj
"""
Explanation: By using the is operator we test that the two objects are the very same structure in memory, that is their ids are the same, as explicitly shown. What actually happens is that when you issue a = ASingleton() the ASingleton class runs its __call__() method, which is taken from the Singleton type behind the class. That method recognizes that no instance has been created (Singleton.instance is None) and acts just like any standard class does. When you issue b = ASingleton() the very same things happen, but since Singleton.instance is now different from None its value (the previous instance) is directly returned.
Metaclasses are a very powerful programming tool and leveraging them you can achieve very complex behaviours with a small effort. Their use is a must every time you are actually metaprogramming, that is you are writing code that has to drive the way your code works. Good examples are creational patterns (injecting custom class attributes depending on some configuration), testing, debugging, and performance monitoring.
Coming to Instance
Before introducing you to a very smart use of metaclasses by talking about Abstract Base Classes (read: to save some topics for the next part of this series), I want to dive into the object creation procedure in Python, that is what happens when you instance a class. In the first post this procedure was described only partially, by looking at the __init__() method.
In the first post I recalled the object-oriented concept of constructor, which is a special method of the class that is automatically called when the instance is created. The class may also define a destructor, which is called when the object is destroyed. In languages without a garbage collection mechanism such as C++ the destructor shall be carefully designed. In Python the destructor may be defined through the __del__() method, but it is hardly used.
The constructor mechanism in Python is on the contrary very important, and it is implemented by two methods, instead of just one: __new__() and __init__(). The tasks of the two methods are very clear and distinct: __new__() shall perform actions needed when creating a new instance while __init__ deals with object initialization.
Since in Python you do not need to declare attributes due to its dynamic nature, __new__() is rarely defined by programmers, who may rely on __init__ to perform the majority of the usual tasks. Typical uses of __new__() are very similar to those listed in the previous section, since it allows to trigger some code whenever your class is instanced.
The standard way to override __new__() is
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/fraud_detection_with_tensorflow_bigquery.ipynb
|
apache-2.0
|
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers
from tensorflow_io.bigquery import BigQueryClient
import functools
"""
Explanation: Building a Fraud Detection model on Vertex AI with TensorFlow Enterprise and BigQuery
Learning objectives
Analyze the data in BigQuery.
Ingest records from BigQuery.
Preprocess the data.
Build the model.
Train the model.
Evaluate the model.
Introduction
In this notebook, you'll directly ingest a BigQuery dataset and train a fraud detection model with TensorFlow Enterprise on Vertex AI.
You'll also walk through all the steps of building a model and, finally, learn a bit about how to handle imbalanced classification problems.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Ingest records from BigQuery
Step 1: Import Python packages
Run the below cell to import the python packages.
End of explanation
"""
GCP_PROJECT_ID = 'qwiklabs-gcp-00-b1e00ce17168' # Replace with your Project-ID
DATASET_GCP_PROJECT_ID = GCP_PROJECT_ID # A copy of the data is saved in the user project
DATASET_ID = 'tfe_codelab'
TRAIN_TABLE_ID = 'ulb_fraud_detection_train'
VAL_TABLE_ID = 'ulb_fraud_detection_val'
TEST_TABLE_ID = 'ulb_fraud_detection_test'
FEATURES = ['Time','V1','V2','V3','V4','V5','V6','V7','V8','V9','V10','V11','V12','V13','V14','V15','V16','V17','V18','V19','V20','V21','V22','V23','V24','V25','V26','V27','V28','Amount']
LABEL='Class'
DTYPES=[tf.float64] * len(FEATURES) + [tf.int64]
"""
Explanation: Step 2: Define constants
Let's next define some constants for use in the project. Change GCP_PROJECT_ID to the actual project ID you are using. Go ahead and run new cells as you create them.
End of explanation
"""
client = BigQueryClient()
def read_session(TABLE_ID):
return client.read_session(
"projects/" + GCP_PROJECT_ID, DATASET_GCP_PROJECT_ID, TABLE_ID, DATASET_ID,
FEATURES + [LABEL], DTYPES, requested_streams=2
)
def extract_labels(input_dict):
features = dict(input_dict)
label = tf.cast(features.pop(LABEL), tf.float64)
return (features, label)
"""
Explanation: Step 3: Define helper functions
Now, let's define a couple functions. read_session() reads data from a BigQuery table. extract_labels() is a helper function to separate the label column from the rest, so that the dataset is in the format expected by keras.model_fit() later on.
End of explanation
"""
BATCH_SIZE = 32
# TODO 1
# Create the datasets
raw_train_data = read_session(TRAIN_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)
raw_val_data = read_session(VAL_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)
raw_test_data = read_session(TEST_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)
next(iter(raw_train_data)) # Print first batch
"""
Explanation: Step 4: Ingest data
Finally, let's create each dataset and then print the first batch from the training dataset. Note that we have defined a BATCH_SIZE of 32. This is an important parameter that will impact the speed and accuracy of training.
End of explanation
"""
# TODO 2
MEANS = [94816.7387536405, 0.0011219465482001268, -0.0021445914636999603, -0.002317402958335562,
-0.002525792169927835, -0.002136576923287782, -3.7586818983702984, 8.135919975738768E-4,
-0.0015535579268265718, 0.001436137140461279, -0.0012193712736681508, -4.5364970422902533E-4,
-4.6175444671576083E-4, 9.92177789685366E-4, 0.002366229151475428, 6.710217226762278E-4,
0.0010325807119864225, 2.557260815835395E-4, -2.0804190062322664E-4, -5.057391100818653E-4,
-3.452114767842334E-6, 1.0145936326270006E-4, 3.839214074518535E-4, 2.2061197469126577E-4,
-1.5601580596677608E-4, -8.235017846415852E-4, -7.298316615408554E-4, -6.898459943652376E-5,
4.724125688297753E-5, 88.73235686453587]
def norm_data(mean, data):
data = tf.cast(data, tf.float32) * 1/(2*mean)
return tf.reshape(data, [-1, 1])
numeric_columns = []
for i, feature in enumerate(FEATURES):
num_col = tf.feature_column.numeric_column(feature, normalizer_fn=functools.partial(norm_data, MEANS[i]))
numeric_columns.append(num_col)
numeric_columns
"""
Explanation: Build the model
Step 1: Preprocess data
Let's create feature columns for each feature in the dataset. In this particular dataset, all of the columns are of type numeric_column, but there are a number of other column types (e.g. categorical_column).
You will also normalize the data so it is centered around zero, which helps the network converge faster. The means of each feature have been precalculated for use in this normalization.
End of explanation
"""
# TODO 3
model = keras.Sequential([
tf.keras.layers.DenseFeatures(numeric_columns),
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
# Compile the model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy', tf.keras.metrics.AUC(curve='PR')])
"""
Explanation: Step 2: Build the model
Now we are ready to create a model. We will feed the columns we just created into the network. Then we will compile the model. We are including the Precision/Recall AUC metric, which is useful for imbalanced datasets.
End of explanation
"""
# TODO 4
CLASS_WEIGHT = {
0: 1,
1: 100
}
EPOCHS = 3
train_data = raw_train_data.shuffle(10000)
val_data = raw_val_data
test_data = raw_test_data
# Train the model using model.fit()
model.fit(train_data, validation_data=val_data, class_weight=CLASS_WEIGHT, epochs=EPOCHS)
"""
Explanation: Step 3: Train the model
There are a number of techniques to handle imbalanced data, including oversampling (generating new data in the minority class) and undersampling (reducing the data in the majority class).
For the purposes of this codelab, let's use a technique that overweights the loss when misclassifying the minority class. You'll specify a class_weight parameter when training and weight class "1" (fraud) higher, since it is much less prevalent.
You will use 3 epochs (passes through the data) in this lab so training is quicker. In a real-world scenario, you'd want to run it long enough that you stop seeing increases in accuracy on the validation set.
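For comparison, here is a rough sketch of the oversampling alternative mentioned above. It is not part of this lab's solution; it reuses the raw_train_data pipeline defined earlier and, because the resampled dataset is infinite, you would need to pass steps_per_epoch to model.fit().

```python
# Sketch only: rebalance by drawing fraud / non-fraud examples with equal probability
pos_ds = raw_train_data.unbatch().filter(lambda features, label: label == 1).repeat()
neg_ds = raw_train_data.unbatch().filter(lambda features, label: label == 0).repeat()
balanced_train_data = tf.data.experimental.sample_from_datasets(
    [neg_ds, pos_ds], weights=[0.5, 0.5]).batch(BATCH_SIZE)
# model.fit(balanced_train_data, steps_per_epoch=..., epochs=EPOCHS)
```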
End of explanation
"""
# TODO 5
# Evaluate the model
model.evaluate(test_data)
"""
Explanation: Step 4: Evaluate the model
The evaluate() function can be applied to test data that the model has never seen to provide an objective assessment. Fortunately, we've set aside test data just for that!
End of explanation
"""
|
oditorium/blog
|
iPython/DateTime-Basics.ipynb
|
agpl-3.0
|
from datetime import datetime as dt
import time as tm
import pytz as tz
import calendar as cal
"""
Explanation: Datetime - Basics
Time conversions are generally a pain, especially when daylight savings time is involved. Here are a number of libraries and tools to deal with this in Python. There are three standard modules plus one third-party package:
datetime - a standard module
The datetime module supplies classes for manipulating dates and times in both simple and complex ways. While date and time arithmetic is supported, the focus of the implementation is on efficient attribute extraction for output formatting and manipulation. For related functionality, see also the time and calendar modules.
time - another standard module, with an unclear delineation from the previous one
This module provides various time-related functions. For related functionality, see also the datetime and calendar modules.
calendar - a standard module dealing with the date part of datetime
This module allows you to output calendars like the Unix cal program, and provides additional useful functions related to the calendar.
pytz - an additional module that deals with proper timezones, along the lines of London/Europe
pytz brings the Olson tz database into Python. This library allows accurate and cross platform timezone calculations using Python.
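As a quick illustration of how the pieces fit together (the epoch value is just an assumed example, and the dt/tz aliases are the ones imported in the accompanying cell), an epoch timestamp can be turned into an aware datetime and formatted in a named timezone:

```python
epoch_seconds = 1409987760                        # assumed example value, early Sept 2014
utc_dt = dt.fromtimestamp(epoch_seconds, tz.utc)  # aware datetime in UTC
london_dt = utc_dt.astimezone(tz.timezone('Europe/London'))
print(london_dt.strftime('%Y-%m-%d %H:%M %Z%z'))
```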
End of explanation
"""
dto = dt.strptime ('2014-09-06 07:16 +0000', "%Y-%m-%d %H:%M %z")
dto
tto = tm.strptime ('2014-09-06 07:16 +0000', "%Y-%m-%d %H:%M %z")
tto
dto.timetuple() == tto
dt.fromtimestamp(tm.mktime(tto))
"""
Explanation: Generating times from a string
The format definition is here. The result is either an object, or a timetuple struct, depending on the module.
dt.strptime(), tm.strptime(), [dt].timetuple()
End of explanation
"""
dto = dt.strptime('2014:09:13 21:07:15', '%Y:%m:%d %H:%M:%S')
timezone = tz.timezone('Europe/London')
dto = timezone.localize(dto)
dto
"""
Explanation: Missing timezone information in the string
Timezone information can be added to the datetime after it has been created without it
End of explanation
"""
epoch_time = 0
tm.gmtime(epoch_time)
epoch_time = tm.time()
tm.gmtime(epoch_time)
tm.gmtime(tm.mktime(tto))
"""
Explanation: Epoch related functions
tm.gmtime(), tm.time(), tm.mktime()
End of explanation
"""
tm.time()
dt.now()
dt.now().strftime('%Y-%m-%d %H:%M:%S %Z%z')
"""
Explanation: Current-time related functions
time()
End of explanation
"""
tm.strftime("%Y-%m-%d %H:%M",tto)
dto.strftime("%Y-%m-%d %H:%M")
dto.hour, dto.minute, dto.second,
dto.tzname()
"""
Explanation: Time output
Depending on the library, strftime is either a function or a method. The format string is here
tm.strftime(), [dt].strftime()
End of explanation
"""
from datetime import datetime as dt
import pytz as tz
def change_tz(datetime_obj, tz_str):
""" change the timezone
datetime_obj - a datetime.datetime object representing the time
tz_str - time zone string, eg 'Europe/London'
return - a datetime.datetime object
"""
the_tz = tz.timezone(tz_str)
the_dt = the_tz.normalize(datetime_obj.astimezone(the_tz))
return the_dt
ams = tz.timezone('Europe/Amsterdam')
dto_ams = ams.normalize(dto.astimezone(ams))
dto_ams.strftime('%Y-%m-%d %H:%M:%S %Z%z')
dto_ams2 = change_tz(dto, "Europe/Amsterdam")
dto_ams2
dto_ams2.timetuple()
"""
Explanation: Time Zones
tz.timezone(), [dt].astimezone(), [tz].normalize()
End of explanation
"""
|
simulkade/peteng
|
python/.ipynb_checkpoints/two_phase_1D_fipy_seq-checkpoint.ipynb
|
mit
|
from fipy import Grid2D, CellVariable, FaceVariable
import numpy as np
def upwindValues(mesh, field, velocity):
"""Calculate the upwind face values for a field variable
Note that the mesh.faceNormals point from `id1` to `id2` so if velocity is in the same
direction as the `faceNormal`s then we take the value from `id1`s and visa-versa.
Args:
mesh: a fipy mesh
field: a fipy cell variable or equivalent numpy array
velocity: a fipy face variable (rank 1) or equivalent numpy array
Returns:
numpy array shaped as a fipy face variable
"""
# direction is over faces (rank 0)
direction = np.sum(np.array(mesh.faceNormals * velocity), axis=0)
# id1, id2 are shaped as faces but contains cell index values
id1, id2 = mesh._adjacentCellIDs
return np.where(direction >= 0, field[id1], field[id2])
# mesh = Grid2D(nx=3, ny=3)
# print(
# upwindValues(
# mesh,
# np.arange(mesh.numberOfCells),
# 2 * np.random.random(size=(2, mesh.numberOfFaces)) - 1
# )
# )
from fipy import *
# relperm parameters
swc = 0.0
sor = 0.0
krw0 = 0.3
kro0 = 1.0
nw = 2.0
no = 2.0
# domain and boundaries
k = 1e-12 # m^2
phi = 0.4
u = 1.e-5
p0 = 100e5 # Pa
Lx = 100.
Ly = 10.
nx = 100
ny = 10
dx = Lx/nx
dy = Ly/ny
# fluid properties
muo = 0.002
muw = 0.001
# define the fractional flow functions
def krw(sw):
res = krw0*((sw-swc)/(1-swc-sor))**nw
return res
def dkrw(sw):
res = krw0*nw/(1-swc-sor)*((sw-swc)/(1-swc-sor))**(nw-1)
return res
def kro(sw):
res = kro0*((1-sw-sor)/(1-swc-sor))**no
return res
def dkro(sw):
res = -kro0*no/(1-swc-sor)*((1-sw-sor)/(1-swc-sor))**(no-1)
return res
def fw(sw):
res = krw(sw)/muw/(krw(sw)/muw+kro(sw)/muo)
return res
def dfw(sw):
res = (dkrw(sw)/muw*kro(sw)/muo-krw(sw)/muw*dkro(sw)/muo)/(krw(sw)/muw+kro(sw)/muo)**2
return res
import matplotlib.pyplot as plt
import numpy as np
sw_plot = np.linspace(swc, 1-sor, 50)
"""
Explanation: FiPy 1D two-phase flow in porous media, 11 October, 2019
Different approaches:
* Coupled
* Sequential
* ...
End of explanation
"""
krw_plot = [krw(sw) for sw in sw_plot]
kro_plot = [kro(sw) for sw in sw_plot]
fw_plot = [fw(sw) for sw in sw_plot]
plt.figure(1)
plt.plot(sw_plot, krw_plot, sw_plot, kro_plot)
plt.show()
plt.figure(2)
plt.plot(sw_plot, fw_plot)
plt.show()
# create the grid
mesh = Grid1D(dx = Lx/nx, nx = nx)
x = mesh.cellCenters
# create the cell variables and boundary conditions
sw = CellVariable(mesh=mesh, name="saturation", hasOld=True, value = swc)
p = CellVariable(mesh=mesh, name="pressure", hasOld=True, value = p0)
sw.setValue(1-sor,where = x<=dx)
sw.constrain(1,mesh.facesLeft)
#sw.constrain(0., mesh.facesRight)
sw.faceGrad.constrain([0], mesh.facesRight)
p.faceGrad.constrain([-u/(krw(1-sor)*k/muw)], mesh.facesLeft)
p.constrain(p0, mesh.facesRight)
# p.constrain(3.0*p0, mesh.facesLeft)
u/(krw(1-sor)*k/muw)
"""
Explanation: Visualize the relative permeability and fractional flow curves
End of explanation
"""
# eq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw.faceValue)/muw+kro(sw.faceValue)/muo))- \
# UpwindConvectionTerm(var=sw, coeff=-k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*p.faceGrad)- \
# (k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*sw.faceValue*p.faceGrad).divergence == 0
# eq_sw = TransientTerm(coeff=phi, var=sw) + \
# DiffusionTerm(var=p, coeff=-k*krw(sw.faceValue)/muw)+ \
# UpwindConvectionTerm(var=sw, coeff=-k*dkrw(sw.faceValue)/muw*p.faceGrad)- \
# (-k*dkrw(sw.faceValue)/muw*p.faceGrad*sw.faceValue).divergence == 0
eq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw.faceValue)/muw+kro(sw.faceValue)/muo)) == 0
eq_sw = TransientTerm(coeff=phi, var=sw) + \
(-k*krw(sw.faceValue)/muw*p.faceGrad).divergence == 0
sw_face = sw.faceValue
# eq = eq_p & eq_sw
steps = 1000
dt0 = 5000.
dt = dt0
t_end = steps*dt0
t = 0.0
viewer = Viewer(vars = sw, datamax=1.1, datamin=-0.1)
while t<t_end:
eq_p.solve(var=p)
eq_sw.solve(var=sw, dt=dt0)
sw.value[sw.value>1-sor]=1-sor
sw.value[sw.value<swc]=swc
p.updateOld()
sw.updateOld()
u_w = -k*krw(sw_face)/muw*p.faceGrad
sw_face = FaceVariable(mesh, upwindValues(mesh, sw, u_w))
sw_face.value[0] = 1.0
eq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw_face)/muw+kro(sw_face)/muo)) == 0
eq_sw = TransientTerm(coeff=phi, var=sw) + (-k*krw(sw_face)/muw*p.faceGrad).divergence == 0
t=t+dt0
# Note: try to use the Appleyard method; the overflow is a result of wrong rel-perm values
viewer.plot()
sw_face.value[0] =1.0
sw_face.value
0.5>1-0.6
upwindValues(mesh, sw, u_w)
"""
Explanation: Equations
$$\nabla\cdot\left(\left(-\frac{k_{rw} k}{\mu_w}-\frac{k_{ro} k}{\mu_o} \right)\nabla p \right)=0$$ and
$$\varphi \frac{\partial S_w}{\partial t}+\nabla\cdot\left(-\frac{k_{rw} k}{\mu_w} \nabla p \right)=0$$
End of explanation
"""
import fractional_flow as ff
xt_shock, sw_shock, xt_prf, sw_prf, t, p_inj, R_oil = ff.frac_flow_wf(muw=muw, muo=muo, ut=u, phi=1.0, \
k=1e-12, swc=swc, sor=sor, kro0=kro0, no=no, krw0=krw0, \
nw=nw, sw0=swc, sw_inj=1.0, L=Lx, pv_inj=5.0)
plt.figure()
plt.plot(xt_prf, sw_prf)
plt.plot(x.value.squeeze()/(steps*dt), sw.value)
plt.show()
"""
Explanation: Analytical solution
End of explanation
"""
|
quantopian/research_public
|
notebooks/lectures/Hypothesis_Testing/questions/notebook.ipynb
|
apache-2.0
|
# Useful Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t
import scipy.stats
"""
Explanation: Exercises: Hypothesis Testing
By Christopher van Hoecke and Maxwell Margenot
Lecture Link
https://www.quantopian.com/lectures/hypothesis-testing
IMPORTANT NOTE:
This lecture corresponds to the Hypothesis Testing lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
When you feel comfortable with the topics presented here, see if you can create an algorithm that qualifies for the Quantopian Contest. Participants are evaluated on their ability to produce risk-constrained alpha and the top 10 contest participants are awarded cash prizes on a daily basis.
https://www.quantopian.com/contest
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
"""
prices1 = get_pricing('TSLA', start_date = '2015-01-01', end_date = '2016-01-01', fields = 'price')
returns_sample_tsla = prices1.pct_change()[1:]
print 'Tesla return sample mean', returns_sample_tsla.mean()
print 'Tesla return sample standard deviation', returns_sample_tsla.std()
print 'Tesla return sample size', len(returns_sample_tsla)
"""
Explanation: Exercise 1: Hypothesis Testing.
a. One tail test.
Using the techniques laid out in lecture, verify if we can state that the returns of TSLA are greater than 0.
- Start by stating the null and alternative hypothesis
- Are we dealing with a one or two tailed test? Why?
- Calculate the mean differences, and the Z-test using the formula provided in class.
- Recall: This is a one parameter test, use the appropriate Z-test
- Use the stat library to calculate the associated p value with your t statistic.
- Compare your found p-value to the set $\alpha$ value, and conclude.
Useful Formulas:
$$ \text{Test statistic} = \frac{\bar{X}_\mu - \theta_0}{s_{\bar{X}}} = \frac{\bar{X}_\mu - 0}{s/\sqrt{n}} $$
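As a generic illustration of this formula (a sketch, not necessarily the exact approach intended by the lab solution), the statistic and its one-tailed p-value could be computed on the TSLA returns loaded in the accompanying cell:

```python
# sketch: one-sample test of "mean return > 0" on the TSLA returns series
n = len(returns_sample_tsla)
test_stat = (returns_sample_tsla.mean() - 0) / (returns_sample_tsla.std() / np.sqrt(n))
p_val = 1 - scipy.stats.norm.cdf(test_stat)  # upper-tail p-value
print 't-statistic is:', test_stat
print 'p-value is: ', p_val
```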
End of explanation
"""
# Testing
## Your code goes here
## Sample mean difference:
## Z- Statistic:
print 't-statistic is:', test_stat
## Finding the p-value for one tail test
print 'p-value is: ', p_val
"""
Explanation: Write your hypotheses here:
End of explanation
"""
## Your code goes here
## Sample mean difference:
## Z- Statistic:
print 't-statistic is:', test_stat
## Finding the p-value for one tail test
print 'p-value is: ', p_val
"""
Explanation: b. Two tailed test.
Using the techniques laid out in lecture, verify if we can state that the returns of TSLA are equal to 0.
- Start by stating the null and alternative hypothesis
- Are we dealing with a one or two tailed test? Why?
- Calculate the mean differences, and the Z-test using the formula provided in class.
- Recall: This is a one parameter test, use the appropriate Z-test
- Use the stat library to calculate the associated p value with your t statistic.
- Compare your found p-value to the set $\alpha$ value, and conclude.
Hypotheses.
<center>Your answer goes here</center>
End of explanation
"""
## Your code goes here
# For alpha = 10%
alpha = 0.1
f =
print 'alpha = 10%: f = ', f
# For alpha = 5%
alpha = 0.05
f =
print 'alpha = 5%: f = ', f
# For alpha = 1%
alpha = 0.01
f =
print 'alpha = 1%: f = ', f
# Plot a standard normal distribution and mark the critical regions with shading
x = np.linspace(-3, 3, 100)
norm_pdf = lambda x: (1/np.sqrt(2 * np.pi)) * np.exp(-x * x / 2)
y = norm_pdf(x)
fig, ax = plt.subplots(1, 1, sharex=True)
ax.plot(x, y)
# Value for alpha = 1%
ax.fill_between(x, 0, y, where = x > ## Your code goes here
, label = 'alpha = 10%')
ax.fill_between(x, 0, y, where = x < ) ## Your code goes here
# Value for alpha = 5%
ax.fill_between(x, 0, y, where = x > ## Your code goes here
, color = 'red', label = 'alpha = 5%')
ax.fill_between(x, 0, y, where = x < ## Your code goes here
, color = 'red')
#Value for alpha = 10%
ax.fill_between(x, 0, y, where = x > ## Your code goes here
, facecolor='green', label = 'alpha = 1%')
ax.fill_between(x, 0, y, where = x < ## Your code goes here
, facecolor='green')
plt.title('Rejection regions for a two-tailed hypothesis test at 90%, 95%, 99% confidence')
plt.xlabel('x')
plt.ylabel('p(x)')
plt.legend();
"""
Explanation: Exercise 2:
a. Critical Values.
Find the critical values associated with $\alpha = 1\%, 5\%, 10\%$ and graph the rejection regions on a plot for a two tailed test.
Useful formulas:
$$ f = 1 - \frac{\alpha}{2} $$
In order to find the z-value associated with each f value use the z-table here.
You can read more about how to read z-tables here
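Rather than reading the values off a z-table, the same critical values can be checked programmatically (a sketch using the scipy.stats import already at the top of this notebook):

```python
# sketch: two-tailed critical z-values for several alpha levels
for alpha in (0.1, 0.05, 0.01):
    f = 1 - alpha / 2
    z_crit = scipy.stats.norm.ppf(f)  # rejection region is |z| > z_crit
    print 'alpha =', alpha, ': f =', f, ', critical z =', z_crit
```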
End of explanation
"""
# Calculating Critical Values probability
alpha = 0.1
f = ## Your code goes here
print f
data = get_pricing('SPY', start_date = '2016-01-01', end_date = '2017-01-01', fields = 'price')
returns_sample = data.pct_change()[1:]
# Running the T-test.
n = len(returns_sample)
test_statistic = ## Your code goes here
print 't test statistic: ', test_statistic
"""
Explanation: b. Mean T-Test
Run a T-test on the SPY returns, to determine if the mean returns is 0.01.
- Find the two critical values for a 90% two tailed $z$-test
- Use the formula above to run a t-test on the sample data.
- Conclude about the test results.
End of explanation
"""
# Running p-value test.
alpha = 0.1
p_val = ## Your code goes here
print 'p-value is: ', p_val
"""
Explanation: c. Mean p-value test
Given the returns data above, use the p-value to determine the results of the previous hypothesis test.
End of explanation
"""
# Data Collection
alpha = 0.1
symbol_list = ['XLF', 'MCD']
start = '2015-01-01'
end = '2016-01-01'
pricing_sample = get_pricing(symbol_list, start_date = start, end_date = end, fields='price')
pricing_sample.columns = map(lambda x: x.symbol, pricing_sample.columns)
returns_sample = pricing_sample.pct_change()[1:]
# Sample mean values
mu_xlf, mu_gs = returns_sample.mean()
s_xlf, s_gs = returns_sample.std()
n_xlf = len(returns_sample['XLF'])
n_gs = len(returns_sample['MCD'])
test_statistic = ## Your code goes here
df = ## Your code goes here
print 't test statistic: ', test_statistic
print 'Degrees of freedom (modified): ', df
print 'p-value: ', ## Your code goes here
"""
Explanation: Exercise 3: Multiple Variables Tests.
a. Hypothesis testing on Means.
State the hypothesis tests for comparing two means
Find the test statistic along with the degrees of freedom for the following two assets. Assume the variances are different (we assume XLF to be a safer buy than MCD).
Use the t-table to conclude about your hypothesis test. Pick $\alpha = 10\%$
Useful Formulas:
$$ t = \frac{\bar{X}_1 - \bar{X}_2}{(\frac{s_p^2}{n_1} + \frac{s_p^2}{n_2})^{1/2}}$$
$$ t = \frac{\bar{X}_1 - \bar{X}_2}{(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2})^{1/2}}$$
$$df = \frac{(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2})^2}{(s_1^2/n_1)^2/(n_1-1) + (s_2^2/n_2)^2/(n_2-1)}$$
note: one formula for t involves equal variance, the other does not. Use the right one given the information above
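For a cross-check of whatever you compute by hand, scipy implements the unequal-variance (Welch) version of this test directly (a sketch, using the returns_sample dataframe built in the accompanying cell):

```python
# sketch: Welch's t-test on the two return series
t_stat, p_value = scipy.stats.ttest_ind(returns_sample['XLF'], returns_sample['MCD'],
                                        equal_var=False)
print 'scipy t statistic:', t_stat
print 'scipy p-value:', p_value
```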
Write your hypotheses here:
End of explanation
"""
# Data
symbol_list = ['XLF', 'MCD']
start = "2015-01-01"
end = "2016-01-01"
pricing_sample = get_pricing(symbol_list, start_date = start, end_date = end, fields = 'price')
pricing_sample.columns = map(lambda x: x.symbol, pricing_sample.columns)
returns_sample = pricing_sample.pct_change()[1:]
# Take returns from above, MCD and XLF, and compare their variances
## Your code goes here
print 'XLF standard deviation is: ', xlf_std_dev
print 'MCD standard deviation is: ', mcd_std_dev
# Calculate F-test with MCD.std > XLF.std
## Your code goes here
print "F Test statistic: ", test_statistic
#degree of freedom
df1 = ## Your code goes here
df2 = ## Your code goe here
print df1
print df2
# Calculate critical values.
from scipy.stats import f
upper_crit_value = f.ppf(0.975, df1, df2)
lower_crit_value = f.ppf(0.025, df1, df2)
print 'Upper critical value at a = 0.05 with df1 = {0} and df2 = {1}: '.format(df1, df2), upper_crit_value
print 'Lower critical value at a = 0.05 with df1 = {0} and df2 = {1}: '.format(df1, df2), lower_crit_value
"""
Explanation: b. Hypothesis Testing on Variances.
State the hypothesis tests for comparing two variances.
Calculate the returns and compare their variances.
Calculate the F-test using the variances
Check that both values have the same degrees of freedom.
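One possible way to set this up (a sketch only, with the larger sample variance in the numerator by convention):

```python
# sketch: F statistic as the ratio of the two sample variances
s_mcd = returns_sample['MCD'].std()
s_xlf = returns_sample['XLF'].std()
F = (s_mcd / s_xlf)**2 if s_mcd > s_xlf else (s_xlf / s_mcd)**2
df_mcd = len(returns_sample['MCD']) - 1
df_xlf = len(returns_sample['XLF']) - 1
print 'F statistic:', F
```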
Write your hypotheses here:
End of explanation
"""
|
adamwang0705/cross_media_affect_analysis
|
develop/20171019-daheng-build_shed_words_freq_dicts.ipynb
|
mit
|
"""
Initialization
"""
'''
Standard modules
'''
import os
import pickle
import csv
import time
from pprint import pprint
'''
Analysis modules
'''
import pandas as pd
'''
Custom modules
'''
import config
import utilities
'''
Misc
'''
nb_name = '20171019-daheng-build_shed_words_freq_dicts'
"""
Explanation: Build selected Hedonometer words frequency dicts for topic_news and topic_tweets docs
Last modified: 2017-10-23
Roadmap
Check shed words pattern-matching requirements
Build shed words freq dicts for topic docs
Steps
End of explanation
"""
"""
Check all shed words
"""
if 1 == 1:
ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL)
print(ind_shed_word_dict.values())
"""
Explanation: Check shed words pattern-matching requirements
Ref:
- Dodds, P. S., Harris, K. D., Kloumann, I. M., Bliss, C. A., & Danforth, C. M. (2011). Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter. PloS one, 6(12), e26752.
Notes:
- See 2.1 Algorithm for Hedonometer P3
- See Methods P23
Build shed words freq dicts for topic docs
End of explanation
"""
%%time
"""
Build single shed words freq dict for all topic_news docs
Register
TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL = os.path.join(DATA_DIR, 'topics_news_shed_words_freq.dict.pkl')
in config
"""
if 0 == 1:
topics_news_shed_words_freq_dict = {}
for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST):
localtime = time.asctime(time.localtime(time.time()))
print('({}/{}) processing topic: {} ... {}'.format(topic_ind+1,
len(config.MANUALLY_SELECTED_TOPICS_LST),
topic['name'],
localtime))
topic_shed_words_freq_dict = {}
'''
Load shed_word and shed_word_ind mapping pkls
'''
ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL)
shed_word_ind_dict = pd.read_pickle(config.SHED_WORD_IND_DICT_PKL)
shed_words_set = set(ind_shed_word_dict.values())
'''
Load topic_news doc
'''
csv.register_dialect('topics_docs_line', delimiter='\t', doublequote=True, quoting=csv.QUOTE_ALL)
topic_news_csv_file = os.path.join(config.TOPICS_DOCS_DIR, '{}-{}.news.csv'.format(topic_ind, topic['name']))
with open(topic_news_csv_file, 'r') as f:
reader = csv.DictReader(f, dialect='topics_docs_line')
'''
Count shed words freq for each tweet
'''
# lazy load
for row in reader:
news_native_id = int(row['news_native_id'])
news_doc = row['news_doc']
news_doc_shed_words_freq_dict = utilities.count_news_doc_shed_words_freq(news_doc, ind_shed_word_dict, shed_word_ind_dict, shed_words_set)
topic_shed_words_freq_dict[news_native_id] = news_doc_shed_words_freq_dict
topics_news_shed_words_freq_dict[topic_ind] = topic_shed_words_freq_dict
'''
Make pkl for result single dict
'''
with open(config.TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL, 'wb') as f:
pickle.dump(topics_news_shed_words_freq_dict, f)
"""
Explanation: Build single shed words freq dict for topic_news docs
Result single dict format (for all topic_news docs)
{topic_ind_0: {
news_native_id_0_0: {shed_word_0_ind: shed_word_0_freq,
shed_word_1_ind: shed_word_1_freq,
...},
news_native_id_0_1: {shed_word_0_ind: shed_word_0_freq,
shed_word_1_ind: shed_word_1_freq,
...},
...},
topic_ind_1: {
news_native_id_1_0: {shed_word_0_ind: shed_word_0_freq,
shed_word_1_ind: shed_word_1_freq,
...},
news_native_id_1_1: {shed_word_0_ind: shed_word_0_freq,
shed_word_1_ind: shed_word_1_freq,
...},
...},
...}
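For example, retrieving how often one shed word appears in one article is then a nested lookup (a sketch; the three index variables are placeholders for actual keys):

```python
# freq of shed word `shed_word_ind` in article `news_native_id` under topic `topic_ind`
freq = topics_news_shed_words_freq_dict[topic_ind][news_native_id].get(shed_word_ind, 0)
```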
Build single shed words freq dict for all topic_news docs
End of explanation
"""
"""
Print out sample news shed_words_freq_dicts inside single topic
"""
if 0 == 1:
target_topic_ind = 0
with open(config.TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL, 'rb') as f:
topics_news_shed_words_freq_dict = pickle.load(f)
count = 0
for news_native_id, news_doc_shed_words_freq_dict in topics_news_shed_words_freq_dict[target_topic_ind].items():
print('news_native_id: {}'.format(news_native_id))
print('\t{}'.format(news_doc_shed_words_freq_dict))
news_doc_shed_words_len = sum(news_doc_shed_words_freq_dict.values())
print('\tLEN: {}'.format(news_doc_shed_words_len))
count += 1
if count >= 5:
break
%%time
"""
Check total shed words length of this topic_news doc
"""
if 0 == 1:
topic_news_shed_words_len = sum([sum(news_doc_shed_words_freq_dict.values()) for news_doc_shed_words_freq_dict in topics_news_shed_words_freq_dict[target_topic_ind].values()])
print('Total shed words length of this topic_news doc: {}'.format(topic_news_shed_words_len))
"""
Explanation: Check basic statistics
End of explanation
"""
%%time
"""
Build shed words freq dict for each topic separately
Register
TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR = os.path.join(DATA_DIR, 'topics_tweets_shed_words_freq_dict_pkls')
in config
Note:
- Number of tweets is large. Process each topic_tweets doc individually to avoid crash
- Execute second time for updated topic_tweets docs
"""
if 0 == 1:
for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST):
localtime = time.asctime(time.localtime(time.time()))
print('({}/{}) processing topic: {} ... {}'.format(topic_ind+1,
len(config.MANUALLY_SELECTED_TOPICS_LST),
topic['name'],
localtime))
topic_shed_words_freq_dict = {}
'''
Load shed_word and shed_word_ind mapping pkls
'''
ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL)
shed_word_ind_dict = pd.read_pickle(config.SHED_WORD_IND_DICT_PKL)
shed_words_set = set(ind_shed_word_dict.values())
'''
Load topic_tweets doc
'''
csv.register_dialect('topics_docs_line', delimiter='\t', doublequote=True, quoting=csv.QUOTE_ALL)
topic_tweets_csv_file = os.path.join(config.TOPICS_DOCS_DIR, '{}-{}.updated.tweets.csv'.format(topic_ind, topic['name']))
with open(topic_tweets_csv_file, 'r') as f:
reader = csv.DictReader(f, dialect='topics_docs_line')
'''
Count shed words freq for each tweet
'''
# lazy load
for row in reader:
tweet_id = int(row['tweet_id'])
tweet_text = row['tweet_text']
tweet_shed_words_freq_dict = utilities.count_tweet_shed_words_freq(tweet_text, ind_shed_word_dict, shed_word_ind_dict, shed_words_set)
topic_shed_words_freq_dict[tweet_id] = tweet_shed_words_freq_dict
'''
Make pkl for result dict file
'''
topic_tweets_shed_words_freq_dict_pkl_file = os.path.join(config.TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR,
'{}.updated.dict.pkl'.format(topic_ind))
with open(topic_tweets_shed_words_freq_dict_pkl_file, 'wb') as f:
pickle.dump(topic_shed_words_freq_dict, f)
"""
Explanation: Build shed words freq dicts for each topic_tweets doc separately
Result dict format (for each given topic_tweets doc)
{tweet_id_0_0: {shed_word_0_ind: shed_word_0_freq,
shed_word_1_ind: shed_word_1_freq,
...},
tweet_id_0_1: {shed_word_0_ind: shed_word_0_freq,
shed_word_1_ind: shed_word_1_freq,
...},
...}
Build shed words freq dict for each topic separately
End of explanation
"""
%%time
"""
Print out sample tweet shed_words_freq_dicts inside single topic
"""
if 0 == 1:
target_topic_ind = 0
topic_tweets_shed_words_freq_dict_pkl_file = os.path.join(config.TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR, '{}.updated.dict.pkl'.format(target_topic_ind))
with open(topic_tweets_shed_words_freq_dict_pkl_file, 'rb') as f:
topic_tweets_shed_words_freq_dict_tmp = pickle.load(f)
count = 0
for tweet_id, tweet_shed_words_freq_dict in topic_tweets_shed_words_freq_dict_tmp.items():
print('tweet_id: {}'.format(tweet_id))
print('\t{}'.format(tweet_shed_words_freq_dict))
tweet_shed_words_len = sum(tweet_shed_words_freq_dict.values())
print('\tLEN: {}'.format(tweet_shed_words_len))
count += 1
if count >= 20:
break
%%time
"""
Check total shed words length of a topic_tweets doc
"""
if 0 == 1:
topic_tweets_shed_words_len = sum([sum(tweet_shed_words_freq_dict.values()) for tweet_shed_words_freq_dict in topic_tweets_shed_words_freq_dict_tmp.values()])
print('Total shed words length of this topic_tweets_doc: {}'.format(topic_tweets_shed_words_len))
"""
Explanation: Check basic statistics
End of explanation
"""
|
pagutierrez/tutorial-sklearn
|
notebooks-spanish/02-herramientas_cientificas_python.ipynb
|
cc0-1.0
|
import numpy as np
# Random number seed (for reproducibility)
rnd = np.random.RandomState(seed=123)
# Generate a random matrix
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # 3x5 dimensions
print(X)
"""
Explanation: Jupyter Notebooks
You can run a Cell by pressing [shift] + [Enter] or by clicking the Play button in the toolbar.
You can get help on a function or object by pressing [shift] + [tab] after the opening parenthesis function(
You can also get help by running function?
Numpy arrays
Manipulating numpy arrays is a very important part of machine learning in Python (in fact, of any kind of scientific computing). This will be a review for most of you. In any case, let's go over the most important features.
End of explanation
"""
# Accessing elements
# Get a single element
# (first row, first column)
print(X[0, 0])
# Get a row
# (second row)
print(X[1])
# Get a column
# (second column)
print(X[:, 1])
# Get the transpose
print(X.T)
"""
Explanation: (note that numpy arrays are indexed starting at 0, like most structures in Python)
End of explanation
"""
# Create a row vector of evenly spaced numbers
# over a given interval
y = np.linspace(0, 12, 5)
print(y)
# Turn the row vector into a column vector
print(y[:, np.newaxis])
# Get the shape of an array and change it
# Generate a random array
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # a 3 x 5 array
print(X)
print(X.shape)
print(X.reshape(5, 3))
# Index with an array of integers
indices = np.array([3, 1, 0])
print(indices)
X[:, indices]
"""
Explanation: $$\begin{bmatrix}
1 & 2 & 3 & 4 \\
5 & 6 & 7 & 8
\end{bmatrix}^T
=
\begin{bmatrix}
1 & 5 \\
2 & 6 \\
3 & 7 \\
4 & 8
\end{bmatrix}
$$
End of explanation
"""
from scipy import sparse
# Create a matrix of random values between 0 and 1
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(10, 5))
print(X)
# Set most of the elements to zero
X[X < 0.7] = 0
print(X)
# Turn X into a CSR (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)
# Convert the CSR matrix back to a dense matrix
print(X_csr.toarray())
"""
Explanation: There is a lot more to learn, but this covers some of the fundamental things that will come up in this course.
SciPy sparse matrices
We won't use sparse matrices much, but they are very useful in many situations. In some machine learning tasks, especially those associated with text analysis, the data is almost always zeros. Storing all those zeros is very inefficient, whereas representing these matrices in a way that only stores the non-zero entries is much more efficient. We can create and manipulate sparse matrices as follows:
End of explanation
"""
# Create an empty LIL matrix and add some elements
X_lil = sparse.lil_matrix((5, 5))
for i, j in np.random.randint(0, 5, (15, 2)):
X_lil[i, j] = i + j
print(X_lil)
print(type(X_lil))
X_dense = X_lil.toarray()
print(X_dense)
print(type(X_dense))
"""
Explanation: (you may come across an alternative way of converting sparse matrices to dense ones: numpy.todense; toarray returns a numpy array, while todense returns a numpy matrix. In this tutorial we will work with numpy arrays, not matrices, since the latter are not supported by scikit-learn.)
The CSR representation can be very efficient for computation, but it is not so good for adding elements. For that, the LIL (List-In-List) representation is better:
End of explanation
"""
X_csr = X_lil.tocsr()
print(X_csr)
print(type(X_csr))
"""
Explanation: Often, once the LIL matrix has been created, it is useful to convert it to the CSR format (many scikit-learn algorithms require CSR format)
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
# Plot a line
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x));
# Draw a scatter plot
x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y);
# Show images using imshow
# - Note that, by default, the origin is at the top left
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]
im = y * np.sin(x) * np.cos(y)
print(im.shape)
plt.imshow(im);
# Make a contour plot
# - Here the origin is at the bottom left
plt.contour(im);
# The "widget" mode, instead of inline, makes the plots interactive
%matplotlib widget
# 3D plot
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
xgrid, ygrid = np.meshgrid(x, y.ravel())
ax.plot_surface(xgrid, ygrid, im, cmap=plt.cm.viridis, cstride=2, rstride=2, linewidth=0);
"""
Explanation: The available sparse formats that can be useful for different problems are:
- CSR (compressed sparse row).
- CSC (compressed sparse column).
- BSR (block sparse row).
- COO (coordinate).
- DIA (diagonal).
- DOK (dictionary of keys).
- LIL (list in list).
The scipy.sparse package has quite a few functions for sparse matrices, including linear algebra, graph algorithms and much more.
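As a quick illustration of one of these, the COO (coordinate) format can be built directly from (row, column, value) triplets; the numbers below are just an assumed example:

```python
from scipy import sparse
import numpy as np

rows = np.array([0, 1, 3])
cols = np.array([2, 0, 4])
vals = np.array([7.0, 1.5, 2.0])
X_coo = sparse.coo_matrix((vals, (rows, cols)), shape=(5, 5))
print(X_coo.toarray())
```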
matplotlib
Another very important part of machine learning is data visualization. The most common tool for this in Python is matplotlib. It is an extremely flexible package, and we will now look at some basic elements.
Since we are using Jupyter notebooks, we will use one of the magic functions that come with IPython, the "matplotlib inline" mode, which will draw the plots directly in the notebook.
End of explanation
"""
# %load http://matplotlib.org/mpl_examples/pylab_examples/ellipse_collection.py
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import EllipseCollection
x = np.arange(10)
y = np.arange(15)
X, Y = np.meshgrid(x, y)
XY = np.hstack((X.ravel()[:, np.newaxis], Y.ravel()[:, np.newaxis]))
ww = X/10.0
hh = Y/15.0
aa = X*9
fig, ax = plt.subplots()
ec = EllipseCollection(ww, hh, aa, units='x', offsets=XY,
transOffset=ax.transData)
ec.set_array((X + Y).ravel())
ax.add_collection(ec)
ax.autoscale_view()
ax.set_xlabel('X')
ax.set_ylabel('y')
cbar = plt.colorbar(ec)
cbar.set_label('X+Y')
plt.show()
"""
Explanation: There are many types of plots available. A useful way to explore them is to look at the matplotlib gallery.
You can easily try these examples in the notebook: just copy the Source Code link from each page and paste it into the notebook using the %load magic command.
For example:
End of explanation
"""
|
jobovy/misc-notebooks
|
inference/ABC-examples.ipynb
|
bsd-3-clause
|
data= ['H','H']
outcomes= ['T','H']
def coin_ABC():
while True:
h= numpy.random.uniform()
flips= numpy.random.binomial(1,h,size=2)
if outcomes[flips[0]] == data[0] \
and outcomes[flips[1]] == data[1]:
yield h
hsamples= []
start= time.time()
for h in coin_ABC():
hsamples.append(h)
if time.time() > start+2.: break
print "Obtained %i samples" % len(hsamples)
dum= hist(hsamples,bins=31,lw=2.,normed=True,zorder=0)
plot(numpy.linspace(0.,1.,1001),numpy.linspace(0.,1.,1001)**2.*3.,lw=3.)
xlabel(r'$h$')
ylabel(r'$p(h|D)$')
"""
Explanation: Examples of ABC inference
Coin flip with two flips
We've flipped a coin twice and gotten heads twice. What is the probability for getting heads?
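For reference, the exact posterior that the accepted samples should approximate follows from Bayes' rule with a flat prior on $h$:
$$ p(h|D) \propto p(D|h)\,p(h) = h^2 \quad \Rightarrow \quad p(h|D) = 3h^2, $$
which is the analytic curve drawn over the histogram in the accompanying cell.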
End of explanation
"""
data= ['T', 'H', 'H', 'T', 'T', 'H', 'H', 'T', 'H', 'H']
def coin_ABC_10flips():
while True:
h= numpy.random.uniform()
flips= numpy.random.binomial(1,h,size=len(data))
if outcomes[flips[0]] == data[0] \
and outcomes[flips[1]] == data[1] \
and outcomes[flips[2]] == data[2] \
and outcomes[flips[3]] == data[3] \
and outcomes[flips[4]] == data[4] \
and outcomes[flips[5]] == data[5] \
and outcomes[flips[6]] == data[6] \
and outcomes[flips[7]] == data[7] \
and outcomes[flips[8]] == data[8] \
and outcomes[flips[9]] == data[9]:
yield h
hsamples= []
start= time.time()
for h in coin_ABC_10flips():
hsamples.append(h)
if time.time() > start+2.: break
print "Obtained %i samples" % len(hsamples)
dum= hist(hsamples,bins=31,lw=2.,normed=True,zorder=0)
xs= numpy.linspace(0.,1.,1001)
ys= xs**numpy.sum([d == 'H' for d in data])*(1.-xs)**numpy.sum([d == 'T' for d in data])
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys,lw=3.)
xlabel(r'$h$')
ylabel(r'$p(h|D)$')
"""
Explanation: Coin flip with 10 flips
Same with 10 flips, still matching the entire sequence:
End of explanation
"""
sufficient_data= numpy.sum([d == 'H' for d in data])
def coin_ABC_10flips_suff():
while True:
h= numpy.random.uniform()
flips= numpy.random.binomial(1,h,size=len(data))
if numpy.sum(flips) == sufficient_data:
yield h
hsamples= []
start= time.time()
for h in coin_ABC_10flips_suff():
hsamples.append(h)
if time.time() > start+2.: break
print "Obtained %i samples" % len(hsamples)
dum= hist(hsamples,bins=31,lw=2.,normed=True,zorder=0)
xs= numpy.linspace(0.,1.,1001)
ys= xs**numpy.sum([d == 'H' for d in data])*(1.-xs)**numpy.sum([d == 'T' for d in data])
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys,lw=3.)
xlabel(r'$h$')
ylabel(r'$p(h|D)$')
"""
Explanation: Using a sufficient statistic instead:
End of explanation
"""
data= numpy.random.normal(size=100)
def Var_ABC(threshold=0.05):
while True:
v= numpy.random.uniform()*4
sim= numpy.random.normal(size=len(data))*numpy.sqrt(v)
d= numpy.fabs(numpy.var(sim)-numpy.var(data))
if d < threshold:
yield v
vsamples= []
start= time.time()
for v in Var_ABC(threshold=0.05):
vsamples.append(v)
if time.time() > start+2.: break
print "Obtained %i samples" % len(vsamples)
h= hist(vsamples,range=[0.,2.],bins=51,normed=True)
xs= numpy.linspace(0.001,2.,1001)
ys= xs**(-len(data)/2.)*numpy.exp(-1./xs/2.*len(data)*(numpy.var(data)+numpy.mean(data)**2.))
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys,lw=2.)
"""
Explanation: Variance of a Gaussian with zero mean
Now we infer the variance of a Gaussian with zero mean using ABC:
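The analytic posterior overplotted in the accompanying cell corresponds to a flat prior on $V$ (uniform over the sampled range) and the zero-mean Gaussian likelihood of the $n=100$ data points:
$$ p(V|D) \propto V^{-n/2} \exp\left(-\frac{\sum_i x_i^2}{2V}\right), $$
where $\sum_i x_i^2 = n\,(\mathrm{var}(x) + \bar{x}^2)$, matching the expression used in the plotting code.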
End of explanation
"""
vsamples= []
start= time.time()
for v in Var_ABC(threshold=1.5):
vsamples.append(v)
if time.time() > start+2.: break
print "Obtained %i samples" % len(vsamples)
h= hist(vsamples,range=[0.,2.],bins=51,normed=True)
xs= numpy.linspace(0.001,2.,1001)
ys= xs**(-len(data)/2.)*numpy.exp(-1./xs/2.*len(data)*(numpy.var(data)+numpy.mean(data)**2.))
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys,lw=2.)
"""
Explanation: If we raise the threshold too much, we sample simply from the prior:
End of explanation
"""
vsamples= []
start= time.time()
for v in Var_ABC(threshold=0.001):
vsamples.append(v)
if time.time() > start+2.: break
print "Obtained %i samples" % len(vsamples)
h= hist(vsamples,range=[0.,2.],bins=51,normed=True)
xs= numpy.linspace(0.001,2.,1001)
ys= xs**(-len(data)/2.)*numpy.exp(-1./xs/2.*len(data)*(numpy.var(data)+numpy.mean(data)**2.))
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys,lw=2.)
"""
Explanation: And if we make the threshold too small, we don't get many samples:
End of explanation
"""
|
borja876/Thinkful-DataScience-Borja
|
Describe+the+effects+of+age+on+hearing.ipynb
|
mit
|
import math
#odds of hearing problems in a 95 year old woman
a = -1+ 0.02*95 + 1*0
c = math.exp( a )
d = math.exp( a )/(1+ c)
print('Probability of having hearing problems over not having them:', c)
print('HashearingProblem:', d)
"""
Explanation: Write out a description of the effects that age and gender have on the odds of developing hearing problems in terms a layperson could understand. Include estimates for the odds of hearing problems in a 95 year old woman and a 50 year old man.
logit(HasHearingProblem)=−1+.02∗age+1∗male
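A small helper that applies the same transformation for any age/gender combination (a sketch built directly from the coefficients above) keeps the two calculations in the accompanying cells consistent:

```python
import math

def p_hearing_problem(age, male):
    # probability of hearing problems implied by the logit model above
    logit = -1 + 0.02 * age + 1 * male
    odds = math.exp(logit)        # odds of having vs. not having hearing problems
    return odds / (1 + odds)      # convert odds to a probability

print('95 year old woman:', p_hearing_problem(95, 0))
print('50 year old man:', p_hearing_problem(50, 1))
```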
End of explanation
"""
#odds of hearing problems in a 50 year old man
b = -1+.02*50+1*1
c = math.exp( b )
d = math.exp( b )/(1+ c)
print('Probability of having hearing problems over not having them:', c)
print('HashearingProblem:', d)
"""
Explanation: The odds of a 95 year old woman having hearing problems are nearly 2.5 (i.e. she is nearly 2.5 times more likely to have hearing problems than not), which corresponds to a probability of about 71%.
End of explanation
"""
|
mjones01/NEON-Data-Skills
|
code/Python/remote-sensing/hyperspectral-data/Plot_Spectral_Signature_Tiles_py.ipynb
|
agpl-3.0
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore') #don't display warnings
"""
Explanation: syncID: c91d556c8fad4570a33a1aaa550a561d
title: "Plot a Spectral Signature in Python - Tiled Data"
description: "Learn how to extract and plot a spectral profile from a single pixel of a reflectance band using NEON tiled hyperspectral data."
dateCreated: 2018-07-04
authors: Bridget Hass
contributors:
estimatedTime:
packagesLibraries: numpy, pandas, gdal, matplotlib, h5py,IPython.display
topics: hyperspectral-remote-sensing, HDF5, remote-sensing
languagesTool: python
dataProduct: NEON.DP3.30006, NEON.DP3.30008
code1: Python/remote-sensing/hyperspectral-data/Plot_Spectral_Signature_Tiles_py.ipynb
tutorialSeries: intro-hsi-tiles-py-series
urlTitle: plot-spec-sig-tiles-python
In this tutorial, we will learn how to extract and plot a spectral profile
from a single pixel of a reflectance band in a NEON hyperspectral HDF5 file.
This tutorial uses the mosaiced or tiled NEON data product. For a tutorial
using the flightline data, please see <a href="/plot-spec-sig-python" target="_blank"> Plot a Spectral Signature in Python - Flightline Data</a>.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Plot the spectral signature of a single pixel
* Remove bad band windows from a spectra
* Use a widget to interactively look at spectra of various pixels
* Calculate the mean spectra over multiple pixels
### Install Python Packages
* **numpy**
* **pandas**
* **gdal**
* **matplotlib**
* **h5py**
* **IPython.display**
### Download Data
{% include/dataSubsets/_data_DI18.html %}
To complete this tutorial, we will import a Python module containing several functions. This imports the functions behind the scenes, the same way you import standard Python packages. In order to import a module, it must be located in the same directory as where you are running your notebook.
[[nid:7489]]
</div>
In this exercise, we will learn how to extract and plot a spectral profile from a single pixel of a reflectance band in a NEON hyperspectral hdf5 file. To do this, we will use the aop_h5refl2array function to read in and clean our h5 reflectance data, and the Python package pandas to create a dataframe for the reflectance and associated wavelength data.
Spectral Signatures
A spectral signature is a plot of the amount of light energy reflected by an object throughout the range of wavelengths in the electromagnetic spectrum. The spectral signature of an object conveys useful information about its structural and chemical composition. We can use these signatures to identify and classify different objects from a spectral image.
For example, vegetation has a distinct spectral signature.
<figure>
<a href="{{ site.baseurl }}/images/hyperspectral/vegetationSpectrum_MarkElowitz.png">
<img src="{{ site.baseurl }}/images/hyperspectral/vegetationSpectrum_MarkElowitz.png"></a>
<figcaption> Spectral signature of vegetation. Source: Mark Elowitz
</figcaption>
</figure>
Vegetation has a unique spectral signature characterized by high reflectance in the near infrared wavelengths, and much lower reflectance in the green portion of the visible spectrum. We can extract reflectance values in the NIR and visible spectrums from hyperspectral data in order to map vegetation on the earth's surface. You can also use spectral curves as a proxy for vegetation health. We will explore this concept more in the next lesson, where we will calculate vegetation indices (a small preview sketch follows the figure below).
<figure>
<a href="{{ site.baseurl }}/images/hyperspectral/ReflectanceCurves_waterVegSoil.png">
<img src="{{ site.baseurl }}/images/hyperspectral/ReflectanceCurves_waterVegSoil.png"></a>
<figcaption> Example spectra of water, green grass, dry grass, and soil. Source: National Ecological Observatory Network (NEON)
</figcaption>
</figure>
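As a preview of that idea (illustrative only; the band indices here are rough assumptions rather than values used in this tutorial), a simple vegetation index such as NDVI contrasts a near-infrared band with a red band:

```python
# sketch: NDVI from a reflectance cube `refl_array` with shape (y, x, band);
# band indices for red (~660 nm) and NIR (~850 nm) are assumed, not exact
red = refl_array[:, :, 58].astype(float)
nir = refl_array[:, :, 90].astype(float)
ndvi = (nir - red) / (nir + red)
```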
End of explanation
"""
import neon_aop_hyperspectral as neon_hs
sercRefl, sercRefl_md = neon_hs.aop_h5refl2array('../data/Day1_Hyperspectral_Intro/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
"""
Explanation: Import the hyperspectral functions into the variable neon_hs (for neon hyperspectral):
End of explanation
"""
sercb56 = sercRefl[:,:,55]
neon_hs.plot_aop_refl(sercb56,
sercRefl_md['spatial extent'],
colorlimit=(0,0.3),
title='SERC Tile Band 56',
cmap_title='Reflectance',
colormap='gist_earth')
"""
Explanation: Optionally, you can view the data stored in the metadata dictionary, and print the minimum, maximum, and mean reflectance values in the tile. In order to handle any nan values, use Numpy nanmin nanmax and nanmean.
```python
for item in sorted(sercRefl_md):
print(item + ':',sercRefl_md[item])
print('SERC Tile Reflectance Stats:')
print('min:',np.nanmin(sercRefl))
print('max:',round(np.nanmax(sercRefl),2))
print('mean:',round(np.nanmean(sercRefl),2))
```
For reference, plot the red band of the tile, using slicing, and the plot_aop_refl function:
End of explanation
"""
import pandas as pd
"""
Explanation: We can use pandas to create a dataframe containing the wavelength and reflectance values for a single pixel - in this example, we'll look at the center pixel of the tile (500,500).
End of explanation
"""
serc_pixel_df = pd.DataFrame()
serc_pixel_df['reflectance'] = sercRefl[500,500,:]
serc_pixel_df['wavelengths'] = sercRefl_md['wavelength']
"""
Explanation: To extract all reflectance values from a single pixel, use slicing as we did before to select a single band, but now we need to specify (y,x) and select all bands (using :).
End of explanation
"""
print(serc_pixel_df.head(5))
print(serc_pixel_df.tail(5))
"""
Explanation: We can preview the first and last five values of the dataframe using head and tail:
End of explanation
"""
serc_pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none')
plt.title('Spectral Signature for SERC Pixel (500,500)')
ax = plt.gca()
ax.set_xlim([np.min(serc_pixel_df['wavelengths']),np.max(serc_pixel_df['wavelengths'])])
ax.set_ylim([np.min(serc_pixel_df['reflectance']),np.max(serc_pixel_df['reflectance'])])
ax.set_xlabel("Wavelength, nm")
ax.set_ylabel("Reflectance")
ax.grid('on')
"""
Explanation: We can now plot the spectra, stored in this dataframe structure. pandas has a built in plotting routine, which can be called by typing .plot at the end of the dataframe.
End of explanation
"""
bbw1 = sercRefl_md['bad band window1'];
bbw2 = sercRefl_md['bad band window2'];
print('Bad Band Window 1:',bbw1)
print('Bad Band Window 2:',bbw2)
"""
Explanation: Water Vapor Band Windows
We can see from the spectral profile above that there are spikes in reflectance around ~1400nm and ~1800nm. These result from water vapor, which absorbs light between wavelengths 1340-1445 nm and 1790-1955 nm. The atmospheric correction that converts radiance to reflectance subsequently results in a spike at these two bands. The wavelengths of these water vapor bands are stored in the reflectance attributes, which are saved in the reflectance metadata dictionary created with h5refl2array:
End of explanation
"""
serc_pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
plt.title('Spectral Signature for SERC Pixel (500,500)')
ax1 = plt.gca(); ax1.grid('on')
ax1.set_xlim([np.min(serc_pixel_df['wavelengths']),np.max(serc_pixel_df['wavelengths'])]);
ax1.set_ylim(0,0.5)
ax1.set_xlabel("Wavelength, nm"); ax1.set_ylabel("Reflectance")
#Add in red dotted lines to show boundaries of bad band windows:
ax1.plot((1340,1340),(0,1.5), 'r--')
ax1.plot((1445,1445),(0,1.5), 'r--')
ax1.plot((1790,1790),(0,1.5), 'r--')
ax1.plot((1955,1955),(0,1.5), 'r--')
"""
Explanation: Below we repeat the plot we made above, but this time draw in the edges of the water vapor band windows that we need to remove.
End of explanation
"""
import copy
w = copy.copy(sercRefl_md['wavelength']) #make a copy to deal with the mutable data type
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan #can also use bbw1[0] or bbw1[1] to avoid hard-coding in
w[-10:]=np.nan; # the last 10 bands sometimes have noise - best to eliminate
#print(w) #optionally print wavelength values to show that -9999 values are replaced with nan
"""
Explanation: We can now set these bad band windows to nan, along with the last 10 bands, which are also often noisy (as seen in the spectral profile plotted above). First make a copy of the wavelengths so that the original metadata doesn't change.
End of explanation
"""
#define index corresponding to nan values:
nan_ind = np.argwhere(np.isnan(w))
#define refl_band, refl, and metadata
refl_band = sercb56
refl = copy.copy(sercRefl)
metadata = copy.copy(sercRefl_md)
from IPython.html.widgets import *
def spectraPlot(pixel_x,pixel_y):
reflectance = refl[pixel_y,pixel_x,:]
reflectance[nan_ind]=np.nan
pixel_df = pd.DataFrame()
pixel_df['reflectance'] = reflectance
pixel_df['wavelengths'] = w
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(1,2,1)
# fig, axes = plt.subplots(nrows=1, ncols=2)
pixel_df.plot(ax=ax1,x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
ax1.set_title('Spectra of Pixel (' + str(pixel_x) + ',' + str(pixel_y) + ')')
ax1.set_xlim([np.min(metadata['wavelength']),np.max(metadata['wavelength'])]);
ax1.set_ylim([np.min(pixel_df['reflectance']),np.max(pixel_df['reflectance']*1.1)])
ax1.set_xlabel("Wavelength, nm"); ax1.set_ylabel("Reflectance")
ax1.grid('on')
ax2 = fig.add_subplot(1,2,2)
plot = plt.imshow(refl_band,extent=metadata['spatial extent'],clim=(0,0.1));
plt.title('Pixel Location');
cbar = plt.colorbar(plot,aspect=20); plt.set_cmap('gist_earth');
cbar.set_label('Reflectance',rotation=90,labelpad=20);
ax2.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
ax2.plot(metadata['spatial extent'][0]+pixel_x,metadata['spatial extent'][3]-pixel_y,'s',markersize=5,color='red')
ax2.set_xlim(metadata['spatial extent'][0],metadata['spatial extent'][1])
ax2.set_ylim(metadata['spatial extent'][2],metadata['spatial extent'][3])
interact(spectraPlot, pixel_x = (0,refl.shape[1]-1,1),pixel_y=(0,refl.shape[0]-1,1))
"""
Explanation: Interactive Spectra Visualization
Finally, we can create a widget to interactively view the spectra of different pixels along the reflectance tile. Run the two cells below, and interact with them to gain a better sense of what the spectra look like for different materials on the ground.
End of explanation
"""
|
Eomys/MoSQITo
|
tutorials/tuto_sharpness_din.ipynb
|
apache-2.0
|
# Add MOSQITO to the Python path
import sys
sys.path.append('..')
# To get inline plots (specific to Jupyter notebook)
%matplotlib notebook
# Import numpy
import numpy as np
# Import plot function
import matplotlib.pyplot as plt
# Import mosqito functions
from mosqito.utils import load
# Import spectrum computation tool
from scipy.fft import fft, fftfreq
from mosqito.sq_metrics import loudness_zwst_perseg
from mosqito.sq_metrics import sharpness_din_st
from mosqito.sq_metrics import sharpness_din_perseg
from mosqito.sq_metrics import sharpness_din_from_loudness
from mosqito.sq_metrics import sharpness_din_freq
# Import MOSQITO color sheme [Optional]
from mosqito import COLORS
# To get inline plots (specific to Jupyter notebook)
%matplotlib notebook
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-signal" data-toc-modified-id="Load-signal-1"><span class="toc-item-num">1 </span>Load signal</a></span></li><li><span><a href="#Compute-sharpness-of-the-whole-signal" data-toc-modified-id="Compute-sharpness-of-the-whole-signal-2"><span class="toc-item-num">2 </span>Compute sharpness of the whole signal</a></span></li><li><span><a href="#Compute-sharpness-per-signal-segments" data-toc-modified-id="Compute-sharpness-per-signal-segments-3"><span class="toc-item-num">3 </span>Compute sharpness per signal segments</a></span></li><li><span><a href="#Compute-sharpness-from-loudness" data-toc-modified-id="Compute-sharpness-from-loudness-4"><span class="toc-item-num">4 </span>Compute sharpness from loudness</a></span></li><li><span><a href="#Compute-sharpness-from-spectrum" data-toc-modified-id="Compute-sharpness-from-spectrum-5"><span class="toc-item-num">5 </span>Compute sharpness from spectrum</a></span></li></ul></div>
How to compute acoustic Sharpness according to DIN method
This tutorial explains how to use MOSQITO to compute the acoustic sharpness of a signal according to the DIN 45692 method. For more information on the implementation and validation of the metric, you can refer to the documentation.
The following commands are used to import the necessary functions.
End of explanation
"""
# Define path to the .wav file
# To be replaced by your own path
path = "../validations/sq_metrics/loudness_zwst/input/ISO_532_1/Test signal 5 (pinknoise 60 dB).wav"
# load signal
sig, fs = load(path, wav_calib=2 * 2 **0.5)
# plot signal
t = np.linspace(0, (len(sig) - 1) / fs, len(sig))
plt.figure(1)
plt.plot(t, sig, color=COLORS[0])
plt.xlabel('Time [s]')
plt.ylabel('Acoustic pressure [Pa]')
"""
Explanation: Load signal
In this tutorial, the signal is imported from a .wav file. The tutorial Audio signal basic operations gives more information about the syntax of the import and the other supported file types. You can use any .wav file to perform the tutorial or you can download the pink noise signal from MOSQITO that is used in the following.
End of explanation
"""
sharpness = sharpness_din_st(sig, fs, weighting="din")
"""
Explanation: Compute sharpness of the whole signal
The acoustic sharpness is computed by using the following command line. In addition to the signal (as ndarray) and the sampling frequency, the function takes one optional input argument: "weighting", which specifies the weighting function to be used ('din' by default, 'aures', 'bismarck' or 'fastl').
End of explanation
"""
print("Sharpness = {:.1f} acum".format(sharpness) )
"""
Explanation: The function returns the sharpness of the signal:
End of explanation
"""
sharpness, time_axis = sharpness_din_perseg(sig, fs, nperseg=8192 * 2, noverlap=4096, weighting="din")
plt.figure(2)
plt.plot(time_axis, sharpness, color=COLORS[0])
plt.xlabel("Time [s]")
plt.ylabel("S_din [acum]")
plt.ylim((0, 3))
"""
Explanation: Compute sharpness per signal segments
To compute the sharpness for successive, possibly overlapping, time segments, you can use the sharpness_din_perseg function. It accepts two more input parameters:
- nperseg: to define the length of each segment
- noverlap: to define the number of points to overlap between segments
End of explanation
"""
N, N_specific, bark_axis, time_axis = loudness_zwst_perseg(
sig, fs, nperseg=8192 * 2, noverlap=4096
)
sharpness = sharpness_din_from_loudness(N, N_specific, weighting='din')
plt.figure(3)
plt.plot(time_axis, sharpness, color=COLORS[0])
plt.xlabel("Time [s]")
plt.ylabel("S_din [acum]")
plt.ylim((0, 3))
"""
Explanation: Compute sharpness from loudness
In case you have already computed the loudness of a signal, you can use the sharpness_din_from_loudness function to compute the sharpness. It takes the loudness and the specific loudness as input. The loudness can be computed per time segment or not.
End of explanation
"""
# Compute spectrum
n = len(sig)
spec = np.abs(2 / np.sqrt(2) / n * fft(sig)[0:n//2])
freqs = fftfreq(n, 1/fs)[0:n//2]
# Compute sharpness
S = sharpness_din_freq(spec, freqs)
print("Sharpness_din = {:.1f} sone".format(S) )
"""
Explanation: Compute sharpness from spectrum
The commands below show how to compute the stationary sharpness from a frequency spectrum, given either in complex values or in amplitude values, using the functions from MOSQITO. One should note that only stationary values can be computed from a frequency input.
The input spectrum can be either 1D with size (Nfrequency) or 2D with size (Nfrequency x Ntime). The corresponding frequency axis can be either the same for all the spectra, with size (Nfrequency), or different for each spectrum, with size (Nfrequency x Ntime).
Note that the input spectrum must be given in RMS values.
End of explanation
"""
from datetime import date
print("Tutorial generation date:", date.today().strftime("%B %d, %Y"))
"""
Explanation:
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.15/_downloads/plot_read_evoked.ipynb
|
bsd-3-clause
|
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
from mne import read_evokeds
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
# Reading
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0),
proj=True)
"""
Explanation: Reading and writing an evoked file
This script shows how to read and write evoked datasets.
End of explanation
"""
evoked.plot(exclude=[])
# Show result as a 2D image (x: time, y: channels, color: amplitude)
evoked.plot_image(exclude=[])
"""
Explanation: Show result as a butterfly plot:
By using exclude=[], bad channels are not excluded and are shown in red
End of explanation
"""
|
uber-common/deck.gl
|
bindings/pydeck/examples/06 - Conway's Game of Life.ipynb
|
mit
|
import random
def new_board(x, y, num_live_cells=2, num_dead_cells=3):
"""Initializes a board for Conway's Game of Life"""
board = []
for i in range(0, y):
# Defaults to a 3:2 dead cell:live cell ratio
board.append([random.choice([0] * num_dead_cells + [1] * num_live_cells) for _ in range(0, x)])
return board
def get(board, x, y):
"""Return the value at location (x, y) on a board, wrapping around if out-of-bounds"""
return board[y % len(board)][x % len(board[0])]
def assign(board, x, y, value):
"""Assigns a value at location (x, y) on a board, wrapping around if out-of-bounds"""
board[y % len(board)][x % len(board[0])] = value
def count_neighbors(board, x, y):
"""Counts the number of living neighbors a cell at (x, y) on a board has"""
return sum([
get(board, x - 1, y),
get(board, x + 1, y),
get(board, x, y - 1),
get(board, x, y + 1),
get(board, x + 1, y + 1),
get(board, x + 1, y - 1),
get(board, x - 1, y + 1),
get(board, x - 1, y - 1)])
def process_life(board):
"""Creates the next iteration from a passed state of Conway's Game of Life"""
next_board = new_board(len(board[0]), len(board))
for y in range(0, len(board)):
for x in range(0, len(board[y])):
num_neighbors = count_neighbors(board, x, y)
is_alive = get(board, x, y) == 1
if num_neighbors < 2 and is_alive:
assign(next_board, x, y, 0)
elif 2 <= num_neighbors <= 3 and is_alive:
assign(next_board, x, y, 1)
elif num_neighbors > 3 and is_alive:
assign(next_board, x, y, 0)
elif num_neighbors == 3 and not is_alive:
assign(next_board, x, y, 1)
else:
assign(next_board, x, y, 0)
return next_board
"""
Explanation: Conway's Game of Life
Conway's Game of Life is a classic demonstration of emergence, where higher-level patterns form from a few simple rules. Fantastic patterns emerge when the game is left to run long enough.
The rules here, to borrow from Wikipedia, are as follows:
Any live cell with fewer than two live neighbours dies, as if by underpopulation.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overpopulation.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
Below is a simple Conway's Game of Life implementation:
End of explanation
"""
from IPython.display import clear_output
import time
def draw_board(board):
res = ''
for row in board:
for col in row:
if col == 1:
res += '* '
else:
res += ' '
res += '\n'
return res
board = new_board(20, 20)
NUM_ITERATIONS = 100
for i in range(0, NUM_ITERATIONS):
print('Iteration ' + str(i + 1))
board = process_life(board)
res = draw_board(board)
print(res)
time.sleep(0.1)
clear_output(wait=True)
"""
Explanation: A text-based example
To plot a simple version of Conway's Game of Life, we can use a print function:
End of explanation
"""
import numpy as np
import pandas as pd
import pydeck as deck
PINK = [155, 155, 255, 245]
PURPLE = [255, 155, 255, 245]
SCALING_FACTOR = 1000.0
def convert_board_to_df(board):
"""Makes the board matrix into a list for easier processing"""
rows = []
for x in range(0, len(board[0])):
for y in range(0, len(board)):
rows.append([[x / SCALING_FACTOR, y / SCALING_FACTOR], PURPLE if board[y][x] else PINK])
return pd.DataFrame(rows, columns=['position', 'color'])
board = new_board(30, 30)
records = convert_board_to_df(board)
layer = deck.Layer(
'PointCloudLayer',
records,
get_position='position',
get_color='color',
get_radius=40)
view_state = deck.ViewState(latitude=0.00, longitude=0.00, zoom=13, bearing=44, pitch=45)
r = deck.Deck(layers=[layer], initial_view_state=view_state, map_style='')
r.show()
"""
Explanation: pydeck implementation
We can use either the PointCloudLayer or ScatterplotLayer from deck.gl to visualize the game.
End of explanation
"""
NUM_ITERATIONS = 100
display(r.show())
for i in range(0, NUM_ITERATIONS):
board = process_life(board)
records = convert_board_to_df(board)
layer.data = records
r.update()
time.sleep(0.1)
"""
Explanation: To play the game over time, we call update in a loop.
End of explanation
"""
|
DJCordhose/ai
|
notebooks/es/import.ipynb
|
mit
|
mkdir data
cd data
# http://stat-computing.org/dataexpo/2009/the-data.html
# !curl -O http://stat-computing.org/dataexpo/2009/2000.csv.bz2
# !curl -O http://stat-computing.org/dataexpo/2009/2001.csv.bz2
# !curl -O http://stat-computing.org/dataexpo/2009/2002.csv.bz2
# !ls -lh
# !bzip2 -d 2000.csv.bz2
# !bzip2 -d 2001.csv.bz2
# !bzip2 -d 2002.csv.bz2
# !ls -lh
# data_types = {'CRSElapsedTime': int, 'CRSDepTime': int, 'Year': int, 'Month': int, 'DayOfWeek': int, 'DayofMonth': int}
data_types = {'CRSDepTime': int, 'Year': int, 'Month': int, 'DayOfWeek': int, 'DayofMonth': int}
# http://dask.pydata.org/en/latest/dataframe-overview.html
# Imports needed below (dask dataframes for lazy loading, pandas for timestamps)
import dask.dataframe as dd
import pandas as pd
%time df = dd.read_csv('./data/200*.csv', encoding='iso-8859-1', dtype=data_types, assume_missing=True)
# for live feed
# %time df = dd.read_csv('./data/2003.csv', encoding='iso-8859-1', dtype=data_types, assume_missing=True)
%time len(df)
# just 1% of data
df = df.sample(.01)
%time len(df)
%time df.head()
"""
Explanation: Loading data using Dask (loads lazily)
https://www.youtube.com/watch?v=RA_2qdipVng&t=1s
http://matthewrocklin.com/slides/scipy-2017.html
End of explanation
"""
%time df = df.fillna(-1)
# Takes a while
# %time df.count().compute()
# Takes a while, but should be doable
# %time unique_origins = df['Origin'].unique().compute()
# once you compute you get a real pandas series
# type(unique_origins)
# unique_origins
# 2400 is not a valid time
df['CRSDepTime'] = df.apply(lambda row: 2359 if row['CRSDepTime'] == 2400 else row['CRSDepTime'], axis='columns')
# df.apply?
head = df.head()
def create_timestamp (row):
return pd.Timestamp('%s-%s-%s;%04d'%(row['Year'], row['Month'], row['DayofMonth'], row['CRSDepTime']))
# type(head)
# head
# create a sample for dask to figure out the data types
transformation_sample = head.apply(create_timestamp, axis='columns')
type(transformation_sample)
transformation_sample
# meta_information = {'@timestamp': pd.Timestamp}
meta_information = transformation_sample
df['@timestamp'] = df.apply(lambda row: pd.Timestamp('%s-%s-%s;%04d'%(row['Year'], row['Month'], row['DayofMonth'], row['CRSDepTime'])),
axis='columns',
meta=meta_information)
# df.head()
from pyelasticsearch import ElasticSearch, bulk_chunks
ES_HOST = 'http://localhost:9200/'
INDEX_NAME = "expo2009"
DOC_TYPE = "flight"
es = ElasticSearch(ES_HOST)
# https://pyelasticsearch.readthedocs.io/en/latest/api/#pyelasticsearch.ElasticSearch.bulk
def documents(records):
for flight in records:
yield es.index_op(flight)
def chunk_import(records):
# bulk_chunks() breaks your documents into smaller requests for speed:
for chunk in bulk_chunks(documents(records=records),
docs_per_chunk=50000,
bytes_per_chunk=10000000):
# We specify a default index and doc type here so we don't
# have to repeat them in every operation:
es.bulk(chunk, doc_type=DOC_TYPE, index=INDEX_NAME)
# should be 2 initially, or 0, depending on whether Kibana has run already
es.count('*')['count']
df.npartitions
begin_partition = 0
end_partition = df.npartitions
# begin_partition = 23
# end_partition = 25
for partition_nr in range(df.npartitions):
if partition_nr >= end_partition:
break
if partition_nr < begin_partition:
continue
print ("Importing partition %d"%(partition_nr))
partition = df.get_partition(partition_nr)
records = partition.compute().to_dict(orient='records')
print ("Importing into ES: %d"%(len(records)))
chunk_import(records)
cnt = es.count('*')['count']
print ("Datasets in ES: %d"%(cnt))
!mkdir feed
!curl -O http://stat-computing.org/dataexpo/2009/2003.csv.bz2
!bzip2 -d 2003.csv.bz2
!mv 2003.csv feed
!ls -l feed
# for live reload of data during demo
# execute this and repeat steps from dd.read_csv in Cell 8
cd ..
mkdir feed
cd feed
!curl -O http://stat-computing.org/dataexpo/2009/2003.csv.bz2
!bzip2 -d 2003.csv.bz2
"""
Explanation: Cleaning and fixing data
The cells above fill missing values with -1, clamp the invalid departure time 2400 to 2359, build a timestamp column with a dask apply (using a small sample of the data to provide the meta information), and then bulk-import the records into Elasticsearch partition by partition. The last cells download one more year of data into a feed folder for the live-reload demo.
End of explanation
"""
|