repo_name (stringlengths: 6-77)
path (stringlengths: 8-215)
license (stringclasses: 15 values)
content (stringlengths: 335-154k)
yoavg/cnn
pyexamples/tutorials/RNNs.ipynb
apache-2.0
model = Model() NUM_LAYERS=2 INPUT_DIM=50 HIDDEN_DIM=10 builder = LSTMBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) # or: # builder = SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) """ Explanation: An LSTM/RNN overview: An (1-layer) RNN can be thought of as a sequence of cells, $h_1,...,h_k$, where $h_i$ indicates the time dimenstion. Each cell $h_i$ has an input $x_i$ and an output $r_i$. In addition to $x_i$, cell $h_i$ receives as input also $r_{i-1}$. In a deep (multi-layer) RNN, we don't have a sequence, but a grid. That is we have several layers of sequences: $h_1^3,...,h_k^3$ $h_1^2,...,h_k^2$ $h_1^1,...h_k^1$, Let $r_i^j$ be the output of cell $h_i^j$. Then: The input to $h_i^1$ is $x_i$ and $r_{i-1}^1$. The input to $h_i^2$ is $r_i^1$ and $r_{i-1}^2$, and so on. The LSTM (RNN) Interface RNN / LSTM / GRU follow the same interface. We have a "builder" which is in charge of creating definining the parameters for the sequence. End of explanation """ s0 = builder.initial_state() x1 = vecInput(INPUT_DIM) s1=s0.add_input(x1) y1 = s1.output() # here, we add x1 to the RNN, and the output we get from the top is y (a HIDEN_DIM-dim vector) y1.npvalue().shape s2=s1.add_input(x1) # we can add another input y2=s2.output() """ Explanation: Note that when we create the builder, it adds the internal RNN parameters to the model. We do not need to care about them, but they will be optimized together with the rest of the network's parameters. End of explanation """ print s2.h() """ Explanation: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer. If we were to want access to the all the hidden state (the output of both the first and the last layers), we could use the .h() method, which returns a list of expressions, one for each layer: End of explanation """ # create a simple rnn builder rnnbuilder=SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) # initialize a new graph, and a new sequence rs0 = rnnbuilder.initial_state() # add inputs rs1 = rs0.add_input(x1) ry1 = rs1.output() print "all layers:", s1.h() print s1.s() """ Explanation: The same interface that we saw until now for the LSTM, holds also for the Simple RNN: End of explanation """ rnn_h = rs1.h() rnn_s = rs1.s() print "RNN h:", rnn_h print "RNN s:", rnn_s lstm_h = s1.h() lstm_s = s1.s() print "LSTM h:", lstm_h print "LSTM s:", lstm_s """ Explanation: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it: 1. the state of the current RNN column 2. the input x The state is then returned, and we can call it's output() method to get the output y, which is the output at the top of the column. We can access the outputs of all the layers (not only the last one) using the .h() method of the state. .s() The internal state of the RNN may be more involved than just the outputs $h$. This is the case for the LSTM, that keeps an extra "memory" cell, that is used when calculating $h$, and which is also passed to the next column. To access the entire hidden state, we use the .s() method. The output of .s() differs by the type of RNN being used. For the simple-RNN, it is the same as .h(). For the LSTM, it is more involved. End of explanation """ s2=s1.add_input(x1) s3=s2.add_input(x1) s4=s3.add_input(x1) # let's continue s3 with a new input. 
s5=s3.add_input(x1) # we now have two different sequences: # s0,s1,s2,s3,s4 # s0,s1,s2,s3,s5 # the two sequences share parameters. assert(s5.prev() == s3) assert(s4.prev() == s3) s6=s3.prev().add_input(x1) # we now have an additional sequence: # s0,s1,s2,s6 s6.h() s6.s() """ Explanation: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h. Extra options in the RNN/LSTM interface Stack LSTM The RNN's are shaped as a stack: we can remove the top and continue from the previous state. This is done either by remembering the previous state and continuing it with a new .add_input(), or using we can access the previous state of a given state using the .prev() method of state. Initializing a new sequence with a given state When we call builder.initial_state(), we are assuming the state has random /0 initialization. If we want, we can specify a list of expressions that will serve as the initial state. The expected format is the same as the results of a call to .final_s(). TODO: this is not supported yet. End of explanation """ import random from collections import defaultdict from itertools import count import sys LAYERS = 2 INPUT_DIM = 50 HIDDEN_DIM = 50 characters = list("abcdefghijklmnopqrstuvwxyz ") characters.append("<EOS>") int2char = list(characters) char2int = {c:i for i,c in enumerate(characters)} VOCAB_SIZE = len(characters) model = Model() srnn = SimpleRNNBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model) lstm = LSTMBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model) model.add_lookup_parameters("lookup", (VOCAB_SIZE, INPUT_DIM)) model.add_parameters("R", (VOCAB_SIZE, HIDDEN_DIM)) model.add_parameters("bias", (VOCAB_SIZE)) # return compute loss of RNN for one sentence def do_one_sentence(rnn, sentence): # setup the sentence renew_cg() s0 = rnn.initial_state() R = parameter(model["R"]) bias = parameter(model["bias"]) lookup = model["lookup"] sentence = ["<EOS>"] + list(sentence) + ["<EOS>"] sentence = [char2int[c] for c in sentence] s = s0 loss = [] for char,next_char in zip(sentence,sentence[1:]): s = s.add_input(lookup[char]) probs = softmax(R*s.output() + bias) loss.append( -log(pick(probs,next_char)) ) loss = esum(loss) return loss # generate from model: def generate(rnn): def sample(probs): rnd = random.random() for i,p in enumerate(probs): rnd -= p if rnd <= 0: break return i # setup the sentence renew_cg() s0 = rnn.initial_state() R = parameter(model["R"]) bias = parameter(model["bias"]) lookup = model["lookup"] s = s0.add_input(lookup[char2int["<EOS>"]]) out=[] while True: probs = softmax(R*s.output() + bias) probs = probs.vec_value() next_char = sample(probs) out.append(int2char[next_char]) if out[-1] == "<EOS>": break s = s.add_input(lookup[next_char]) return "".join(out[:-1]) # strip the <EOS> # train, and generate every 5 samples def train(rnn, sentence): trainer = SimpleSGDTrainer(model) for i in xrange(200): loss = do_one_sentence(rnn, sentence) loss_value = loss.value() loss.backward() trainer.update() if i % 5 == 0: print loss_value, print generate(rnn) """ Explanation: Charecter-level LSTM Now that we know the basics of RNNs, let's build a character-level LSTM language-model. We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character. End of explanation """ sentence = "a quick brown fox jumped over the lazy dog" train(srnn, sentence) sentence = "a quick brown fox jumped over the lazy dog" train(lstm, sentence) """ Explanation: Notice that: 1. 
We pass the same rnn-builder to do_one_sentence over and over again. We must re-use the same rnn-builder, as this is where the shared parameters are kept. 2. We renew_cg() before each sentence -- because we want to have a new graph (new network) for this sentence. The parameters will be shared through the model and the shared rnn-builder. End of explanation """ train(srnn, "these pretzels are making me thirsty") """ Explanation: The model seem to learn the sentence quite well. Somewhat surprisingly, the Simple-RNN model learn quicker than the LSTM! How can that be? The answer is that we are cheating a bit. The sentence we are trying to learn has each letter-bigram exactly once. This means a simple trigram model can memorize it very well. Try it out with more complex sequences. End of explanation """
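A quick sanity check of the bigram claim above. This is an editorial sketch in plain Python, not part of the original notebook; it only assumes the training sentence quoted in the tutorial.

from collections import Counter

sentence = "a quick brown fox jumped over the lazy dog"
bigrams = Counter(zip(sentence, sentence[1:]))
# an empty dict means every character bigram occurs exactly once
print({bg: n for bg, n in bigrams.items() if n > 1})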
mne-tools/mne-tools.github.io
0.19/_downloads/aa221dc65413caee3ba4b18802f88d21/plot_topo_compare_conditions.ipynb
bsd-3-clause
# Authors: Denis Engemann <denis.engemann@gmail.com> # Alexandre Gramfort <alexandre.gramfort@inria.fr> # License: BSD (3-clause) import matplotlib.pyplot as plt import mne from mne.viz import plot_evoked_topo from mne.datasets import sample print(__doc__) data_path = sample.data_path() """ Explanation: Compare evoked responses for different conditions In this example, an Epochs object for visual and auditory responses is created. Both conditions are then accessed by their respective names to create a sensor layout plot of the related evoked responses. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Set up amplitude-peak rejection values for MEG channels reject = dict(grad=4000e-13, mag=4e-12) # Create epochs including different events event_id = {'audio/left': 1, 'audio/right': 2, 'visual/left': 3, 'visual/right': 4} epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks='meg', baseline=(None, 0), reject=reject) # Generate list of evoked objects from conditions names evokeds = [epochs[name].average() for name in ('left', 'right')] """ Explanation: Set parameters End of explanation """ colors = 'blue', 'red' title = 'MNE sample data\nleft vs right (A/V combined)' plot_evoked_topo(evokeds, color=colors, title=title, background_color='w') plt.show() """ Explanation: Show topography for two different conditions End of explanation """
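A possible follow-up, sketched here rather than taken from the original example: because the event_id keys above use '/'-separated tags, epochs['left'] pools 'audio/left' and 'visual/left'. Assuming the evokeds list built in the previous cell, a left-minus-right contrast could be plotted the same way; mne.combine_evoked and its weights argument are standard MNE API, but this exact call is an editorial addition.

left, right = evokeds  # averages for the 'left' and 'right' tag selections above
contrast = mne.combine_evoked([left, right], weights=[1, -1])  # left minus right
contrast.plot_topo(title='left minus right')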
srcole/qwm
burrito/Burrito_dimensions.ipynb
mit
%config InlineBackend.figure_format = 'retina' %matplotlib inline import numpy as np import scipy as sp import matplotlib.pyplot as plt import pandas as pd import pandasql import seaborn as sns sns.set_style("white") """ Explanation: San Diego Burrito Analytics: Data characterization Scott Cole 2 July 2016 This notebook analyzes the dimensions of burritos 1. What dimension are people most critical (small mean)? Least critical? Most sensitive? Least sensitive (small variance)? Default imports End of explanation """ import util df = util.load_burritos() N = df.shape[0] """ Explanation: Load data End of explanation """ means = df.mean() variances = df.var() print means print variances """ Explanation: Calculate mean and variance of dimension ratings End of explanation """ from ggplot import * print ggplot(df,aes('Meat','Fillings',color='overall')) +\ geom_point(size=120,alpha=.2) +\ xlab('Meat rating') + ylab('Fillings rating') +\ scale_color_gradient(low = 'red', high = 'blue') import re s = "string. With. Punctuation?" s = re.sub(r'[^\w\s]','',s) print s x = ['x','y','z'] a = 'x' if a in x: print 'yes' else: print 'no' w = {'e':2,'f':4} w.keys() """ Explanation: people are most critical of salsa, least critical of wrap integrity people are most sensitive to wrap integrity and least sensitive to Fillings (overall???) Play with ggplot End of explanation """
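A small editorial sketch (not in the original notebook) that answers the questions posed at the top by sorting the per-dimension means and variances computed above. It assumes the means and variances Series from the earlier cell; note that they also contain non-rating numeric columns (e.g. cost), which would need to be excluded for a strict comparison.

print(means.sort_values().head())   # smallest means: dimensions people are most critical of
print(variances.sort_values())      # smallest variances: least sensitive; largest: most sensitive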
mne-tools/mne-tools.github.io
0.17/_downloads/234d5d29991ce5146ff7526007f98039/plot_stats_cluster_spatio_temporal_repeated_measures_anova.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Eric Larson <larson.eric.d@gmail.com> # Denis Engemannn <denis.engemann@gmail.com> # # License: BSD (3-clause) import os.path as op import numpy as np from numpy.random import randn import matplotlib.pyplot as plt import mne from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm, f_mway_rm, summarize_clusters_stc) from mne.minimum_norm import apply_inverse, read_inverse_operator from mne.datasets import sample print(__doc__) """ Explanation: Repeated measures ANOVA on source data with spatio-temporal clustering This example illustrates how to make use of the clustering functions for arbitrary, self-defined contrasts beyond standard t-tests. In this case we will tests if the differences in evoked responses between stimulation modality (visual VS auditory) depend on the stimulus location (left vs right) for a group of subjects (simulated here using one subject's data). For this purpose we will compute an interaction effect using a repeated measures ANOVA. The multiple comparisons problem is addressed with a cluster-level permutation test across space and time. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' subjects_dir = data_path + '/subjects' src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif' tmin = -0.2 tmax = 0.3 # Use a lower tmax to reduce multiple comparisons # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) """ Explanation: Set parameters End of explanation """ raw.info['bads'] += ['MEG 2443'] picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads') # we'll load all four conditions that make up the 'two ways' of our ANOVA event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4) reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, preload=True) # Equalize trial counts to eliminate bias (which would otherwise be # introduced by the abs() performed below) epochs.equalize_event_counts(event_id) """ Explanation: Read epochs for all channels, removing a bad one End of explanation """ fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' snr = 3.0 lambda2 = 1.0 / snr ** 2 method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA) inverse_operator = read_inverse_operator(fname_inv) # we'll only use one hemisphere to speed up this example # instead of a second vertex array we'll pass an empty array sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)] # Let's average and compute inverse, then resample to speed things up conditions = [] for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important evoked = epochs[cond].average() evoked.resample(50, npad='auto') condition = apply_inverse(evoked, inverse_operator, lambda2, method) # Let's only deal with t > 0, cropping to reduce multiple comparisons condition.crop(0, None) conditions.append(condition) tmin = conditions[0].tmin tstep = conditions[0].tstep """ Explanation: Transform to source space End of explanation """ n_vertices_sample, n_times = conditions[0].lh_data.shape n_subjects = 7 print('Simulating data for %d subjects.' % n_subjects) # Let's make sure our results replicate, so set the seed. 
np.random.seed(0) X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10 for ii, condition in enumerate(conditions): X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis] """ Explanation: Transform to common cortical space Normally you would read in estimates across several subjects and morph them to the same cortical space (e.g. fsaverage). For example purposes, we will simulate this by just having each "subject" have the same response (just noisy in source space) here. We'll only consider the left hemisphere in this tutorial. End of explanation """ # Read the source space we are morphing to (just left hemisphere) src = mne.read_source_spaces(src_fname) fsave_vertices = [src[0]['vertno'], []] morph_mat = mne.compute_source_morph( src=inverse_operator['src'], subject_to='fsaverage', spacing=fsave_vertices, subjects_dir=subjects_dir, smooth=20).morph_mat morph_mat = morph_mat[:, :n_vertices_sample] # just left hemi from src n_vertices_fsave = morph_mat.shape[0] # We have to change the shape for the dot() to work properly X = X.reshape(n_vertices_sample, n_times * n_subjects * 4) print('Morphing data.') X = morph_mat.dot(X) # morph_mat is a sparse matrix X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4) """ Explanation: It's a good idea to spatially smooth the data, and for visualization purposes, let's morph these to fsaverage, which is a grade 5 ICO source space with vertices 0:10242 for each hemisphere. Usually you'd have to morph each subject's data separately, but here since all estimates are on 'sample' we can use one morph matrix for all the heavy lifting. End of explanation """ X = np.transpose(X, [2, 1, 0, 3]) # X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)] """ Explanation: Now we need to prepare the group matrix for the ANOVA statistic. To make the clustering function work correctly with the ANOVA function X needs to be a list of multi-dimensional arrays (one per condition) of shape: samples (subjects) x time x space. First we permute dimensions, then split the array into a list of conditions and discard the empty dimension resulting from the split using numpy squeeze. End of explanation """ factor_levels = [2, 2] """ Explanation: Prepare function for arbitrary contrast As our ANOVA function is a multi-purpose tool we need to apply a few modifications to integrate it with the clustering function. This includes reshaping data, setting default arguments and processing the return values. For this reason we'll write a tiny dummy function. We will tell the ANOVA how to interpret the data matrix in terms of factors. This is done via the factor levels argument which is a list of the number factor levels for each factor. End of explanation """ effects = 'A:B' # Tell the ANOVA not to compute p-values which we don't need for clustering return_pvals = False # a few more convenient bindings n_times = X[0].shape[1] n_conditions = 4 """ Explanation: Finally we will pick the interaction effect by passing 'A:B'. (this notation is borrowed from the R formula language). Without this also the main effects will be returned. End of explanation """ def stat_fun(*args): # get f-values only. return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels, effects=effects, return_pvals=return_pvals)[0] """ Explanation: A stat_fun must deal with a variable number of input arguments. Inside the clustering function each condition will be passed as flattened array, necessitated by the clustering procedure. 
The ANOVA however expects an input array of dimensions: subjects X conditions X observations (optional). The following function catches the list input and swaps the first and the second dimension, and finally calls ANOVA. <div class="alert alert-info"><h4>Note</h4><p>For further details on this ANOVA function consider the corresponding `time-frequency tutorial <sphx_glr_auto_tutorials_plot_stats_cluster_time_frequency_repeated_measures_anova.py>`. # noqa: E501</p></div> End of explanation """ # as we only have one hemisphere we need only need half the connectivity print('Computing connectivity.') connectivity = mne.spatial_src_connectivity(src[:1]) # Now let's actually do the clustering. Please relax, on a small # notebook and one single thread only this will take a couple of minutes ... pthresh = 0.0005 f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh) # To speed things up a bit we will ... n_permutations = 128 # ... run fewer permutations (reduces sensitivity) print('Clustering.') T_obs, clusters, cluster_p_values, H0 = clu = \ spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1, threshold=f_thresh, stat_fun=stat_fun, n_permutations=n_permutations, buffer_size=None) # Now select the clusters that are sig. at p < 0.05 (note that this value # is multiple-comparisons corrected). good_cluster_inds = np.where(cluster_p_values < 0.05)[0] """ Explanation: Compute clustering statistic To use an algorithm optimized for spatio-temporal clustering, we just pass the spatial connectivity matrix (instead of spatio-temporal). End of explanation """ print('Visualizing clusters.') # Now let's build a convenient representation of each cluster, where each # cluster becomes a "time point" in the SourceEstimate stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep, vertices=fsave_vertices, subject='fsaverage') # Let's actually plot the first "time point" in the SourceEstimate, which # shows all the clusters, weighted by duration subjects_dir = op.join(data_path, 'subjects') # The brighter the color, the stronger the interaction between # stimulus modality and stimulus location brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, views='lat', time_label='Duration significant (ms)', clim=dict(kind='value', lims=[0, 1, 40])) brain.save_image('cluster-lh.png') brain.show_view('medial') """ Explanation: Visualize the clusters End of explanation """ inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in enumerate(good_cluster_inds)][0] # first cluster times = np.arange(X[0].shape[1]) * tstep * 1e3 plt.figure() colors = ['y', 'b', 'g', 'purple'] event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis'] for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)): # extract time course at cluster vertices condition = condition[:, :, inds_v] # normally we would normalize values across subjects but # here we use data from the same subject so we're good to just # create average time series across subjects and vertices. 
mean_tc = condition.mean(axis=2).mean(axis=0) std_tc = condition.std(axis=2).std(axis=0) plt.plot(times, mean_tc.T, color=color, label=eve_id) plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray', alpha=0.5, label='') ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5 plt.xlabel('Time (ms)') plt.ylabel('Activation (F-values)') plt.xlim(times[[0, -1]]) plt.ylim(ymin, ymax) plt.fill_betweenx((ymin, ymax), times[inds_t[0]], times[inds_t[-1]], color='orange', alpha=0.3) plt.legend() plt.title('Interaction between stimulus-modality and location.') plt.show() """ Explanation: Finally, let's investigate interaction effect by reconstructing the time courses End of explanation """
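An editorial sketch of what f_mway_rm expects, run on tiny synthetic data; it is not part of the original tutorial and the array sizes are arbitrary. With a 2 x 2 repeated-measures design and effects='A:B', the function returns one interaction F-value per observation, which is exactly what the stat_fun above feeds to the clustering routine.

import numpy as np
from mne.stats import f_mway_rm

toy = np.random.RandomState(42).randn(7, 4, 10)   # subjects x conditions x observations
fvals, pvals = f_mway_rm(toy, factor_levels=[2, 2], effects='A:B')
print(fvals.shape)   # one interaction F-value per observation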
probml/pyprobml
deprecated/two_moons_normalizingFlow.ipynb
mit
!pip install -U dm-haiku distrax optax import matplotlib.pyplot as plt from IPython.display import clear_output from sklearn import datasets, preprocessing import distrax import jax import jax.numpy as jnp import numpy as np import haiku as hk import optax import tensorflow as tf import tensorflow_datasets as tfds from tensorflow_probability.substrates import jax as tfp tfd = tfp.distributions # key = jax.random.PRNGKey(1234) """ Explanation: Two Moons Normalizing Flow Using Distrax + Haiku Neural Spline Flow based off of distrax documentation for a flow. Code to load 2 moons example dataset sourced from Chris Waites's jax-flows demo. End of explanation """ n_samples = 10000 plot_range = [(-2, 2), (-2, 2)] n_bins = 100 scaler = preprocessing.StandardScaler() X, _ = datasets.make_moons(n_samples=n_samples, noise=0.05) X = scaler.fit_transform(X) plt.hist2d(X[:, 0], X[:, 1], bins=n_bins, range=plot_range)[-1] plt.savefig("two-moons-original.pdf") plt.savefig("two-moons-original.png") """ Explanation: Plotting 2 moons dataset Code taken directly from Chris Waites's jax-flows demo. This is the distribution we want to create a bijection to from a simple base distribution, such as a gaussian distribution. End of explanation """ from typing import Any, Iterator, Mapping, Optional, Sequence, Tuple # Hyperparams - change these to experiment flow_num_layers = 8 mlp_num_layers = 4 hidden_size = 1000 num_bins = 8 batch_size = 512 learning_rate = 1e-4 eval_frequency = 100 Array = jnp.ndarray PRNGKey = Array Batch = Mapping[str, np.ndarray] OptState = Any # Functions to create a distrax normalizing flow def make_conditioner( event_shape: Sequence[int], hidden_sizes: Sequence[int], num_bijector_params: int ) -> hk.Sequential: """Creates an MLP conditioner for each layer of the flow.""" return hk.Sequential( [ hk.Flatten(preserve_dims=-len(event_shape)), hk.nets.MLP(hidden_sizes, activate_final=True), # We initialize this linear layer to zero so that the flow is initialized # to the identity function. hk.Linear(np.prod(event_shape) * num_bijector_params, w_init=jnp.zeros, b_init=jnp.zeros), hk.Reshape(tuple(event_shape) + (num_bijector_params,), preserve_dims=-1), ] ) def make_flow_model( event_shape: Sequence[int], num_layers: int, hidden_sizes: Sequence[int], num_bins: int ) -> distrax.Transformed: """Creates the flow model.""" # Alternating binary mask. mask = jnp.arange(0, np.prod(event_shape)) % 2 mask = jnp.reshape(mask, event_shape) mask = mask.astype(bool) def bijector_fn(params: Array): return distrax.RationalQuadraticSpline(params, range_min=-2.0, range_max=2.0) # Number of parameters for the rational-quadratic spline: # - `num_bins` bin widths # - `num_bins` bin heights # - `num_bins + 1` knot slopes # for a total of `3 * num_bins + 1` parameters. num_bijector_params = 3 * num_bins + 1 layers = [] for _ in range(num_layers): layer = distrax.MaskedCoupling( mask=mask, bijector=bijector_fn, conditioner=make_conditioner(event_shape, hidden_sizes, num_bijector_params), ) layers.append(layer) # Flip the mask after each layer. mask = jnp.logical_not(mask) # We invert the flow so that the `forward` method is called with `log_prob`. 
flow = distrax.Inverse(distrax.Chain(layers)) # Making base distribution normal distribution mu = jnp.zeros(event_shape) sigma = jnp.ones(event_shape) base_distribution = distrax.Independent(distrax.MultivariateNormalDiag(mu, sigma)) return distrax.Transformed(base_distribution, flow) def load_dataset(split: tfds.Split, batch_size: int) -> Iterator[Batch]: # ds = tfds.load("mnist", split=split, shuffle_files=True) ds = split ds = ds.shuffle(buffer_size=10 * batch_size) ds = ds.batch(batch_size) ds = ds.prefetch(buffer_size=1000) ds = ds.repeat() return iter(tfds.as_numpy(ds)) def prepare_data(batch: Batch, prng_key: Optional[PRNGKey] = None) -> Array: data = batch.astype(np.float32) return data @hk.without_apply_rng @hk.transform def model_sample(key: PRNGKey, num_samples: int) -> Array: model = make_flow_model( event_shape=TWO_MOONS_SHAPE, num_layers=flow_num_layers, hidden_sizes=[hidden_size] * mlp_num_layers, num_bins=num_bins, ) return model.sample(seed=key, sample_shape=[num_samples]) @hk.without_apply_rng @hk.transform def log_prob(data: Array) -> Array: model = make_flow_model( event_shape=TWO_MOONS_SHAPE, num_layers=flow_num_layers, hidden_sizes=[hidden_size] * mlp_num_layers, num_bins=num_bins, ) return model.log_prob(data) def loss_fn(params: hk.Params, prng_key: PRNGKey, batch: Batch) -> Array: data = prepare_data(batch, prng_key) # Loss is average negative log likelihood. loss = -jnp.mean(log_prob.apply(params, data)) return loss @jax.jit def eval_fn(params: hk.Params, batch: Batch) -> Array: data = prepare_data(batch) # We don't dequantize during evaluation. loss = -jnp.mean(log_prob.apply(params, data)) return loss """ Explanation: Creating the normalizing flow in distrax+haiku Instead of a uniform distribution, we use a normal distribution as the base distribution. This makes more sense for a standardized two moons dataset that is scaled according to a normal distribution using sklearn's StandardScaler(). Using a uniform base distribution will result in inf and nan loss. 
End of explanation """ optimizer = optax.adam(learning_rate) @jax.jit def update(params: hk.Params, prng_key: PRNGKey, opt_state: OptState, batch: Batch) -> Tuple[hk.Params, OptState]: """Single SGD update step.""" grads = jax.grad(loss_fn)(params, prng_key, batch) updates, new_opt_state = optimizer.update(grads, opt_state) new_params = optax.apply_updates(params, updates) return new_params, new_opt_state """ Explanation: Setting up the optimizer End of explanation """ # Event shape TWO_MOONS_SHAPE = (2,) # Create tf dataset from sklearn dataset dataset = tf.data.Dataset.from_tensor_slices(X) # Splitting into train/validate ds train = dataset.skip(2000) val = dataset.take(2000) # load_dataset(split: tfds.Split, batch_size: int) train_ds = load_dataset(train, 512) valid_ds = load_dataset(val, 512) # Initializing PRNG and Neural Net params prng_seq = hk.PRNGSequence(1) params = log_prob.init(next(prng_seq), np.zeros((1, *TWO_MOONS_SHAPE))) opt_state = optimizer.init(params) training_steps = 1000 for step in range(training_steps): params, opt_state = update(params, next(prng_seq), opt_state, next(train_ds)) if step % eval_frequency == 0: val_loss = eval_fn(params, next(valid_ds)) print(f"STEP: {step:5d}; Validation loss: {val_loss:.3f}") n_samples = 10000 plot_range = [(-2, 2), (-2, 2)] n_bins = 100 X_transf = model_sample.apply(params, next(prng_seq), num_samples=n_samples) plt.hist2d(X_transf[:, 0], X_transf[:, 1], bins=n_bins, range=plot_range)[-1] plt.savefig("two-moons-flow.pdf") plt.savefig("two-moons-flow.png") plt.show() """ Explanation: Training the flow End of explanation """
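An editorial sketch of the alternating binary mask used by the coupling layers above, shown on its own (plain numpy/JAX, no distrax). In each layer one half of the dimensions is held fixed and conditions the transform of the other half; flipping the mask between layers ensures every dimension gets transformed somewhere in the stack.

import numpy as np
import jax.numpy as jnp

event_shape = (2,)
mask = (jnp.arange(np.prod(event_shape)) % 2).astype(bool)
for layer_idx in range(4):
    print(layer_idx, mask)         # the mask alternates: [False True], [True False], ...
    mask = jnp.logical_not(mask)   # flip before the next coupling layer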
mldbai/mldb
container_files/demos/Real-Time Digits Recognizer.ipynb
apache-2.0
from IPython.display import YouTubeVideo YouTubeVideo("WGdLCXDiDSo") """ Explanation: MLPaint: Real-Time Handwritten Digits Recognizer The automatic recognition of handwritten digits is now a well understood and studied Machine Vision and Machine Learning problem. We will be using MNIST (check out Wikipedia's page on MNIST) to train our models. From the description on Yann LeCun's MNIST database of handwriten digits: The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting. To learn more, you can also check out Kaggle's Digit Recognizer page. In this demo will use MLDB's classifiers functions and REST API, to create a plug-in to predict the value of handwritten digits. We will also use MLDB's explain functions to visually represent the predictive "value or importance" of each pixel in our final predictions. Check out the video below for a demo of what we'll be creating: End of explanation """ from pymldb import Connection mldb = Connection() """ Explanation: The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details. End of explanation """ import random import math import numpy as np from pandas import DataFrame import pandas as pd import matplotlib.pyplot as plt %matplotlib inline from IPython.display import display from ipywidgets import widgets """ Explanation: ... and other Python librairies: End of explanation """ data_url_mnist = 'file://mldb/mldb_test_data/digits_data.csv.gz' print mldb.put('/v1/procedures/import_digits_mnist', { "type":"import.text", "params": { "dataFileUrl": data_url_mnist, "outputDataset": "digits_mnist", "select": "{* EXCLUDING(\"785\")} AS *, \"785\" AS label", "runOnCreation": True, } }) """ Explanation: Loading the data A pickled version of the dataset is available on the deeplearning.net website. The dataset has been unpickled and saved in a public Amazon's S3 cloud storage. Check out MLDB's Protocol Handlers for Files and URLS. End of explanation """ data_stats = mldb.query(""" SELECT avg(horizontal_count({* EXCLUDING(label)})) as NoOfFeatures, count(label) AS TestExamples FROM digits_mnist """) print data_stats """ Explanation: In the original MNIST datasets, the features and labels were in two seperate datasets. To make it easier, we joined the features with the labels. Column 785 is the labels column which was renamed accordingly 'label'. Let's explore the data See the Query API documentation for more details on SQL queries. End of explanation """ labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] x_array1 = [] x_array2 = [] x_data = [] sq = int(np.sqrt(data_stats['NoOfFeatures'][0])) for label in labels: x_data = mldb.query(""" SELECT * EXCLUDING(label) FROM sample( (select * FROM digits_mnist WHERE label = %d AND rowHash() %% 5 = 0), {rows: 1} ) """ %label) if label < 5: x_array1.extend(x_data.as_matrix().reshape(sq, sq)) if label >= 5: x_array2.extend(x_data.as_matrix().reshape(sq, sq)) f, (fig1, fig2) = plt.subplots(1, 2, sharey=True) plt.gray() fig1.matshow(x_array1) fig2.matshow(x_array2) plt.show() """ Explanation: Each example is a row made up of 784 pixels or features. 
By reshaping the 1D data into a 2D representation, we can visualize the data a little better. At each refresh, we get randomly selected rows for each label using the sample function in a SQL From Expression. End of explanation """ conf_algo = { "bbdt_d5": { "type": "bagging", "verbosity": 3, "weak_learner": { "type": "boosting", "verbosity": 3, "weak_learner": { "type": "decision_tree", "max_depth": 7, "verbosity": 0, "update_alg": "gentle", "random_feature_propn": 0.3 }, "min_iter": 5, "max_iter": 30 }, "num_bags": 5 } } conf_class = { "type": "classifier.experiment", "params": { "experimentName": "mnist_model", "mode": "categorical", "inputData" : """ SELECT {* EXCLUDING(label*)} AS features, label AS label FROM digits_mnist """, "datasetFolds": [ { "trainingWhere": "rowHash() % 5 != 0", #80% of total data "testingWhere": "rowHash() % 5 = 0" #20% of total data } ], "algorithm": "bbdt_d5", "configuration": conf_algo, "modelFileUrlPattern": "file://models/mnist_model.cls", "keepArtifacts": True, "outputAccuracyDataset": True, "runOnCreation": True, "evalTrain": True } } results = mldb.put("/v1/procedures/mnist_model", conf_class) accuracy = results.json()['status']['firstRun']['status']['aggregatedTest']['weightedStatistics']['accuracy']['mean'] print "\nModel classification accuracy on test set = %0.4f\n" % accuracy """ Explanation: Training a classifier We will create a Procedure of type classifier.experiment to train and test our model. The configuration parameter defines a Random Forest algorithm. The model make take some time to train... End of explanation """ confusionMatrix = results.json()['status']['firstRun']['status']['folds'][0]['resultsTest']['confusionMatrix'] confusionMatrix = pd.DataFrame(confusionMatrix).pivot_table(index="predicted", columns="actual") df = np.log(confusionMatrix) df = df.fillna(0) fig = plt.figure(figsize=(8, 8)) plt.imshow(df, interpolation='nearest', cmap=plt.cm.jet) plt.yticks(np.arange(0, 10, 1), fontsize=14) plt.ylabel("Predicted", fontsize=16) plt.xticks(np.arange(0, 10, 1), fontsize=14) plt.xlabel("Actual", fontsize=16) """ Explanation: We are now going to construct the confusion matrix from results on the test set using the pivot aggregate function. You can learn more about confusion matrices here. End of explanation """ print mldb.put('/v1/functions/mnist_explainer', { "id": "mnist_explainer", "type": "classifier.explain", "params": { "modelFileUrl": "file://models/mnist_model.cls" } }) """ Explanation: The model seems to be doing a pretty good job at classfication as seen with the confusion matrix above. In a small percentage of cases, the model seems to think that a '4' is a '9' and that a '3' is a '2'. The two sets of digits are similar in the concentration of pixels, so this makes some sense. How does the model make its predictions? The 'explain' function provides each pixel's weight on the final outcome. Let's create a function of type classifier.explain to help us understand what's happening here. 
End of explanation """ def rgb_explain(x): scale = 5 explain = [[0, 0, 0]] * data_stats['NoOfFeatures'][0] # [R,G,B] for color image number_explain = len(x) for j, col in enumerate(x.columns.values): try: index = int(col) val = x.values[0][j] * scale if (val >= 0.2): explain[index] = [0, val, 0] # make it green elif (val <= -0.2): explain[index] = [- val, 0, 0] # make it red except: pass return np.array(explain).reshape(sq, sq, 3) @widgets.interact def test_img_plot(digit = [0,9], other_example=[0,1000]): data = mldb.query(""" SELECT * EXCLUDING(label), mnist_model_scorer_0({ features: {* EXCLUDING(label*)} })[scores] AS score FROM digits_mnist WHERE label = %(digit)d AND rowHash() %% 5 = 0 LIMIT 1 OFFSET %(offset)d """ % {"digit": digit, "offset": other_example}) data_array = data.as_matrix() rand_test_img = data_array[0][:-10].reshape(sq, sq) scores = data_array[0][-10:] explain_data = mldb.query(""" SELECT mnist_explainer({ label: %(digit)d, features: {* EXCLUDING(label)} })[explanation] AS * FROM digits_mnist WHERE label = %(digit)d AND rowHash() %% 5 = 0 LIMIT 1 OFFSET %(offset)d """ % {"digit": digit, "offset": other_example}) explain_img = rgb_explain(explain_data) fig = plt.figure(figsize=(8, 8)) # plot digit image ax1 = plt.subplot2grid((4, 4), (0, 0), colspan=2, rowspan = 2) ax1.imshow(rand_test_img) ax1.set_title("Fig1: You chose the digit below", fontsize=12, fontweight='bold') # plot explain matrix ax2 = plt.subplot2grid((4, 4), (0, 2), colspan=2, rowspan = 2) ax2.imshow(explain_img) ax2.set_title("Fig2: Explain matrix picture of digit %d" %digit, fontsize=12, fontweight='bold') # plot scores ax3 = plt.subplot2grid((4, 4), (2, 0), colspan=4, rowspan = 2) greater_than_zero = scores >= 0 lesser_than_zero = scores < 0 ax3.barh(np.arange(len(scores))[greater_than_zero]-0.5, scores[greater_than_zero], color='#87CEFA', height=1) ax3.barh(np.arange(len(scores))[lesser_than_zero]-0.5, scores[lesser_than_zero], color='#E6E6FA', height=1) ax3.grid() ax3.yaxis.set_ticks(np.arange(0, 10, 1)) ax3.yaxis.set_ticks_position('right') ax3.set_title("Fig3: Scores for each number - the number with the highest score wins!", fontsize=12, fontweight='bold') ax3.set_ylabel("Digits") ax3.yaxis.set_label_position('right') ax3.set_xlabel("Scores") plt.tight_layout() plt.show() """ Explanation: We automatically get a REST API to test our model with individual digits The procedure above created for us a Function of type classifier. We will be using two functions: * The scorer function: Scores aren't probabilities, but they can be used to create binary classifiers by applying a cutoff threshold. MLDB's classifier.experiment procedure that we have seen previously outputs a score for each digit (even the wrong ones). * The explain function: The explain function shows how each pixel and its value (black or white) of an image contributes to the model's prediction. We colored such pixels in green for positive contributions and red for negative contributions in Figure 2 below. In essense, pixels flagged in red in the explain figure should be changed to get a better score. For example, a white-colored pixel, that was seen frequently for digit '5' in the train set, will be flagged green if it seen for digit '5' in the test set. If the same pixel is actually of a different color for digit '5' in the test set, then the pixel will be flagged red. Note that the digits are from the test set - we used 80% of the data for training and 20% for testing. 
You can also get the same digit written differently by using the offset bar. We using an SQL offset - we are calling it 'other_example' in the code below. The offset specifies the number of rows to skip before returning values from the query expression. End of explanation """ mldb.put("/v1/plugins/mlpaint", { "type": "python", "params": { "address": "git://github.com/mldbai/mlpaint" } }) """ Explanation: In the representation of the explain matrix (figure 3), the green pixels help increase the score while the red pixels help decrease the score of the chosen digit. The explain matrix will tell us something about the pixels deemed most important. For example, if nothing was drawn in the top left corner of the picture during training, no information will be provided on the top left set of pixels in the explain matrix. During training, if a pixel is not part of the classification rules (i.e. not on any leaf), that pixel will not show up in the explain matrix. Making a simple web app using MLDB plug-in functionality We've built a very fun web app called MLPaint that uses everything we've shown here to do real-time recognition of digits. The app is shown in the Youtube video at the top of the notebook. The app is built with an MLDB plugin; plugins allow us to extend functionality that we have seen so far. For more information, check out the documentation on plugins. By running the cell below, the plugin is checkout from Github and loaded into MLDB: End of explanation """
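An editorial sketch (pure numpy, no MLDB calls) of the colouring rule used by rgb_explain above, applied to a toy explain vector: sufficiently positive pixel weights go to the green channel, sufficiently negative ones to the red channel, and small weights stay black. The toy weights are made up for illustration.

import numpy as np

weights = np.array([0.02, -0.3, 0.5])   # toy per-pixel explain values
scale = 5
rgb = np.zeros((len(weights), 3))
rgb[:, 1] = np.where(weights * scale >= 0.2, weights * scale, 0)    # green channel
rgb[:, 0] = np.where(weights * scale <= -0.2, -weights * scale, 0)  # red channel
print(rgb)   # first pixel stays black, second turns red, third turns green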
yandexdataschool/gumbel_lstm
demo_gumbel_sigmoid.ipynb
mit
temperature = 0.1 logits = np.linspace(-5,5,10).reshape([1,-1]) gumbel_sigm = GumbelSigmoid(t=temperature)(logits) sigm = T.nnet.sigmoid(logits) import matplotlib.pyplot as plt %matplotlib inline plt.title('gumbel-sigmoid samples') for i in range(10): plt.plot(range(10),gumbel_sigm.eval()[0],marker='o',alpha=0.25) plt.ylim(0,1) plt.show() plt.title('average over samples') plt.plot(range(10),np.mean([gumbel_sigm.eval()[0] for _ in range(500)],axis=0), marker='o',label='gumbel-sigmoid average') plt.plot(sigm.eval()[0],marker='+',label='regular softmax') plt.legend(loc='best') """ Explanation: Simple demo Sample from gumbel-softmax Average over samples End of explanation """ from sklearn.datasets import load_digits X = load_digits().data import lasagne from lasagne.layers import * import theano #graph inputs and shareds input_var = T.matrix() temp = theano.shared(np.float32(1),'temperature',allow_downcast=True) #architecture: encoder nn = l_in = InputLayer((None,64),input_var) nn = DenseLayer(nn,64,nonlinearity=T.tanh) nn = DenseLayer(nn,32,nonlinearity=T.tanh) #bottleneck nn = DenseLayer(nn,32,nonlinearity=None)#or nonlinearity=GumbelSigmoid(t=temp) bottleneck = nn = GumbelSigmoidLayer(nn,t=temp) #decoder nn = DenseLayer(nn,32,nonlinearity=T.tanh) nn = DenseLayer(nn,64,nonlinearity=T.tanh) nn = DenseLayer(nn,64,nonlinearity=None) #loss and updates loss = T.mean((get_output(nn)-input_var)**2) updates = lasagne.updates.adam(loss,get_all_params(nn)) #compile train_step = theano.function([input_var],loss,updates=updates) evaluate = theano.function([input_var],loss) """ Explanation: Autoencoder with gumbel-softmax We do not use any bayesian regularization, simply optimizer by backprop Hidden layer contains 32 units End of explanation """ for i,t in enumerate(np.logspace(0,-2,10000)): sample = X[np.random.choice(len(X),32)] temp.set_value(t) mse = train_step(sample) if i %100 ==0: print '%.3f'%evaluate(X), #functions for visualization get_sample = theano.function([input_var],get_output(nn)) get_sample_hard = theano.function([input_var],get_output(nn,hard_max=True)) get_code = theano.function([input_var],get_output(bottleneck,hard_max=False)) for i in range(10): X_sample = X[np.random.randint(len(X)),None,:] plt.figure(figsize=[12,4]) plt.subplot(1,4,1) plt.title("original") plt.imshow(X_sample.reshape([8,8]),interpolation='none',cmap='gray') plt.subplot(1,4,2) plt.title("gumbel") plt.imshow(get_sample(X_sample).reshape([8,8]),interpolation='none',cmap='gray') plt.subplot(1,4,3) plt.title("hard-max") plt.imshow(get_sample_hard(X_sample).reshape([8,8]),interpolation='none',cmap='gray') plt.subplot(1,4,4) plt.title("code") plt.imshow(get_code(X_sample).reshape(8,4),interpolation='none',cmap='gray') plt.show() """ Explanation: Training loop We gradually reduce temperature from 1 to 0.01 over time End of explanation """
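A numpy-only editorial sketch of the Gumbel-sigmoid relaxation itself, independent of the Theano/Lasagne implementation above: a hard Bernoulli sample is replaced by the differentiable surrogate sigmoid((logits + g1 - g2) / t), where g1 and g2 are standard Gumbel noise, and as the temperature t goes to 0 the samples approach 0/1. This is a generic formulation of the trick, not a copy of the repository's GumbelSigmoid class.

import numpy as np
from scipy.special import expit

def gumbel_sigmoid_sample(logits, t, rng=np.random):
    g1 = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    g2 = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return expit((logits + g1 - g2) / t)

logits = np.linspace(-5, 5, 10)
print(gumbel_sigmoid_sample(logits, t=1.0))    # soft, noisy values in (0, 1)
print(gumbel_sigmoid_sample(logits, t=0.01))   # nearly binary samples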
GoogleCloudPlatform/practical-ml-vision-book
09_deploying/09a_inmemory.ipynb
apache-2.0
import tensorflow as tf print('TensorFlow version' + tf.version.VERSION) print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!')) print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices("GPU")))) device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': raise SystemError('GPU device not found') print('Found GPU at: {}'.format(device_name)) """ Explanation: Predictions on in-memory model. In this notebook, we start from an already trained and saved model (as in Chapter 7). For convenience, we have put this model in a public bucket in gs://practical-ml-vision-book/flowers_5_trained Enable GPU and set up helper functions This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU. On Colab: - Navigate to Edit→Notebook Settings - Select GPU from the Hardware Accelerator drop-down On Cloud AI Platform Notebooks: - Navigate to https://console.cloud.google.com/ai-platform/notebooks - Create an instance with a GPU or select your instance and add a GPU Next, we'll confirm that we can connect to the GPU with tensorflow: End of explanation """ MODEL_LOCATION='gs://practical-ml-vision-book/flowers_5_trained' !gsutil ls {MODEL_LOCATION} !saved_model_cli show --tag_set serve --signature_def serving_default --dir {MODEL_LOCATION} """ Explanation: Exported model We start from a trained and saved model from Chapter 7. <pre> model.save(...) </pre> End of explanation """ import tensorflow as tf serving_fn = tf.keras.models.load_model(MODEL_LOCATION).signatures['serving_default'] filenames = [ 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg' ] pred = serving_fn(tf.convert_to_tensor(filenames)) print(pred) print('******') print(pred['flower_type_str'].numpy()) import matplotlib.pyplot as plt f, ax = plt.subplots(1, 5, figsize=(15,15)) for idx, (filename, prob, pred_label) in enumerate( zip(filenames, pred['probability'].numpy(), pred['flower_type_str'].numpy())): img = tf.io.read_file(filename) img = tf.image.decode_jpeg(img, channels=3) ax[idx].imshow((img.numpy())); ax[idx].set_title('{} ({:.2f})'.format(pred_label.decode('utf-8'), prob)) ax[idx].axis('off') """ Explanation: In-memory model (Python program) End of explanation """ import apache_beam as beam from apache_beam.utils.shared import Shared import tensorflow as tf class ModelPredict: def __init__(self, shared_handle, model_location): self._shared_handle = shared_handle self._model_location = model_location def __call__(self, filenames): def initialize_model(): print('Loading Keras model from ' + self._model_location) return (tf.keras.models.load_model(self._model_location) .signatures['serving_default']) serving_fn = self._shared_handle.acquire(initialize_model) if isinstance(filenames, str): # only one element, put it into a batch of 1 result = serving_fn(tf.convert_to_tensor([filenames])) else: # a list result = serving_fn(tf.convert_to_tensor(filenames)) return { 'filenames': filenames, 'probability': result['probability'].numpy(), 'pred_label': result['flower_type_str'].numpy() } with 
beam.Pipeline() as p: shared_handle = Shared() (p | 'input' >> beam.Create([ 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg' ]) | 'pred' >> beam.Map(ModelPredict(shared_handle, MODEL_LOCATION)) | 'write' >> beam.io.textio.WriteToText('/tmp/out.txt', num_shards=1) ) !cat /tmp/out.txt* """ Explanation: In memory model (Beam pipeline) Make sure to share the model so that it is not loaded for each element. There will be a different model on different worker machines. End of explanation """ with beam.Pipeline() as p: shared_handle = Shared() (p | 'input' >> beam.Create([ 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg' ]) | 'addlabel' >> beam.Map(lambda filename: (filename.split('/')[5], filename)) | 'groupbykey' >> beam.GroupByKey() # (daisy, [daisyfiles]) | 'addpred' >> beam.Map(lambda x: ModelPredict(shared_handle, MODEL_LOCATION)(x[1])) | 'write' >> beam.io.textio.WriteToText('/tmp/out.txt', num_shards=1) ) !cat /tmp/out.txt* with beam.Pipeline() as p: shared_handle = Shared() (p | 'input' >> beam.Create([ 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg', 'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg' ]) | 'batch' >> beam.BatchElements(min_batch_size=2, max_batch_size=3) | 'addpred' >> beam.Map( ModelPredict(shared_handle, MODEL_LOCATION) ) | 'write' >> beam.Map(print) ) """ Explanation: If we pass in a list of filenames, the entire list is sent to the model. This is useful if we do a GroupBy for example For efficiency, it can be helpful to generate a random key, group by it. End of explanation """
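An editorial sketch of why the Shared handle matters: acquire() only runs the constructor when no live copy of the object exists, so the expensive model load happens once per worker process rather than once per element. A stand-in class is used here instead of the real Keras model, and the exact caching behaviour (it relies on the object staying referenced) is an assumption to verify against the apache_beam version in use.

from apache_beam.utils.shared import Shared

class _FakeModel(object):   # stand-in for tf.keras.models.load_model(...)
    loads = 0

def _load():
    _FakeModel.loads += 1
    return _FakeModel()

handle = Shared()
a = handle.acquire(_load)
b = handle.acquire(_load)           # should reuse the object created above
print(a is b, _FakeModel.loads)     # expected: True 1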
bert9bert/statsmodels
examples/notebooks/tsa_arma_0.ipynb
bsd-3-clause
%matplotlib inline from __future__ import print_function import numpy as np from scipy import stats import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.graphics.api import qqplot """ Explanation: Autoregressive Moving Average (ARMA): Sunspots data End of explanation """ print(sm.datasets.sunspots.NOTE) dta = sm.datasets.sunspots.load_pandas().data dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008')) del dta["YEAR"] dta.plot(figsize=(12,8)); fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2) arma_mod20 = sm.tsa.ARMA(dta, (2,0)).fit(disp=False) print(arma_mod20.params) arma_mod30 = sm.tsa.ARMA(dta, (3,0)).fit(disp=False) print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic) print(arma_mod30.params) print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic) """ Explanation: Sunpots Data End of explanation """ sm.stats.durbin_watson(arma_mod30.resid.values) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = arma_mod30.resid.plot(ax=ax); resid = arma_mod30.resid stats.normaltest(resid) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) fig = qqplot(resid, line='q', ax=ax, fit=True) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2) r,q,p = sm.tsa.acf(resid.values.squeeze(), qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) """ Explanation: Does our model obey the theory? End of explanation """ predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True) print(predict_sunspots) fig, ax = plt.subplots(figsize=(12, 8)) ax = dta.ix['1950':].plot(ax=ax) fig = arma_mod30.plot_predict('1990', '2012', dynamic=True, ax=ax, plot_insample=False) def mean_forecast_err(y, yhat): return y.sub(yhat).mean() mean_forecast_err(dta.SUNACTIVITY, predict_sunspots) """ Explanation: This indicates a lack of fit. In-sample dynamic prediction. How good does our model do? End of explanation """ from statsmodels.tsa.arima_process import arma_generate_sample, ArmaProcess np.random.seed(1234) # include zero-th lag arparams = np.array([1, .75, -.65, -.55, .9]) maparams = np.array([1, .65]) """ Explanation: Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order) Simulated ARMA(4,1): Model Identification is Difficult End of explanation """ arma_t = ArmaProcess(arparams, maparams) arma_t.isinvertible arma_t.isstationary """ Explanation: Let's make sure this model is estimable. End of explanation """ fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax.plot(arma_t.generate_sample(nsample=50)); arparams = np.array([1, .35, -.15, .55, .1]) maparams = np.array([1, .65]) arma_t = ArmaProcess(arparams, maparams) arma_t.isstationary arma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2) """ Explanation: What does this mean? 
End of explanation """ arma11 = sm.tsa.ARMA(arma_rvs, (1,1)).fit(disp=False) resid = arma11.resid r,q,p = sm.tsa.acf(resid, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) arma41 = sm.tsa.ARMA(arma_rvs, (4,1)).fit(disp=False) resid = arma41.resid r,q,p = sm.tsa.acf(resid, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) """ Explanation: For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags. The partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags. End of explanation """ macrodta = sm.datasets.macrodata.load_pandas().data macrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3')) cpi = macrodta["cpi"] """ Explanation: Exercise: How good of in-sample prediction can you do for another series, say, CPI End of explanation """ fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = cpi.plot(ax=ax); ax.legend(); """ Explanation: Hint: End of explanation """ print(sm.tsa.adfuller(cpi)[1]) """ Explanation: P-value of the unit-root test, resoundly rejects the null of no unit-root. End of explanation """
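An editorial sketch for the CPI exercise, not part of the original notebook. The ADF test's null hypothesis is that a unit root is present, so the large p-value printed above means the null cannot be rejected for the CPI level series; a common next step is to difference the series before fitting an ARMA model. The (1, 1) order below is an arbitrary starting point.

cpi_diff = cpi.diff().dropna()
print(sm.tsa.adfuller(cpi_diff)[1])                  # p-value after first differencing
res = sm.tsa.ARMA(cpi_diff, (1, 1)).fit(disp=False)  # arbitrary low-order starting model
print(res.aic, res.bic)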
seth2000/chinesepoem
.ipynb_checkpoints/PrepareData-checkpoint.ipynb
mit
# -*- coding: utf-8 -*- import os import re import time import codecs import argparse TIME_FORMAT = '%Y-%m-%d %H:%M:%S' BASE_FOLDER = "C:/Users/sethf/source/repos/chinesepoem/" # os.path.abspath(os.path.dirname(__file__)) DATA_FOLDER = os.path.join(BASE_FOLDER, 'data') DEFAULT_FIN = os.path.join(DATA_FOLDER, '唐诗语料库.txt') DEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt') reg_noisy = re.compile('[^\u3000-\uffee]') reg_note = re.compile('((.*))') # Cannot deal with () in seperate lines # 中文及全角标点符号(字符)是\u3000-\u301e\ufe10-\ufe19\ufe30-\ufe44\ufe50-\ufe6b\uff01-\uffee """ Explanation: This is the test file to the idea prove. Try to do the Json formatted corpus, but it is so hard, then I find the word2vec can avoid this hard work. End of explanation """ if __name__ == '__main__': # parser = set_arguments() # cmd_args = parser.parse_args() print('{} START'.format(time.strftime(TIME_FORMAT))) fd = codecs.open(DEFAULT_FIN, 'r', 'utf-8') fw = codecs.open( DEFAULT_FOUT, 'w', 'utf-8') reg = re.compile('〖(.*)〗') start_flag = False for line in fd: line = line.strip() if not line or '《全唐诗》' in line or '<http' in line or '□' in line: continue elif '〖' in line and '〗' in line: if start_flag: fw.write('\n') start_flag = True g = reg.search(line) if g: fw.write(g.group(1)) fw.write('\n') else:a # noisy data print(line) else: line = reg_noisy.sub('', line) line = reg_note.sub('', line) line = line.replace(' .', '') fw.write(line) fd.close() fw.close() print('{} STOP'.format(time.strftime(TIME_FORMAT))) """ Explanation: 读取数据,去掉不用的数据 End of explanation """ print('{} START'.format(time.strftime(TIME_FORMAT))) import thulac DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt') fd = codecs.open(DEFAULT_FOUT, 'r', 'utf-8') fw = codecs.open(DEFAULT_Segment, 'w', 'utf-8') thu1 = thulac.thulac(seg_only=True) #只进行分词,不进行词性标注 for line in fd: #print(line) fw.write(thu1.cut(line, text=True)) fw.write('\n') fd.close() fw.close() print('{} STOP'.format(time.strftime(TIME_FORMAT))) print('{} START'.format(time.strftime(TIME_FORMAT))) from gensim.models import word2vec #DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt') DEFAULT_Word2Vec = os.path.join(DATA_FOLDER, 'Word2Vec150.bin') sentences = word2vec.Text8Corpus(DEFAULT_Segment) model = word2vec.Word2Vec(sentences, size=150) #DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt') model.save(DEFAULT_Word2Vec) print('{} STOP'.format(time.strftime(TIME_FORMAT))) model[u'男'] DEFAULT_FIN = os.path.join(DATA_FOLDER, '唐诗语料库.txt') DEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt') DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt') def GetFirstNline(filePath, linesNumber): fd = codecs.open(filePath, 'r', 'utf-8') for i in range(1,linesNumber): print(fd.readline()) fd.close() GetFirstNline(DEFAULT_Segment, 3) GetFirstNline(DEFAULT_FOUT, 3) """ Explanation: 分词实验 DEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt') thu1 = thulac.thulac(seg_only=True) #只进行分词,不进行词性标注 text = thu1.cut("我爱北京天安门", text=True) #进行一句话分词 print(text) thu1 = thulac.thulac(seg_only=True) #只进行分词,不进行词性标注 thu1.cut_f(DEFAULT_FOUT, outp) #对input.txt文件内容进行分词,输出到output.txt End of explanation """ print('{} START'.format(time.strftime(TIME_FORMAT))) DEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt') DEFAULT_charSegment = os.path.join(DATA_FOLDER, 'Charactersegment.txt') fd = codecs.open(DEFAULT_FOUT, 'r', 'utf-8') fw = codecs.open(DEFAULT_charSegment, 'w', 'utf-8') start_flag = False for line in fd: if len(line) > 0: for c in line: if c != '\n': fw.write(c) fw.write(' ') 
    fw.write('\n')
fd.close()
fw.close()

print('{} STOP'.format(time.strftime(TIME_FORMAT)))

GetFirstNline(DEFAULT_charSegment, 3)

print('{} START'.format(time.strftime(TIME_FORMAT)))

from gensim.models import word2vec
# DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
DEFAULT_Char2Vec = os.path.join(DATA_FOLDER, 'Char2Vec100.bin')
fd = codecs.open(DEFAULT_charSegment, 'r', 'utf-8')
sentences = fd.readlines()
fd.close()
model = word2vec.Word2Vec(sentences, size=100)
# DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
model.save(DEFAULT_Char2Vec)

print('{} STOP'.format(time.strftime(TIME_FORMAT)))

model[u'男']

print('{} START'.format(time.strftime(TIME_FORMAT)))

from gensim.models import word2vec
DEFAULT_charSegment = os.path.join(DATA_FOLDER, 'Charactersegment.txt')
DEFAULT_Char2Vec50 = os.path.join(DATA_FOLDER, 'Char2Vec50.bin')
fd = codecs.open(DEFAULT_charSegment, 'r', 'utf-8')
sentences = fd.readlines()
fd.close()
model = word2vec.Word2Vec(sentences, size=50)
# DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')
model.save(DEFAULT_Char2Vec50)

print('{} STOP'.format(time.strftime(TIME_FORMAT)))

model.wv.most_similar([u'好'])
"""
Explanation: Word segmentation was not very successful, so we switch to splitting the text into individual Chinese characters instead of words, keeping the punctuation.
End of explanation
"""
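"""
Explanation: As a quick sanity check (a minimal sketch, not part of the original workflow), the saved character-level model can be reloaded from disk and queried again; the query character below is only an illustrative choice.
End of explanation
"""
# Sketch: reload the 50-dimensional character model saved above and query it
# (assumes the file written by model.save(DEFAULT_Char2Vec50) exists).
reloaded = word2vec.Word2Vec.load(DEFAULT_Char2Vec50)
print(reloaded.wv.most_similar([u'月'], topn=5))  # '月' (moon) is just an example character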
anujjamwal/learning
cs231n/lesson-3.ipynb
mit
import numpy as np
import matplotlib.pylab as plt
import math
from scipy.stats import mode
%matplotlib inline
"""
Explanation: Classification
Given an input with $D$ dimensions and $k$ classes, the goal of classification is to find the function $f$ such that
$$ f:X \Rightarrow K$$
Linear Classification
The simplest such function $f$ is linear, of the form
$$ f(X) = WX + B $$
The input $X$ is a 1-D array with dimensions $D \times 1$. The goal is to find a weight matrix $W_{K \times D}$ and a bias vector $B_{K \times 1}$, so that $f(X)$ yields one score per class.
For convenience, the input can be reshaped (by appending a constant 1) so that the bias is absorbed into the weight matrix.
End of explanation
"""
def svm_loss(scores, y, delta=1):
    # subtracting delta removes the contribution of the j == y term, which is always max(0, delta) = delta
    return np.sum(np.maximum(scores - scores[y] + delta, 0)) - delta
"""
Explanation: Multiclass SVM Loss
The multiclass SVM loss makes use of the hinge function $J(x) = \max(0, x)$.
End of explanation
"""
def softmax(scores, y):
    scores -= np.max(scores)  # shift for numerical stability
    norm_sum = np.sum(np.exp(scores))
    return np.exp(scores[y]) / norm_sum

def crossentropy(scores, y):
    prob = softmax(scores, y)
    return -1 * np.log(prob)

def l2_regulariser(w):
    return np.sum(np.power(w, 2))
"""
Explanation: Cross Entropy Loss
The cross-entropy loss replaces the hinge function with the negative log-likelihood of the correct class, where the class probabilities are obtained by applying softmax to the raw scores.
End of explanation
"""
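"""
Explanation: A quick illustrative check of the loss helpers above on a made-up score vector (the numbers are arbitrary and only serve as an example).
End of explanation
"""
# Toy example: three class scores for a single input; the true class is assumed to be index 0.
scores = np.array([3.2, 5.1, -1.7])
y = 0
print(svm_loss(scores, y, delta=1))    # multiclass SVM (hinge) loss, 2.9 for these numbers
print(crossentropy(scores.copy(), y))  # cross-entropy loss; pass a copy because softmax shifts the scores in place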
kriete/cie5703_notebooks
week_6_Charlotte.ipynb
mit
import matplotlib.pyplot as plt import pandas as pd import numpy as np %matplotlib inline plt.style.use('ggplot') """ Explanation: Assignment CIE 5703 - week 6 Import Libraries End of explanation """ from mpl_toolkits.basemap import Basemap def get_basemap(_resolution): return Basemap(projection='merc', llcrnrlat=25, urcrnrlat=38, llcrnrlon=275, urcrnrlon=285, lat_ts=35., resolution=_resolution) positions = pd.read_csv('./Raw_RG_Data/RG_lat_lon.csv', header=None) positions.columns=['lat', 'lon'] plt.figure(figsize=(24,12)) m = get_basemap('h') m.drawcoastlines() m.drawcountries() m.fillcontinents(color = 'gray') m.drawmapboundary() for index, row in positions.iterrows(): x,y = m(row['lon']+360, row['lat']) m.plot(x, y, 'ro', markersize=6) x,y = m(positions['lon'][0]+360, positions['lat'][0]) m.plot(x, y, 'bx', markersize=6) m.drawstates() m.drawrivers() plt.show() """ Explanation: Charlotte rain gauge dataset 15 min data from 2003 - 2014 Show the data source locations. Red dots are available gauges, blue cross denotes the selected station. No specific criteria was chosen to select that specific station. Plot the positions of gauges. Optional code! Requires additional sources. Not easily copy-paste-able! End of explanation """ charlotte_rainfall = pd.read_csv('./charlotte_rg_2003-2014.csv', header = None) #charlotte_rainfall = pd.read_csv('./Raw_RG_Data/Charlotte_CRN_gage_2003.csv', header = None) #for i in range(2004,2014): # cur_rainfall = pd.read_csv('./Raw_RG_Data/Charlotte_CRN_gage_%d.csv' % i, header = None) # charlotte_rainfall = charlotte_rainfall.append(cur_rainfall, ignore_index=True) """ Explanation: Read in data End of explanation """ #charlotte_rainfall = charlotte_rainfall.iloc[:,:6] charlotte_rainfall.columns = ["year","month","day", "hour", "min", "Rainfall"] charlotte_rainfall.loc[:,'dt'] = pd.to_datetime(dict(year=charlotte_rainfall['year'], month=charlotte_rainfall['month'], day=charlotte_rainfall['day'], hour=charlotte_rainfall['hour'], minute=charlotte_rainfall['min'])) charlotte_rainfall.index=charlotte_rainfall['dt'] charlotte_rainfall.head() """ Explanation: Format data to year, month, day, hour, min and rainfall & select only ONE rain gauge End of explanation """ plt.plot(charlotte_rainfall['dt'], charlotte_rainfall["Rainfall"]) plt.ylabel('mm/15min') plt.gcf().autofmt_xdate() """ Explanation: Plot rain data as read End of explanation """ charlotte_rainfall["Rainfall"] = charlotte_rainfall["Rainfall"].replace(-99, np.nan) plt.plot(charlotte_rainfall['dt'], charlotte_rainfall["Rainfall"]) plt.ylabel('mm/15min') plt.gcf().autofmt_xdate() charlotte_rainfall.head() """ Explanation: Replace invalid data with NaNs and plot again End of explanation """ charlotte_24h_rainfall = pd.DataFrame() charlotte_24h_rainfall['mean_rain'] = charlotte_rainfall.Rainfall.resample('D').mean() charlotte_24h_rainfall['accum_rain'] = charlotte_rainfall.Rainfall.resample('D').sum() charlotte_24h_rainfall.head() plt.plot(charlotte_24h_rainfall["accum_rain"]) plt.ylabel('mm/24h') plt.gcf().autofmt_xdate() plt.plot(charlotte_24h_rainfall["mean_rain"]) plt.ylabel(r'mm/15min ($\varnothing$ of 24h)') plt.gcf().autofmt_xdate() """ Explanation: Resample the 10-min dataset to 24h accumulated rainfall data End of explanation """ charlotte_1h_rainfall = pd.DataFrame() charlotte_1h_rainfall['mean_rain'] = charlotte_rainfall.Rainfall.resample('H').mean() charlotte_1h_rainfall['accum_rain'] = charlotte_rainfall.Rainfall.resample('H').sum() charlotte_1h_rainfall.head() 
plt.plot(charlotte_1h_rainfall["accum_rain"]) plt.ylabel('mm/h') plt.gcf().autofmt_xdate() plt.plot(charlotte_1h_rainfall["mean_rain"]) plt.ylabel(r'mm/15min ($\varnothing$ of 1h)') plt.gcf().autofmt_xdate() """ Explanation: Resample 15 min data to 1h accumulated dataset End of explanation """ charlotte_summer_1h_rainfall = charlotte_1h_rainfall.loc[(charlotte_1h_rainfall.index.month>=4) & (charlotte_1h_rainfall.index.month<=9)] plt.plot(charlotte_summer_1h_rainfall["accum_rain"]) plt.ylabel('mm/h') plt.gcf().autofmt_xdate() """ Explanation: Select only summer months (April - Sept) End of explanation """ mask_start = (charlotte_1h_rainfall.index.month >= 1) & (charlotte_1h_rainfall.index.month <= 3) mask_end = (charlotte_1h_rainfall.index.month >= 10) & (charlotte_1h_rainfall.index.month <= 12) mask = mask_start | mask_end charlotte_winter_1h_rainfall = charlotte_1h_rainfall.loc[mask] plt.plot(charlotte_winter_1h_rainfall["accum_rain"]) plt.ylabel('mm/h') plt.gcf().autofmt_xdate() charlotte_winter_1h_rainfall.head() """ Explanation: Select only winter months (Oct - Mar) End of explanation """ charlotte_monthly_rainfall = pd.DataFrame() charlotte_monthly_rainfall['mean_rain'] = charlotte_rainfall.Rainfall.resample('M').mean() charlotte_monthly_rainfall['accum_rain'] = charlotte_rainfall.Rainfall.resample('M').sum() plt.plot(charlotte_monthly_rainfall["accum_rain"]) plt.ylabel('mm/month') plt.gcf().autofmt_xdate() plt.plot(charlotte_monthly_rainfall["mean_rain"]) plt.ylabel(r'mm/15min ($\varnothing$ per month)') plt.gcf().autofmt_xdate() """ Explanation: Resample 15 min dataset to monthly accumulated dataset End of explanation """ print('Mean: %s' % str(charlotte_rainfall.Rainfall.mean())) print('Std: %s' % str(charlotte_rainfall.Rainfall.std())) print('Skew: %s' % str(charlotte_rainfall.Rainfall.skew())) """ Explanation: Answering the assignments 1. 
General statistics for 24-hour and 15-min datasets: compute mean, standard deviation, skewness; plot histograms 15 min dataset Mean, standard deviation and skewness of the 15 min dataset End of explanation """ charlotte_rainfall.Rainfall.hist(bins = 100) plt.xlabel('mm/15min') plt.gca().set_yscale("log") """ Explanation: Histogram of the data End of explanation """ cur_data = charlotte_rainfall.Rainfall.loc[charlotte_rainfall.Rainfall>0] hist_d = plt.hist(cur_data, bins=100) plt.xlabel('mm/15min') plt.gca().set_yscale("log") """ Explanation: Histogram of the data without zeros End of explanation """ print('Mean: %s' % str(charlotte_24h_rainfall.accum_rain.mean())) print('Std: %s' % str(charlotte_24h_rainfall.accum_rain.std())) print('Skew: %s' % str(charlotte_24h_rainfall.accum_rain.skew())) """ Explanation: 24h accumulated dataset Mean, standard deviation and skewness of 24h accumulated dataset End of explanation """ charlotte_24h_rainfall.accum_rain.hist(bins = 100) plt.xlabel('mm/24h') plt.gca().set_yscale("log") charlotte_24h_rainfall.mean_rain.hist(bins = 100) plt.xlabel(r'mm/15min ($\varnothing$ per 24h)') plt.gca().set_yscale("log") """ Explanation: Histogram of the dataset End of explanation """ cur_data = charlotte_24h_rainfall.accum_rain.loc[charlotte_24h_rainfall.accum_rain>0] hist_d = plt.hist(cur_data, bins=100) plt.xlabel('mm/24h') plt.gca().set_yscale("log") """ Explanation: Histogram without zeros End of explanation """ charlotte_monthly_rainfall['mon'] = charlotte_monthly_rainfall.index.month charlotte_monthly_rainfall['year'] = charlotte_monthly_rainfall.index.year charlotte_monthly_rainfall.boxplot(column=['accum_rain'], by='mon', sym='+') plt.ylabel('mm/month') """ Explanation: 2. a. Analysis of seasonal cycles: create boxplots for monthly totals across all years Boxplot of monthly totals End of explanation """ charlotte_monthly_rainfall.dropna().boxplot(column=['accum_rain'], by='year', sym='+') plt.ylabel('mm/month') plt.gcf().autofmt_xdate() """ Explanation: Or on a yearly scale: End of explanation """ charlotte_1h_rainfall['hour'] = charlotte_1h_rainfall.index.hour charlotte_1h_rainfall.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: 2. b. Analysis of diurnal cycles: create boxplots for hourly totals for entire dataseries End of explanation """ cur_df = charlotte_1h_rainfall.copy() cur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan cur_df.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: Neglecting events < 1mm/h End of explanation """ cur_df = charlotte_1h_rainfall.copy() cur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan cur_df.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: Neglecting events < 3mm/h End of explanation """ pd.options.mode.chained_assignment = None # default='warn' charlotte_summer_1h_rainfall['hour'] = charlotte_summer_1h_rainfall.index.hour charlotte_summer_1h_rainfall.boxplot(column=['accum_rain'], by='hour', sym='+') """ Explanation: 2. c. 
Variation of diurnal cycles with seasons: create boxplots for hourly totals for summer season (April – September) and for winter season (October-March) Merge summer hourly data End of explanation """ cur_df = charlotte_summer_1h_rainfall.copy() cur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan cur_df.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: Neglecting events <1mm/hour End of explanation """ cur_df = charlotte_summer_1h_rainfall.copy() cur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan cur_df.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: Neglecting events <3mm/hour End of explanation """ charlotte_winter_1h_rainfall['hour'] = charlotte_winter_1h_rainfall.index.hour charlotte_winter_1h_rainfall.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: Merge hourly winter data End of explanation """ cur_df = charlotte_winter_1h_rainfall.copy() cur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan cur_df.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: Neglecting events <1mm/h End of explanation """ cur_df = charlotte_winter_1h_rainfall.copy() cur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan cur_df.boxplot(column=['accum_rain'], by='hour', sym='+') plt.ylabel('mm/h') """ Explanation: Neglecting events <3mm/h End of explanation """ charlotte_1h_exceeds = charlotte_1h_rainfall.accum_rain[charlotte_1h_rainfall.accum_rain>10] """ Explanation: 2. d. Diurnal cycles of intense storm events: Count nr of exceedances above 10 mm/h threshold for each hour of the day, for entire data series and for summer months only Show rainfall events > 10mm /h over entire 1h accumulated dataset End of explanation """ print(len(charlotte_1h_exceeds)) y = np.array(charlotte_1h_exceeds) N = len(y) x = range(N) width = 1 plt.bar(x, y, width) plt.ylabel('mm/h') """ Explanation: Amount of hourly events End of explanation """ charlotte_1h_exceeds_summer = charlotte_summer_1h_rainfall.accum_rain[charlotte_summer_1h_rainfall.accum_rain>10] y = np.array(charlotte_1h_exceeds_summer) N = len(y) x = range(N) width = 1 plt.bar(x, y, width) plt.ylabel('mm/h') """ Explanation: 10 mm/h events in summer periods End of explanation """ print(len(charlotte_1h_exceeds_summer)) """ Explanation: Amount of hourly events End of explanation """ plt.plot(charlotte_1h_exceeds) plt.gcf().autofmt_xdate() charlotte_1h_exceeds.hist(bins=100) from scipy.stats import genextreme x = np.linspace(0, 80, 1000) y = np.array(charlotte_1h_exceeds[:]) np.seterr(divide='ignore', invalid='ignore') genextreme.fit(y) pdf = plt.plot(x, genextreme.pdf(x, *genextreme.fit(y))) pdf_hist = plt.hist(y, bins=50, normed=True, histtype='stepfilled', alpha=0.8) """ Explanation: 3. Fit GEV-distribution for POT values in the time series 3. a. Create plots: histogram and GEV fit and interpret End of explanation """ genextreme.ppf((1-1/1), *genextreme.fit(y)) genextreme.ppf((1-1/10), *genextreme.fit(y)) genextreme.ppf((1-1/100), *genextreme.fit(y)) """ Explanation: 3. c. 
Compute rainfall amounts associated with return periods of 1 year, 10 years and 100 years End of explanation """ from scipy.stats import genpareto temp_monthly = charlotte_1h_rainfall.groupby(pd.TimeGrouper(freq='M')) block_max_y = np.array(temp_monthly.accum_rain.max()) print(block_max_y) print(len(block_max_y)) x = np.linspace(0, 100, 1000) pdf = plt.plot(x, genextreme.pdf(x, *genextreme.fit(block_max_y))) pdf_hist = plt.hist(block_max_y, bins=50, normed=True, histtype='stepfilled', alpha=0.8) """ Explanation: Update 10.10.2017 Block maxima & GEV End of explanation """ genextreme.fit(block_max_y) genextreme.ppf((1-1/10), *genextreme.fit(block_max_y)) """ Explanation: GEV and block maxima of monthly maxima of 1h data End of explanation """ pdf_bm = plt.plot(x, genpareto.pdf(x, *genpareto.fit(y))) pdf_hist_bm = plt.hist(y, bins=100, normed=True, histtype='stepfilled', alpha=0.8) """ Explanation: POT & GPD End of explanation """ genpareto.fit(y) genpareto.ppf((1-1/10), *genpareto.fit(y)) """ Explanation: GPD and POT of data>10mm/h End of explanation """ event_occurences = pd.DataFrame(charlotte_1h_exceeds) event_occurences['hour'] = event_occurences.index.hour event_occurences.boxplot(column=['accum_rain'], by='hour', sym='+') """ Explanation: Boxplot of POT values End of explanation """ event_occurences.hour.value_counts(sort=False) # plt.plot(asd.hour.value_counts(sort=False)) cur_hist = plt.hist(event_occurences.hour, bins=24, histtype='stepfilled') plt.xticks(range(24)) plt.xlabel('hour') """ Explanation: Number of occurences per hour End of explanation """
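"""
Explanation: The return-period calculations above can be wrapped in a small helper so the same formula is reused for any return period T (a sketch; it simply re-applies the genextreme.ppf(1 - 1/T, ...) pattern used earlier to the fitted POT values).
End of explanation
"""
# Sketch: reusable return-level helper based on the GEV fit of the POT values (y) above.
def return_level(params, T):
    # T is the return period in years; params come from genextreme.fit(...)
    return genextreme.ppf(1 - 1.0 / T, *params)

params_pot = genextreme.fit(y)
for T in (10, 100):
    print(T, 'year return level:', return_level(params_pot, T), 'mm/h')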
rflamary/POT
notebooks/plot_otda_d2.ipynb
mit
# Authors: Remi Flamary <remi.flamary@unice.fr> # Stanislas Chambon <stan.chambon@gmail.com> # # License: MIT License import matplotlib.pylab as pl import ot import ot.plot """ Explanation: OT for domain adaptation on empirical distributions This example introduces a domain adaptation in a 2D setting. It explicits the problem of domain adaptation and introduces some optimal transport approaches to solve it. Quantities such as optimal couplings, greater coupling coefficients and transported samples are represented in order to give a visual understanding of what the transport methods are doing. End of explanation """ n_samples_source = 150 n_samples_target = 150 Xs, ys = ot.datasets.make_data_classif('3gauss', n_samples_source) Xt, yt = ot.datasets.make_data_classif('3gauss2', n_samples_target) # Cost matrix M = ot.dist(Xs, Xt, metric='sqeuclidean') """ Explanation: generate data End of explanation """ # EMD Transport ot_emd = ot.da.EMDTransport() ot_emd.fit(Xs=Xs, Xt=Xt) # Sinkhorn Transport ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1) ot_sinkhorn.fit(Xs=Xs, Xt=Xt) # Sinkhorn Transport with Group lasso regularization ot_lpl1 = ot.da.SinkhornLpl1Transport(reg_e=1e-1, reg_cl=1e0) ot_lpl1.fit(Xs=Xs, ys=ys, Xt=Xt) # transport source samples onto target samples transp_Xs_emd = ot_emd.transform(Xs=Xs) transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=Xs) transp_Xs_lpl1 = ot_lpl1.transform(Xs=Xs) """ Explanation: Instantiate the different transport algorithms and fit them End of explanation """ pl.figure(1, figsize=(10, 10)) pl.subplot(2, 2, 1) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.xticks([]) pl.yticks([]) pl.legend(loc=0) pl.title('Source samples') pl.subplot(2, 2, 2) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.legend(loc=0) pl.title('Target samples') pl.subplot(2, 2, 3) pl.imshow(M, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Matrix of pairwise distances') pl.tight_layout() """ Explanation: Fig 1 : plots source and target samples + matrix of pairwise distance End of explanation """ pl.figure(2, figsize=(10, 6)) pl.subplot(2, 3, 1) pl.imshow(ot_emd.coupling_, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Optimal coupling\nEMDTransport') pl.subplot(2, 3, 2) pl.imshow(ot_sinkhorn.coupling_, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Optimal coupling\nSinkhornTransport') pl.subplot(2, 3, 3) pl.imshow(ot_lpl1.coupling_, interpolation='nearest') pl.xticks([]) pl.yticks([]) pl.title('Optimal coupling\nSinkhornLpl1Transport') pl.subplot(2, 3, 4) ot.plot.plot2D_samples_mat(Xs, Xt, ot_emd.coupling_, c=[.5, .5, 1]) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.title('Main coupling coefficients\nEMDTransport') pl.subplot(2, 3, 5) ot.plot.plot2D_samples_mat(Xs, Xt, ot_sinkhorn.coupling_, c=[.5, .5, 1]) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.title('Main coupling coefficients\nSinkhornTransport') pl.subplot(2, 3, 6) ot.plot.plot2D_samples_mat(Xs, Xt, ot_lpl1.coupling_, c=[.5, .5, 1]) pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.xticks([]) pl.yticks([]) pl.title('Main coupling 
coefficients\nSinkhornLpl1Transport') pl.tight_layout() """ Explanation: Fig 2 : plots optimal couplings for the different methods End of explanation """ # display transported samples pl.figure(4, figsize=(10, 4)) pl.subplot(1, 3, 1) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=0.5) pl.scatter(transp_Xs_emd[:, 0], transp_Xs_emd[:, 1], c=ys, marker='+', label='Transp samples', s=30) pl.title('Transported samples\nEmdTransport') pl.legend(loc=0) pl.xticks([]) pl.yticks([]) pl.subplot(1, 3, 2) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=0.5) pl.scatter(transp_Xs_sinkhorn[:, 0], transp_Xs_sinkhorn[:, 1], c=ys, marker='+', label='Transp samples', s=30) pl.title('Transported samples\nSinkhornTransport') pl.xticks([]) pl.yticks([]) pl.subplot(1, 3, 3) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=0.5) pl.scatter(transp_Xs_lpl1[:, 0], transp_Xs_lpl1[:, 1], c=ys, marker='+', label='Transp samples', s=30) pl.title('Transported samples\nSinkhornLpl1Transport') pl.xticks([]) pl.yticks([]) pl.tight_layout() pl.show() """ Explanation: Fig 3 : plot transported samples End of explanation """
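"""
Explanation: A common (rough) way to judge the adaptation is to train a simple classifier on the transported source samples and score it on the labelled target samples. This is a sketch, not part of the original example, and assumes scikit-learn is available.
End of explanation
"""
# Sketch: 1-nearest-neighbour accuracy on the target domain after each transport.
from sklearn.neighbors import KNeighborsClassifier

for name, Xs_mapped in [('EMD', transp_Xs_emd),
                        ('Sinkhorn', transp_Xs_sinkhorn),
                        ('SinkhornLpl1', transp_Xs_lpl1)]:
    knn = KNeighborsClassifier(n_neighbors=1).fit(Xs_mapped, ys)
    print(name, 'accuracy on target:', knn.score(Xt, yt))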
phasedchirp/Assorted-Data-Analysis
exercises/SlideRule-DS-Intensive/UD120/Evaluation.ipynb
gpl-2.0
import pickle import sys sys.path.append("../tools/") from feature_format import featureFormat, targetFeatureSplit data_dict = pickle.load(open("../final_project/final_project_dataset.pkl", "r") ) features_list = ["poi", "salary"] data = featureFormat(data_dict, features_list) labels, features = targetFeatureSplit(data) """ Explanation: Udacity Machine Learning Evaluation mini-project Prep stuff: End of explanation """ from sklearn.tree import DecisionTreeClassifier from sklearn.cross_validation import train_test_split features_train, features_test, labels_train, labels_test = train_test_split(features,labels,test_size=0.3,random_state=42) clf = DecisionTreeClassifier() clf.fit(features_train,labels_train) pred = clf.predict(features_test) """ Explanation: Training a decision tree on this starter data: End of explanation """ # ref http://stackoverflow.com/questions/10741346 import numpy as np unique, counts = np.unique(labels_test, return_counts=True) print "true labels" print np.asarray((unique, counts)).T print "predicted labels" unique, counts = np.unique(pred, return_counts=True) print np.asarray((unique, counts)).T """ Explanation: Counts of actual and predicted values: End of explanation """ print "number of true positives:",sum((labels_test==1) & (pred ==1)) """ Explanation: Which turn out to match up very poorly. No true positives. Just guessing 0 for everyone would in fact be more accurate. End of explanation """ from sklearn.metrics import precision_score, recall_score """ Explanation: Precision and Recall: End of explanation """ print "precision:",precision_score(labels_test,pred) print "recall:",recall_score(labels_test,pred) """ Explanation: These are not even slightly good news: End of explanation """ predictions = np.array([0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1]) true_labels = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0]) print "number of true positives:",sum((true_labels==1) & (predictions==1)) print "number of false positives:",sum((true_labels==0) & (predictions==1)) print "number of true negatives:",sum((true_labels==0) & (predictions==0)) print "number of false negatives:",sum((true_labels==1) & (predictions==0)) print "precision:", 6/(6+3.) print "recall:", 6/(6+2.) """ Explanation: Same thing with some fake data for comparison: End of explanation """
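"""
Explanation: The same quantities can be cross-checked with scikit-learn's other metric helpers on the fake data above (a small aside, not part of the original exercise).
End of explanation
"""
# Cross-check: confusion matrix and F1 score for the fake predictions above.
from sklearn.metrics import confusion_matrix, f1_score

print "confusion matrix:"
print confusion_matrix(true_labels, predictions)
print "F1 score:", f1_score(true_labels, predictions)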
mercybenzaquen/foundations-homework
foundations_hw/05/.ipynb_checkpoints/Homework5_NYT-checkpoint.ipynb
mit
#my IPA key b577eb5b46ad4bec8ee159c89208e220 #base url http://api.nytimes.com/svc/books/{version}/lists import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-05-10&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() print(best_seller.keys()) print(type(best_seller)) print(type(best_seller['results'])) print(len(best_seller['results'])) print(best_seller['results'][0]) mother_best_seller_results_2009 = best_seller['results'] for item in mother_best_seller_results_2009: print("This books ranks #", item['rank'], "on the list") #just to make sure they are in order for book in item['book_details']: print(book['title']) print("The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2009 were:") for item in mother_best_seller_results_2009: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title']) import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-05-09&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller_2010 = response.json() print(best_seller.keys()) print(best_seller_2010['results'][0]) mother_best_seller_2010_results = best_seller_2010['results'] print("The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2010 were:") for item in mother_best_seller_2010_results: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title']) import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-06-21&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() father_best_seller_results_2009 = best_seller['results'] print("The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2009 were:") for item in father_best_seller_results_2009: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title']) import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-06-20&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() father_best_seller_results_2010 = best_seller['results'] print("The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2010 were:") for item in father_best_seller_results_2010: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title']) """ Explanation: What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day? End of explanation """ import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() print(best_seller.keys()) print(len(best_seller['results'])) book_categories_2009 = best_seller['results'] for item in book_categories_2009: print(item['display_name']) import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2015-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() print(len(best_seller['results'])) book_categories_2015 = best_seller['results'] for item in book_categories_2015: print(item['display_name']) """ Explanation: 2) What are all the different book categories the NYT ranked in June 6, 2009? 
How about June 6, 2015? End of explanation """ import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") gadafi = response.json() print(gadafi.keys()) print(gadafi['response']) print(gadafi['response'].keys()) print(gadafi['response']['docs']) #so no results for GADAFI. print('The New York times has not used the name Gadafi to refer to Muammar Gaddafi') import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") gaddafi = response.json() print(gaddafi.keys()) print(gaddafi['response'].keys()) print(type(gaddafi['response']['meta'])) print(gaddafi['response']['meta']) print("'The New York times used the name Gaddafi to refer to Muammar Gaddafi", gaddafi['response']['meta']['hits'], "times") import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Kadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") kadafi = response.json() print(kadafi.keys()) print(kadafi['response'].keys()) print(type(kadafi['response']['meta'])) print(kadafi['response']['meta']) print("'The New York times used the name Kadafi to refer to Muammar Gaddafi", kadafi['response']['meta']['hits'], "times") import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Qaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") qaddafi = response.json() print(qaddafi.keys()) print(qaddafi['response'].keys()) print(type(qaddafi['response']['meta'])) print(qaddafi['response']['meta']) print("'The New York times used the name Qaddafi to refer to Muammar Gaddafi", qaddafi['response']['meta']['hits'], "times") """ Explanation: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names? Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy. End of explanation """ import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&end_date=19953112&sort=oldest&api-key=b577eb5b46ad4bec8ee159c89208e220") hipster = response.json() print(hipster.keys()) print(hipster['response'].keys()) print(hipster['response']['docs'][0]) hipster_info= hipster['response']['docs'] print('These articles all had the word hipster in them and were published in 1995') #ordered from oldest to newest for item in hipster_info: print(item['headline']['main'], item['pub_date']) for item in hipster_info: if item['headline']['main'] == "SOUND": print("This is the first article to mention the word hispter in 1995 and was titled:", item['headline']['main'],"and it was publised on:", item['pub_date']) print("This is the lead paragraph of", item['headline']['main'],item['lead_paragraph']) """ Explanation: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph? 
End of explanation """ import requests response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay marriage"&begin_date=19500101&end_date=19593112&api-key=b577eb5b46ad4bec8ee159c89208e220') marriage_1959 = response.json() print(marriage_1959.keys()) print(marriage_1959['response'].keys()) print(marriage_1959['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1959['response']['meta']['hits'], "between 1950-1959") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19600101&end_date=19693112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_1969 = response.json() print(marriage_1969.keys()) print(marriage_1969['response'].keys()) print(marriage_1969['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1969['response']['meta']['hits'], "between 1960-1969") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19700101&end_date=19783112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_1978 = response.json() print(marriage_1978.keys()) print(marriage_1978['response'].keys()) print(marriage_1978['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1978['response']['meta']['hits'], "between 1970-1978") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19800101&end_date=19893112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_1989 = response.json() print(marriage_1989.keys()) print(marriage_1989['response'].keys()) print(marriage_1989['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1989['response']['meta']['hits'], "between 1980-1989") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19900101&end_date=20003112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_2000 = response.json() print(marriage_2000.keys()) print(marriage_2000['response'].keys()) print(marriage_2000['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_2000['response']['meta']['hits'], "between 1990-2000") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20000101&end_date=20093112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_2009 = response.json() print(marriage_2009.keys()) print(marriage_2009['response'].keys()) print(marriage_2009['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_2009['response']['meta']['hits'], "between 2000-2009") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20100101&end_date=20160609&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_2016 = response.json() print(marriage_2016.keys()) print(marriage_2016['response'].keys()) print(marriage_2016['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_2016['response']['meta']['hits'], "between 2010-present") """ Explanation: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present? 
End of explanation """ import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycles&facet_field=section_name&api-key=b577eb5b46ad4bec8ee159c89208e220") motorcycles = response.json() print(motorcycles.keys()) print(motorcycles['response'].keys()) print(motorcycles['response']['facets']['section_name']['terms']) motorcycles_info= motorcycles['response']['facets']['section_name']['terms'] print(motorcycles_info) print("These are the sections that talk the most about motorcycles:") print("_________________") for item in motorcycles_info: print("The",item['term'],"section mentioned motorcycle", item['count'], "times") motorcycle_info= motorcycles['response']['facets']['section_name']['terms'] most_motorcycle_section = 0 section_name = "" for item in motorcycle_info: if item['count']>most_motorcycle_section: most_motorcycle_section = item['count'] section_name = item['term'] print(section_name, "is the sections that talks the most about motorcycles, with", most_motorcycle_section, "mentions of the word") """ Explanation: 6) What section talks about motorcycles the most? Tip: You'll be using facets End of explanation """ import requests response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=b577eb5b46ad4bec8ee159c89208e220') movies_reviews_20 = response.json() print(movies_reviews_20.keys()) print(movies_reviews_20['results'][0]) critics_pick = 0 not_a_critics_pick = 0 for item in movies_reviews_20['results']: print(item['display_title'], item['critics_pick']) if item['critics_pick'] == 1: print("-------------CRITICS PICK!") critics_pick = critics_pick + 1 else: print("-------------NOT CRITICS PICK!") not_a_critics_pick = not_a_critics_pick + 1 print("______________________") print("There were", critics_pick, "critics picks in the last 20 revies by the NYT") import requests response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=20&api-key=b577eb5b46ad4bec8ee159c89208e220') movies_reviews_40 = response.json() print(movies_reviews_40.keys()) import requests response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=40&api-key=b577eb5b46ad4bec8ee159c89208e220') movies_reviews_60 = response.json() print(movies_reviews_60.keys()) new_medium_list = movies_reviews_20['results'] + movies_reviews_40['results'] print(len(new_medium_list)) critics_pick = 0 not_a_critics_pick = 0 for item in new_medium_list: print(item['display_title'], item['critics_pick']) if item['critics_pick'] == 1: print("-------------CRITICS PICK!") critics_pick = critics_pick + 1 else: print("-------------NOT CRITICS PICK!") not_a_critics_pick = not_a_critics_pick + 1 print("______________________") print("There were", critics_pick, "critics picks in the last 40 revies by the NYT") new_big_list = movies_reviews_20['results'] + movies_reviews_40['results'] + movies_reviews_60['results'] print(new_big_list[0]) print(len(new_big_list)) critics_pick = 0 not_a_critics_pick = 0 for item in new_big_list: print(item['display_title'], item['critics_pick']) if item['critics_pick'] == 1: print("-------------CRITICS PICK!") critics_pick = critics_pick + 1 else: print("-------------NOT CRITICS PICK!") not_a_critics_pick = not_a_critics_pick + 1 print("______________________") print("There were", critics_pick, "critics picks in the last 60 revies by the NYT") """ Explanation: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60? 
Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them. End of explanation """ medium_list = movies_reviews_20['results'] + movies_reviews_40['results'] print(type(medium_list)) print(medium_list[0]) for item in medium_list: print(item['byline']) all_critics = [] for item in medium_list: all_critics.append(item['byline']) print(all_critics) unique_medium_list = set(all_critics) print(unique_medium_list) print("___________________________________________________") print("This is a list of the authors who have written the NYT last 40 movie reviews, in descending order:") from collections import Counter count = Counter(all_critics) print(count) print("___________________________________________________") print("This is a list of the top 3 authors who have written the NYT last 40 movie reviews:") count.most_common(3) """ Explanation: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews? End of explanation """
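"""
Explanation: The offset-based paging above can be wrapped in a small helper so any number of 20-review pages is fetched in one call. This is a sketch: get_reviews is a hypothetical helper name, and it assumes the endpoint keeps returning 20 results per offset step, as it did above.
End of explanation
"""
def get_reviews(pages, api_key):  # hypothetical helper, not part of the assignment starter code
    results = []
    for page in range(pages):
        response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json',
                                params={'offset': page * 20, 'api-key': api_key})
        results = results + response.json()['results']
    return results

last_60 = get_reviews(3, 'b577eb5b46ad4bec8ee159c89208e220')
print(len(last_60))
print(sum([r['critics_pick'] for r in last_60]), "critics' picks in the last 60 reviews")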
avallarino-ar/MCDatos
Notas/Notas-Python/01_NumPy_ArrayMatrices.ipynb
mit
import numpy as np   # Import numpy under the alias np.
np.empty((2, 3))     # Empty 2 x 3 matrix.
"""
Explanation: Numpy
A library for working with vectors and matrices.
It makes it possible to operate on any numerical data or array.
It provides basic operations such as addition and multiplication, as well as more complex ones such as the Fourier transform and linear algebra.
It also includes tools for integrating source code from other programming languages such as C/C++ or Fortran, which considerably increases its compatibility and range of application.
Arrays of empty values, ones and zeros:
Used when the dimensions are known but the contents are not yet, or when matrices of ones or zeros are needed.
The relevant functions are: empty, zeros and ones
End of explanation
"""
np.zeros((3, 1))     # 3 x 1 matrix of zeros
np.ones((3, 2))      # 3 x 2 matrix of ones
"""
Explanation: Caution: empty fills the array with whatever residual values happen to be in memory. They are not necessarily zeros.
End of explanation
"""
a = np.ones((3, 2))      # 3 x 2 matrix of ones
b = np.zeros_like(a)     # matrix of zeros with the same shape as a
b
"""
Explanation: These functions have counterparts with the _like suffix, which create arrays with the same shape as a given one:
empty_like, zeros_like and ones_like
End of explanation
"""
np.array([1, 2, 3])   # list
np.array([[1, -1],    # list of lists
          [2, 0]])
np.array((0, 1, -1))  # tuple
np.array(range(5))    # range
"""
Explanation: Arrays from lists
When the values of the array are known, it can be created with the array function, passing a list, tuple or, in general, any sequence as the argument. This is useful for small arrays.
End of explanation
"""
np.arange(5)
np.arange(2, 5)       # range with an explicit interval (start, stop)
np.arange(2, 14, 2)   # range with an explicit step
"""
Explanation: Numerical ranges
NumPy offers functions for creating numerical ranges. The arange function creates ranges of integers, similar to Python's built-in range function.
End of explanation
"""
np.linspace(0, 1, 11)            # 11 points between 0 and 1
np.logspace(2, 5, 4, base=10)    # 4 points on a logarithmic scale between 10^2 and 10^5
"""
Explanation: For ranges where the step is not an integer, the linspace and logspace functions are used. These functions take the number of elements as an argument instead of the step:
End of explanation
"""
a = np.array([[1, -1],   # list of lists
              [2, 0]])
type(a)       # check that it is an ndarray
a.ndim        # number of dimensions
a.size        # number of elements
a.dtype       # element type
a.itemsize    # size in bytes of each element
a.data        # memory buffer
a.shape       # dimensions
"""
Explanation: The ndarray class
Its main attributes are:
End of explanation
"""
from numpy import matrix                     # import matrix from the numpy module
a = matrix([[1,2,-4],[6,4,2],[-5,3,0]])      # 3 x 3 matrix
a
b = matrix([[3],[5],[7]])                    # 3 x 1 matrix
b
a*b       # matrix multiplication
a.T       # transpose of a
a.H       # Hermitian of a (conjugate transpose)
c = a.I   # inverse of a
c
c
"""
Explanation: Matrices
A two-dimensional array.
They are created with the matrix function.
End of explanation
"""
np.identity(3)   # identity matrix of size 3
"""
Explanation: Identity matrix
To create a square matrix with ones on the diagonal, the identity function is used.
End of explanation
"""
np.eye(4, 3)        # 4 x 3 matrix with ones on one diagonal and zeros elsewhere
np.eye(4, 3, k=-1)  # the k parameter controls which diagonal is filled with ones
"""
Explanation: For the more general, not necessarily square case, the eye function can be used.
End of explanation
"""
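"""
Explanation: As a small worked example combining the pieces above (a sketch, not part of the original notes): the inverse computed earlier can be used to solve the linear system a * x = b, although np.linalg.solve is generally the preferred, more numerically stable route.
End of explanation
"""
x1 = c * b                  # illustrative: solve a * x = b via the inverse c = a.I computed above
x2 = np.linalg.solve(a, b)  # direct solve, usually preferred over forming the inverse
x1, x2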
david4096/bioapi-examples
python_notebooks/1kg_rna_quantification_service.ipynb
apache-2.0
from ga4gh.client import client c = client.HttpClient("http://1kgenomes.ga4gh.org") #Obtain dataSet id REF: -> `1kg_metadata_service` dataset = c.search_datasets().next() """ Explanation: GA4GH RNA Quantification API Example This example illustrates the methods used to access the rna_quantification_service. Initialize client In this step we create a client object which will be used to communicate with the server. End of explanation """ counter = 0 for rna_quant_set in c.search_rna_quantification_sets(dataset_id=dataset.id): if counter > 5: break counter += 1 print(" id: {}".format(rna_quant_set.id)) print(" dataset_id: {}".format(rna_quant_set.dataset_id)) print(" name: {}\n".format(rna_quant_set.name)) """ Explanation: Search RNA Quantification Sets Method This instance returns a list of RNA quantification sets in a dataset. RNA quantification sets are a way to associate a group of related RNA quantifications. Note that we use the dataset_id obtained from the 1kg_metadata_service notebook. End of explanation """ single_rna_quant_set = c.get_rna_quantification_set( rna_quantification_set_id=rna_quant_set.id) print(" name: {}\n".format(single_rna_quant_set.name)) """ Explanation: Get RNA Quantification Set by id method This method obtains an single RNA quantification set by it's unique identifier. This id was chosen arbitrarily from the returned results. End of explanation """ counter = 0 for rna_quant in c.search_rna_quantifications( rna_quantification_set_id=rna_quant_set.id): if counter > 5: break counter += 1 print("RNA Quantification: {}".format(rna_quant.name)) print(" id: {}".format(rna_quant.id)) print(" description: {}\n".format(rna_quant.description)) test_quant = rna_quant """ Explanation: Search RNA Quantifications We can list all of the RNA quantifications in an RNA quantification set. The rna_quantification_set_id was chosen arbitrarily from the returned results. End of explanation """ single_rna_quant = c.get_rna_quantification( rna_quantification_id=test_quant.id) print(" name: {}".format(single_rna_quant.name)) print(" read_ids: {}".format(single_rna_quant.read_group_ids)) print(" annotations: {}\n".format(single_rna_quant.feature_set_ids)) """ Explanation: Get RNA Quantification by Id Similar to RNA quantification sets, we can retrieve a single RNA quantification by specific id. This id was chosen arbitrarily from the returned results. The RNA quantification reported contains details of the processing pipeline which include the source of the reads as well as the annotations used. End of explanation """ def getUnits(unitType): units = ["", "FPKM", "TPM"] return units[unitType] counter = 0 for expression in c.search_expression_levels( rna_quantification_id=test_quant.id): if counter > 5: break counter += 1 print("Expression Level: {}".format(expression.name)) print(" id: {}".format(expression.id)) print(" feature: {}".format(expression.feature_id)) print(" expression: {} {}".format(expression.expression, getUnits(expression.units))) print(" read_count: {}".format(expression.raw_read_count)) print(" confidence_interval: {} - {}\n".format( expression.conf_interval_low, expression.conf_interval_high)) """ Explanation: Search Expression Levels The feature level expression data for each RNA quantification is reported as a set of Expression Levels. The rna_quantification_service makes it easy to search for these. 
End of explanation """ counter = 0 for expression in c.search_expression_levels( rna_quantification_id=test_quant.id, feature_ids=[]): if counter > 5: break counter += 1 print("Expression Level: {}".format(expression.name)) print(" id: {}".format(expression.id)) print(" feature: {}\n".format(expression.feature_id)) """ Explanation: It is also possible to restrict the search to a specific feature or to request expression values exceeding a threshold amount. End of explanation """ counter = 0 for expression in c.search_expression_levels( rna_quantification_id=test_quant.id, threshold=1000): if counter > 5: break counter += 1 print("Expression Level: {}".format(expression.name)) print(" id: {}".format(expression.id)) print(" expression: {} {}\n".format(expression.expression, getUnits(expression.units))) """ Explanation: Let's look for some high expressing features. End of explanation """
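"""
Explanation: For downstream analysis it can be convenient to collect a handful of expression levels into a pandas DataFrame (a sketch, not part of the GA4GH client itself; it assumes pandas is installed and reuses test_quant and getUnits from above).
End of explanation
"""
import pandas as pd

records = []
for i, expression in enumerate(c.search_expression_levels(rna_quantification_id=test_quant.id)):
    if i >= 10:  # keep the example small
        break
    records.append({'name': expression.name,
                    'expression': expression.expression,
                    'units': getUnits(expression.units),
                    'raw_read_count': expression.raw_read_count})
expr_df = pd.DataFrame(records)
print(expr_df.head())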
adityaka/misc_scripts
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/05_06/Begin/.ipynb_checkpoints/Data Frame Plots-checkpoint.ipynb
bsd-3-clause
import pandas as pd import numpy as np import matplotlib.pyplot as plt plt.style.use('ggplot') """ Explanation: Data Frame Plots documentation: http://pandas.pydata.org/pandas-docs/stable/visualization.html End of explanation """ ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts = ts.cumsum() ts.plot() plt.show() """ Explanation: The plot method on Series and DataFrame is just a simple wrapper around plt.plot() If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely as show in the plot window. End of explanation """ df = pd.DataFrame(np.random.randn(1000, 4), index=pd.date_range('1/1/2016', periods=1000), columns=list('ABCD')) df = df.cumsum() plt.figure() df.plot() plt.show() """ Explanation: On DataFrame, plot() is a convenience to plot all of the columns, and include a legend within the plot. End of explanation """ df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum() df3['A'] = pd.Series(list(range(len(df)))) df3.plot(x='A', y='B') plt.show() df3.tail() """ Explanation: You can plot one column versus another using the x and y keywords in plot(): End of explanation """ plt.figure() df.ix[5].plot(kind='bar') plt.axhline(0, color='k') plt.show() df.ix[5] """ Explanation: Plots other than line plots Plotting methods allow for a handful of plot styles other than the default Line plot. These methods can be provided as the kind keyword argument to plot(). These include: ‘bar’ or ‘barh’ for bar plots ‘hist’ for histogram ‘box’ for boxplot ‘kde’ or 'density' for density plots ‘area’ for area plots ‘scatter’ for scatter plots ‘hexbin’ for hexagonal bin plots ‘pie’ for pie plots For example, a bar plot can be created the following way: End of explanation """ df2 = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd']) df2.plot.bar(stacked=True) plt.show() """ Explanation: stack bar chart End of explanation """ df2.plot.barh(stacked=True) plt.show() """ Explanation: horizontal bar chart End of explanation """ df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E']) df.plot.box() plt.show() """ Explanation: box plot End of explanation """ df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd']) df.plot.area() plt.show() """ Explanation: area plot End of explanation """ ser = pd.Series(np.random.randn(1000)) ser.plot.kde() plt.show() """ Explanation: Plotting with Missing Data Pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are dropped, left out, or filled depending on the plot type. | Plot Type | NaN Handling | | |----------------|-------------------------|---| | Line | Leave gaps at NaNs | | | Line (stacked) | Fill 0’s | | | Bar | Fill 0’s | | | Scatter | Drop NaNs | | | Histogram | Drop NaNs (column-wise) | | | Box | Drop NaNs (column-wise) | | | Area | Fill 0’s | | | KDE | Drop NaNs (column-wise) | | | Hexbin | Drop NaNs | | | Pie | Fill 0’s | | If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled, consider using fillna() or dropna() before plotting. density plot End of explanation """ from pandas.tools.plotting import lag_plot plt.figure() data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(np.linspace(-99 * np.pi, 99 * np.pi, num=1000))) lag_plot(data) plt.show() """ Explanation: lag plot Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the lag plot. 
Non-random structure implies that the underlying data are not random. End of explanation """
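"""
Explanation: Two of the plot kinds listed above but not demonstrated, scatter and hexbin, work the same way (a short illustrative sketch on freshly generated random data).
End of explanation
"""
# Illustrative only: scatter and hexagonal-bin plots on random data.
df = pd.DataFrame(np.random.rand(200, 2), columns=['a', 'b'])
df.plot.scatter(x='a', y='b')
plt.show()

df.plot.hexbin(x='a', y='b', gridsize=15)
plt.show()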
bblais/Classy
examples/Example kNearestNeighbor.ipynb
mit
%pylab inline from classy import * """ Explanation: Example for kNearestNeighbor using the Iris Data First we need the standard import End of explanation """ data=load_excel('data/iris.xls',verbose=True) """ Explanation: Load the Data End of explanation """ print(data.vectors.shape) print(data.targets) print(data.target_names) print(data.feature_names) """ Explanation: Look at the data it's a good idea to look at the data a little bit, know the shapes, etc... End of explanation """ subset=extract_features(data,[0,2]) plot2D(subset,legend_location='upper left') """ Explanation: since you can't plot 4 dimensions, try plotting some 2D subsets I don't like the automatic placement of the legend, so lets set it manually End of explanation """ C=kNearestNeighbor() """ Explanation: I don't want to do the classification on this subset, so make sure to use the entire data set. Classification First, we choose a classifier End of explanation """ data_train,data_test=split(data,test_size=0.2) """ Explanation: Split the data into test and train subsets... End of explanation """ timeit(reset=True) C.fit(data_train.vectors,data_train.targets) print("Training time: ",timeit()) print("On Training Set:",C.percent_correct(data_train.vectors,data_train.targets)) print("On Test Set:",C.percent_correct(data_test.vectors,data_test.targets)) """ Explanation: ...and then train... End of explanation """
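"""
Explanation: As an aside (not part of the classy API), the same train/test split can be scored with scikit-learn's k-nearest-neighbour classifier for comparison; this assumes scikit-learn is installed and that the vectors and targets attributes behave as plain arrays, as they are used above.
End of explanation
"""
# Sketch: scikit-learn comparison on the same split (an assumption-laden aside, not the classy workflow).
from sklearn.neighbors import KNeighborsClassifier

sk_knn = KNeighborsClassifier()
sk_knn.fit(data_train.vectors, data_train.targets)
print("sklearn kNN accuracy on test set:", sk_knn.score(data_test.vectors, data_test.targets))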
christoffkok/auxi.0
src/examples/tools/materialphysicalproperties/slags.ipynb
lgpl-3.0
from auxi.tools.materialphysicalproperties.slags import UrbainViscosityTx # create an instance of the model urbainTx = UrbainViscosityTx() # define the material state T = 1873.15 # [K] x = {'SiO2': 0.25, 'P2O5': 0.25, 'CaO': 0.25, 'MgO':0.25} # [mole fraction] # calculate the viscosity mu = urbainTx(T=T, x=x) print(urbainTx.symbol, mu, urbainTx.units) """ Explanation: Working with auxi's Slag Physical Property Models Purpose The purpose of this example is to introduce and demonstrate the slags model classes in auxi's material physical property tools package. Background The slags models provides you with the tools to calculate physical property values of liquid slags as a function of temperature and composition. The module currently contains only viscosity models, specifically those developed by Riboud and Urbain. Items Covered The following items in auxi are discussed and demonstrated in this example: * auxi.tools.materialphysicalproperties.slags.UrbainViscosityTx * auxi.tools.materialphysicalproperties.slags.UrbainViscosityTy * auxi.tools.materialphysicalproperties.slags.RiboudViscosityTx * auxi.tools.materialphysicalproperties.slags.RiboudViscosityTy Example Scope In this example we will address the following aspects: 1. Using the UrbainViscosityTx model 2. Using the UrbainViscosityTy model 3. Using the RiboudViscosityTx model 4. Using the RiboudViscosityTy model 5. Comparing results from these models with experimental data Demonstrations Using the UrbainViscosityTx Model This model calculates viscosity from temperature and composition expressed as mole fractions. End of explanation """ from auxi.tools.materialphysicalproperties.slags import UrbainViscosityTy # create an instance of the model urbainTy = UrbainViscosityTy() # define the material state T = 1873.15 # [K] y = {'SiO2': 0.25, 'P2O5': 0.25, 'CaO': 0.25, 'MgO':0.25} # [mass fraction] # calculate the viscosity mu = urbainTy(T=T, y=y) print(urbainTy.symbol, mu, urbainTy.units) """ Explanation: Using the UrbainViscosityTy Model This model calculates viscosity from temperature and composition expressed as mass fractions. End of explanation """ from auxi.tools.materialphysicalproperties.slags import RiboudViscosityTx # create an instance of the model riboudTx = RiboudViscosityTx() # define the material state T = 1873.15 # [K] x = {'SiO2': 0.25, 'P2O5': 0.25, 'CaO': 0.25, 'MgO':0.25} # [mole fraction] # calculate the viscosity mu = riboudTx(T=T, x=x) print(riboudTx.symbol, mu, riboudTx.units) """ Explanation: Using the RiboudViscosityTx Model This model calculates viscosity from temperature and composition expressed as mole fractions. End of explanation """ from auxi.tools.materialphysicalproperties.slags import RiboudViscosityTy # create an instance of the model riboudTy = RiboudViscosityTy() # define the material state T = 1873.15 # [K] y = {'SiO2': 0.25, 'P2O5': 0.25, 'CaO': 0.25, 'MgO':0.25} # [mass fraction] # calculate the viscosity mu = riboudTy(T=T, y=y) print(riboudTy.symbol, mu, riboudTy.units) """ Explanation: Using the RiboudViscosityTy Model This model calculates viscosity from temperature and composition expressed as mass fractions. End of explanation """ # import an experimental dataset from auxi from auxi.tools.materialphysicalproperties.slags import ds1 print(ds1) """ Explanation: Comparing with Experimental Data Let's compare the calculation results of the Urbain and Riboud models with experimental results from literature. First, let's import a data set, and see what it looks like. 
End of explanation """ # create lists to contain the calculation results measured = [] urbain = [] riboud = [] # calculate viscosities for the conditions in the data set for index, row in ds1.data.iterrows(): T = row['T'] y = {'FeO': row['FeO'], 'P2O5': row['P2O5'], 'MnO': row['MnO'], 'SiO2': row['SiO2']} measured.append(row['mu']) urbain.append(urbainTy(T=T, y=y)) riboud.append(riboudTy(T=T, y=y)) # import matplotlib so that we can plot with it import matplotlib.pyplot as plt %matplotlib inline # do the plot plt.plot(measured, urbain, "bo", alpha=0.7, label='Urbain') plt.plot(measured, riboud, "ro", alpha=0.7, label='Riboud') plt.xlabel('Experimental $\mu$ [Pa.s]') plt.ylabel('Model $\mu$ [Pa.s]') plt.legend() plt.show() """ Explanation: Now let's use the data set to test the viscosity models. End of explanation """
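"""
Explanation: Since the model objects are plain callables, it is easy to sweep temperature at a fixed composition (a sketch using only the models created above; the composition and temperature range are arbitrary illustrative choices).
End of explanation
"""
import numpy as np

# Illustrative sweep: viscosity versus temperature for one fixed composition.
Ts = np.linspace(1773.15, 1973.15, 21)  # [K]
y = {'SiO2': 0.25, 'P2O5': 0.25, 'CaO': 0.25, 'MgO': 0.25}  # [mass fraction]
plt.plot(Ts, [urbainTy(T=T, y=y) for T in Ts], label='Urbain')
plt.plot(Ts, [riboudTy(T=T, y=y) for T in Ts], label='Riboud')
plt.xlabel('T [K]')
plt.ylabel('$\mu$ [Pa.s]')
plt.legend()
plt.show()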
vbsteja/code
Python/ML_DL/DL/Neural-Networks-Demystified-master/Part 5 Numerical Gradient Checking.ipynb
apache-2.0
from IPython.display import YouTubeVideo YouTubeVideo('pHMzNW8Agq4') """ Explanation: <h1 align = 'center'> Neural Networks Demystified </h1> <h2 align = 'center'> Part 5: Numerical Gradient Checking </h2> <h4 align = 'center' > @stephencwelch </h4> End of explanation """ %pylab inline #Import Code from previous videos: from partFour import * def f(x): return x**2 epsilon = 1e-4 x = 1.5 numericalGradient = (f(x+epsilon)- f(x-epsilon))/(2*epsilon) numericalGradient, 2*x """ Explanation: Last time, we did a bunch of calculus to find the rate of change of our cost, J, with respect to our parameters, W. Although each calculus step was pretty straight forward, it’s still easy to make mistakes. What’s worse, is that our network doesn’t have a good way to tell us that it’s broken – code with incorrectly implemented gradients may appear to be functioning just fine. This is the most nefarious kind of error when building complex systems. Big, in-your-face errors suck initially, but it’s clear that you must fix this error for your work to succeed. More subtle errors can be more troublesome because they hide in your code and steal hours of your time, slowly degrading performance, while you wonder what the problem is. A good solution here is to test the gradient computation part of our code, just as developer would unit test new portions of their code. We’ll combine a simple understanding of the derivative with some mild cleverness to perform numerical gradient checking. If our code passes this test, we can be quite confident that we have computed and coded up our gradients correctly. To get started, let’s quickly review derivatives. Derivates tell us the slope, or how steep a function is. Once you’re familiar with calculus, it’s easy to take for granted the inner workings of the derivative - we just accept that the derivative of x^2 is 2x by the power rule. However, depending on how mean your calculus teacher was, you may have spent months not being taught the power rule, and instead required to compute derivatives using the definition. Taking derivatives this way is a bit tedious, but still important - it provides us a deeper understanding of what a derivative is, and it’s going to help us solve our current problem. The definition of the derivative is really a glorified slope formula. The numerator gives us the change in y values, while the denominator is convenient way to express the change in x values. By including the limit, we are applying the slope formula across an infinitely small region – it’s like zooming in on our function, until it becomes linear. The definition tells us to zoom in until our x distance is infinitely small, but computers can’t really handle infinitely small numbers, especially when they’re in the bottom parts of fractions - if we try to plug in something too small, we will quickly lose precision. The good news here is that if we plug in something reasonable small, we can still get surprisingly good numerical estimates of the derivative. We’ll modify our approach slightly by picking a point in the middle of the interval we would like to test, and call the distance we move in each direction epsilon. Let’s test our method with a simple function, x squared. We’ll choose a reasonable small value for epsilon, and compute the slope of x^2 at a given point by finding the function value just above and just below our test point. We can then compare our result to our symbolic derivative 2x, at the test point. If the numbers match, we’re in business! 
End of explanation """ class Neural_Network(object): def __init__(self): #Define Hyperparameters self.inputLayerSize = 2 self.outputLayerSize = 1 self.hiddenLayerSize = 3 #Weights (parameters) self.W1 = np.random.randn(self.inputLayerSize,self.hiddenLayerSize) self.W2 = np.random.randn(self.hiddenLayerSize,self.outputLayerSize) def forward(self, X): #Propogate inputs though network self.z2 = np.dot(X, self.W1) self.a2 = self.sigmoid(self.z2) self.z3 = np.dot(self.a2, self.W2) yHat = self.sigmoid(self.z3) return yHat def sigmoid(self, z): #Apply sigmoid activation function to scalar, vector, or matrix return 1/(1+np.exp(-z)) def sigmoidPrime(self,z): #Gradient of sigmoid return np.exp(-z)/((1+np.exp(-z))**2) def costFunction(self, X, y): #Compute cost for given X,y, use weights already stored in class. self.yHat = self.forward(X) J = 0.5*sum((y-self.yHat)**2) return J def costFunctionPrime(self, X, y): #Compute derivative with respect to W and W2 for a given X and y: self.yHat = self.forward(X) delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3)) dJdW2 = np.dot(self.a2.T, delta3) delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2) dJdW1 = np.dot(X.T, delta2) return dJdW1, dJdW2 #Helper Functions for interacting with other classes: def getParams(self): #Get W1 and W2 unrolled into vector: params = np.concatenate((self.W1.ravel(), self.W2.ravel())) return params def setParams(self, params): #Set W1 and W2 using single paramater vector. W1_start = 0 W1_end = self.hiddenLayerSize * self.inputLayerSize self.W1 = np.reshape(params[W1_start:W1_end], (self.inputLayerSize , self.hiddenLayerSize)) W2_end = W1_end + self.hiddenLayerSize*self.outputLayerSize self.W2 = np.reshape(params[W1_end:W2_end], (self.hiddenLayerSize, self.outputLayerSize)) def computeGradients(self, X, y): dJdW1, dJdW2 = self.costFunctionPrime(X, y) return np.concatenate((dJdW1.ravel(), dJdW2.ravel())) """ Explanation: Add helper functions to our neural network class: End of explanation """ def computeNumericalGradient(N, X, y): paramsInitial = N.getParams() numgrad = np.zeros(paramsInitial.shape) perturb = np.zeros(paramsInitial.shape) e = 1e-4 for p in range(len(paramsInitial)): #Set perturbation vector perturb[p] = e N.setParams(paramsInitial + perturb) loss2 = N.costFunction(X, y) N.setParams(paramsInitial - perturb) loss1 = N.costFunction(X, y) #Compute Numerical Gradient numgrad[p] = (loss2 - loss1) / (2*e) #Return the value we changed to zero: perturb[p] = 0 #Return Params to original value: N.setParams(paramsInitial) return numgrad """ Explanation: We can use the same approach to numerically evaluate the gradient of our neural network. It’s a little more complicated this time, since we have 9 gradient values, and we’re interested in the gradient of our cost function. We’ll make things simpler by testing one gradient at a time. We’ll “perturb” each weight - adding epsilon to the current value and computing the cost function, subtracting epsilon from the current value and computing the cost function, and then computing the slope between these two values. End of explanation """ NN = Neural_Network() numgrad = computeNumericalGradient(NN, X, y) numgrad grad = NN.computeGradients(X,y) grad """ Explanation: We’ll repeat this process across all our weights, and when we’re done we’ll have a numerical gradient vector, with the same number of values as we have weights. It’s this vector we would like to compare to our official gradient calculation. 
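One way to turn that comparison into the kind of unit test mentioned earlier is to wrap both gradient computations and the norm-based comparison in a single helper. The sketch below is an illustration added here, not code from the original series; the tolerance of 1e-7 is an assumed threshold you may want to tune:

def gradient_check(N, X, y, tolerance=1e-7):
    #Numerical and analytic gradients for the same network and data
    numgrad = computeNumericalGradient(N, X, y)
    grad = N.computeGradients(X, y)
    #Relative difference between the two gradient vectors
    relative_error = np.linalg.norm(grad - numgrad) / np.linalg.norm(grad + numgrad)
    return relative_error, relative_error < tolerance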
We see that our vectors appear very similar, which is a good sign, but we need to quantify just how similar they are. End of explanation """ norm(grad-numgrad)/norm(grad+numgrad) """ Explanation: A nice way to do this is to divide the norm of the difference by the norm of the sum of the vectors we would like to compare. Typical results should be on the order of 10^-8 or less if you’ve computed your gradient correctly. End of explanation """
ioam/scipy-2017-holoviews-tutorial
solutions/00-welcome-with-solutions.ipynb
bsd-3-clause
from IPython.core import page with open('../README.rst', 'r') as f: page.page(f.read()) """ Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a> <div style="float:right;"><h2>00. Introduction and Setup</h2></div> <img src="./assets/tutorial_app.gif"></img> Welcome to the HoloViews+Bokeh SciPy 2017 tutorial! This notebook serves as the homepage of the tutorial, including a general overview, instructions to check that everything is installed properly, and a table of contents listing each tutorial section. What is this all about? HoloViews is an open-source Python library that makes it simpler to explore your data and communicate the results to others. Compared to other tools, the most important feature of HoloViews is that: HoloViews lets you work seamlessly with both the data and its graphical representation. When using HoloViews, the focus is on bundling your data together with the appropriate metadata to support both analysis and plotting, making your raw data and its visualization equally accessible at all times. This tutorial will introduce HoloViews and guide you through the process of building rich, deployable visualizations based on Bokeh, Datashader, and (briefly) matplotlib. Index and Schedule This four-hour tutorial is broken down into the following sections: 40 min &nbsp;1 - Introduction: Get started by creating a variety of different HoloViews "elements". 20 min &nbsp;2 - Customizing visual appearance: How to change the appearance and output format of elements. 30 min &nbsp;3 - Exploration with containers: Using HoloViews "containers" for quick, easy data exploration. 15 min &nbsp;Break 30 min &nbsp;4 - Working with tabular data: Exploring a tabular (columnar) dataset. 20 min &nbsp;5 - Working with gridded data: Exploring a gridded (n-dimensional) dataset. 30 min &nbsp;6 - Custom interactivity: Using HoloViews "streams" to add interactivity to your visualizations. &nbsp;&nbsp;5 min &nbsp;Break 20 min &nbsp;7 - Working with large data: Using datasets too large to feed directly to your browser. 30 min &nbsp;8 - Deploying Bokeh Apps: Deploying your visualizations using Bokeh server. Related links You will find extensive support material on our website holoviews.org. In particular, you may find these links useful during the tutorial: Reference gallery: Visual reference of all elements and containers, along with some other components Getting started guide: Covers some of the same topics as this tutorial, but without exercises Getting set up Please consult the tutorial repository README for instructions on setting up your environment. 
Here is the condensed version of these instructions for unix-based systems (Linux or Mac OS X): bash $ conda env create -f environment.yml $ source activate hvtutorial $ cd notebooks If you have any problems with running these instructions, you can conveniently view the full instructions within this notebook by running the following cell: End of explanation """ import holoviews as hv hv.__version__ """ Explanation: If you created the environment last week, make sure to git pull, activate the hvtutorial environment in the notebooks directory and run: git pull conda env update -f ../environment.yml Now you can launch the notebook server: bash $ jupyter notebook --NotebookApp.iopub_data_rate_limit=100000000 Once the environment is set up, the following cell should print '1.8.1': End of explanation """ hv.extension('bokeh', 'matplotlib') """ Explanation: And you should see the HoloViews logo after running the following cell: End of explanation """ import bokeh import matplotlib import pandas import datashader import dask import geoviews """ Explanation: The next cell tests the key imports needed for this tutorial: End of explanation """ import os if not os.path.isfile('./assets/nyc_taxi.csv'): print('Taxi dataset not found.') """ Explanation: Lastly, let's make sure the large taxi dataset is available - instructions for acquiring this dataset may be found in the README: End of explanation """ lines = ['import holoviews as hv', 'hv.extension.case_sensitive_completion=True', "hv.Dataset.datatype = ['dataframe']+hv.Dataset.datatype"] print('\n'.join(lines)) """ Explanation: Recommended configuration The following configuration options are recommended additions to your '~/.holoviews.rc' file as they improve the tutorial experience and will be the default behaviour in future: End of explanation """ rcpath = os.path.join(os.path.expanduser('~'), '.holoviews.rc') if not os.path.isfile(rcpath): with open(rcpath, 'w') as f: f.write('\n'.join(lines)) """ Explanation: If you do not have a holoviews.rc already, simply run the following cell to generate one containing the above lines: End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/miroc-es2h/ocnbgchem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'ocnbgchem') """ Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: MIROC Source ID: MIROC-ES2H Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:40 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) """ Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnotic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Decribe transport scheme if different than that of ocean model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) """ Explanation: 5.2. River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from explicit sediment model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.2. CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.8. N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.16. 
SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. 
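As an illustrative aside that is not part of the official MIROC-ES2H documentation, a filled-in property cell simply replaces the placeholder in the template with one of the listed valid choices, for example:

# Hypothetical example only - the real value must come from the model's documentation
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
DOC.set_value("OMIP protocol")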
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) """ Explanation: 10. 
Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) """ Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) """ Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? 
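The same pattern applies to boolean properties; as a hypothetical illustration only (not the actual MIROC-ES2H answer):

# Hypothetical example only - replace with the model's real answer
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
DOC.set_value(False)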
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) """ Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation """
Xilinx/meta-petalinux
recipes-multimedia/gstreamer/gstreamer-vcu-notebooks/vcu-demo-streamin-decode-display.ipynb
mit
from IPython.display import HTML HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''') """ Explanation: Video Codec Unit (VCU) Demo Example: STREAM_IN->DECODE ->DISPLAY Introduction Video Codec Unit (VCU) in ZynqMP SOC is capable of encoding and decoding AVC/HEVC compressed video streams in real time. This notebook example acts as Client pipeline in streaming use case. It needs to be run along with Server notebook (vcu-demo-transcode-to-streamout. ipynb or vcu-demo-camera-encode-streamout. ipynb). It receives encoded data over network, decode using VCU and render it on DP/HDMI Monitor. Implementation Details <img src="pictures/block-diagram-streamin-decode.png" align="center" alt="Drawing" style="width: 600px; height: 200px"/> This example requires two boards, board-1 is used for transcode and stream-out (as a server) and board 2 is used for streaming-in and decode purpose (as a client) or VLC player on the host machine can be used as client instead of board-2 (More details regarding Test Setup for board-1 can be found in transcode → stream-out Example). Note: This notebook needs to be run along with "vcu-demo-transcode-to-streamout.ipynb" or "vcu-demo-camera-encode-streamout.ipynb". The configuration settings below are for Client-side pipeline. Board Setup Board 2 is used for streaming-in and decode purpose (as a client) 1. Connect 4k DP/HDMI display to board. 2. Connect serial cable to monitor logs on serial console. 3. If Board is connected to private network, then export proxy settings in /home/root/.bashrc file on board as below, - create/open a bashrc file using "vi ~/.bashrc" - Insert below line to bashrc file - export http_proxy="< private network proxy address >" - export https_proxy="< private network proxy address >" - Save and close bashrc file. 4. Connect two boards in the same network so that they can access each other using IP address. 5. Check server IP on server board. - root@zcu106-zynqmp:~#ifconfig 6. Check client IP. 7. Check connectivity for board-1 & board-2. - root@zcu106-zynqmp:~#ping <board-2's IP> 8. Run stream-in → Decode on board-2 Create test.sdp file on host with below content (Add separate line in test.sdp for each item below) and play test.sdp on host machine. 1. v=0 c=IN IP4 <Client machine IP address> 2. m=video 50000 RTP/AVP 96 3. a=rtpmap:96 H264/90000 4. a=framerate=30 Trouble-shoot for VLC player setup: 1. IP4 is client-IP address 2. H264/H265 is used based on received codec type on the client 3. Turn-off firewall in host machine if packets are not received to VLC. 
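For convenience, the test.sdp contents listed above can also be written out from Python. This is only a sketch: the IP address below is a placeholder that must be replaced with your client machine's IP, and the H264 payload line should be swapped for H265 if that is the codec being streamed.

# Write a minimal test.sdp matching the lines listed above
client_ip = "192.168.1.100"  # placeholder, replace with the client machine IP
sdp_lines = ["v=0",
             "c=IN IP4 " + client_ip,
             "m=video 50000 RTP/AVP 96",
             "a=rtpmap:96 H264/90000",
             "a=framerate=30"]
with open("test.sdp", "w") as f:
    f.write("\n".join(sdp_lines) + "\n")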
End of explanation """ from ipywidgets import interact import ipywidgets as widgets from common import common_vcu_demo_streamin_decode_display import os from ipywidgets import HBox, VBox, Text, Layout """ Explanation: Run the Demo End of explanation """ codec_type=widgets.RadioButtons( options=['avc', 'hevc'], description='Codec Type:', disabled=False) video_sink={'kmssink':['DP', 'HDMI'], 'fakevideosink':['none']} def print_video_sink(VideoSink): pass def select_video_sink(VideoCodec): display_type.options = video_sink[VideoCodec] sink_name = widgets.RadioButtons(options=sorted(video_sink.keys(), key=lambda k: len(video_sink[k]), reverse=True), description='Video Sink:') init = sink_name.value display_type = widgets.RadioButtons(options=video_sink[init], description='Display:') j = widgets.interactive(print_video_sink, VideoSink=display_type) i = widgets.interactive(select_video_sink, VideoCodec=sink_name) HBox([codec_type, i, j]) """ Explanation: Video End of explanation """ audio_sink={'none':['none'], 'aac':['auto','alsasink','pulsesink'],'vorbis':['auto','alsasink','pulsesink']} audio_src={'none':['none'], 'aac':['auto','alsasrc','pulsesrc'],'vorbis':['auto','alsasrc','pulsesrc']} #val=sorted(audio_sink, key = lambda k: (-len(audio_sink[k]), k)) def print_audio_sink(AudioSink): pass def print_audio_src(AudioSrc): pass def select_audio_sink(AudioCodec): audio_sinkW.options = audio_sink[AudioCodec] audio_srcW.options = audio_src[AudioCodec] audio_codecW = widgets.RadioButtons(options=sorted(audio_sink.keys(), key=lambda k: len(audio_sink[k])), description='Audio Codec:') init = audio_codecW.value audio_sinkW = widgets.RadioButtons(options=audio_sink[init], description='Audio Sink:') audio_srcW = widgets.RadioButtons(options=audio_src[init], description='Audio Src:') j = widgets.interactive(print_audio_sink, AudioSink=audio_sinkW) i = widgets.interactive(select_audio_sink, AudioCodec=audio_codecW) HBox([i, j]) """ Explanation: Audio End of explanation """ kernel_recv_buffer_size=widgets.Text(value='', placeholder='(optional) 16000000', description='Kernel Recv Buf Size:', style={'description_width': 'initial'}, #layout=Layout(width='33%', height='30px'), disabled=False) port_number=widgets.Text(value='', placeholder='(optional) 50000, 42000', description=r'Port No:', #style={'description_width': 'initial'}, # disabled=False) #kernel_recv_buffer_size HBox([kernel_recv_buffer_size, port_number]) entropy_buffers=widgets.Dropdown( options=['2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'], value='5', description='Entropy Buffers Nos:', style={'description_width': 'initial'}, disabled=False,) show_fps=widgets.Checkbox( value=False, description='show-fps', #style={'description_width': 'initial'}, disabled=False) HBox([entropy_buffers, show_fps]) from IPython.display import clear_output from IPython.display import Javascript def run_all(ev): display(Javascript('IPython.notebook.execute_cells_below()')) def clear_op(event): clear_output(wait=True) return button1 = widgets.Button( description='Clear Output', style= {'button_color':'lightgreen'}, #style= {'button_color':'lightgreen', 'description_width': 'initial'}, layout={'width': '300px'} ) button2 = widgets.Button( description='', style= {'button_color':'white'}, #style= {'button_color':'lightgreen', 'description_width': 'initial'}, layout={'width': '38px'}, disabled=True ) button1.on_click(run_all) button1.on_click(clear_op) def start_demo(event): #clear_output(wait=True) arg = 
common_vcu_demo_streamin_decode_display.cmd_line_args_generator(port_number.value, codec_type.value, audio_codecW.value, display_type.value, kernel_recv_buffer_size.value, sink_name.value, entropy_buffers.value, show_fps.value, audio_sinkW.value); #sh vcu-demo-streamin-decode-display.sh $arg > logs.txt 2>&1 !sh vcu-demo-streamin-decode-display.sh $arg return button = widgets.Button( description='click to start vcu-stream_in-decode-display demo', style= {'button_color':'lightgreen'}, #style= {'button_color':'lightgreen', 'description_width': 'initial'}, layout={'width': '350px'} ) button.on_click(start_demo) HBox([button, button2, button1]) """ Explanation: Advanced options: End of explanation """
iris-edu/ispaq
EXAMPLES/Example3_plotPDFs.ipynb
lgpl-3.0
import sqlite3 import pandas as pd import matplotlib.pyplot as plt from matplotlib.dates import DateFormatter import matplotlib.dates as mdates import numpy as np import datetime """ Explanation: Note: In this directory, there are two examples using PDFs: Example 3 - Plot PDF for a station, and Example 4 - Calculate PDFs from PSDs. These two examples are provided in order to highlight different ways that you can use the PDF and PSD values that ISPAQ generates. To be specific, the difference between the two examples are: Example 3 - Plot PDF for a station: Example 3 uses PDFs that already exist in the ISPAQ example database. This means that they have been calculated using an ispaq.py command with the --output db --db_name ispaq_example.db options. This is a great way to do it, especially if you plan to run the PSDs and PDFs at the same time, say on some sort of regular schedule. In that case, you might as well calculate both in the same command and store them both in the ISPAQ database for later retrieval. Additionally, we have tried to make it simple to calculate PDFs in ISPAQ for cases where you already have PSDs for the time span you are interested in. For example, PDFs calculation does not require seismic data since it instead reads in existing PSDs. That means that if you, the user, have been calculating daily PSDs for the past year, you don’t need to load a year’s worth of data to calculate a year-long PDF - you can just use the existing PSDs! By calculating that year-long PDF using ISPAQ, it will be saved to either the database or the csv file and you will be able to retrieve it later. Example 4 - Calculate PDFs from PSDs: Example 4 will calculate PDFs on the fly, meaning that they do not need to exist in the ISPAQ metric database, nor will they be saved to the ISPAQ metric database. Why would you want to do this if you can simply use an ispaq.py command to calculate and save the PDFs in the database? Here are a couple possible reasons: 1) You may want to calculate PDFs on an arbitrary timeframe but don't feel the need to save the PDF values, say if you are just poking around at or investigating changes in the data and don't want to clutter the database. 2) To prevent the ISPAQ database from growing too complicated, the pdf table in the ISPAQ database is very simple and PDFs values are stored with the start and end times used to calculate that particular PDF. If you calculate daily PDFs for a week and then additionally calculate a week-long PDF, the database will store 8 PDFs - one for each day in the week, and one that spans the entire week. This means that, even if you have used ISPAQ to calculate your arbitrary time frame, you must know the specific start and end times of the PDF that you are looking to retrieve. If you look for a time range using less-than and greater-than (<>) instead of equals-to (==) then you risk retrieving multiple PDFs, including ones that you did not intend. By using this on-the-fly method, you bypass this risk since PSDs are stored by the individual PSD (usually an hour span, can vary depending on the sample rate of the data), and only those PSDs that are needed to calculate the PDF are retrieved. Both methods are valid and can be useful in different situations. Example 3 - Plot PDF for a station The intent of this series of Jupyter Notebooks is to demonstrate how metrics can be retrieved from the ISPAQ example sqlite database and provide some ideas on how to use or plot those metrics. 
This example creates a PDF plot for a station using existing ISPAQ PDF values. It requires that we have the PDF values already calculated for the target for the requested days, and those values should live in the ISPAQ example database. To generate PDFs, corrected PSD values must already exist. If they do not yet exist, then you can run them via (this will take several minutes): ./run_ispaq.py -M psd_corrected -S ANMO --starttime 2020-10-01 --endtime 2020-10-16 --output db --db_name ispaq_example.db To calculate PDF values: ./run_ispaq.py -M pdf -S ANMO --starttime 2020-10-01 --endtime 2020-10-16 --output db --db_name ispaq_example.db --pdf_interval aggregated Note: The above command will also create a PDF plot if the pdf_type parameter is set to 'plot' in the preference file or on the command line. The plot created in this Jupyter notebook has a different color scheme from the default plots and does not include the noise model or the max/mode/min curves. Or to calculate both PSDs and PDFs at the same time: ./run_ispaq.py -M psd_corrected,pdf -S ANMO --starttime 2020-10-01 --endtime 2020-10-16 --output db --db_name ispaq_example.db --pdf_interval aggregated This example will assume that the above command has already been run and the PDFs already exist in the database. To begin, we need to import the necessary modules: End of explanation """ def find_nearest(array, value): array = np.asarray(array) idx = (np.abs(array - value)).argmin() return idx """ Explanation: Because PDFs are calculated for set frequency bins, which depend on the sample rate of the data, we create a simple function that will help us with placing our tick marks in the right location in the plot. End of explanation """ db_name = '../ispaq_example.db' metric = 'pdf' startDate = '2020-10-01T00:00:00.000000Z' # Full time is important for retrieving PDFs endDate = '2020-10-15T23:59:59.000000Z' target = 'IU.ANMO.00.BH1.M' startdate = startDate.split('T')[0] enddate = endDate.split('T')[0] filename = f'example3_{target}_{startdate}_{enddate}_PDF.png' """ Explanation: And now set some variables. End of explanation """ SQLcommand = f"SELECT * FROM {metric} WHERE start = '{startDate}' " \ f"and end = '{endDate}' and (target = '{target}');" print(SQLcommand) """ Explanation: The first step is to create a query that will be used to retrieve the PDFs. End of explanation """ try: conn = sqlite3.connect(db_name) DF = pd.read_sql_query(SQLcommand, conn, parse_dates=['start','end']) conn.close except: print(f"Unable to connect to or find the {metric} table in the database {db_name}") if DF.empty: print("Empty return: there are no PDFs that were retrieved") print(DF) """ Explanation: Create a connection to the database and run the query, loading it into a pandas dataframe End of explanation """ for frequency in DF['frequency'].unique(): # Sum hits for total column DF.loc[DF['frequency'] == frequency, 'total'] = sum(DF[DF['frequency'] == frequency]['hits']) """ Explanation: Sum up the total number of hits for each frequency: End of explanation """ DF['percent'] = DF['hits'] / DF['total'] * 100 """ Explanation: For each frequency-power bin, calculate what percentage of the total hits for that frequency are at that power. End of explanation """ p1 = int(min(DF['power'].unique())) p2 = int(max(DF['power'].unique())) if p1 > -190: p1 = -190 if p2 < -90: p2 = -90 powers = sorted(range(p1,p2+1), reverse=True) freqs = sorted(DF['frequency'].unique(),reverse = True) """ Explanation: Create a minimum range of powers (Y-axis) for better viewing. 
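A side note on the hits and percent computation a few cells above: the per-frequency loop can also be written in a vectorized form with pandas. This is just an equivalent sketch, assuming DF has the 'frequency' and 'hits' columns shown earlier:

# Vectorized equivalent of the per-frequency summing loop
DF['total'] = DF.groupby('frequency')['hits'].transform('sum')
DF['percent'] = DF['hits'] / DF['total'] * 100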
End of explanation """ plotDF = pd.DataFrame(0,index=powers,columns=freqs) nonZeroFreqs=[] for power in powers: for freq in freqs: value = DF[(DF['frequency']==freq) & (DF['power']== power)]['percent'].values try: plotDF.loc[power,freq] = value[0] if value[0] != 0: # Keep track of the frequencies that have hits, for axes limits nonZeroFreqs.append(freq) except: continue """ Explanation: Create a new dataframe for plotting: rows are powers, columns are periods, value is percent of hits End of explanation """ plotList = plotDF.values.tolist() """ Explanation: Matplotlib imshow takes a list (matrix) of values, so convert the dataframe to a list End of explanation """ # Set up plotting -- color map cmap = plt.get_cmap('gist_heat', 3000) # You can change the colormap here cmaplist = [cmap(i) for i in range(cmap.N)] # convert the first nchange to fade from white, so that anywhere without any hits (or very few) is white nchange = 100 for i in range(nchange): first = cmaplist[nchange][0] second = cmaplist[nchange][1] third = cmaplist[nchange][2] scaleFactor = (nchange-1-i)/float(nchange) df = ((1-first) * scaleFactor) + first ds = ((1-second)* scaleFactor) + second dt = ((1-third) * scaleFactor) + third cmaplist[i] = (df, ds, dt, 1) cmaplist[0] = (1,1,1,1) cmap = cmap.from_list('Custom cmap', cmaplist, cmap.N) # Set up plotting -- axis labeling and ticks periodPoints = [0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000] freqPoints = [1/float(i) for i in periodPoints] xfilter = [(i <= freqs[0]) and (i >= freqs[-1]) for i in freqPoints] xlabels = [i for (i, v) in zip(freqPoints, xfilter) if v] xticks = [find_nearest(freqs, i) for i in xlabels] xlabels = [int(1/i) if i<=1 else 1/i for i in xlabels] #convert to period, use decimal only if <1s yticks = [powers.index(i) for i in list(filter(lambda x: (x % 10 == 0), powers))] ylabels = [powers[i] for i in yticks] # Set up plotting -- plot height = ylabels[0] - ylabels[-1] plt.figure(figsize=( 12, (.055*height + .5) )) plt.imshow(plotList, cmap=cmap, vmin=0, vmax=30, aspect=.4, interpolation='bilinear') # Adjust grids, labels, limits, titles, etc plt.grid(linestyle=':', linewidth=1) plt.xlabel('Period (s)',size=18) plt.ylabel(r'Power [$10log_{10}(\frac{m^2/s^4}{hz}$)][dB]',size=18) plt.xticks(xticks[::-1], xlabels[::-1],size=15) plt.yticks(yticks,ylabels,size=15) xmin=freqs.index(min(nonZeroFreqs)) xmax=freqs.index(max(nonZeroFreqs)) plt.xlim(xmax,xmin) plt.ylim(max(yticks)+5,min(yticks)-5) plt.title(f"{target}\n{startdate} through {enddate}", size=18) # User has option to include colorbar and/or legend cb = plt.colorbar(fraction=.02) cb.set_label('percent probability',labelpad=5) """ Explanation: And now we set up some plotting options: End of explanation """ plt.tight_layout() plt.savefig(filename) """ Explanation: Save the figure for later use: End of explanation """
ejm553/NUREU17
LSST/VariableStarClassification/First_Sources.ipynb
mit
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table as tab
"""
Explanation: Initial Sources
Using the sources at 007.20321 +14.87119 and RA = 20:50:00.91, dec = -00:42:23.8 taken from the NASA/IPAC Infrared Science Archive on 6/22/17.
End of explanation
"""
source_1 = tab.read('source1.tbl', format='ipac')  # In order for this to compile properly, these filenames will need to reflect
source_2 = tab.read('source2.tbl', format='ipac')  # the directory of the user.
"""
Explanation: Read in the two data files. Currently, the *id's are in double format. This differs from the original table's long type, as .read() was having overflow errors.
End of explanation
"""
times_1 = source_1[0][:]          # date expressed in Julian days
obs_mag_1 = source_1[1][:]        # observed magnitude, auto corrected? correlated?
obs_mag_error_1 = source_1[2][:]  # error on the observed magnitude

times_2 = source_2[0][:]
obs_mag_2 = source_2[1][:]
obs_mag_error_2 = source_2[2][:]
"""
Explanation: Picking out the relevant data into their own arrays to work with.
End of explanation
"""
plt.errorbar(times_1, obs_mag_1, yerr = obs_mag_error_1, fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "All Oids"')
"""
Explanation: Source 1
As each data file had multiple oid's present, I plotted both the raw file and also the individual sources on their own.
End of explanation
"""
oid_11 = np.where(source_1[3][:] == 33261000001104)

plt.errorbar(times_1[oid_11], obs_mag_1[oid_11], yerr = obs_mag_error_1[oid_11], fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "Oid 33261000001104"')
"""
Explanation: Decomposed Oids
End of explanation
"""
oid_12 = np.where(source_1[3][:] == 33262000001431)

plt.errorbar(times_1[oid_12], obs_mag_1[oid_12], yerr = obs_mag_error_1[oid_12], fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "Oid 33262000001431"')
"""
Explanation: This oid doesn't seem to have any variability. And, given the plot above, it would seem that these are in fact distinct sources.
End of explanation
"""
plt.errorbar(times_2, obs_mag_2, yerr = obs_mag_error_2, fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "All Oids"')
"""
Explanation: Again, this oid doesn't have any apparent variability.
Source 2
End of explanation
"""
oid_21 = np.where(source_2[3][:] == 226831060005494)

plt.errorbar(times_2[oid_21], obs_mag_2[oid_21], yerr = obs_mag_error_2[oid_21], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 226831060005494"')
"""
Explanation: Decomposed Oids
End of explanation
"""
oid_22 = np.where(source_2[3][:] == 226832060006908)

plt.errorbar(times_2[oid_22], obs_mag_2[oid_22], yerr = obs_mag_error_2[oid_22], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 226832060006908"')

oid_23 = np.where(source_2[3][:] == 26832000005734)

plt.errorbar(times_2[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 26832000005734"')
"""
Explanation: This is just a single point so it is likely to be some sort of outlier or misattributed source. 
End of explanation
"""
primary_period_1 = 0.191486  # taken from the NASA Exoplanet Archive Periodogram Service

phase_21 = (times_2 % primary_period_1) / primary_period_1

plt.errorbar(phase_21[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('Phase')
plt.ylabel('Observed mag')
plt.title('Source 2 Periodic Lightcurve For Oid 226832060006908')
"""
Explanation: Folded Lightcurves
For oids 226832060006908 and 26832000005734
End of explanation
"""
primary_period_2 = 2.440220

phase_22 = (times_2 % primary_period_2) / primary_period_2

plt.errorbar(phase_22[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('Phase')
plt.ylabel('Observed mag')
plt.title('Source 2 Periodic Lightcurve For Oid 26832000005734')
"""
Explanation: There may be some periodic variability here. A fit of a cosine might be able to reproduce this data. However, it appears to be scattered fairly randomly.
End of explanation
"""
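The same folding arithmetic is repeated for every trial period above, so a small helper keeps further experiments tidy. This is only a sketch: the epoch argument is an assumption (the cells above implicitly fold relative to t = 0), and the commented example call reuses arrays already defined in this notebook.
import numpy as np

def fold_lightcurve(times, period, epoch=0.0):
    # Return phases in [0, 1) for the given observation times and trial period
    return ((np.asarray(times) - epoch) % period) / period

# Example usage with the variables defined above:
# phase = fold_lightcurve(times_2[oid_23], primary_period_1)
# plt.errorbar(phase, obs_mag_2[oid_23], yerr=obs_mag_error_2[oid_23], fmt='bo', markersize=3)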
MingChen0919/learning-apache-spark
notebooks/07-natural-language-processing/nlp-and-nltk-basics.ipynb
mit
from pyspark import SparkContext sc = SparkContext(master = 'local') from pyspark.sql import SparkSession spark = SparkSession.builder \ .appName("Python Spark SQL basic example") \ .config("spark.some.config.option", "some-value") \ .getOrCreate() """ Explanation: NLP and NLTK Basics SparkContext and SparkSession End of explanation """ import pandas as pd pdf = pd.DataFrame({ 'texts': [['I', 'like', 'playing', 'basketball'], ['I', 'like', 'coding'], ['I', 'like', 'machine', 'learning', 'very', 'much']] }) df = spark.createDataFrame(pdf) df.show(truncate=False) """ Explanation: A lot of examples in this article are borrowed from the book written by Bird et al. (2009). Here I tried to implement the examples from the book with spark as much as possible. Refer to the book for more details: Bird, Steven, Ewan Klein, and Edward Loper. Natural language processing with Python: analyzing text with the natural language toolkit. " O'Reilly Media, Inc.", 2009. Basic terminology text: a sequence of words and punctuation. frequency distribution: the frequency of words in a text object. collocation: a sequence of words that occur together unusually often. bigrams: word pairs. High frequent bigrams are collocations. corpus: a large body of text wordnet: a lexical database in which english words are grouped into sets of synonyms (also called synsets). text normalization: the process of transforming text into a single canonical form, e.g., converting text to lowercase, removing punctuations and so on. Lemmatization: the process of grouping variant forms of the same word so that they can be analyzed as a single item. Stemming: the process of reducing inflected words to their word stem. tokenization: segmentation: chunking: Texts as lists of words Create a data frame consisting of text elements. End of explanation """ from pyspark.ml.feature import NGram from pyspark.ml import Pipeline ngrams = [NGram(n=n, inputCol='texts', outputCol=str(n)+'-grams') for n in [2,3,4]] # build pipeline model pipeline = Pipeline(stages=ngrams) # transform data texts_ngrams = pipeline.fit(df).transform(df) # display result texts_ngrams.select('2-grams').show(truncate=False) texts_ngrams.select('3-grams').show(truncate=False) texts_ngrams.select('4-grams').show(truncate=False) """ Explanation: Ngrams and collocations Transform texts to 2-grams, 3-grams and 4-grams collocations. End of explanation """ from nltk.corpus import gutenberg gutenberg_fileids = gutenberg.fileids() gutenberg_fileids """ Explanation: Access corpora from the NLTK package The gutenberg corpus Get file ids in gutenberg corpus End of explanation """ gutenberg.abspath(gutenberg_fileids[0]) """ Explanation: Absolute path of a file End of explanation """ gutenberg.raw(gutenberg_fileids[0])[:200] """ Explanation: Raw text End of explanation """ gutenberg.words() len(gutenberg.words()) """ Explanation: The words of the entire corpus End of explanation """ gutenberg.sents(gutenberg_fileids[0]) len(gutenberg.sents(gutenberg_fileids[0])) """ Explanation: Sentences of a specific file End of explanation """ from nltk.corpus import PlaintextCorpusReader corpus_data = PlaintextCorpusReader('./data', '.*') """ Explanation: Loading custom corpus Let's create a corpus consisting all files from the ./data directory. 
End of explanation """ data_fileids = corpus_data.fileids() data_fileids """ Explanation: Files in the corpus corpus_data End of explanation """ corpus_data.raw('twitter.txt') """ Explanation: Raw text in twitter.txt End of explanation """ corpus_data.words(fileids='twitter.txt') len(corpus_data.words(fileids='twitter.txt')) corpus_data.sents(fileids='twitter.txt') len(corpus_data.sents(fileids='twitter.txt')) """ Explanation: Words and sentences in file twitter.txt End of explanation """ from nltk.corpus import wordnet wordnet.synsets pdf = pd.DataFrame({ 'car_synsets': [synsets._name for synsets in wordnet.synsets('car')] }) df = spark.createDataFrame(pdf) df.show() """ Explanation: WordNet The nltk.corpus.wordnet.synsets() function load all synsents with a given lemma and part of speech tag. Load all synsets into a spark data frame given the lemma car. End of explanation """ from pyspark.sql.functions import udf from pyspark.sql.types import * from nltk.corpus import wordnet def lemma_names_from_synset(x): synset = wordnet.synset(x) return synset.lemma_names() lemma_names_from_synset('car.n.02') # synset_lemmas_udf = udf(lemma_names_from_synset, ArrayType(StringType())) """ Explanation: Get lemma names given a synset End of explanation """
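The commented-out line above hints at wrapping lemma_names_from_synset in a Spark UDF; one possible way to finish that idea is sketched below. It assumes the df and lemma_names_from_synset defined earlier are still available and that the session runs locally, so NLTK's wordnet data is reachable from the worker processes.
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

# Wrap the plain Python function as a UDF that returns an array of strings
synset_lemmas_udf = udf(lemma_names_from_synset, ArrayType(StringType()))

# Apply it to the synset-name column built earlier
df.withColumn('lemma_names', synset_lemmas_udf(df['car_synsets'])).show(truncate=False)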
metpy/MetPy
v1.1/_downloads/83b6998284b63bb8a8f46a92e71d6000/isentropic_example.ipynb
bsd-3-clause
import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt import numpy as np import xarray as xr import metpy.calc as mpcalc from metpy.cbook import get_test_data from metpy.plots import add_metpy_logo, add_timestamp from metpy.units import units """ Explanation: Isentropic Analysis The MetPy function mpcalc.isentropic_interpolation allows for isentropic analysis from model analysis data in isobaric coordinates. End of explanation """ data = xr.open_dataset(get_test_data('narr_example.nc', False)) print(list(data.variables)) """ Explanation: Getting the data In this example, NARR reanalysis data for 18 UTC 04 April 1987 from the National Centers for Environmental Information will be used. End of explanation """ data = data.squeeze().set_coords(['lon', 'lat']) """ Explanation: We will reduce the dimensionality of the data as it is pulled in to remove an empty time dimension, as well as add longitude and latitude as coordinates (instead of data variables). End of explanation """ isentlevs = [296.] * units.kelvin """ Explanation: To properly interpolate to isentropic coordinates, the function must know the desired output isentropic levels. An array with these levels will be created below. End of explanation """ isent_data = mpcalc.isentropic_interpolation_as_dataset( isentlevs, data['Temperature'], data['u_wind'], data['v_wind'], data['Specific_humidity'], data['Geopotential_height'] ) """ Explanation: Conversion to Isentropic Coordinates Once three dimensional data in isobaric coordinates has been pulled and the desired isentropic levels created, the conversion to isentropic coordinates can begin. Data will be passed to the function as below. The function requires that isentropic levels, as well as a DataArray of temperature on isobaric coordinates be input. Any additional inputs (in this case specific humidity, geopotential height, and u and v wind components) will be logarithmicaly interpolated to isentropic space. End of explanation """ isent_data """ Explanation: The output is an xarray Dataset: End of explanation """ isent_data['u_wind'] = isent_data['u_wind'].metpy.convert_units('kt') isent_data['v_wind'] = isent_data['v_wind'].metpy.convert_units('kt') """ Explanation: Note that the units on our wind variables are not ideal for plotting. Instead, let us convert them to more appropriate values. End of explanation """ isent_data['Relative_humidity'] = mpcalc.relative_humidity_from_specific_humidity( isent_data['pressure'], isent_data['temperature'], isent_data['Specific_humidity'] ).metpy.convert_units('percent') """ Explanation: Converting to Relative Humidity The NARR only gives specific humidity on isobaric vertical levels, so relative humidity will have to be calculated after the interpolation to isentropic space. 
End of explanation """ # Set up our projection and coordinates crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0) lon = isent_data['pressure'].metpy.longitude lat = isent_data['pressure'].metpy.latitude # Coordinates to limit map area bounds = [(-122., -75., 25., 50.)] # Choose a level to plot, in this case 296 K (our sole level in this example) level = 0 fig = plt.figure(figsize=(17., 12.)) add_metpy_logo(fig, 120, 245, size='large') ax = fig.add_subplot(1, 1, 1, projection=crs) ax.set_extent(*bounds, crs=ccrs.PlateCarree()) ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75) ax.add_feature(cfeature.STATES, linewidth=0.5) # Plot the surface clevisent = np.arange(0, 1000, 25) cs = ax.contour(lon, lat, isent_data['pressure'].isel(isentropic_level=level), clevisent, colors='k', linewidths=1.0, linestyles='solid', transform=ccrs.PlateCarree()) cs.clabel(fontsize=10, inline=1, inline_spacing=7, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot RH cf = ax.contourf(lon, lat, isent_data['Relative_humidity'].isel(isentropic_level=level), range(10, 106, 5), cmap=plt.cm.gist_earth_r, transform=ccrs.PlateCarree()) cb = fig.colorbar(cf, orientation='horizontal', aspect=65, shrink=0.5, pad=0.05, extendrect='True') cb.set_label('Relative Humidity', size='x-large') # Plot wind barbs ax.barbs(lon.values, lat.values, isent_data['u_wind'].isel(isentropic_level=level).values, isent_data['v_wind'].isel(isentropic_level=level).values, length=6, regrid_shape=20, transform=ccrs.PlateCarree()) # Make some titles ax.set_title(f'{isentlevs[level]:~.0f} Isentropic Pressure (hPa), Wind (kt), ' 'Relative Humidity (percent)', loc='left') add_timestamp(ax, isent_data['time'].values.astype('datetime64[ms]').astype('O'), y=0.02, high_contrast=True) fig.tight_layout() """ Explanation: Plotting the Isentropic Analysis End of explanation """ # Calculate Montgomery Streamfunction and scale by 10^-2 for plotting msf = mpcalc.montgomery_streamfunction( isent_data['Geopotential_height'], isent_data['temperature'] ).values / 100. 
# Choose a level to plot, in this case 296 K level = 0 fig = plt.figure(figsize=(17., 12.)) add_metpy_logo(fig, 120, 250, size='large') ax = plt.subplot(111, projection=crs) ax.set_extent(*bounds, crs=ccrs.PlateCarree()) ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75) ax.add_feature(cfeature.STATES.with_scale('50m'), linewidth=0.5) # Plot the surface clevmsf = np.arange(0, 4000, 5) cs = ax.contour(lon, lat, msf[level, :, :], clevmsf, colors='k', linewidths=1.0, linestyles='solid', transform=ccrs.PlateCarree()) cs.clabel(fontsize=10, inline=1, inline_spacing=7, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot RH cf = ax.contourf(lon, lat, isent_data['Relative_humidity'].isel(isentropic_level=level), range(10, 106, 5), cmap=plt.cm.gist_earth_r, transform=ccrs.PlateCarree()) cb = fig.colorbar(cf, orientation='horizontal', aspect=65, shrink=0.5, pad=0.05, extendrect='True') cb.set_label('Relative Humidity', size='x-large') # Plot wind barbs ax.barbs(lon.values, lat.values, isent_data['u_wind'].isel(isentropic_level=level).values, isent_data['v_wind'].isel(isentropic_level=level).values, length=6, regrid_shape=20, transform=ccrs.PlateCarree()) # Make some titles ax.set_title(f'{isentlevs[level]:~.0f} Montgomery Streamfunction ' r'($10^{-2} m^2 s^{-2}$), Wind (kt), Relative Humidity (percent)', loc='left') add_timestamp(ax, isent_data['time'].values.astype('datetime64[ms]').astype('O'), y=0.02, pretext='Valid: ', high_contrast=True) fig.tight_layout() plt.show() """ Explanation: Montgomery Streamfunction The Montgomery Streamfunction, ${\psi} = gdz + CpT$, is often desired because its gradient is proportional to the geostrophic wind in isentropic space. This can be easily calculated with mpcalc.montgomery_streamfunction. End of explanation """
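The workflow above used a single 296 K surface, but nothing about it is specific to one level. Below is a minimal sketch of extending it, assuming the data, units and mpcalc objects from this example are still in scope; the extra level values are arbitrary examples, not a recommendation.
isentlevs_multi = [290., 296., 300.] * units.kelvin
isent_multi = mpcalc.isentropic_interpolation_as_dataset(
    isentlevs_multi, data['Temperature'], data['u_wind'], data['v_wind'],
    data['Specific_humidity'], data['Geopotential_height']
)

# Quick look at the pressure spanned by each isentropic surface before plotting
for i, level in enumerate(isentlevs_multi):
    p = isent_multi['pressure'].isel(isentropic_level=i)
    print(f'{level:~.0f}: pressure range {float(p.min()):.0f} to {float(p.max()):.0f} hPa')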
KGPML/Hyperspectral
IndianPinesCNN.ipynb
gpl-3.0
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import patch_size import tensorflow as tf # The IndianPines dataset has 16 classes, representing different kinds of land-cover. NUM_CLASSES = 16 # We will classify each patch IMAGE_SIZE = patch_size.patch_size IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE *220 """ Explanation: Builds the IndianPines network. Implements the inference/loss/training pattern for model building. 1. inference() - Builds the model as far as is required for running the network forward to make predictions. 2. loss() - Adds to the inference model the layers required to generate loss. 3. training() - Adds to the loss model the Ops required to generate and apply gradients. This file is used by the various "fully_connected_*.py" files and not meant to be run. End of explanation """ def inference(images, conv1_channels, conv2_channels, fc1_units, fc2_units): """Build the IndianPines model up to where it may be used for inference. Args: images: Images placeholder, from inputs(). conv1_channels: Number of filters in the first convolutional layer. conv2_channels: Number of filters in the second convolutional layer. fc1_units = Number of units in the first fully connected hidden layer fc2_units = Number of units in the second fully connected hidden layer Returns: softmax_linear: Output tensor with the computed logits. """ # Conv 1 with tf.name_scope('conv_1') as scope: weights = tf.get_variable('weights', shape=[5, 5, 220, conv1_channels], initializer=tf.contrib.layers.xavier_initializer_conv2d()) biases = tf.get_variable('biases', shape=[conv1_channels], initializer=tf.constant_initializer(0.05)) # converting the 1D array into a 3D image x_image = tf.reshape(images, [-1,IMAGE_SIZE,IMAGE_SIZE,220]) z = tf.nn.conv2d(x_image, weights, strides=[1, 1, 1, 1], padding='VALID') h_conv1 = tf.nn.relu(z+biases, name=scope.name) # Maxpool 1 h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='h_pool1') # Conv2 with tf.variable_scope('h_conv2') as scope: weights = tf.get_variable('weights', shape=[5, 5, conv1_channels, conv2_channels], initializer=tf.contrib.layers.xavier_initializer_conv2d()) biases = tf.get_variable('biases', shape=[conv2_channels], initializer=tf.constant_initializer(0.05)) z = tf.nn.conv2d(h_pool1, weights, strides=[1, 1, 1, 1], padding='VALID') h_conv2 = tf.nn.relu(z+biases, name=scope.name) # Maxpool 2 h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='h_pool2') # FIXED in python file #size_after_conv_and_pool_twice = 4 size_after_conv_and_pool_twice = int(math.ceil((math.ceil(float(IMAGE_SIZE-KERNEL_SIZE+1)/2)-KERNEL_SIZE+1)/2)) #Reshape from 4D to 2D h_pool2_flat = tf.reshape(h_pool2, [-1, (size_after_conv_and_pool_twice**2)*conv2_channels]) # FC 1 with tf.name_scope('h_FC1') as scope: weights = tf.Variable( tf.truncated_normal([size_after_conv_and_pool_twice, fc1_units], stddev=1.0 / math.sqrt(float(size_after_conv_and_pool_twice))), name='weights') biases = tf.Variable(tf.zeros([fc1_units]), name='biases') h_FC1 = tf.nn.relu(tf.matmul(h_pool2_flat, weights) + biases, name=scope.name) # FC 2 with tf.name_scope('h_FC2'): weights = tf.Variable( tf.truncated_normal([fc1_units, fc2_units], stddev=1.0 / math.sqrt(float(fc1_units))), name='weights') biases = tf.Variable(tf.zeros([fc2_units]), name='biases') h_FC2 = tf.nn.relu(tf.matmul(h_FC1, weights) + biases, name=scope.name) # Linear with 
tf.name_scope('softmax_linear'): weights = tf.Variable( tf.truncated_normal([fc2_units, NUM_CLASSES], stddev=1.0 / math.sqrt(float(fc2_units))), name='weights') biases = tf.Variable(tf.zeros([NUM_CLASSES]), name='biases') logits = tf.matmul(h_FC2, weights) + biases return logits """ Explanation: Build the IndianPines model up to where it may be used for inference. Args: * images: Images placeholder, from inputs(). * hidden1_units: Size of the first hidden layer. * hidden2_units: Size of the second hidden layer. Returns: * softmax_linear: Output tensor with the computed logits. End of explanation """ def loss(logits, labels): """Calculates the loss from the logits and the labels. Args: logits: Logits tensor, float - [batch_size, NUM_CLASSES]. labels: Labels tensor, int32 - [batch_size]. Returns: loss: Loss tensor of type float. """ labels = tf.to_int64(labels) cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits( logits, labels, name='xentropy') loss = tf.reduce_mean(cross_entropy, name='xentropy_mean') return loss """ Explanation: Define the loss function End of explanation """ def training(loss, learning_rate): """Sets up the training Ops. Creates a summarizer to track the loss over time in TensorBoard. Creates an optimizer and applies the gradients to all trainable variables. The Op returned by this function is what must be passed to the `sess.run()` call to cause the model to train. Args: loss: Loss tensor, from loss(). learning_rate: The learning rate to use for gradient descent. Returns: train_op: The Op for training. """ # Add a scalar summary for the snapshot loss. tf.scalar_summary(loss.op.name, loss) # Create the gradient descent optimizer with the given learning rate. optimizer = tf.train.GradientDescentOptimizer(learning_rate) # Create a variable to track the global step. global_step = tf.Variable(0, name='global_step', trainable=False) # Use the optimizer to apply the gradients that minimize the loss # (and also increment the global step counter) as a single training step. train_op = optimizer.minimize(loss, global_step=global_step) return train_op """ Explanation: Define the Training OP End of explanation """ def evaluation(logits, labels): """Evaluate the quality of the logits at predicting the label. Args: logits: Logits tensor, float - [batch_size, NUM_CLASSES]. labels: Labels tensor, int32 - [batch_size], with values in the range [0, NUM_CLASSES). Returns: A scalar int32 tensor with the number of examples (out of batch_size) that were predicted correctly. """ # For a classifier model, we can use the in_top_k Op. # It returns a bool tensor with shape [batch_size] that is true for # the examples where the label is in the top k (here k=1) # of all logits for that example. correct = tf.nn.in_top_k(logits, labels, 1) # Return the number of true entries. return tf.reduce_sum(tf.cast(correct, tf.int32)) """ Explanation: Define the Evaluation OP End of explanation """
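For completeness, here is a rough sketch of how inference(), loss(), training() and evaluation() are typically wired together in the TF1-era API this file targets. The placeholder shapes, hyperparameters and the imaginary feed_batch() data source are illustrative assumptions. Two caveats grounded in the code above: inference() relies on a KERNEL_SIZE value that this notebook never defines (one is assumed below from the 5x5 kernels), and the first fully connected layer sizes its weights with size_after_conv_and_pool_twice alone while h_pool2_flat has (size_after_conv_and_pool_twice**2)*conv2_channels columns, so those shapes need to agree before the graph will build.
KERNEL_SIZE = 5  # assumed from the 5x5 convolution kernels defined above

images_pl = tf.placeholder(tf.float32, shape=[None, IMAGE_PIXELS])
labels_pl = tf.placeholder(tf.int32, shape=[None])

logits = inference(images_pl, conv1_channels=32, conv2_channels=64,
                   fc1_units=128, fc2_units=64)
loss_op = loss(logits, labels_pl)
train_op = training(loss_op, learning_rate=0.01)
eval_op = evaluation(logits, labels_pl)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # images_batch, labels_batch = feed_batch()  # hypothetical data pipeline
    # _, loss_value = sess.run([train_op, loss_op],
    #                          feed_dict={images_pl: images_batch,
    #                                     labels_pl: labels_batch})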
Caranarq/01_Dmine
01_Agua/.ipynb_checkpoints/agua-checkpoint.ipynb
gpl-3.0
# librerías utilizadas from IPython.display import Markdown, Image %matplotlib inline from __future__ import division import numpy as np import pandas as pd import matplotlib.pyplot as plt # Configuracion del sistema import sys; print('Python {} on {}'.format(sys.version, sys.platform)) print('Pandas version: {}'.format(pd.__version__)) import platform; print('Running on {} {}'.format(platform.system(), platform.release())) """ Explanation: DIMENSIÓN: AGUA Definición El agua potable es el principal recurso que hace posible la actividad humana dentro de las ciudades. Es imposible para una ciudad alcanzar la sustentabilidad si sus fuentes de abastecimiento de agua son insuficientes para satisfacer la demanda que la ciudad tiene en el presente y que tendrá en el futuro. Para medir el acceso de una ciudad al agua potable, es necesario conocer la calidad y extensión de la red de distribución de agua potable. Para medir la sustentabilidad del recurso, hay que conocer la demanda de agua potable y la capacidad de renovación de las fuentes acuíferas naturales que abastecen a la ciudad. El agua es uno de los 17 temas que la Organizacion de las Naciones Unidas considera dentro de sus Objetivos de Desarrollo Sostenible (SDG, Sustainable Development Goals). Para dar seguimiento a estos objetivos, un grupo interdisciplinario de expertos de la ONU trabaja en el desarrollo de indicadores para el seguimiento a objetivos. En la 48a sesión de la Comisión de Estadística de las Naciones Unidas, celebrada en Marzo de 2017, este grupo liberó un conjunto de 232 indicadores sobre los cuales se ha alcanzado consenso. La ONU considera diferentes indicadores para medir la sustentabilidad del agua. Para el desarrollo de la Plataforma de Conocimiento de Ciudades Sustentables se construirán indicadores en base a los desarrollados por la ONU. |ID PCCS|DESCRIPCION DEL INDICADOR|ID SDG (base)| |--- |:---|--- | | |AG01. Disponibilidad y acceso al Agua| | |AG01.1|Proporción de la población que utiliza servicios de agua potable manejados de manera saludable|6.1.1| |AG01.2|Nivel de estrés de sustracción de agua en proporción a los recursos disponibles de agua potable.|6.4.2| |AG01.3|Proporción de cuerpos de agua con buena calidad del agua en su estado natural.|6.3.2| |AG01.4|Cambios en la eficiencia en la utilización del agua a través del tiempo.|6.4.1| |AG01.5|Cambio en la extensión de ecosistemas relacionados con el agua a través del tiempo.|6.6.1| | |02 - Sanitizacion del Agua| | |AG02.1|Proporcion de la poblacion que utiliza servicios de sanitización manejados de manera saludable, <br>incluyendo insltalaciones con jabón para lavado de manos.|6.2.1| |AG02.2|Proporción de agua de desperdicio tratada de manera segura|6.3.1| | |03 - Administración del Agua| | |AG03.1|Grado de implementación de la Administración Integral de los recursos acuíferos (0-100).|6.5.1| |AG03.2|Proporción de cuencas transfronterizas con un arreglo operacional para cooperación|6.5.2| |AG03.3|Cantidad asistencia oficial al desarrollo acuífero y de sanitización que forma parte de un <br>plan de gastos coordinado por el gobierno.|6.a.1| |AG03.4|Proporción de unidades administrativas locales con políticas y procedimientos establecidos <br>y operacioneales para la participación de comunidades locales en la administración y saneamiento del agua.|6.b.1| Los indicadores para el seguimiento a los Objetivos de Desarrollo Sustentable de la ONU fueron desarrollados para establecer punto de referencia entre países. 
Aplicarlos directamente a las ciudades mexicanas representa un reto por la disponibilidad de la información para las ciudades mexicanas, tanto por el nivel de desagregación disponible como por parámetros para los cuales no existe seguimiento a la fecha de la realización de este estudio. Por este motivo, Este estudio toma como base los indicadores propuestos por la ONU para generar indicadores locales aplicables a la realidad mexicana. Introducción Este documento contiene las notas del análisis de los datos disponibles para los municipios que forman parte de alguna de las ciudades que componen el Sistema Urbano Nacional (SUN). El principal objetivo de estas notas es conocer para cuántas zonas metropolitanas se puede calcular una calificación, para que esto pueda hacerse es necesario que todos los municipios de una ciudad cuenten con todas las variables que serán utilizadas para definir los parámetros de la dimensión. La selección de las variables se definirá con el ejercicio descrito en estas notas. Este documento contiene las secuencias de extracción y procesamiento de los datos para generar los indicadores de Agua para la Plataforma de Conocimiento de Ciudades Sustentables. End of explanation """ data = pd.read_csv("test.csv") Markdown('La base de datos tiene {} filas y {} columnas. Cada columna es una variable:'.format(data.shape[0], data.shape[1])) """ Explanation: Datos La base de datos está en formato delimitado por comas (csv). Este archivo es el resultado de la extracción de datos a los archivos que el INEGI distribuye en sus anuarios estadísticos. End of explanation """ vdesc = data.columns[0:8] for i,a in enumerate(data.columns): print(i,a) """ Explanation: Las 38 variables que contiene la base de datos son: End of explanation """ # Agrupación de todos los municipios según su clave SUN. zm = data.groupby("cve_sun").count() print("Zonas metropolitanas en total: ",zm.shape[0]) """ Explanation: Las primeras ocho variables, con índices del cero al siete, son aspectos descriptivos. A partir de la novena columna, con índice ocho, se encuentran las variables con información cuantitativa sobre consumo, almacenamiento, fuentes de agua, entre otras. Cada fila corresponde a un municipio y cada municipio corresponde a una zona metropolitana. Para saber a qué ciudad, o zona metropolitana, corresponde un municipio debe de observarse su clave del SUN (cve_sun). El siguiente paso es agrupar a todos los municipios de acuerdo con esta clave. End of explanation """ # Agrupación según clave SUN con todas las variables con valor diferente a "NaN". zm_sinNaN = data.dropna().groupby("cve_sun").count() print("Zonas metropolitanas sin valores 'NaN': ",zm_sinNaN.shape[0]) """ Explanation: Selección de datos para el estudio Para la creación de indicadores es necesario contar con información completa. En esta sección se seleccionarán de la base de datos, las ciudades del SUN con información completa. Python identifica las celdas sin información disponible con el indicador NaN. Por lo tanto, para identificar los municipios que cuentan con información completa será necesario filtrar todos los municipios que no tengan ningún NaN en sus variables. End of explanation """ # Lista de claves SUN de las ciudades sin 'NaNs' para filtrar. cves_sinNaN = zm_sinNaN.index.tolist() # Filtro de ciudades con la lista anterior. 
zm = zm.loc[zm.index.isin(cves_sinNaN)] # ¿Qué ciudades tienen la información completa para todos sus municipios?() (zm_sinNaN / zm)[:10] """ Explanation: Tal y como muestran las líneas de código anteriores, exiten 59 zonas metropolitanas; cada una está compuesta por cierto número de municipios. Sin embargo, la tabla de datos original tiene muchos valores vacíos (NaN) por lo que, al filtar todos los municipios que no tienen valores vacíos y al agruparlos según su clave SUN, en realidad, de acuerdo con los datos, existen 38 ciudades cuyos municipios cuentan con todas las variables disponibles. Esto no quiere decir que se cuenta con toda la información para todos los municipios que conforman estas 38 ciudades; la variable zm_sinNaN enlista el número de municipios por ciudad que no presentan valores NaN. Por ende, el siguiente paso es observar cuántas ciudades tienen toda la información para todos sus municipios. End of explanation """ # Municipios por ciudad con información completa entre el total de los municipios que existen en esa ciudad. zm = zm_sinNaN / zm # Filtar aquellas ciudades cuyos municipios cuentan con toda la información disponible. zm = zm[zm['_id'] == 1] zm.head() """ Explanation: Aquellas ciudades cuya proporción es igual a uno serán seleccionadas. End of explanation """ muns = data.loc[data['cve_sun'].isin(zm.index.tolist())] muns.head() """ Explanation: Ya que se tienen las claves SUN de las ciudades sin datos faltantes hay que filtrar los datos de los municipios y declarar una nueva variable con esta información. End of explanation """ infoc = muns[muns.columns[5:8]].sort_values('cve_sun') infoc.to_csv(r'.\datasets\infoc.csv') infoc """ Explanation: Las ciudades que cuentan con la información completa para todos sus municipios son las siguientes: End of explanation """ Image('info_map.png', width=640) """ Explanation: El siguiente mapa muestra los municipios pertenecientes a Zonas Urbanas que cuentan con información completa: End of explanation """ dispyacc = list(); dispyacc.append(data.columns[8]) dispyacc = dispyacc + data.columns[13:24].tolist(); dispyacc """ Explanation: Indicadores 01 - Disponibilidad y acceso al agua AG01.1 - PROPORCIÓN DE LA POBLACIÓN QUE UTILIZA SERVICIOS DE AGUA POTABLE MANEJADOS DE MANERA SALUDABLE Los servicios de agua potable manejados de manera saludable se definen como aquéllas fuentes mejoradas de agua potable que se encuentran dentro de las instalaciones y disponibles cuando son necesarias, libres de contaminación por materia fecal o química. Una fuente mejorada de agua potable se define como aquélla que por la naturaleza de su construcción o a través de intervención activa, está protegida de la contaminación externa. Este indicador está diseñado en base al indicador 6.1.1 de los Objetivos de Desarrollo Sostenible de la ONU. Metodología: Como se mencionó anteriormente, no todas las ciudades del SUN cuentan con información completa de indicadores de agua. Si bien para generar un indicador integral de sustentabilidad del agua es necesario que una ciudad cuente con la información completa para todos sus municipios, esto no nos impide analizar de manera individual cada indicador para las ciudades que cuenten con la información. 
En la base de datos hay 12 columnas relacionadas al acceso a agua potable: End of explanation """ print('Los datos disponibles en la base de datos acerca de agua entubada son los siguientes:') print('\tTotal de municipios en la base de datos: {}'.format(len(data['entubada_total']))) print('\tMunicipios que cuentan con registros de agua entubada: {}'.format(len(data['entubada_total'].dropna()))) # Dataset de agua entubada c_disponibilidad = vdesc.tolist() c_disponibilidad.append('entubada_total'); c_disponibilidad.remove('collection') data_disponibilidad = data[c_disponibilidad] data_disponibilidad.to_csv('disponibilidad.csv') data_disponibilidad.head() """ Explanation: El campo "entubada_total" es el porcentaje de viviendas dentro del municipio que cuentan con agua potable entubada mientras que "entubada_dentro_de_vivienda" y "entubada_fuera_de_vivienda_dentro_de_terreno" muestran la proporción de estas viviendas que cuentan con tubería hasta la vivienda o cuya tubería entra al terreno sin llegar a la vivienda, respectivamente. Para este indicador basta utilizar los datos de entubada_total. El acarreo de agua potable no se considera una fuente mejorada de agua potable, toda vez que no es trazable la salubridad de la fuente. End of explanation """ data_disponibilidad['entubada_total'].dropna().describe() """ Explanation: Quitando los municipios que no tienen información disponible, las estadísticas básicas de la información disponible son las siguientes: End of explanation """ # Casos que serán excluidos del análisis data_disponibilidad[data_disponibilidad['entubada_total']>100] # Declaracion de variable con datos para el análisis analisis_disponibilidad = data_disponibilidad[data_disponibilidad['entubada_total']<100] analisis_disponibilidad = analisis_disponibilidad.set_index('entidad') """ Explanation: Los primeros 3 cuartiles están por debajo del 99%, mientras que es de extrañar que el valor máximo sea 157884. Muy probablemente los valores que superan el 100 están capturados de manera errónea, por lo que serán excluidos del análisis. End of explanation """ analisis_disponibilidad['entubada_total'].describe() analisis_disponibilidad.boxplot(column = 'entubada_total', by = 'entidad', figsize = (14, 4)) plt.suptitle('') plt.title('Acceso al agua potable en ciudades por entidad', fontsize=16) plt.ylabel('% de acceso al agua') plt.show() """ Explanation: Una vez excluidos estos valores, las estadísticas básicas de la información son las siguientes: End of explanation """ analisis_disponibilidad.to_csv('AG01_1.csv') """ Explanation: La informacion de este análisis puede ser exportada en formato CSV para integrarla al sistema de información Geográfica de la Plataforma de Conocimiento sobre Ciudades Sustentables. End of explanation """
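The AG01.1 figures are ultimately of interest per metropolitan zone rather than per municipality, so a compact way to summarize the municipal values is sketched below. It reuses the data frame and the < 100 filter applied above; the unweighted mean across municipalities is an assumption of this sketch, since a population-weighted average would require an additional column.
resumen_zm = (data[data['entubada_total'] < 100]
              .groupby('cve_sun')['entubada_total']
              .agg(['mean', 'min', 'max', 'count']))
resumen_zm.head()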
nbokulich/short-read-tax-assignment
ipynb/mock-community/taxonomy-assignment-qiime2.ipynb
bsd-3-clause
from os.path import join, exists, split, sep, expandvars from os import makedirs, getpid from glob import glob from shutil import rmtree import csv import json import tempfile from itertools import product from qiime2.plugins import feature_classifier from qiime2 import Artifact from joblib import Parallel, delayed from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import HashingVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from q2_feature_classifier.classifier import spec_from_pipeline from q2_types.feature_data import DNAIterator from pandas import DataFrame from tax_credit.framework_functions import ( gen_param_sweep, generate_per_method_biom_tables, move_results_to_repository) project_dir = expandvars('$HOME/Desktop/projects/short-read-tax-assignment/') analysis_name = 'mock-community' data_dir = join(project_dir, 'data', analysis_name) reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/") results_dir = expandvars("$HOME/Desktop/projects/mock-community/") """ Explanation: Data generation: using Python to sweep over methods and parameters In this notebook, we illustrate how to use Python to perform parameter sweeps for a taxonomic assigner and integrate the results into the TAX CREdiT framework. Environment preparation End of explanation """ # *** one glaring flaw here is that generate_pipeline_sweep iterates # *** through all method_parameters_combinations and reference_dbs # *** and hence will generate training sets for each combo even if # *** not all are called by commands in sweep. This is not an issue # *** if sweep uses all classifiers but is inconvenient if attempting # *** to test on a subset of sweep. Need to explicitly set all inputs! def train_and_run_classifier(method_parameters_combinations, reference_dbs, pipelines, sweep, verbose=False, n_jobs=4): '''Train and run q2-feature-classifier across a parameter sweep. method_parameters_combinations: dict of dicts of lists Classifier methods to run and their parameters/values to sweep Format: {method_name: {'parameter_name': [parameter_values]}} reference_dbs: dict of tuples Reference databases to use for classifier training. Format: {database_name: (ref_seqs, ref_taxonomy)} pipelines: dict Classifier pipelines to use for training each method. Format: {method_name: sklearn.pipeline.Pipeline} sweep: list of tuples output of gen_param_sweep(), format: (parameter_output_dir, input_dir, reference_seqs, reference_tax, method, params) n_jobs: number of jobs to run in parallel. ''' # train classifier once for each pipeline param combo for method, db, pipeline_param, subsweep in generate_pipeline_sweep( method_parameters_combinations, reference_dbs, sweep): ref_reads, ref_taxa = reference_dbs[db] # train classifier classifier = train_classifier( ref_reads, ref_taxa, pipeline_param, pipelines[method], verbose=verbose) # run classifier. 
Only run in parallel once classifier is trained, # to minimize memory usage (don't want to train large refs in parallel) Parallel(n_jobs=n_jobs)(delayed(run_classifier)( classifier, output_dir, input_dir, split_params(params)[0], verbose=verbose) for output_dir, input_dir, rs, rt, mt, params in subsweep) def generate_pipeline_sweep(method_parameters_combinations, reference_dbs, sweep): '''Generate pipeline parameters for each classifier training step''' # iterate over parameters for method, params in method_parameters_combinations.items(): # split out pipeline parameters classifier_params, pipeline_params = split_params(params) # iterate over reference dbs for db, refs in reference_dbs.items(): # iterate over all pipeline parameter combinations for param_product in product(*[params[id_] for id_ in pipeline_params]): # yield parameter combinations to use for a each classifier pipeline_param = dict(zip(pipeline_params, param_product)) subsweep = [p for p in sweep if split_params(p[5])[1] == pipeline_param and p[2] == refs[0]] yield method, db, pipeline_param, subsweep def train_classifier(ref_reads, ref_taxa, params, pipeline, verbose=False): ref_reads = Artifact.load(ref_reads) ref_taxa = Artifact.load(ref_taxa) pipeline.set_params(**params) spec = json.dumps(spec_from_pipeline(pipeline)) if verbose: print(spec) classifier = feature_classifier.methods.fit_classifier(ref_reads, ref_taxa, spec) #return classifier.classifier def run_classifier(classifier, output_dir, input_dir, params, verbose=False): # Classify the sequences rep_seqs = Artifact.load(join(input_dir, 'rep_seqs.qza')) if verbose: print(output_dir) classification = feature_classifier.methods.classify(rep_seqs, classifier, **params) # Save the results makedirs(output_dir, exist_ok=True) output_file = join(output_dir, 'rep_set_tax_assignments.txt') dataframe = classification.classification.view(DataFrame) dataframe.to_csv(output_file, sep='\t', header=False) def split_params(params): classifier_params = feature_classifier.methods.\ classify.signature.parameters.keys() pipeline_params = {k:v for k, v in params.items() if k not in classifier_params} classifier_params = {k:v for k, v in params.items() if k in classifier_params} return classifier_params, pipeline_params """ Explanation: Utility Methods The below methods are used to load the data, prepare the data, parse the classifier and classification parameters, and fit and run the classifier. They should probably be moved to tax_credit.framework_functions. 
End of explanation """ dataset_reference_combinations = [ ('mock-1', 'gg_13_8_otus'), # formerly S16S-1 ('mock-2', 'gg_13_8_otus'), # formerly S16S-2 ('mock-3', 'gg_13_8_otus'), # formerly Broad-1 ('mock-4', 'gg_13_8_otus'), # formerly Broad-2 ('mock-5', 'gg_13_8_otus'), # formerly Broad-3 # ('mock-6', 'gg_13_8_otus'), # formerly Turnbaugh-1 ('mock-7', 'gg_13_8_otus'), # formerly Turnbaugh-2 ('mock-8', 'gg_13_8_otus'), # formerly Turnbaugh-3 ('mock-9', 'unite_20.11.2016_clean_fullITS'), # formerly ITS1 ('mock-10', 'unite_20.11.2016_clean_fullITS'), # formerly ITS2-SAG ('mock-12', 'gg_13_8_otus'), # Extreme # ('mock-13', 'gg_13_8_otus_full16S'), # kozich-1 # ('mock-14', 'gg_13_8_otus_full16S'), # kozich-2 # ('mock-15', 'gg_13_8_otus_full16S'), # kozich-3 ('mock-16', 'gg_13_8_otus'), # schirmer-1 ] reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus_515f-806r.qza'), join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')), # 'gg_13_8_otus_full16S' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus.qza'), # join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')), 'unite_20.11.2016_clean_fullITS' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.qza'), join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')), # 'unite_20.11.2016' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_BITSf-B58S3r_trim250.qza'), # join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.qza')) } """ Explanation: Preparing data set sweep First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless if you wish to change the datasets or reference databases used in the sweep. End of explanation """ method_parameters_combinations = { 'q2-multinomialNB': {'confidence': [0.0, 0.2, 0.4, 0.6, 0.8], 'classify__alpha': [0.001, 0.01, 0.1], 'feat_ext__ngram_range': [[8,8], [12,12], [20,20]]}, 'q2-logisticregression': {'classify__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag']}, 'q2-randomforest': {'classify__max_features': ['sqrt', 'None'], 'classify__n_estimators': [5, 10, 100]} } """ Explanation: Preparing the method/parameter combinations and generating commands Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. End of explanation """ # pipeline params common to all classifiers are set here hash_params = dict( analyzer='char_wb', n_features=8192, non_negative=True, ngram_range=[8, 8]) # any params common to all classifiers can be set here classify_params = dict() def build_pipeline(classifier, hash_params, classify_params): return Pipeline([ ('feat_ext', HashingVectorizer(**hash_params)), ('classify', classifier(**classify_params))]) # Now fit the pipelines. pipelines = {'q2-multinomialNB': build_pipeline( MultinomialNB, hash_params, {'fit_prior': False}), 'q2-logisticregression': build_pipeline( LogisticRegression, hash_params, classify_params), 'q2-randomforest': build_pipeline( RandomForestClassifier, hash_params, classify_params)} """ Explanation: Preparing the pipelines The below pipelines are used to specify the scikit-learn classifiers that are used for assignment. At the moment we only include Naïve Bayes but the collection will expand. 
End of explanation """ dataset_reference_combinations = [ ('mock-3', 'gg_13_8_otus'), # formerly Broad-1 ] method_parameters_combinations = { 'q2-randomforest': {'classify__max_features': ['sqrt'], 'classify__n_estimators': [5]} } reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus_515f-806r.qza'), join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza'))} """ Explanation: Test End of explanation """ sweep = gen_param_sweep(data_dir, results_dir, reference_dbs, dataset_reference_combinations, method_parameters_combinations) sweep = list(sweep) """ Explanation: Do the Sweep End of explanation """ print(len(sweep)) sweep[0] train_and_run_classifier(method_parameters_combinations, reference_dbs, pipelines, sweep, verbose=True, n_jobs=4) """ Explanation: A quick sanity check never hurt anyone... End of explanation """ taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_set_tax_assignments.txt') generate_per_method_biom_tables(taxonomy_glob, data_dir) """ Explanation: Generate per-method biom tables Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells. End of explanation """ precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name) method_dirs = glob(join(results_dir, '*', '*', '*', '*')) move_results_to_repository(method_dirs, precomputed_results_dir) """ Explanation: Move result files to repository Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless if substantial changes were made to filepaths in the preceding cells. End of explanation """
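Before launching a full sweep it can be useful to know how many classifier configurations a parameter grid will expand to. The snippet below is a standalone illustration of that enumeration logic using itertools.product; the example grid mirrors the multinomial naive Bayes settings above and is not a call into the tax-credit helpers.
from itertools import product

def expand_grid(param_grid):
    # Yield one dict per combination of parameter values
    names = sorted(param_grid)
    for values in product(*(param_grid[name] for name in names)):
        yield dict(zip(names, values))

example_grid = {'confidence': [0.0, 0.2, 0.4, 0.6, 0.8],
                'classify__alpha': [0.001, 0.01, 0.1],
                'feat_ext__ngram_range': [[8, 8], [12, 12], [20, 20]]}

combos = list(expand_grid(example_grid))
print(len(combos), 'combinations, for example:', combos[0])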
feststelltaste/software-analytics
prototypes/ForensicFiles.ipynb
gpl-3.0
import glob file_list = glob.glob(r'C:/dev/forensic/data/**/*.txt', recursive=True) file_list = [x.replace("\\", "/") for x in file_list] file_list[:5] """ Explanation: Introduction Idea The claim was that the directory structure would be very similar to each other over a period of time. We want to identify this time span by using a time-based analysis on the commits and their corresponding directory structures. We can use some advanced Git repo analysis for this task. Data creation script Iterates over all commits and extracts basic information about the commit like sha, author and commit date (in log.txt) as well as the file's list of a specific version (in files.txt). For each information set, it a new directory with the sha as unique identifier is created. ```bash cd $1 sha_list=git rev-list master for sha in $sha_list: do data_dir="../data/$1/$sha" mkdir -p $data_dir git checkout $sha git log -n 1 $sha > $data_dir/log.txt git ls-files > $data_dir/files.txt done ``` You can store this script e. g. into extract.sh and execute it for a repository with bash sh execute.sh &lt;path_git_repo&gt; and you'll get a directory / files structure like this . ├── data │   ├── lerna │   │   ├── 001ec5882630cedd895f2c95a56a755617bb036c │   │   │   ├── files.txt │   │   │   └── log.txt │   │   ├── 00242afa1efa43a98dc84815ac8f554ffa58d472 │   │   │   ├── files.txt │   │   │   └── log.txt │   │   ├── 007f20b89ae33721bd08f8bcdd0768923bcc6bc5 │   │   │   ├── files.txt │   │   │   └── log.txt The content is as follows: files.txt .babelrc .editorconfig .eslintrc.yaml .github/ISSUE_TEMPLATE.md .github/PULL_REQUEST_TEMPLATE.md .gitignore .npmignore .travis.yml CHANGELOG.md CODE_OF_CONDUCT.md CONTRIBUTING.md FAQ.md LICENSE README.md appveyor.yml bin/lerna.js doc/hoist.md doc/troubleshooting.md lerna.json package.json src/ChildProcessUtilities.js src/Command.js src/ConventionalCommitUtilities.js src/FileSystemUtilities.js src/GitUtilities.js src/NpmUtilities.js ... log.txt ``` commit 001ec5882630cedd895f2c95a56a755617bb036c Author: Daniel Stockman &#100;&#97;&#110;&#105;&#101;&#108;&#115;&#64;&#122;&#105;&#108;&#108;&#111;&#119;&#103;&#114;&#111;&#117;&#112;&#46;&#99;&#111;&#109; Date: Thu Aug 10 09:56:14 2017 -0700 chore: fs-extra 4.x ``` With this data, we have the base for analysing a probably similar directure structure layout over time. abd83718682d7496426bb35f2f9ca20f10c2468d,2015-12-04 23:29:27 +1100 .gitignore LICENSE README.md bin/lerna.js lib/commands/bootstrap.js lib/commands/index.js lib/commands/publish.js lib/init.js lib/progress-bar.js package.json Load all files with the files listings I've executed the script for lerna as well as the web-build-tools. First, we first get all the files.txt using glob. 
End of explanation """ import pandas as pd dfs = [] for files_file in file_list: try: files_df = pd.read_csv(files_file, names=['sha', 'timestamp']) files_df['project'] = files_file.split("/")[-2] files_df['file'] = files_df.sha files_df['sha'] = files_df.sha[0] files_df['timestamp'] = pd.to_datetime(files_df.timestamp[0]) files_df = files_df[1:] files_df dfs.append(files_df) except OSError as e: print((e,files_file)) file_log = pd.concat(dfs, ignore_index=True) file_log.head() file_log.file = pd.Categorical(file_log.file) file_log.info() dir_log = dir_log[ (dir_log.project=='lerna') & (dir_log.file.str.endswith(".js")) | (dir_log.project=='web-build-tools') & (dir_log.file.str.endswith(".ts")) ] dir_log.project.value_counts() dir_log = dir_log[dir_log.file.str.contains("/")].copy() dir_log['last_dir'] = dir_log.file.str.split("/").str[-2] dir_log['last_dir_id'] = pd.factorize(dir_log.last_dir)[0] dir_log.head() dir_log['date'] = dir_log.timestamp.dt.date dir_log.head() grouped = dir_log.groupby(['project', pd.Grouper(level='date', freq="D"),'last_dir_id'])[['sha']].last() grouped.head() grouped['existent'] = 1 grouped.head() test = grouped.pivot_table('existent', ['project', 'date'], 'last_dir_id').fillna(0) test.head() lerna = test.loc['lerna'][0] lerna %maplotlib inline test.plot() timed_log = dir_log.set_index(['timestamp', 'project']) timed_log.head() timed_log.resample("W").first() %matplotlib inline timed.\ pivot_table('last_dir_id', timed.index, 'project')\ .fillna(method='ffill').dropna().plot() """ Explanation: We can then import the data by looping through all the files and read in the corresponding files' content. We further extract the information items we need on the fly from the path as well as the content of log.txt. The result is stored into a Pandas DataFrame for further analysis. End of explanation """ file_log[file_log.project == "lerna"].iloc[0] file_log[file_log.project == "web-build-tools"].iloc[0] """ Explanation: For each file, we have now a row the complete commit information available for both repositories. End of explanation """ file_log.info() """ Explanation: Basic statistics Let's take a look at our read-in data. End of explanation """ file_log.project.value_counts() """ Explanation: These are the number of entries for each repository. End of explanation """ file_log.groupby('project').sha.nunique() """ Explanation: The amount of commits for each repository are. End of explanation """ file_log[file_log.project=="web-build-tools"].iloc[0] file_log[file_log.project=="web-build-tools"].file.iloc[-10:] lerna = file_log[file_log.project == "lerna"] lerna.info() rush = file_log[file_log.project == "web-build-tools"] rush.info() from scipy.spatial.distance import hamming def calculate_hamming(row): lerna = row.file_list_lerna.split("\n") lerna = [x.rsplit(".", maxsplit=1)[0] for x in lerna] rush = row.file_list_rush.split("\n") rush = [x.rsplit(".", maxsplit=1)[0] for x in rush] count = 0 for i in lerna: if i in rush: count = count + 1 return count comp["amount"] = comp.apply(calculate_hamming, axis=1) comp.head() %matplotlib inline comp.amount.plot() comp.resample("W").amount.mean().plot() comp[comp.amount == comp.amount.max()] """ Explanation: Data preparation We need to adopt the data to the domain analyzed. We want to create a similarity measure between the directory structure of the lerna repository and the rush componente of the web-build-tools repository. The later is a little bit tricky, because there is a shift in the directory renaming. 
End of explanation """
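The calculate_hamming comparison above counts shared entries; an alternative that is bounded between 0 and 1 is the Jaccard similarity of the two file sets. This standalone sketch strips extensions the same way as the function above, and the two small file lists are made-up examples.
def jaccard_similarity(files_a, files_b):
    # Share of extension-stripped paths that appear in both listings
    a = {f.rsplit('.', 1)[0] for f in files_a}
    b = {f.rsplit('.', 1)[0] for f in files_b}
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

print(jaccard_similarity(
    ['src/Command.js', 'src/GitUtilities.js', 'package.json'],
    ['src/Command.ts', 'src/NpmUtilities.ts', 'package.json']))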
poppy-project/community-notebooks
tutorials-education/poppy-torso__vrep_Prototype d'ininitiation à l'informatique pour les lycéens/decouverte/Decouverte TP3.ipynb
lgpl-3.0
from poppy.creatures import PoppyTorso poppy = PoppyTorso(simulator='vrep') """ Explanation: Decouverte – Niveau 1 - Python TP3 Pour commencer votre programme python devra contenir les lignes de code ci-dessous et le logiciel V-REP devra être lancé. Dans V-REP (en haut à gauche) utilise les deux icones flèche pour déplacer la vue et regarder poppy sous tous les angles.<br> Dans notebook, utilise le raccourci 'Ctrl+Enter' pour éxécuter les commandes. End of explanation """ print "\n------------\nDico :" dico = {} for m in poppy.motors: dico[m.name] = m.present_position print dico print '\n------------' print 'Valeur à la clé head_z :', dico['head_z'] print 'Valeur à la clé head_y :', dico['head_y'] print 'Valeur à la clé l_arm_z :', dico['l_arm_z'] print '\n------------\nDico :' for m in dico: print 'cle: ',m print 'valeur : ',dico[m] print '-' print 'fin\n-------------' """ Explanation: <h2>Boucle et memoire</h2> <h3>Les dictionnaires</h3> Les dictionnaires varient des listes dans l'accès aux éléments. Comme nous l'avons vu l'accès aux éléments de la liste se fait grâce à leur numéro d'index.<br> Pour accéder aux éléments d'un dictionnaire, nous passons par une clé. Essaies les commandes: End of explanation """ dico = { m.name : m.present_position for m in poppy.motors } cle = dico.keys() print cle compteur = 0 for m in cle: compteur = compteur + 1 print 'clé : ', m print '\nNombre total de clé = ', compteur print "Nombre d'élément dans la liste 'cle' = ", len(cle) """ Explanation: Ainsi, il suffit de connaitre la clé pour récupérer l'information associée.<br> Cette information stockée peut-être: du texte, une variable, une liste, un dicitionnaire, etc Pour récupérer l'ensemble des clés d'un dictionnaire essaies la commande: End of explanation """ poppy.head_z.goal_position = 0 poppy.head_z.goal_position = 90 poppy.head_z.goal_position = 0 """ Explanation: <h3>Boucle while & condition</h3> Essaies les commandes: End of explanation """ import time poppy.head_z.goal_position = 0 time.sleep(.1) poppy.head_z.goal_position = 90 time.sleep(.1) poppy.head_z.goal_position = 0 """ Explanation: Se passe-t-il quelque chose ? oui / non Y-a-t-il un bug ? oui / non les commandes s'éxecutent-t-elles? oui / non trop vite? oui / non Essaies les commandes: End of explanation """ attendre = True poppy.head_z.goal_position = 90 while attendre == True: time.sleep(1) if poppy.head_z.present_position == 90: attendre = False poppy.head_z.goal_position = 0 """ Explanation: On comprend mieux ce qu'il se passe: les commandes s'éxecutent trop rapidement pour l'observer. On voudrait trouver une méthode pour attendre qu'une condition soit remplie ; autrement dit: Tant que (while) 'attendre' est vrai (True) : (faire), si (if) la position du moteur est égale (==) à la position voulue : (faire), 'attendre' devient faux (False). 
Ceci se traduit par: End of explanation """ #essaies ton code ici # correction def time_position(m,goal,unite=0.001): start,attendre,tps=m.present_position,True,0 poppy.head_z.goal_position = goal while attendre==True: time.sleep(unite) if poppy.head_z.present_position == goal: attendre=False else: tps+=1 poppy.head_z.goal_position = start return tps print time_position(poppy.head_z,90), 'milliseconde' time.sleep(1) print time_position(poppy.head_z,90,1), 'seconde' time.sleep(1) print time_position(poppy.head_z,90,.1), 'dixième de seconde' time.sleep(1) print time_position(poppy.head_z,90,.01), 'centième de seconde' for i in range(5): #problème précision time.sleep(0.5) print time_position(poppy.head_z,90) """ Explanation: On peut même récupérer le temps (approximatif) qu'il a fallu pour effectuer le mouvement. Comment ? End of explanation """ messager.reset_simulation() """ Explanation: Tu as raté? c'est pas grâve, recommmence, essaies ces lignes pour redémarrer : End of explanation """ import pypot poppy.stop_simulation() pypot.vrep.close_all_connections() from poppy.creatures import PoppyTorso poppy=PoppyTorso(simulator='vrep') """ Explanation: Encore buger ? essaies celles-ci : End of explanation """ import pypot poppy.stop_simulation() pypot.vrep.close_all_connections() """ Explanation: Tu as fini? coupes la simulation ici: End of explanation """
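The exact equality test used above (present_position == 90) can miss because of numerical precision, which the last cell hints at with its "problème précision" comment. Below is a small tolerance-based variant of the same waiting loop; the tolerance, timeout and sleep step are assumptions to adjust for the simulated robot.
import time

def goto_and_wait(motor, goal, tolerance=1.0, timeout=5.0, step=0.05):
    # Send the goal and wait until the motor is close enough, or give up after timeout
    motor.goal_position = goal
    waited = 0.0
    while abs(motor.present_position - goal) > tolerance:
        time.sleep(step)
        waited += step
        if waited >= timeout:
            return False
    return True

# Example: goto_and_wait(poppy.head_z, 90)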
spennihana/h2o-3
h2o-py/demos/EEG_eyestate_sklearn_NOPASS.ipynb
apache-2.0
import pandas as pd
import numpy as np
from collections import Counter
""" Explanation: Scikit-Learn singalong: EEG Eye State Classification Author: Kevin Yang Contact: kyang@h2o.ai This tutorial replicates Erin LeDell's oncology demo using Scikit Learn and Pandas, and is intended to provide a comparison of the syntactical and performance differences between sklearn and H2O implementations of Gradient Boosting Machines. We'll be using Pandas, Numpy and the collections package for most of the data exploration. End of explanation """
csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
data = pd.read_csv(csv_url)
""" Explanation: Download EEG Data The following code downloads a copy of the EEG Eye State dataset. All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added manually to the file later, after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data. Let's import the same dataset directly with pandas. End of explanation """
data.shape
""" Explanation: Explore Data Once we have loaded the data, let's take a quick look. First, the dimensions of the frame: End of explanation """
data.head()
""" Explanation: Now let's take a look at the top of the frame: End of explanation """
data.columns.tolist()
""" Explanation: Let's take a look at the column names. The frame contains the EEG channel measurements (e.g. 'AF3'), the 'eyeDetection' column, which is the response we want to predict, and a 'split' column that marks which partition (train, validation or test) each row belongs to. End of explanation """
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head(10)
""" Explanation: To select a subset of the columns to look at, typical Pandas indexing applies: End of explanation """
data['eyeDetection'].head()
""" Explanation: Now let's select a single column, for example -- the response column, and look at the data more closely: End of explanation """
data['eyeDetection'].unique()
""" Explanation: It looks like a binary response, but let's validate that assumption: End of explanation """
data['eyeDetection'].nunique()
""" Explanation: We can also count the number of distinct levels ('1' indicates eye-closed and '0' eye-open): End of explanation """
data.isnull()
data['eyeDetection'].isnull()
""" Explanation: Since the 'eyeDetection' column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isnull method on the 'eyeDetection' column. Each column of a pandas DataFrame is itself a Series, so the methods that apply to the whole frame also apply to a single column. End of explanation """
data['eyeDetection'].isnull().sum()
""" Explanation: The isnull method doesn't directly answer the question, "Does the eyeDetection column contain any NAs?"; rather, it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, summing over the whole column should produce a total equal to 0. Let's take a look: End of explanation """
data['eyeDetection'].isnull().sum()
""" Explanation: Great, no missing labels. 
Out of curiosity, let's see if there is any missing data in this frame: End of explanation """
data.isnull().sum()
""" Explanation: The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically. End of explanation """
Counter(data['eyeDetection'])
""" Explanation: Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algorithms. (We will revisit this later in the modeling section below). Let's calculate the percentage that each class represents: End of explanation """
n = data.shape[0]  # Total number of samples
np.array(list(Counter(data['eyeDetection']).values()))/float(n)
""" Explanation: Split the data into train, validation and test sets So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, a validation set and a test set. (In H2O you could use the split_frame method to do the splitting for you.) However, we have explicit splits that we want (for reproducibility reasons), so we can just subset the DataFrame on the 'split' column to get the partitions we want. End of explanation """
train = data[data['split']=="train"]
train.shape
valid = data[data['split']=="valid"]
valid.shape
test = data[data['split']=="test"]
test.shape
""" Explanation: Machine Learning with scikit-learn We will do a quick demo of scikit-learn -- trying to predict eye state (open/closed) from EEG data. Specify the predictor set and response The response, y, is the 'eyeDetection' column, and the predictors, x, are all the columns aside from 'eyeDetection' and 'split'. End of explanation """
y = 'eyeDetection'
x = data.columns.drop(['eyeDetection','split'])
""" Explanation: Split the data into train, validation and test sets End of explanation """
from sklearn.ensemble import GradientBoostingClassifier
import sklearn
test.shape
""" Explanation: Train and Test a GBM model End of explanation """
model = GradientBoostingClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
X=train[x].reset_index(drop=True)
y=train[y].reset_index(drop=True)
model.fit(X, y)
print(model)
""" Explanation: Inspect Model End of explanation """
model.get_params()
""" Explanation: Model Performance on a Test Set End of explanation """
from sklearn.metrics import r2_score, roc_auc_score, mean_squared_error
X_test = test[x].reset_index(drop=True)
y_test = test['eyeDetection'].reset_index(drop=True)
y_pred = model.predict(X_test)
r2_score(y_test, y_pred)
roc_auc_score(y_test, y_pred)
mean_squared_error(y_test, y_pred)
""" Explanation: Cross-validated Performance End of explanation """
from sklearn import cross_validation
cross_validation.cross_val_score(model, X, y, scoring='roc_auc', cv=5)
cross_validation.cross_val_score(model, valid[x].reset_index(drop=True), valid['eyeDetection'].reset_index(drop=True), scoring='roc_auc', cv=5)
""" Explanation: Cross-validated Performance End of explanation """
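The metrics above score hard class predictions; ROC AUC is usually computed from predicted probabilities instead. The short sketch below is only an illustration: it reuses the model, x, valid and test objects defined above and assumes scikit-learn's standard predict_proba and roc_auc_score behaviour.

from sklearn.metrics import roc_auc_score

for name, frame in [('valid', valid), ('test', test)]:
    # probability of the positive class (eyeDetection == 1)
    proba = model.predict_proba(frame[x].reset_index(drop=True))[:, 1]
    truth = frame['eyeDetection'].reset_index(drop=True)
    print('{} AUC: {}'.format(name, roc_auc_score(truth, proba)))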
lukasmerten/CRPropa3
doc/pages/example_notebooks/trajectories/trajectories.v4.ipynb
gpl-3.0
from crpropa import * randomSeed = 42 turbSpectrum = SimpleTurbulenceSpectrum(Brms=8*nG, lMin = 60*kpc, lMax=800*kpc, sIndex=5./3.) gridprops = GridProperties(Vector3d(0), 256, 30*kpc) BField = SimpleGridTurbulence(turbSpectrum, gridprops, randomSeed) # print some properties of our field print('Lc = {:.1f} kpc'.format(BField.getCorrelationLength() / kpc)) # correlation length print('sqrt(<B^2>) = {:.1f} nG'.format(BField.getBrms() / nG)) # RMS print('<|B|> = {:.1f} nG'.format(BField.getMeanFieldStrength() / nG)) # mean print('B(10 Mpc, 0, 0) =', BField.getField(Vector3d(10,0,0) * Mpc) / nG, 'nG') """ Explanation: 3D trajectories in a turbulent field The following simulation tracks a single UHE nucleus and its secondary nucleons/nuclei through a turbulent magnetic field. First we create a random realization of a turbulent field with a Kolmogorov power spectrum on 60-800 kpc lengthscales and an RMS field strength of 8 nG. The field is stored on a $256^3$ grid with 30 kpc grid spacing, and thus has an extent of $(256 \cdot 30 \rm{kpc})^3$. The field is by default periodically repeated in space to cover an arbitrary volume. The chosen grid size consumes only very little memory. For practical purposes a larger grid is advised in order to represent more variations of turbulent modes, provide a larger turbulent range, or a higher resolution. End of explanation """ # save the field # format: (Bx, By, Bz)(x, y, z) with z changing the quickest. #dumpGrid(BField.getGrid(), 'myfield.dat') # binary, single precision #dumpGridToTxt(Bfield.getGrid(), 'myfield.txt') # ASCII # load your own field #vgrid=Grid3f(gridprops) #loadGrid(vgrid, 'myfield.dat') #loadGridFromTxt(vgrid, 'myfield.txt') """ Explanation: Saving and loading fields In addition to creating random turbulent fields, we can also load and save custom magnetic field grids. As input and output we currently support binary files in single precision and ASCII files. End of explanation """ sim = ModuleList() sim.add(PropagationCK(BField)) sim.add(PhotoPionProduction(CMB())) sim.add(PhotoPionProduction(IRB_Kneiske04())) sim.add(PhotoDisintegration(CMB())) sim.add(PhotoDisintegration(IRB_Kneiske04())) sim.add(ElectronPairProduction(CMB())) sim.add(ElectronPairProduction(IRB_Kneiske04())) sim.add(NuclearDecay()) sim.add(MaximumTrajectoryLength(25 * Mpc)) output = TextOutput('trajectory.txt', Output.Trajectory3D) sim.add(output) x = Vector3d(0,0,0) # position p = Vector3d(1,1,0) # direction c = Candidate(nucleusId(16, 8), 100 * EeV, x, p) sim.run(c, True) """ Explanation: Running the simulation Now that we have our magnetic field ready we can fire up our simulation and hope that something visually interesting is going to happen. 
End of explanation """ %matplotlib inline from pylab import * from mpl_toolkits.mplot3d import axes3d output.close() data = genfromtxt('trajectory.txt', names=True) # trajectory points x, y, z = data['X'], data['Y'], data['Z'] # translate particle ID to charge number Z = [chargeNumber(int(Id)) for Id in data['ID'].astype(int)] # translate the charge number to color and size # --> protons are blue, Helium is green, everthing else is red colorDict = {0:'k', 1:'b', 2:'g', 3:'r', 4:'r', 5:'r', 6:'r', 7:'r', 8:'r'} sizeDict = {0:4, 1:4, 2:8, 3:10, 4:10, 5:10, 6:10, 7:10, 8:10} colors = [colorDict[z] for z in Z] sizes = [sizeDict[z] for z in Z] fig = plt.figure(figsize=(12, 5))#plt.figaspect(0.5)) ax = fig.gca(projection='3d')# , aspect='equal' ax.scatter(x,y,z+6, 'o', s=sizes, color=colors) ax.set_xlabel('x / Mpc', fontsize=18) ax.set_ylabel('y / Mpc', fontsize=18) ax.set_zlabel('z / Mpc', fontsize=18) ax.set_xlim((-1, 16)) ax.set_ylim((-1, 16)) ax.set_zlim((-1, 16)) ax.xaxis.set_ticks((0, 5, 10, 15)) ax.yaxis.set_ticks((0, 5, 10, 15)) ax.zaxis.set_ticks((0, 5, 10, 15)) show() """ Explanation: (Optional) Plotting We plot the trajectory of our oxygen-16 nucleus. To distinguish between secondary nuclei the following colors are used: protons are blue, alpha particles are green, everthing heavier is red. End of explanation """
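Besides plotting, the trajectory file can be summarised numerically. The sketch below simply tallies how many recorded trajectory points belong to each charge number, reusing the 'trajectory.txt' output and the crpropa chargeNumber helper already used above; it is an illustration, not part of the original example.

import numpy as np
from crpropa import chargeNumber

traj = np.genfromtxt('trajectory.txt', names=True)
charges = np.array([chargeNumber(int(i)) for i in traj['ID'].astype(int)])
for Z in np.unique(charges):
    # Z = 1 are protons, Z = 2 helium, larger Z heavier fragments
    print('Z = {}: {} trajectory points'.format(int(Z), int((charges == Z).sum())))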
Hyperparticle/deep-learning-foundation
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] """ Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. End of explanation """ view_sentence_range = (30, 40) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ vocab = set(text) vocab_to_int = {word: i for i,word in enumerate(vocab)} int_to_vocab = {i: word for i,word in enumerate(vocab)} return vocab_to_int, int_to_vocab """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables) """ Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation """ def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ lookup = { '.': '||period||', ',': '||comma||', '"': '||quotation_mark||', ';': '||semicolon||', '!': '||exclamation_mark||', '?': '||question_mark||', '(': '||left_parenthesis||', ')': '||right_parenthesis||', '--': '||dash||', '\n': '||return||' } return lookup """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup) """ Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. 
However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU End of explanation """ def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ input_ = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') return input_, targets, learning_rate """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs) """ Explanation: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate) End of explanation """ rnn_layers = 1 # The number of layers of the RNN component keep_prob = 1.0 # The keep probability for dropout def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. 
:param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ # Build a basic LSTM cell with dropout def build_cell(lstm_size, keep_prob): lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return lstm cells = [build_cell(rnn_size, keep_prob) for _ in range(rnn_layers)] cell = tf.contrib.rnn.MultiRNNCell(cells) initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state') return cell, initial_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell) """ Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation """ def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1, 1)) embed = tf.nn.embedding_lookup(embedding, input_data) return embed """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed) """ Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. End of explanation """ def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name='final_state') return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn) """ Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. - Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) End of explanation """ def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) """ embed = get_embed(input_data, vocab_size, embed_dim) outputs, final_state = build_rnn(cell, embed) logits = tf.contrib.layers.fully_connected( outputs, vocab_size, activation_fn=None, weights_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01), biases_initializer=tf.zeros_initializer() ) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn) """ Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. 
Return the logits and final state in the following tuple (Logits, FinalState) End of explanation """ def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ # Get the number of characters per batch and number of batches we can make characters_per_batch = batch_size * seq_length n_batches = len(int_text) // characters_per_batch # Keep only enough characters to make full batches total_batch = np.array(int_text[:n_batches * characters_per_batch]) # Reshape the batch into the appropriate tensor features = total_batch.reshape(batch_size, -1) # batch_size rows features = np.split(features, n_batches, axis=1) # Select sequences across the columns # The targets are like the features, except shifted (rotated) by 1 targets = np.roll(total_batch, -1) targets = targets.reshape(batch_size, -1) targets = np.split(targets, n_batches, axis=1) # Zip the targets and features together batches = np.array(list(zip(features, targets))) return batches """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches) """ Explanation: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive. End of explanation """ # Number of Epochs num_epochs = 100 # Batch Size batch_size = 128 # RNN Size rnn_size = 256 # Embedding Dimension Size embed_dim = 300 # Sequence Length seq_length = 32 # Learning Rate learning_rate = 0.01 # Show stats for every n number of batches show_every_n_batches = 20 rnn_layers = 1 # The number of layers of the RNN component keep_prob = 1.0 # The keep probability for dropout """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save' """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) """ Explanation: Save Parameters Save seq_length and save_dir for generating a new TV script. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() """ Explanation: Checkpoint End of explanation """ def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ input_ = loaded_graph.get_tensor_by_name('input:0') initial_state = loaded_graph.get_tensor_by_name('initial_state:0') final_state = loaded_graph.get_tensor_by_name('final_state:0') probs = loaded_graph.get_tensor_by_name('probs:0') return input_, initial_state, final_state, probs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors) """ Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). 
Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation """ def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ probabilities = np.squeeze(probabilities) # Choose a number from 0 to the vocab length with probability choice = np.random.choice(len(int_to_vocab), p=probabilities) # Return the corresponding word return int_to_vocab[choice] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word) """ Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. End of explanation """ gen_length = 200 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[:, dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) """ Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation """
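pick_word samples purely from the softmax probabilities. A common variant adds a temperature parameter to trade off predictability against variety; this is not part of the original project, so treat the function below as an optional, hedged drop-in replacement for pick_word in the generation loop.

import numpy as np

def pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.7):
    # Rescale the distribution: temperatures below 1 sharpen it (more predictable
    # text), temperatures above 1 flatten it (more surprising text).
    probs = np.squeeze(probabilities).astype(np.float64)
    logits = np.log(probs + 1e-10) / temperature
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    choice = np.random.choice(len(int_to_vocab), p=probs)
    return int_to_vocab[choice]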
vadim-ivlev/STUDY
handson-data-science-python/DataScience-Python3/NaiveBayes.ipynb
mit
import os import io import numpy from pandas import DataFrame from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB def readFiles(path): for root, dirnames, filenames in os.walk(path): for filename in filenames: path = os.path.join(root, filename) inBody = False lines = [] f = io.open(path, 'r', encoding='latin1') for line in f: if inBody: lines.append(line) elif line == '\n': inBody = True f.close() message = '\n'.join(lines) yield path, message def dataFrameFromDirectory(path, classification): rows = [] index = [] for filename, message in readFiles(path): rows.append({'message': message, 'class': classification}) index.append(filename) return DataFrame(rows, index=index) data = DataFrame({'message': [], 'class': []}) data = data.append(dataFrameFromDirectory('e:/sundog-consult/Udemy/DataScience/emails/spam', 'spam')) data = data.append(dataFrameFromDirectory('e:/sundog-consult/Udemy/DataScience/emails/ham', 'ham')) """ Explanation: Naive Bayes (the easy way) We'll cheat by using sklearn.naive_bayes to train a spam classifier! Most of the code is just loading our training data into a pandas DataFrame that we can play with: End of explanation """ data.head() """ Explanation: Let's have a look at that DataFrame: End of explanation """ vectorizer = CountVectorizer() counts = vectorizer.fit_transform(data['message'].values) classifier = MultinomialNB() targets = data['class'].values classifier.fit(counts, targets) """ Explanation: Now we will use a CountVectorizer to split up each message into its list of words, and throw that into a MultinomialNB classifier. Call fit() and we've got a trained spam filter ready to go! It's just that easy. End of explanation """ examples = ['Free Viagra now!!!', "Hi Bob, how about a game of golf tomorrow?"] example_counts = vectorizer.transform(examples) predictions = classifier.predict(example_counts) predictions """ Explanation: Let's try it out: End of explanation """
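The two hand-written examples above give a feel for the classifier, but a held-out evaluation is more informative. The sketch below assumes the data DataFrame built earlier; the import path for train_test_split is sklearn.model_selection on current scikit-learn (sklearn.cross_validation on very old versions), and the 80/20 split and random seed are arbitrary choices.

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# hold out 20% of the messages for testing
train_df, test_df = train_test_split(data, test_size=0.2, random_state=42)

vec = CountVectorizer()
train_counts = vec.fit_transform(train_df['message'].values)
clf = MultinomialNB().fit(train_counts, train_df['class'].values)

test_counts = vec.transform(test_df['message'].values)
print(accuracy_score(test_df['class'].values, clf.predict(test_counts)))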
hasadna/knesset-data-pipelines
jupyter-notebooks/committee protocol parts classification using catma.ipynb
mit
import csv import xml.etree.ElementTree as ET from os import listdir import re import subprocess from tempfile import mkdtemp from glob import glob target = 'target' ana_name = 'ana' TALKER_SEP = '_TALKER_' def get_talker(all_talkers,indices, b): for i in range(len(indices)): if indices[i] > b: return all_talkers[i-1].strip() def get_annotations_text(root, text, as_list=False): texts = {} talkers = {} all_talkers = re.findall('([א-ת])+ [א-ת]+:\n', text) all_talkers = re.findall('\n.*:\n', text) valid_talkers = [] for talker in all_talkers: if len(talker) > 2 and len(talker) <= 40: valid_talkers.append(talker) all_talkers = valid_talkers talkers_indices=[] i = 0 for name in all_talkers: i = text.find(name, i) talkers_indices.append(i) for n in root.iter('{http://www.tei-c.org/ns/1.0}seg'): ana = '' for a in n.iter(): attr = a.attrib #print(attr) if ana_name in attr: ana = attr[ana_name][1:] # print("ana="+ana) if target in attr: b,e = (attr[target].split('=')[1]).split(",") #print(b+"," + e) a = attr[target].split('#')[0] t = text[int(b):int(e) + 1] #print("b,e %s,%s " %(b,e)) if len(t) >= 1: talker = get_talker(all_talkers, talkers_indices, int(b)) if ana in texts: #print("a="+a) #print("t="+t) texts[ana].append(t) #print("appending " + t + " to " + ana) else: #print("inserting " + t + " to " + ana + "...") texts[ana]=[t] talkers[ana] = talker cats = {} for aa in root.iter('{http://www.tei-c.org/ns/1.0}fsDecl'): for a in aa.iter('{http://www.tei-c.org/ns/1.0}fsDecl'): # print (a.tag) att = a.attrib for x in a.iter(): if x.tag == '{http://www.tei-c.org/ns/1.0}fsDescr' and 'type' in att: name = x.text t = att['type'] if not t in cats: cats[t] = name annotaions_text = {} for c in cats: if as_list: annotaions_text[cats[c]] = [] else: annotaions_text[cats[c]] = '' for n in root.iter('{http://www.tei-c.org/ns/1.0}fs'): id = n.attrib['{http://www.w3.org/XML/1998/namespace}id'] if 'type' in n.attrib: t=n.attrib['type'] #print('type='+t) if id in texts: anno_cat = cats[t] anno_text = ''.join(texts[id]) anno_talker = "?" 
if talkers[id] is not None: anno_talker = talkers[id].strip() if as_list: annotaions_text[anno_cat].append((anno_talker,anno_text)) else: annotaions_text[anno_cat] = annotaions_text[anno_cat] + "\t" + anno_talker + TALKER_SEP + anno_text return annotaions_text def get_annotations(annotation_path, as_list=False, with_text=False): text_filenames = glob('{}/*.txt'.format(annotation_path)) assert len(text_filenames) == 1 text_filename = text_filenames[0] xml_filenames = glob('{}/*/*.xml'.format(annotation_path)) assert len(xml_filenames) == 1 xml_filename = xml_filenames[0] with open(text_filename,'r',encoding='utf-8') as file: text = file.read() text = text.replace("\n",'\n ') tree = ET.parse(xml_filename) root = tree.getroot() annotations = get_annotations_text(root, text, as_list=as_list) if with_text: return annotations, text else: return annotations """ Explanation: Define functions End of explanation """ corpus_filename = '/pipelines/data/catma/ההסדרים_אקראיים1909171124.tar.gz' corpus_dir = mkdtemp() subprocess.check_call('tar -xzvf "{}" -C "{}"'.format(corpus_filename, corpus_dir), shell=True) annotation_paths = glob('{}/*/*'.format(corpus_dir)) for i,p in enumerate(annotation_paths): print(i,p) """ Explanation: Catma is used by Bar Ilan University (BIU) to do manual classification / tagging of protocol parts Original protocol files are uploaded Catma which parses them into text, which BIU manually tags according to certain predefined tags (related to law) Need to export the corpus from Catma and provide the .tar.gz file as input for this notebook End of explanation """ annotation_path = annotation_paths[0] annotation_path get_annotations(annotation_path, as_list=True) """ Explanation: Choose an annotation path to get annotations from End of explanation """ from dataflows import Flow, printer known_categories = [ 'Judicial decision', 'constitutional turns', 'Doubt', 'Anticipating Judicial Review' ] def get_year(text): return re.findall('[2][0][0-9][0-9]', text)[0] yearly_counts = {} def get_annotation_file_stats(annotation_paths): for annotation_path in annotation_paths: annotations, text = get_annotations(annotation_path, as_list=True, with_text=True) year = get_year(text) if not yearly_counts.get(year): yearly_counts[year] = {c: 0 for c in known_categories} row = { 'year': year, 'dirname': annotation_path.replace(corpus_dir, '').strip('/'), **{ c: 0 for c in known_categories } } for category, category_annotations in annotations.items(): assert category in known_categories row[category] = len(category_annotations) yearly_counts[year][category] += len(category_annotations) yield row def get_yearly_counts(): for year, counts in yearly_counts.items(): yield { 'year': year, **counts } Flow( get_annotation_file_stats(annotation_paths), get_yearly_counts(), printer(tablefmt='html') ).process() """ Explanation: Get annotation statistics End of explanation """
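Printing the statistics is handy interactively; to keep them, the same Flow can write a datapackage to disk. The sketch below assumes the dump_to_path processor that ships with the dataflows package used above, and 'annotation_stats' is just a placeholder output folder. The module-level yearly_counts tally is cleared first so that re-running the generators does not double-count.

from dataflows import Flow, dump_to_path

yearly_counts.clear()  # reset the module-level tally before re-running the generators

Flow(
    get_annotation_file_stats(annotation_paths),
    get_yearly_counts(),
    dump_to_path('annotation_stats')
).process()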
Caranarq/01_Dmine
Datasets/INERE/.ipynb_checkpoints/INERE-checkpoint.ipynb
gpl-3.0
descripciones = { 'P0009' : 'Potencial de aprovechamiento energía solar', 'P0010' : 'Potencial de aprovechamiento energía eólica', 'P0011' : 'Potencial de aprovechamiento energía geotérmica', 'P0012' : 'Potencial de aprovechamiento energía de biomasa', 'P0606' : 'Generación mediante fuentes renovables de energía', 'P0607' : 'Potencial de fuentes renovables de energía', 'P0608' : 'Capacidad instalada para aprovechar fuentes renovables de energía' } # Librerias utilizadas import pandas as pd import sys import urllib import os import csv # Configuracion del sistema print('Python {} on {}'.format(sys.version, sys.platform)) print('Pandas version: {}'.format(pd.__version__)) import platform; print('Running on {} {}'.format(platform.system(), platform.release())) """ Explanation: Estandarizacion de datos del Inventario Nacional de Energias Renovables 1. Introduccion Indicadores que salen de esta fuente ID |Descripción ---|:---------- P0009|Potencial de aprovechamiento energía solar P0010|Potencial de aprovechamiento energía eólica P0011|Potencial de aprovechamiento energía geotérmica P0012|Potencial de aprovechamiento energía de biomasa P0606|Generación mediante fuentes renovables de energía P0607|Potencial de fuentes renovables de energía P0608|Capacidad instalada para aprovechar fuentes renovables de energía End of explanation """ # Lectura del dataset de energia renovable actual como descargado directorio = r'D:\PCCS\00_RawData\01_CSV\INERE\\' archivo = directorio+'Actual Energia Renovable.xls' raw_actual = pd.read_excel(archivo).dropna() raw_actual.head() # El dataset envía error cuando se intenta leer directamente """ Explanation: Descarga de datos Los datos se encuentran en la plataforma del Inventario Nacional de Energias Renovables (INERE) ubicada en https://dgel.energia.gob.mx/inere/, y tienen que descargarse manualmente porque su página está elaborada en Flash y no permite la descarga sistematizada de datos. A veces ni funciona por sí misma, manda errores al azar. Se descargaron dos datasets, uno que contiene el Inventario Actual y otro con el Inventario Potencial de energías renovables a nivel nacional. Como la base de datos no incluye claves geoestadísticas, estas tienen que asignarse manualmente. A continuacion se muestra el encabezado del archivo que se procesó a mano. End of explanation """ # Lectura del dataset de energia renovable actual después de ser re-guardado en excel directorio = r'D:\PCCS\00_RawData\01_CSV\INERE\\' archivo = directorio+'Actual Energia Renovable.xlsx' raw_actual = pd.read_excel(archivo).dropna() raw_actual.head() """ Explanation: Ninguno de los dos datasets puede ser leido por python tal como fue descargado, por lo que tienen que abrirse en excel y guardarse nuevamente en formato xlsx. Dataset energia renovable actual End of explanation """ # Lectura del dataset de energia renovable actual procesado manualmente directorio = r'D:\PCCS\00_RawData\01_CSV\INERE\\' archivo = directorio+'Actual CVE_GEO.xlsx' actual_proc = pd.read_excel(archivo, dtype={'CVE_MUN': 'str'}).dropna() actual_proc.head() """ Explanation: Se asignó CVE_MUN manualmente a la mayoría de los registros. 
No fue posible encontrar una clave geoestadística para las siguientes combinaciones de estado/municipio ESTADO |MUNICIPIO -------|:---------- Veracruz|Jiotepec Chiapas|Atizapan Oaxaca|Motzorongo Guerrero|La Venta Jalisco|Santa Rosa Para los siguientes registros, la CVE_MUN fue intuida desde el nombre de la población o el nombre del proyecto: ESTADO |MUNICIPIO|CVE_MUN|PROYECTO -------|:----------|-------|------ Puebla|Atencingo|21051|Ingenio de Atencingo Puebla|Tatlahuquitepec|21186|Mazatepec A continuación se presenta el encabezado del dataset procesado manualmente, incluyendo columnas que se utilizaron como auxiliares para la identificación de municipios End of explanation """ list(actual_proc) # Eliminacion de columnas redundantes y temporales del(actual_proc['ESTADO']) del(actual_proc['MUNICIPIO']) del(actual_proc['3EDO3']) del(actual_proc['3MUN3']) del(actual_proc['GEO_EDO']) del(actual_proc['GEOEDO_3MUN']) del(actual_proc['GEO_MUN_Nom']) # Nombre Unico de Coloumnas actual_proc = actual_proc.rename(columns = { 'NOMBRE' : 'NOMBRE PROYECTO', 'PRODUCTOR': 'SECTOR PRODUCCION', 'TIPO': 'TIPO FUENTE ENER', 'UNIDADES': 'UNIDADES GEN'}) # Asignacion de CVE_MUN como indice actual_proc.set_index('CVE_MUN', inplace=True) actual_proc.head() # Metadatos estándar metadatos = { 'Nombre del Dataset': 'Inventario Actual de Energias Renovables', 'Descripcion del dataset': '', 'Disponibilidad Temporal': '2014', 'Periodo de actualizacion': 'No Determinada', 'Nivel de Desagregacion': 'Localidad, Municipal, Estatal, Nacional', 'Notas': None, 'Fuente': 'SENER', 'URL_Fuente': 'https://dgel.energia.gob.mx/inere/', 'Dataset base': None, } # Convertir metadatos a dataframe actualmeta = pd.DataFrame.from_dict(metadatos, orient='index', dtype=None) actualmeta.columns = ['Descripcion'] actualmeta = actualmeta.rename_axis('Metadato') actualmeta list(actual_proc) # Descripciones de columnas variables = { 'NOMBRE PROYECTO': 'Nombre del proyecto de produccion de energia', 'SECTOR PRODUCCION': 'Sector al que pertenece el proyecto de produccion de energia', 'TIPO FUENTE ENER': 'Tipo de fuente de donde se obtiene la energía', 'UNIDADES GEN': 'Numero de generadores instalados por proyecto', 'CAPACIDAD INSTALADA (MW)': 'Capacidad Instalada en Megawatts', 'GENERACIÓN (GWh/a) ' : 'Generación de Gigawatts/hora al año' } # Convertir descripciones a dataframe actualvars = pd.DataFrame.from_dict(variables, orient='index', dtype=None) actualvars.columns = ['Descripcion'] actualvars = actualvars.rename_axis('Mnemonico') actualvars # Guardar dataset limpio para creacion de parametro. file = r'D:\PCCS\01_Dmine\Datasets\INERE\ER_Actual.xlsx' writer = pd.ExcelWriter(file) actual_proc.to_excel(writer, sheet_name = 'DATOS') actualmeta.to_excel(writer, sheet_name = 'METADATOS') actualvars.to_excel(writer, sheet_name = 'VARIABLES') writer.save() print('---------------TERMINADO---------------') """ Explanation: Para guardar el dataset y utilizarlo en la construcción del parámetro, se eliminarán algunas columnas. 
End of explanation """ # Lectura del dataset de potencial de energia renovable después de ser re-guardado en excel directorio = r'D:\PCCS\00_RawData\01_CSV\INERE\\' archivo = directorio+'Potencial Energia Renovable.xlsx' raw_potencial = pd.read_excel(archivo, dtype={'CVE_MUN': 'str'}).dropna() raw_potencial.head() # Eliminacion de columnas redundantes y temporales potencial_proc = raw_potencial del(potencial_proc['ESTADO']) del(potencial_proc['MUNICIPIO']) del(potencial_proc['3EDO3']) del(potencial_proc['3MUN3']) del(potencial_proc['GEO_EDO']) del(potencial_proc['GEOEDO_3MUN']) del(potencial_proc['GEO_MUN_Nom']) potencial_proc.head() potencial_proc['SUBCLASIFICACIÓN'].unique() # Nombre Unico de Coloumnas potencial_proc = potencial_proc.rename(columns = { 'PROYECTO' : 'NOMBRE PROYECTO', 'CLASIFICACIÓN': 'PROBABILIDAD', 'TIPO': 'TIPO FUENTE ENER', 'SUBCLASIFICACIÓN': 'NOTAS'}) # Asignacion de CVE_MUN como indice potencial_proc.set_index('CVE_MUN', inplace=True) potencial_proc.head() # Metadatos estándar metadatos = { 'Nombre del Dataset': 'Inventario Potencial de Energias Renovables', 'Descripcion del dataset': '', 'Disponibilidad Temporal': '2014', 'Periodo de actualizacion': 'No Determinada', 'Nivel de Desagregacion': 'Localidad, Municipal, Estatal, Nacional', 'Notas': None, 'Fuente': 'SENER', 'URL_Fuente': 'https://dgel.energia.gob.mx/inere/', 'Dataset base': None, } # Convertir metadatos a dataframe potenmeta = pd.DataFrame.from_dict(metadatos, orient='index', dtype=None) potenmeta.columns = ['Descripcion'] potenmeta = potenmeta.rename_axis('Metadato') potenmeta list(potencial_proc) potencial_proc['FUENTE'].unique() # Descripciones de columnas variables = { 'NOMBRE PROYECTO': 'Nombre del proyecto de produccion de energia', 'TIPO FUENTE ENER': 'Tipo de fuente de donde se obtiene la energía', 'PROBABILIDAD': 'Certeza respecto al proyecto deproduccion de energía', 'NOTAS': 'Notas', 'CAPACIDAD INSTALABLE (MW)': 'Capacidad Instalable en Megawatts', 'POTENCIAL (GWh/a) ' : 'Potencial de Generación de Gigawatts/hora al año', 'FUENTE': 'Fuente de información' } # Convertir descripciones a dataframe potencialvars = pd.DataFrame.from_dict(variables, orient='index', dtype=None) potencialvars.columns = ['Descripcion'] potencialvars = potencialvars.rename_axis('Mnemonico') potencialvars """ Explanation: Dataset Potencial de Energia Renovable End of explanation """
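The processed "actual" dataset was saved to Excel above, but the "potencial" frames are only displayed. A mirrored export would look like the sketch below; the output filename (ER_Potencial.xlsx) is an assumption, everything else follows the pattern already used for ER_Actual.xlsx.

# Guardar dataset limpio para creacion de parametro (sketch, mirrors the "actual" export)
file = r'D:\PCCS\01_Dmine\Datasets\INERE\ER_Potencial.xlsx'
writer = pd.ExcelWriter(file)
potencial_proc.to_excel(writer, sheet_name = 'DATOS')
potenmeta.to_excel(writer, sheet_name = 'METADATOS')
potencialvars.to_excel(writer, sheet_name = 'VARIABLES')
writer.save()
print('---------------TERMINADO---------------')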
particle-physics-playground/playground
activities/codebkg_DownloadData.ipynb
mit
import pps_tools as pps #pps.download_drive_file() #pps.download_file() """ Explanation: This notebook provides a way to download data files using the <a href="http://docs.python-requests.org/en/latest/">Python requests library</a>. You'll need to have this library installed on your system to do any work. The first thing we do is import some local helper code that allows us to to download a data file(s), given the url(s). We also define where those data files can be found. Make sure you execute the cell below before trying download any of the CLEO or CMS data! End of explanation """ cleo_MC_files = ['Single_D0B_to_KK_ISR_LARGE.dat', 'Single_D0B_to_Kenu_ISR_LARGE.dat', 'Single_D0B_to_Kpipi0_ISR_LARGE.dat', 'Single_D0B_to_Kstenu_ISR_LARGE.dat', 'Single_D0B_to_phigamma_ISR_LARGE.dat', 'Single_D0B_to_pipi_ISR_LARGE.dat', 'Single_D0_to_KK_ISR_LARGE.dat', 'Single_D0_to_Kenu_ISR_LARGE.dat', 'Single_D0_to_Kpi_LARGE.dat', 'Single_D0_to_Kpipi0_ISR_LARGE.dat', 'Single_D0_to_Kstenu_ISR_LARGE.dat', 'Single_D0_to_phigamma_ISR_LARGE.dat', 'Single_D0_to_pipi_ISR_LARGE.dat', 'Single_Dm_to_Kpipi_ISR_LARGE.dat', 'Single_Dp_to_Kpipi_ISR_LARGE.dat'] cleo_data_files = ['data31_100k_LARGE.dat'] """ Explanation: All of the data files for Particle Physics Playground are currently hosted in this Google Drive folder. To download them, you will need the download_drive_file function, which takes the file name (with proper file ending) as an argument, and downloads it as a file of the same name to the 'data' directory included when you cloned Playground. The download_file function can be used to download files from the web that aren't on Google Drive. This function takes a url address as an argument. Though this functionality is provided, it should be unnecessary for all of the included activities. <a href = "https://en.wikipedia.org/wiki/CLEO_(particle_detector)">CLEO</a> data Here is a list of Monte Carlo (MC) and data files from CLEO. The MC files are for specific decays of $D$ mesons, both charged and neutral. For any given file, there are always (CHECK THIS!!!!!!) two D mesons produced. One decays according to the measured branching fractions, and the other decays through a very specific process. The specific decay is in the name of the file. For example, Single_D0_to_Kpi_LARGE.dat would be simulating the following process: $$e^+e^- \rightarrow D^0 \bar{D}^0$$ $$D^0 \rightarrow \textrm{standard decays}$$ $$\bar{D^0} \rightarrow K^- \pi^+$$ where the $D^0$ and $\bar{D}^0$ can be exchanged. End of explanation """ ''' for filename in cleo_MC_files[0:2]: pps.download_drive_file(filename) '''; """ Explanation: Download the data here! The snippet below can be used to download as much or as little of the extra data as you like. It is currently commented now and is set up to download the first two CLEO MC files, but you can edit it to grab whatever data you like. Have fun! 
End of explanation """ cms_data_files = ['dimuons_100k.dat'] ''' pps.download_drive_file(cms_data_files[0]) '''; """ Explanation: <a href = "https://en.wikipedia.org/wiki/Compact_Muon_Solenoid">CMS</a> data CMS dimuon data End of explanation """ cms_top_quark_files = ['data.zip', 'ttbar.zip', 'wjets.zip', 'dy.zip', 'ww.zip', 'wz.zip', 'zz.zip', 'single_top.zip', 'qcd.zip'] ''' for filename in cms_top_quark_files[0:2]: pps.download_drive_file(filename) '''; """ Explanation: CMS data for top-quark reconstruction exercise End of explanation """ babar_data_files = ['basicPID_R24-AllEvents-Run1-OnPeak-R24-38.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-388.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1133.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1552.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1694.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1920.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-2026.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-2781.hdf5', 'basicPID_R24-AllEvents-Run1-OnPeak-R24-2835.hdf5'] ''' for filename in babar_data_files[0:2]: pps.download_drive_file(filename) '''; """ Explanation: BaBar data End of explanation """
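As a small usage sketch, the loop below downloads the first couple of BaBar files and checks that each one landed in the local 'data' directory described earlier; the relative path assumes the notebook is run from the repository root.

import os

for filename in babar_data_files[0:2]:
    pps.download_drive_file(filename)
    print('{} present locally: {}'.format(filename, os.path.exists(os.path.join('data', filename))))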
pdamodaran/yellowbrick
examples/Sangarshanan/comparing_corpus_visualizers.ipynb
apache-2.0
##### Import all the necessary Libraries from yellowbrick.text import TSNEVisualizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_extraction.text import CountVectorizer from yellowbrick.text import UMAPVisualizer from yellowbrick.datasets import load_hobbies """ Explanation: Comparing Corpus Visualizers on Yellowbrick End of explanation """ corpus = load_hobbies() """ Explanation: UMAP vs T-SNE Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data The data is uniformly distributed on a Riemannian manifold; The Riemannian metric is locally constant (or can be approximated as such); The manifold is locally connected. From these assumptions it is possible to model the manifold with a fuzzy topological structure. The embedding is found by searching for a low dimensional projection of the data that has the closest possible equivalent fuzzy topological structure. End of explanation """ def visualize(dim_reduction,encoding,corpus,labels = True,alpha=0.7,metric=None): if 'tfidf' in encoding.lower(): encode = TfidfVectorizer() if 'count' in encoding.lower(): encode = CountVectorizer() docs = encode.fit_transform(corpus.data) if labels is True: labels = corpus.target else: labels = None if 'umap' in dim_reduction.lower(): if metric is None: viz = UMAPVisualizer() else: viz = UMAPVisualizer(metric=metric) if 't-sne' in dim_reduction.lower(): viz = TSNEVisualizer(alpha = alpha) viz.fit(docs,labels) viz.poof() """ Explanation: Writing a Function to quickly Visualize Corpus Which can then be used for rapid comparison End of explanation """ visualize('t-sne','tfidf',corpus) visualize('t-sne','count',corpus,alpha = 0.5) visualize('t-sne','tfidf',corpus,labels =False) visualize('umap','tfidf',corpus) visualize('umap','tfidf',corpus,labels = False) visualize('umap','count',corpus,metric= 'cosine') """ Explanation: Quickly Comparing Plots by Controlling The Dimensionality Reduction technique used The Encoding Technique used The dataset to be visualized Whether to differentiate Labels or not Set the alpha parameter Set the metric for UMAP End of explanation """
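For side-by-side comparison it can help to save each projection to a file instead of only showing it inline. The sketch below mirrors the helper above and assumes Yellowbrick's poof() accepts an outpath argument (true for the versions this notebook targets); the .png filenames are placeholders.

def visualize_to_file(dim_reduction, encoding, corpus, outpath, labels=True):
    # same logic as visualize(), but the figure is written to `outpath`
    encode = TfidfVectorizer() if 'tfidf' in encoding.lower() else CountVectorizer()
    docs = encode.fit_transform(corpus.data)
    target = corpus.target if labels else None
    viz = UMAPVisualizer() if 'umap' in dim_reduction.lower() else TSNEVisualizer()
    viz.fit(docs, target)
    viz.poof(outpath=outpath)

visualize_to_file('umap', 'tfidf', corpus, 'umap_tfidf.png')
visualize_to_file('t-sne', 'tfidf', corpus, 'tsne_tfidf.png')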
xgcm/xmitgcm
doc/demo_writing_binary_file.ipynb
mit
import numpy as np import xmitgcm import matplotlib.pylab as plt """ Explanation: Use case: writing a binary input file for MITgcm You may want to write binary files to create forcing data, initial condition,... for your MITgcm configuration. Here we show how xmitgcm can help. Simple case: a regular grid End of explanation """ lon = np.arange(-180,180,1) lat = np.arange(-90,90,1) lon2, lat2 = np.meshgrid(lon,lat) pseudo = np.cos(2*np.pi*lat2/360) * np.cos(4*np.pi*np.pi*lon2*lon2/360/360) plt.contourf(lon2, lat2, pseudo) plt.colorbar() """ Explanation: Let's build a regular lat/lon grid with one degree resolution and create a pseudo-field on this regular grid: End of explanation """ xmitgcm.utils.write_to_binary? xmitgcm.utils.write_to_binary(pseudo.flatten(), 'file1.bin') """ Explanation: We can write the field as a binary file, to be used as an input file for the model with xmitgcm.utils.write_to_binary. Default is single precision, but double precision can be written with corresponding numpy.dtype. Note that here we use a numpy.array but one can use xarray as well using the DataArray.values End of explanation """ # First let's download a sample dataset ! wget https://ndownloader.figshare.com/files/14066567 ! tar -xf 14066567 """ Explanation: More complicated case: a LLC grid In this case, let's assume we have a xarray dataarray or dataset well formatted on the llc grid. This dataset can be the result of a regridding onto the LLC grid that we want to use as an initial condition for the model (for example). We need to generate the binary file that MITgcm can read. Here's what we can do: End of explanation """ extra_metadata = xmitgcm.utils.get_extra_metadata(domain='llc', nx=90) ds = xmitgcm.open_mdsdataset('./global_oce_llc90/', iters= [8], geometry='llc', extra_metadata=extra_metadata) ds """ Explanation: We can load this dataset with xmitgcm: End of explanation """ # temperature facets = xmitgcm.utils.rebuild_llc_facets(ds['T'], extra_metadata) compact = xmitgcm.utils.llc_facets_3d_spatial_to_compact(facets, 'Z', extra_metadata) xmitgcm.utils.write_to_binary(compact, 'T_initial_condition.bin') """ Explanation: Now let's say we want to use the temperature T and make it the initial condition for another simulation. First we need to re-build the facets, concatenate them into the compact form that MITgcm reads/writes and then write the compact to a binary file. We would do this as follows: End of explanation """ !md5sum T_initial_condition.bin !md5sum ./global_oce_llc90/T.0000000008.data """ Explanation: In this case, we already had the binary file to read from so we can compare checksums: End of explanation """
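A quick sanity check on the regular-grid file written at the start: read the raw bytes back with numpy and compare them to the source array. This assumes write_to_binary used its default single-precision dtype and native byte order; if you wrote double precision or an explicitly big-endian dtype, pass that dtype to np.fromfile instead.

readback = np.fromfile('file1.bin', dtype=np.dtype('f4')).reshape(pseudo.shape)
# expect True up to float32 rounding; a False here usually means a dtype/byte-order mismatch
print(np.allclose(readback, pseudo))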
rhiever/scipy_2015_sklearn_tutorial
notebooks/04.1 Cross Validation.ipynb
cc0-1.0
from sklearn.datasets import load_iris from sklearn.neighbors import KNeighborsClassifier iris = load_iris() X, y = iris.data, iris.target classifier = KNeighborsClassifier() """ Explanation: Cross-Validation and scoring methods To evaluate how well our supervised models generalize, so far we split our data into a training and a test set: <img src="figures/train_test_split.svg" width="100%"> However, often (labeled) data is precious, and this approach lets us only use ~ 3/4 of our data for training. On the other hand, we will only ever try to apply our model 1/4 of our data for testing. A common way to use more of the data to built a model, but also get a more robust estimate of the generalization performance is cross-validation. In cross-validation, the data is split repeatedly into a training and test-set, with a separate model built for every pair. The test-set scores are then aggregated for a more robust estimate. The most common way to do cross-validation is k-fold cross-validation, in which the data is first split into k (often 5 or 10) equal-sized folds, and then for each iteration, one of the k folds is used as test data, and the rest as training data: <img src="figures/cross_validation.svg" width="100%"> This way, each data point will be in the test-set exactly once, and we can use all but a k'th of the data for training. Let us apply this technique to evaluate the KNeighborsClassifier algorithm on the Iris dataset: End of explanation """ y """ Explanation: The labels in iris are sorted, which means that if we split the data as illustrated above, the first fold will only have the label 0 in it, while the last one will only have the label 2: End of explanation """ import numpy as np rng = np.random.RandomState(0) permutation = rng.permutation(len(X)) X, y = X[permutation], y[permutation] print(y) """ Explanation: To avoid this problem in evaluation, we first shuffle our data: End of explanation """ k = 5 n_samples = len(X) fold_size = n_samples // k scores = [] masks = [] for fold in range(k): # generate a boolean mask for the test set in this fold test_mask = np.zeros(n_samples, dtype=bool) test_mask[fold * fold_size : (fold + 1) * fold_size] = True # store the mask for visualization masks.append(test_mask) # create training and test sets using this mask X_test, y_test = X[test_mask], y[test_mask] X_train, y_train = X[~test_mask], y[~test_mask] # fit the classifier classifier.fit(X_train, y_train) # compute the score and record it scores.append(classifier.score(X_test, y_test)) """ Explanation: Now implementing cross-validation is easy: End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.matshow(masks) """ Explanation: Let's check that our test mask does the right thing: End of explanation """ print(scores) print(np.mean(scores)) """ Explanation: And now let's look a the scores we computed: End of explanation """ from sklearn.cross_validation import cross_val_score scores = cross_val_score(classifier, X, y) print(scores) print(np.mean(scores)) """ Explanation: As you can see, there is a rather wide spectrum of scores from 90% correct to 100% correct. If we only did a single split, we might have gotten either answer. As cross-validation is such a common pattern in machine learning, there are functions to do the above for you with much more flexibility and less code. The sklearn.cross_validation module has all functions related to cross validation. 
The easiest function is cross_val_score which takes an estimator and a dataset, and will do all of the splitting for you: End of explanation """ cross_val_score(classifier, X, y, cv=5) """ Explanation: As you can see, the function uses three folds by default. You can change the number of folds using the cv argument: End of explanation """ from sklearn.cross_validation import KFold, StratifiedKFold, ShuffleSplit, LeavePLabelOut """ Explanation: There are also helper objects in the cross-validation module that will generate indices for you for all kinds of different cross-validation methods, including k-fold: End of explanation """ cv = StratifiedKFold(iris.target, n_folds=5) for train, test in cv: print(test) """ Explanation: By default, cross_val_score will use StratifiedKFold for classification, which ensures that the class proportions in the dataset are reflected in each fold. If you have a binary classification dataset with 90% of data points belonging to class 0, that would mean that in each fold, 90% of datapoints would belong to class 0. If you would just use KFold cross-validation, it is likely that you would generate a split that only contains class 0. It is generally a good idea to use StratifiedKFold whenever you do classification. StratifiedKFold would also remove our need to shuffle iris. Let's see what kinds of folds it generates on the unshuffled iris dataset. Each cross-validation class is a generator of sets of training and test indices: End of explanation """ def plot_cv(cv, n_samples): masks = [] for train, test in cv: mask = np.zeros(n_samples, dtype=bool) mask[test] = 1 masks.append(mask) plt.matshow(masks) plot_cv(StratifiedKFold(iris.target, n_folds=5), len(iris.target)) """ Explanation: As you can see, there are a couple of samples from the beginning, then from the middle, and then from the end, in each of the folds. This way, the class ratios are preserved. Let's visualize the split: End of explanation """ plot_cv(KFold(len(iris.target), n_folds=5), len(iris.target)) """ Explanation: For comparison, again the standard KFold, which ignores the labels: End of explanation """ plot_cv(KFold(len(iris.target), n_folds=10), len(iris.target)) """ Explanation: Keep in mind that increasing the number of folds will give you a larger training dataset, but will lead to more repetitions, and therefore a slower evaluation: End of explanation """ plot_cv(ShuffleSplit(len(iris.target), n_iter=5, test_size=.2), len(iris.target)) """ Explanation: Another helpful cross-validation generator is ShuffleSplit. This generator simply splits off a random portion of the data repeatedly. This allows the user to specify the number of repetitions and the training set size independently: End of explanation """ plot_cv(ShuffleSplit(len(iris.target), n_iter=20, test_size=.2), len(iris.target)) """ Explanation: If you want a more robust estimate, you can just increase the number of iterations: End of explanation """ cv = ShuffleSplit(len(iris.target), n_iter=5, test_size=.2) cross_val_score(classifier, X, y, cv=cv) """ Explanation: You can use all of these cross-validation generators with the cross_val_score method: End of explanation """
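To summarize a run in a single number, the scores from any of these generators can be aggregated into a mean and standard deviation. The sketch below reuses the classifier, X and y objects from above; the scoring='accuracy' argument is an assumption about the scorer strings available in this older sklearn.cross_validation API, so swap in whatever metric your version supports.

```python
import numpy as np
from sklearn.cross_validation import cross_val_score, StratifiedKFold

# Stratified 5-fold split on the shuffled labels defined earlier in this notebook.
cv = StratifiedKFold(y, n_folds=5)
scores = cross_val_score(classifier, X, y, cv=cv, scoring='accuracy')

# One robust summary instead of eyeballing the individual fold scores.
print("accuracy: %.3f +/- %.3f" % (np.mean(scores), np.std(scores)))
```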
gjwo/nilm_gjw_data
notebooks/disaggregation-CO.ipynb
apache-2.0
%matplotlib inline import numpy as np import pandas as pd from os.path import join from pylab import rcParams import matplotlib.pyplot as plt rcParams['figure.figsize'] = (13, 6) plt.style.use('ggplot') #import nilmtk from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore from nilmtk.disaggregate import CombinatorialOptimisation from nilmtk.utils import print_dict, show_versions from nilmtk.metrics import f1_score #import seaborn as sns #sns.set_palette("Set3", n_colors=12) import warnings warnings.filterwarnings("ignore") #suppress warnings, comment out if warnings required """ Explanation: Disaggregation - Combinatorial Optimisation Customary imports End of explanation """ #uncomment if required #show_versions() """ Explanation: show versions for any diagnostics End of explanation """ data_dir = '/Users/GJWood/nilm_gjw_data/HDF5/' gjw = DataSet(join(data_dir, 'nilm_gjw_data.hdf5')) print('loaded ' + str(len(gjw.buildings)) + ' buildings') building_number=1 """ Explanation: Load dataset End of explanation """ gjw.store.window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00') gjw.set_window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00') elec = gjw.buildings[building_number].elec mains = elec.mains() mains.plot() #plt.show() """ Explanation: Let us perform our analysis on selected 2 days End of explanation """ elec.mains().good_sections() """ Explanation: check sections are good End of explanation """ house = elec['fridge'] #only one meter so any selection will do df = house.load().next() #load the first chunk of data into a dataframe df.info() #check that the data is what we want (optional) #note the data has two columns and a time index df.head() df.tail() df.plot() #plt.show() """ Explanation: Select and check dataframe End of explanation """ df.ix['2015-09-03 11:00:00+01:00':'2015-09-03 12:00:00+01:00'].plot()# select a time range and plot it #plt.show() co = CombinatorialOptimisation() co.train(elec,cols=[('power','active')]) co.steady_states.head() co.steady_states.tail() ax = mains.plot() co.steady_states['active average'].plot(style='o', ax = ax); plt.ylabel("Power (W)") plt.xlabel("Time"); #plt.show() disag_filename = join(data_dir, 'disag_gjw_CO.hdf5') output = HDFDataStore(disag_filename, 'w') co.disaggregate(mains,output) output.close() disag_hart = DataSet(disag_filename) disag_hart_elec = disag_hart.buildings[building].elec from nilmtk.metrics import f1_score f1_hart= f1_score(disag_hart_elec, test_elec) f1_hart.index = disag_hart_elec.get_labels(f1_hart.index) f1_hart.plot(kind='barh') plt.ylabel('appliance'); plt.xlabel('f-score'); plt.title("Hart"); """ Explanation: Training We'll now do the training from the aggregate data. The algorithm segments the time series data into steady and transient states. Thus, we'll first figure out the transient and the steady states. Next, we'll try and pair the on and the off transitions based on their proximity in time and value. End of explanation """
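Note that the metrics cell above references names that this notebook never defines (building, test_elec, and the disag_hart naming is carried over from a different, Hart-based notebook), so it will fail as written. Below is a hedged, corrected sketch that scores the Combinatorial Optimisation output for the same building; it assumes your dataset contains submetered ground truth to compare against, which the single-meter gjw data may not provide.

```python
from nilmtk import DataSet
from nilmtk.metrics import f1_score

# Re-open the disaggregation output written above and select the same building.
disag_co = DataSet(disag_filename)                  # disag_filename defined earlier
disag_co_elec = disag_co.buildings[building_number].elec

# Assumption: the metered appliances of the original dataset act as ground truth.
ground_truth_elec = gjw.buildings[building_number].elec

f1_co = f1_score(disag_co_elec, ground_truth_elec)
f1_co.index = disag_co_elec.get_labels(f1_co.index)
f1_co.plot(kind='barh')
plt.ylabel('appliance')
plt.xlabel('f-score')
plt.title('Combinatorial Optimisation')
```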
NEONScience/NEON-Data-Skills
tutorials/Python/Hyperspectral/hyperspectral-classification/Classification_OLS_py/Classification_OLS_py.ipynb
agpl-3.0
import numpy as np import matplotlib import matplotlib.pyplot as mplt from scipy import linalg from scipy import io ### Ordinary Least Squares ### SOLVES 2-CLASS LEAST SQUARES PROBLEM ### LOAD DATA ### ### IF LoadClasses IS True, THEN LOAD DATA FROM FILES ### ### OTHERSIE, RANDOMLY GENERATE DATA ### LoadClasses = True TrainOutliers = False TestOutliers = False NOut = 20 NSampsClass = 200 NSamps = 2*NSampsClass """ Explanation: syncID: 1f8217240c064ed1a67b9db20e9362f4 title: "Classification of Hyperspectral Data with Ordinary Least Squares in Python" description: "Learn to classify spectral data using the Ordinary Least Squares method." dateCreated: 2017-06-21 authors: Paul Gader contributors: Donal O'Leary estimatedTime: 1 hour packagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot topics: hyperspectral-remote-sensing, HDF5, remote-sensing languagesTool: python dataProduct: NEON.DP1.30006, NEON.DP3.30006, NEON.DP1.30008 code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/hyperspectral-classification/Classification_OLS_py/Classification_OLS_py.ipynb tutorialSeries: intro-hsi-py-series urlTitle: classification-ols-python In this tutorial, we will learn to classify spectral data using the Ordinary Least Squares method. <div id="ds-objectives" markdown="1"> ### Objectives After completing this tutorial, you will be able to: * Classify spectral remote sensing data using Ordinary Least Squares. ### Install Python Packages * **numpy** * **gdal** * **matplotlib** * **matplotlib.pyplot** ### Download Data <a href="https://ndownloader.figshare.com/files/8730436"> Download the spectral classification teaching data subset</a> <a href="https://ndownloader.figshare.com/files/8730436" class="link--button link--arrow"> Download Dataset</a> ### Additional Materials This tutorial was prepared in conjunction with a presentation on spectral classification that can be downloaded. <a href="https://ndownloader.figshare.com/files/8730613"> Download Dr. Paul Gader's Classification 1 PPT</a> <a href="https://ndownloader.figshare.com/files/8731960"> Download Dr. Paul Gader's Classification 2 PPT</a> <a href="https://ndownloader.figshare.com/files/8731963"> Download Dr. Paul Gader's Classification 3 PPT</a> </div> Classification with Ordinary Least Squares solves the 2-class least squares problem. First, we load the required packages and set initial variables. 
End of explanation """ if LoadClasses: ### GET FILENAMES %%% ### THESE ARE THE OPTIONS ### ### LinSepC1, LinSepC2,LinSepC2Outlier (Still Linearly Separable) ### ### NonLinSepC1, NonLinSepC2, NonLinSepC22 ### ## You will need to update these filepaths for your machine: InFile1 = '/Users/olearyd/Git/data/RSDI2017-Data-SpecClass/NonLinSepC1.mat' InFile2 = '/Users/olearyd/Git/data/RSDI2017-Data-SpecClass/NonLinSepC2.mat' C1Dict = io.loadmat(InFile1) C2Dict = io.loadmat(InFile2) C1 = C1Dict['NonLinSepC1'] C2 = C2Dict['NonLinSepC2'] if TrainOutliers: ### Let's Make Some Noise ### Out1 = 2*np.random.rand(NOut,2)-0.5 Out2 = 2*np.random.rand(NOut,2)-0.5 C1 = np.concatenate((C1,Out1),axis=0) C2 = np.concatenate((C2,Out2),axis=0) NSampsClass = NSampsClass+NOut NSamps = 2*NSampsClass else: ### Randomly Generate Some Data ### Make a covariance using a diagonal array and rotation matrix pi = 3.141592653589793 Lambda1 = 0.25 Lambda2 = 0.05 DiagMat = np.array([[Lambda1, 0.0],[0.0, Lambda2]]) RotMat = np.array([[np.sin(pi/4), np.cos(pi/4)], [-np.cos(pi/4), np.sin(pi/4)]]) mu1 = np.array([0,0]) mu2 = np.array([1,1]) Sigma = np.dot(np.dot(RotMat.T, DiagMat), RotMat) C1 = np.random.multivariate_normal(mu1, Sigma, NSampsClass) C2 = np.random.multivariate_normal(mu2, Sigma, NSampsClass) print(Sigma) print(C1.shape) print(C2.shape) """ Explanation: Next, we read in the example data. Note that you will need to update the filepaths below to work on your machine. End of explanation """ ### PLOT DATA ### matplotlib.pyplot.figure(1) matplotlib.pyplot.plot(C1[:NSampsClass, 0], C1[:NSampsClass, 1], 'bo') matplotlib.pyplot.plot(C2[:NSampsClass, 0], C2[:NSampsClass, 1], 'ro') matplotlib.pyplot.show() ### SET UP TARGET OUTPUTS ### TargetOutputs = np.ones((NSamps,1)) TargetOutputs[NSampsClass:NSamps] = -TargetOutputs[NSampsClass:NSamps] ### PLOT TARGET OUTPUTS ### matplotlib.pyplot.figure(2) matplotlib.pyplot.plot(range(NSampsClass), TargetOutputs[range(NSampsClass)], 'b-') matplotlib.pyplot.plot(range(NSampsClass, NSamps), TargetOutputs[range(NSampsClass, NSamps)], 'r-') matplotlib.pyplot.show() ### FIND LEAST SQUARES SOLUTION ### AllSamps = np.concatenate((C1,C2),axis=0) AllSampsBias = np.concatenate((AllSamps, np.ones((NSamps,1))), axis=1) Pseudo = linalg.pinv2(AllSampsBias) w = Pseudo.dot(TargetOutputs) w ### COMPUTE OUTPUTS ON TRAINING DATA ### y = AllSampsBias.dot(w) ### PLOT OUTPUTS FROM TRAINING DATA ### matplotlib.pyplot.figure(3) matplotlib.pyplot.plot(range(NSamps), y, 'm') matplotlib.pyplot.plot(range(NSamps),np.zeros((NSamps,1)), 'b') matplotlib.pyplot.plot(range(NSamps), TargetOutputs, 'k') matplotlib.pyplot.title('TrainingOutputs (Magenta) vs Desired Outputs (Black)') matplotlib.pyplot.show() ### CALCULATE AND PLOT LINEAR DISCRIMINANT ### Slope = -w[1]/w[0] Intercept = -w[2]/w[0] Domain = np.linspace(-1.1, 1.1, 60) # set up the descision surface domain, -1.1 to 1.1 (looking at the data), do it 60 times Disc = Slope*Domain+Intercept matplotlib.pyplot.figure(4) matplotlib.pyplot.plot(C1[:NSampsClass, 0], C1[:NSampsClass, 1], 'bo') matplotlib.pyplot.plot(C2[:NSampsClass, 0], C2[:NSampsClass, 1], 'ro') matplotlib.pyplot.plot(Domain, Disc, 'k-') matplotlib.pyplot.ylim([-1.1,1.3]) matplotlib.pyplot.title('Ordinary Least Squares') matplotlib.pyplot.show() RegConst = 0.1 AllSampsBias = np.concatenate((AllSamps, np.ones((NSamps,1))), axis=1) AllSampsBiasT = AllSampsBias.T XtX = AllSampsBiasT.dot(AllSampsBias) AllSampsReg = XtX + RegConst*np.eye(3) Pseudo = linalg.pinv2(AllSampsReg) wr = 
Pseudo.dot(AllSampsBiasT.dot(TargetOutputs)) Slope = -wr[1]/wr[0] Intercept = -wr[2]/wr[0] Domain = np.linspace(-1.1, 1.1, 60) Disc = Slope*Domain+Intercept matplotlib.pyplot.figure(5) matplotlib.pyplot.plot(C1[:NSampsClass, 0], C1[:NSampsClass, 1], 'bo') matplotlib.pyplot.plot(C2[:NSampsClass, 0], C2[:NSampsClass, 1], 'ro') matplotlib.pyplot.plot(Domain, Disc, 'k-') matplotlib.pyplot.ylim([-1.1,1.3]) matplotlib.pyplot.title('Ridge Regression') matplotlib.pyplot.show() """ Explanation: Now we can plot the data. End of explanation """ ### COMPUTE OUTPUTS ON TRAINING DATA ### yr = AllSampsBias.dot(wr) ### PLOT OUTPUTS FROM TRAINING DATA ### matplotlib.pyplot.figure(6) matplotlib.pyplot.plot(range(NSamps), yr, 'm') matplotlib.pyplot.plot(range(NSamps),np.zeros((NSamps,1)), 'b') matplotlib.pyplot.plot(range(NSamps), TargetOutputs, 'k') matplotlib.pyplot.title('TrainingOutputs (Magenta) vs Desired Outputs (Black)') matplotlib.pyplot.show() y1 = y[range(NSampsClass)] y2 = y[range(NSampsClass, NSamps)] Corr1 = np.sum([y1>0]) Corr2 = np.sum([y2<0]) y1r = yr[range(NSampsClass)] y2r = yr[range(NSampsClass, NSamps)] Corr1r = np.sum([y1r>0]) Corr2r = np.sum([y2r<0]) print('Result for Ordinary Least Squares') CorrClassRate=(Corr1+Corr2)/NSamps print(Corr1 + Corr2, 'Correctly Classified for a ', round(100*CorrClassRate), '% Correct Classification \n') print('Result for Ridge Regression') CorrClassRater=(Corr1r+Corr2r)/NSamps print(Corr1r + Corr2r, 'Correctly Classified for a ', round(100*CorrClassRater), '% Correct Classification \n') ### Make Confusion Matrices ### NumClasses = 2; Cm = np.zeros((NumClasses,NumClasses)) Cm[(0,0)] = Corr1/NSampsClass Cm[(0,1)] = (NSampsClass-Corr1)/NSampsClass Cm[(1,0)] = (NSampsClass-Corr2)/NSampsClass Cm[(1,1)] = Corr2/NSampsClass Cm = np.round(100*Cm) print('Confusion Matrix for OLS Regression \n', Cm, '\n') Cm = np.zeros((NumClasses,NumClasses)) Cm[(0,0)] = Corr1r/NSampsClass Cm[(0,1)] = (NSampsClass-Corr1r)/NSampsClass Cm[(1,0)] = (NSampsClass-Corr2r)/NSampsClass Cm[(1,1)] = Corr2r/NSampsClass Cm = np.round(100*Cm) print('Confusion Matrix for Ridge Regression \n', Cm, '\n') """ Explanation: Save this project with the name: OLSandRidgeRegress2DPGader. Make a New Project for Spectra. End of explanation """
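The notebook stops at training-set accuracy, so here is a small sketch of how the fitted weight vectors could label new points. The decision rule sign(w · [x, y, 1]) follows directly from the +1/-1 target coding used above; the classify helper itself is our own name, not part of the original material.

```python
import numpy as np

def classify(samples, weights):
    """Label 2-D samples as +1 or -1 with the learned OLS/ridge weights.

    samples : (n, 2) array of feature vectors
    weights : (3, 1) array [w_x, w_y, bias] as computed above
    """
    samples_bias = np.concatenate((samples, np.ones((samples.shape[0], 1))), axis=1)
    return np.sign(samples_bias.dot(weights))

# Example: label a few new points with both solutions from above.
new_points = np.array([[0.1, 0.2], [0.9, 0.8], [-0.3, 0.4]])
print('OLS labels:  ', classify(new_points, w).ravel())
print('Ridge labels:', classify(new_points, wr).ravel())
```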
mommermi/Introduction-to-Python-for-Scientists
notebooks/.ipynb_checkpoints/Interpolation_20161104-checkpoint.ipynb
mit
# matplotlib inline import numpy as np import matplotlib.pyplot as plt # read in signal.csv data = np.genfromtxt('signal.csv', delimiter=',', dtype=[('x', float), ('y', float), ('yerr', float)]) f, ax = plt.subplots() ax.errorbar(data['x'], data['y'], yerr=data['yerr'], linestyle='', color='red', label='Signal Data') ax.set_xlabel('x [a.u.]') ax.set_ylabel('y [a.u.]') ax.legend(numpoints=1, loc=2) plt.show() """ Explanation: Interpolation (see https://docs.scipy.org/doc/scipy/reference/interpolate.html#module-scipy.interpolate) In this example, we will interpolate data in one and two dimensions. Interpolation is necessary if you have data available only for discrete locations, but you would like to know what the data inbetween those discrete locations look like. Note that interpolation is different from function fitting: while the latter requires a mathematical model function that is fitted to the data, interpolation makes no assumption on what functional behavior the data might be based on. It is important to be aware that interpolation is always associated with uncertainty: it cannot magically reveal details that fall between the discrete locations (good example: movies in which the police has noisy surveillance camera images and then magically create a license plate code from that). One-dimensional Interpolation Imagine you measure a signal that is a function of only one variable (https://raw.githubusercontent.com/mommermi/Introduction-to-Python-for-Scientists/master/notebooks/signal.csv) and you would like to know what this signal looks like between the discrete measurement, for instance, because you want to integrate the signal over time. Once again, let's plot the signal first. End of explanation """ import scipy.interpolate as interp # interpolate data near = interp.interp1d(data['x'], data['y'], kind='nearest') # nearest neighbor interpolation lin = interp.interp1d(data['x'], data['y'], kind='linear') # linear interpolation cub = interp.interp1d(data['x'], data['y'], kind='cubic') # cubic spline interpolation # plot the results f, ax = plt.subplots() ax.errorbar(data['x'], data['y'], yerr=data['yerr'], linestyle='', color='red', label='Signal Data') x_range = np.arange(min(data['x']), max(data['x']), 0.1) ax.plot(x_range, near(x_range), color='orange', label='Nearest Neighbor') ax.plot(x_range, lin(x_range), color='green', label='Linear Interpolation') ax.plot(x_range, cub(x_range), color='blue', label='Cubic Spline') # fit function fit = lambda x: 3.*x+0.005+0.51*np.sin(x/0.2-2) # see last week's notes ax.plot(x_range, fit(x_range), color='black', label='Best Fit') ax.set_xlabel('x [a.u.]') ax.set_ylabel('y [a.u.]') ax.legend(numpoints=1, loc=2) plt.show() """ Explanation: We use scipy.interpolate.interp1d (https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html#scipy.interpolate.interp1d) and different interpolation methods to interpolate the data. We also compare the different interpolation functions to the fit that we derived for this data set last week (https://github.com/mommermi/Introduction-to-Python-for-Scientists/blob/master/notebooks/Function_Fitting_20161028.ipynb). End of explanation """ from scipy.integrate import quad print quad(lin, min(data['x']), max(data['x'])) print quad(cub, min(data['x']), max(data['x'])) print quad(fit, min(data['x']), max(data['x'])) """ Explanation: It is obvious that both the linear and cublic Spline interpolations provide good results, but only if the function is reasonably well sampled. 
Where there are large gaps, all interpolations diverge significantly from the best-fit function. Now we can integrate the area underneath the different interpolations and the fit curve: End of explanation """ def func(x,y): return np.sqrt((.5-x)**2+(.5-y)**2) # create a meshgrid grid_x, grid_y = np.mgrid[0:1:100j, 0:1:100j] # plot the function values in a scatter plot f, ax1 = plt.subplots() polar = ax1.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower', cmap='RdYlBu') # add colorbar cbar = f.colorbar(polar) cbar.set_label("Function Value") plt.show() """ Explanation: scipy.integrate.quad integrates the different functions from their minimum to their maximum $x$ values and return the integral value and its uncertainty. Again, the cubic Spline provides more accurate results. Multi-dimensional Interpolation Imagine some function that is defined across the x-y plane. End of explanation """ x_grid, y_grid = np.mgrid[0:3:1, 0:2:1] print x_grid print y_grid """ Explanation: plt.imshow creates an image from the function func over the meshgrid (see below) consisting of grid_x and grid_y. Please note the .T appended to the function call: python uses a matrix indexing convention (rows before columns), whereas an image uses colums before rows. .T transposes the shape of the resulting values, leading to a correct representation (which is not really necessary here due to the symmetry of the problem) when using origin='lower'. cmap='RdYlBu' defines the colormap to be used in the plotting: Red-Yellow-Blue (see, e.g., https://scipy.github.io/old-wiki/pages/Cookbook/Matplotlib/Show_colormaps). np.mgrid generates a meshgrid. Let's check quickly what it actually does: End of explanation """ for i in range(len(np.ravel(x_grid))): print np.ravel(x_grid)[i], np.ravel(y_grid)[i] """ Explanation: np.mgrid creates a number of arrays with coordinates that sample a range of numbers evenly and form a grid. Each array represents one axis. In this example, it forms a grid in the x-y plane ranging over $0 < x < 3$ and $0 < y < 2$ with steps of one each. The x and y arrays have the same shape, meaning that iterating over both arrays samples the whole grid: End of explanation """ def test_func(x, y): return x+y print test_func(x_grid, y_grid) """ Explanation: Hence, np.grid can be used to create an evenly sampled grid and the resulting arrays can be readily passed to functions as arguments: End of explanation """ # create fake data in two dimensions xy_data = np.random.rand(100, 2) z_data = func(xy_data[:,0], xy_data[:,1]) grid_x, grid_y = np.mgrid[0:1:100j, 0:1:100j] # plot the function values in a scatter plot f, ax1 = plt.subplots() ax1.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower', cmap='RdYlBu') ax1.scatter(xy_data[:,1], xy_data[:,0], label='data', edgecolor='black', c=z_data, cmap='RdYlBu') plt.show() """ Explanation: One more addition: meshgrids can be generated for any number of dimensions. The syntax np.mgrid[0:10:20j... will generate in one axis coordinates ranging from zero to 10 in 20 steps. Let's go back to our interpolation problem. 
We sample our model function func randomly across the x-y plane and plot the function values in a scatter plot: End of explanation """ grid_x, grid_y = np.mgrid[0:1:100j, 0:1:100j] # interpolate grid_znearest = interp.griddata(xy_data, z_data, (grid_x, grid_y), method='nearest') grid_zlinear = interp.griddata(xy_data, z_data, (grid_x, grid_y), method='linear') grid_zcubic = interp.griddata(xy_data, z_data, (grid_x, grid_y), method='cubic') # create 2x2 plot array with shared axes f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10,7)) # actual function ax1.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower', cmap='RdYlBu') # use '.T' to transpose images: python uses row/column notation, # but images use x,y ax1.scatter(xy_data[:,1], xy_data[:,0], label='data', edgecolor='black', c=z_data, cmap='RdYlBu') # 'c' is a colormap, check here for available designs # http://matplotlib.org/examples/color/colormaps_reference.html ax1.set_title('Actual Function') # nearest neighbor ax2.imshow(grid_znearest.T, extent=(0,1,0,1), origin='lower', cmap='RdYlBu') ax2.scatter(xy_data[:,1], xy_data[:,0], label='data', edgecolor='black', c=z_data, cmap='RdYlBu') ax2.set_title('Nearest Neighbor') # linear ax3.imshow(grid_zlinear.T, extent=(0,1,0,1), origin='lower', cmap='RdYlBu') ax3.scatter(xy_data[:,1], xy_data[:,0], label='data', edgecolor='black', c=z_data, cmap='RdYlBu') ax3.set_title('Linear Interp.') # cubic spline ax4.imshow(grid_zcubic.T, extent=(0,1,0,1), origin='lower', cmap='RdYlBu') ax4.scatter(xy_data[:,1], xy_data[:,0], label='data', edgecolor='black', c=z_data, cmap='RdYlBu') ax4.set_title('Cubic Spline') plt.show() """ Explanation: We now use the function scipy.interpolate.griddata to interpolate the randomly sampled data points using different methods and compare them to the original function. End of explanation """
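Because the true surface func is known here, a rough error measure tells us which griddata method recovers it best. This is a minimal sketch reusing the grids computed above; nanmean is used because the linear and cubic results are NaN outside the convex hull of the random sample points.

```python
import numpy as np

truth = func(grid_x, grid_y)  # the analytic surface on the same mesh

for name, grid_z in [('nearest', grid_znearest),
                     ('linear', grid_zlinear),
                     ('cubic', grid_zcubic)]:
    err = np.nanmean(np.abs(grid_z - truth))
    print("%s mean absolute error: %.4f" % (name, err))
```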
CUBoulder-ASTR2600/lectures
lecture_10_vectors_numpy.ipynb
isc
x = 2 y = 3 myList = [x, y] myList """ Explanation: Array Computing Terminology List A sequence of values that can vary in length. The values can be different data types. The values can be modified (mutable). Tuple A sequence of values with a fixed length. The values can be different data types. The values cannot be modified (immutable). Array A sequence of values with a fixed length. The values cannot be different data types. The values can be modified (mutable). Vector: A 1 dimensional (1D) array. Matrix: - A 2 dimensional (2D) array. Arrays are like lists but less flexible and more efficient for lengthy calculations (one data type, stored in the same location in memory). But first: VECTORS -- very simple arrays Vectors can have an arbitrary number of components, existing in an n-dimensional space. (x1, x2, x3, ... xn) Or (x0, x1, x2, ... x(n-1)) for Python... In Python, vectors are represented by lists or tuples: Lists: End of explanation """ myTuple = (-4, 7) myTuple """ Explanation: Tuples: End of explanation """ numList = [0.0, 1.0, 2.0] numTuple = (0.0, 1.0, 2.0) 2 * numList 2 * numTuple 2.0 * numList """ Explanation: Mathematical Operations on Vectors Review of vector operations: textbook sections 5.1.2 & 5.1.3 In computing: Applying a mathematical function to a vector means applying it to each element in the vector. (you may hear me use the phrase "element-wise," which means "performing some operation one element at a time") However, this is not true of lists and tuples Q. What do these yield? End of explanation """ def distance(t, a = 9.8): '''Calculate the distance given a time and acceleration. Input: time in seconds <int> or <float>, acceleration in m/s^2 <int> or <float> Output: distance in m <float> ''' return 0.5 * a * t**2 numPoints = 6 # number of points delta = 1.0 / (numPoints - 1) # time interval between points # Q. What do the two lines below do? timeList = [index * delta for index in range(numPoints)] distList = [distance(t) for t in timeList] print("Time List: ", timeList) print("Distance List:", distList) """ Explanation: Vectors in Python programming Our current solution: * using lists for collecting function data * convert to NumPy arrays for doing math with them. As an example, a falling object in Earth's gravity: End of explanation """ timeDistList = [] for index in range(numPoints): timeDistList.append([timeList[index], distList[index]]) for element in timeDistList: print element """ Explanation: Repeat on your own: stitching results together: End of explanation """ timeDistList2 = [[time, dist] for time, dist in zip(timeList, distList)] for element in timeDistList2: print(element) daveList = range(5) for element in zip(timeList, distList): print(element) list(zip(timeList, distList, daveList)) """ Explanation: Or using zip, we did this already before: End of explanation """ import numpy as np """ Explanation: When to use lists and arrays? In general, we'll use lists instead of arrays when elements have to be added (e.g., we don't know how the number of elements ahead of time, and must use methods like append and extend) or their types are heterogeneous. Otherwise we'll use arrays for numerical calculations. Basics of numpy arrays Characteristics of numpy arrays: Elements are all the same type Number of elements known when array is created Numerical Python (numpy) must be imported to manipulate arrays. All array elements are operated on by numpy, which eliminates loops and makes programs much faster. 
Arrays with one index are sometimes called vectors (or 1D arrays). Arrays with two indices are sometimes called matrices (or 2D arrays). End of explanation """ myList = [1, 2, 3] myArray = np.array(myList) print(type(myArray)) myArray """ Explanation: To convert a list to an array use the array method: End of explanation """ np.zeros? myArray = np.zeros(10) myArray """ Explanation: Note the type! To create an array of length n filled with zeros (to be filled later): End of explanation """ myArray = np.zeros(5, dtype=int) myArray """ Explanation: To create arrays with elements of a type other than the default float, use a second argument: End of explanation """ zArray = np.linspace(0, 5, 6) zArray """ Explanation: We often want array elements equally spaced by some interval (delta). numpy.linspace(start, end, number of elements) does this: NOTE #### HERE, THE "end" VALUE IS NOT (end - 1) #### NOTE End of explanation """ zArray[3] """ Explanation: Q. What will that do? Array elements are accessed with square brackets, the same as lists: End of explanation """ yArray = zArray[1:4] yArray """ Explanation: Slicing can also be done on arrays: Q. What does this give us? End of explanation """ zArray """ Explanation: For reference below: End of explanation """ zArray[3] = 10.0 zArray """ Explanation: Let's edit one of the values in the z array End of explanation """ yArray """ Explanation: Now let's look at the y array again End of explanation """ lList = [6, 7, 8, 9, 10, 11] mList = lList[1:3] print(mList) lList[1] = 10 mList """ Explanation: The variable yArray is a reference (or view in Numpy lingo) to three elements (a slice) from zArray: element indices 1, 2, and 3. Here is a blog post which discusses this issue nicely: http://nedbatchelder.com/text/names.html Reason this is of course memory efficiency: Why copy data if not necessary? End of explanation """ def distance(t, a = 9.8): '''Calculate the distance given a time and acceleration. Input: time in seconds <int> or <float>, acceleration in m/s^2 <int> or <float> Output: distance in m <float> ''' return 0.5 * a * t**2 numPoints = 6 # number of points delta = 1.0 / (numPoints - 1) # time interval between points timeList = [index * delta for index in range(numPoints)] # Create the time list distList = [distance(t) for t in timeList] # Create the distance list """ Explanation: Do not forget this -- check your array values frequently if you are unsure! Computing coordinates and function values Here's the distance function we did previously: End of explanation """ timeArray = np.array(timeList) distArray = np.array(distList) print(type(timeArray), timeArray) print(type(distArray), distArray) """ Explanation: We could convert timeList and distList from lists to arrays: End of explanation """ def distance(t, a = 9.8): '''Calculate the distance given a time and acceleration. 
Input: time in seconds <int> or <float>, acceleration in m/s^2 <int> or <float> Output: distance in m <float> ''' return 0.5 * a * t**2 numPoints = 6 # number of points timeArray = np.linspace(0, 1, numPoints) # Create the time array distArray = np.zeros(numPoints) # Create the distance array populated with 0's print("Time Array: ", type(timeArray), timeArray) print("Dist Array Zeros: ", type(distArray), distArray) for index in range(numPoints): distArray[index] = distance(timeArray[index]) # Populate the distance array with calculated values print("Dist Array Populated:", type(distArray), distArray) """ Explanation: We can do this directly by creating arrays (without converting from a list) with np.linspace to create timeArray and np.zeros to create distArray. (This is merely a demonstration, not superior to the above code for this simple example.) End of explanation """ def distance(t, a = 9.8): '''Calculate the distance given a time and acceleration. Input: time(s) in seconds <int> or <float> or <np.array>, acceleration in m/s^2 <int> or <float> Output: distance in m <float> ''' return 0.5 * a * t**2 numPoints = 6 # number of points timeArray = np.linspace(0, 1, numPoints) # Create the time array distArray = distance(timeArray) # Create and populate the distance array using vectorization print("Time Array:", type(timeArray), timeArray) print("Dist Array:", type(distArray), distArray) """ Explanation: Vectorization -- one of the great powers of arrays The examples above are great, but they doesn't use the computation power of arrays by operating on all the elements simultaneously! Loops are slow. Operating on the elements simultaneously is much faster (and simpler!). "Vectorization" is replacing a loop with vector or array expressions. End of explanation """ numPoints = 6 # Number of points a = 9.8 # Acceleration in m/s^2 timeArray = np.linspace(0, 1, numPoints) # The values a created like before print("Original ", timeArray) timeArray = timeArray**2 # Once in the function, they are first squared print("Squared ", timeArray) print(distArray) timeArray = timeArray * 0.5 # Next they are multiplied by 0.5 print("Times 0.5", timeArray) timeArray = timeArray * a # Finally, they are multiplied by a and the entire modified print("Times a ", timeArray) # array is returned """ Explanation: What just happened? Let's look at what the function "distance" is doing to the values in timeArray End of explanation """ import math math.sin(0.5) """ Explanation: Caution: numpy has its own math functions, such as sin, cos, pi, exp, and some of these are slightly different from Python's math module. Also, the math module does not accept numpy array as arguments, i.e. it is NOT vectorized. Conclusiong: Use numpy built in math whenever dealing with arrays, but be aware that if you repeatedly (in a loop) calculate only 1 value at a time, the math library would be faster (because numpy has some overhead costs to do autmatically element-wise math). So, do this for single calculations: End of explanation """ np.sin([0.1, 0.2, 0.3, 0.4, 0.5]) """ Explanation: but do this for arrays: End of explanation """
vatsan/gp_jupyter_notebook_templates
notebooks/01_data_exploration.ipynb
apache-2.0
%run '00_database_connectivity_setup.ipynb' IPython.display.clear_output() """ Explanation: Setup database connectivity We'll reuse our module from the previous notebook (00_database_connectivity_setup.ipynb) to establish connectivity to the database End of explanation """ %%execsql drop table if exists gp_ds_sample_table; create temp table gp_ds_sample_table as ( select random() as x, random() as y from generate_series(1, 10) x ) distributed randomly; """ Explanation: Your connection object is conn: 1. Queries: You can run your queries using psql.read_sql("""<YOUR SQL>""", conn). Alternatively, if you don't want a handle to the resulting dataframe, you can run the code inline using the magic command we defined previously in a cell: %%showsql. 2. Create/Delete/Updates: You can run these statements using psql.execute("""<YOUR SQL>""", conn), followed by a conn.commit() command to ensure your transaction is committed. Otherwise your changes will be rolled back if you terminate your kernel. Alternatively, you could use the magic command that we previously defined in the cell: %%execsql. If you created a new connection object (say to connect to a new cluster) as shown in the last section of the 00_database_connectivity_setup.ipynb notebook, use that connection object where needed. Data Exploration CREATE/UPDATE/DELETE query End of explanation """ %%showsql select * from gp_ds_sample_table; """ Explanation: SELECT query End of explanation """
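To round out the SELECT example, the sketch below pulls the sample table back into a dataframe through psql.read_sql, as described in the notes above. It assumes the conn handle and the psql import from the 00_database_connectivity_setup notebook are still live; the x and y column names come from the CREATE TABLE statement.

```python
# psql and conn are provided by 00_database_connectivity_setup.ipynb (assumption).
df = psql.read_sql("""
    select x, y, x + y as xy_sum
    from gp_ds_sample_table
    order by x
""", conn)

print(df.describe())
```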
mne-tools/mne-tools.github.io
0.18/_downloads/6d7b5624e4fa6fee90fb68aca9314f7f/plot_evoked_topomap.ipynb
bsd-3-clause
# Authors: Christian Brodbeck <christianbrodbeck@nyu.edu> # Tal Linzen <linzen@nyu.edu> # Denis A. Engeman <denis.engemann@gmail.com> # Mikołaj Magnuski <mmagnuski@swps.edu.pl> # # License: BSD (3-clause) # sphinx_gallery_thumbnail_number = 5 import numpy as np import matplotlib.pyplot as plt from mne.datasets import sample from mne import read_evokeds print(__doc__) path = sample.data_path() fname = path + '/MEG/sample/sample_audvis-ave.fif' # load evoked corresponding to a specific condition # from the fif file and subtract baseline condition = 'Left Auditory' evoked = read_evokeds(fname, condition=condition, baseline=(None, 0)) """ Explanation: Plotting topographic maps of evoked data Load evoked data and plot topomaps for selected time points using multiple additional options. End of explanation """ times = np.arange(0.05, 0.151, 0.02) evoked.plot_topomap(times, ch_type='mag', time_unit='s') """ Explanation: Basic plot_topomap options We plot evoked topographies using :func:mne.Evoked.plot_topomap. The first argument, times allows to specify time instants (in seconds!) for which topographies will be shown. We select timepoints from 50 to 150 ms with a step of 20ms and plot magnetometer data: End of explanation """ evoked.plot_topomap(ch_type='mag', time_unit='s') """ Explanation: If times is set to None at most 10 regularly spaced topographies will be shown: End of explanation """ evoked.plot_topomap(times, ch_type='mag', average=0.05, time_unit='s') """ Explanation: Instead of showing topographies at specific time points we can compute averages of 50 ms bins centered on these time points to reduce the noise in the topographies: End of explanation """ evoked.plot_topomap(times, ch_type='grad', time_unit='s') """ Explanation: We can plot gradiometer data (plots the RMS for each pair of gradiometers) End of explanation """ evoked.plot_topomap(times, ch_type='mag', cmap='Spectral_r', res=32, outlines='skirt', contours=4, time_unit='s') """ Explanation: Additional plot_topomap options We can also use a range of various :func:mne.viz.plot_topomap arguments that control how the topography is drawn. For example: cmap - to specify the color map res - to control the resolution of the topographies (lower resolution means faster plotting) outlines='skirt' to see the topography stretched beyond the head circle contours to define how many contour lines should be plotted End of explanation """ extrapolations = ['box', 'head', 'local'] fig, axes = plt.subplots(figsize=(7.5, 2.5), ncols=3) for ax, extr in zip(axes, extrapolations): evoked.plot_topomap(0.1, ch_type='mag', size=2, extrapolate=extr, axes=ax, show=False, colorbar=False) ax.set_title(extr, fontsize=14) """ Explanation: If you look at the edges of the head circle of a single topomap you'll see the effect of extrapolation. By default extrapolate='box' is used which extrapolates to a large box stretching beyond the head circle. 
Compare this with extrapolate='head' (second topography below) where extrapolation goes to 0 at the head circle and extrapolate='local' where extrapolation is performed only within some distance from channels: End of explanation """ evoked.plot_topomap(0.1, ch_type='mag', show_names=True, colorbar=False, size=6, res=128, title='Auditory response', time_unit='s') plt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.88) """ Explanation: More advanced usage Now we plot magnetometer data as topomap at a single time point: 100 ms post-stimulus, add channel labels, title and adjust plot margins: End of explanation """ evoked.animate_topomap(ch_type='mag', times=times, frame_rate=10, time_unit='s') """ Explanation: Animating the topomap Instead of using a still image we can plot magnetometer data as an animation (animates only in matplotlib interactive mode) End of explanation """
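As a compact recap, the sketch below draws the magnetometer and gradiometer topomaps of the same 100 ms response side by side on one figure. It only combines arguments already demonstrated in this example (axes, show, colorbar, time_unit), so it should run with the evoked object loaded above.

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(figsize=(6, 3), ncols=2)
for ax, ch in zip(axes, ['mag', 'grad']):
    # Plot a single time point into the provided axes, without its own colorbar.
    evoked.plot_topomap(0.1, ch_type=ch, axes=ax, show=False,
                        colorbar=False, time_unit='s')
    ax.set_title(ch)
plt.show()
```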
alexweav/Learny-McLearnface
GradientChecks.ipynb
mit
%load_ext autoreload %autoreload 2 import numpy as np import LearnyMcLearnface as lml """ Explanation: Layer Gradient Checks Here, we use numerical gradient checking to verify the backpropagation correctness of all layers in the Layers folder. We should expect to see very small nonzero values for error, as the checking process approximates the gradient numerically. End of explanation """ affine = lml.layers.AffineLayer(30, 10, 1e-2) test_input = np.random.randn(50, 30) dout = np.random.randn(50, 10) _ = affine.forward(test_input) dx_num = lml.utils.numerical_gradient_layer(lambda x : affine.forward(x, affine.W, affine.b), test_input, dout) dW_num = lml.utils.numerical_gradient_layer(lambda w : affine.forward(test_input, w, affine.b), affine.W, dout) db_num = lml.utils.numerical_gradient_layer(lambda b : affine.forward(test_input, affine.W, b), affine.b, dout) dx = affine.backward(dout) print('Affine dx error:', np.max(lml.utils.relative_error(dx, dx_num))) print('Affine dW error:', np.max(lml.utils.relative_error(affine.dW, dW_num))) print('Affine db error:', np.max(lml.utils.relative_error(affine.db, db_num))) """ Explanation: Affine Layer Layers/AffineLayer.py End of explanation """ batchnorm = lml.layers.BatchnormLayer(10, 0.9) test_input = np.random.randn(20, 10) dout = np.random.randn(20, 10) _ = batchnorm.forward_train(test_input) dx_num = lml.utils.numerical_gradient_layer(lambda x : batchnorm.forward_train(x), test_input, dout) dx = batchnorm.backward(dout) print('Batchnorm dx error:', np.max(lml.utils.relative_error(dx, dx_num))) """ Explanation: Batch Normalization Layer Layers/BatchnormLayer.py End of explanation """ dropout = lml.layers.DropoutLayer(10, 0.6, seed=5684) test_input = np.random.randn(3, 10) dout = np.random.randn(3, 10) _ = dropout.forward_train(test_input) dx_num = lml.utils.numerical_gradient_layer(lambda x : dropout.forward_train(x), test_input, dout) dx = dropout.backward(dout) print('Dropout dx error:', np.max(lml.utils.relative_error(dx, dx_num))) """ Explanation: Dropout Layer Layers/DropoutLayer.py End of explanation """ prelu = lml.layers.PReLULayer(10) test_input = np.random.randn(50, 10) dout = np.random.randn(50, 10) _ = prelu.forward(test_input) dx_num = lml.utils.numerical_gradient_layer(lambda x : prelu.forward(x, prelu.W), test_input, dout) dW_num = lml.utils.numerical_gradient_layer(lambda w : prelu.forward(test_input, w), prelu.W, dout) dx = prelu.backward(dout) print('PReLU dx error:', np.max(lml.utils.relative_error(dx, dx_num))) print('PReLU dW error:', np.max(lml.utils.relative_error(prelu.dW, dW_num))) """ Explanation: PReLU (Parametric Rectified Linear Unit) Layer Layers/PReLULayer.py End of explanation """ relu = lml.layers.ReLULayer(10) test_input = np.random.randn(50, 10) dout = np.random.randn(50, 10) _ = relu.forward(test_input) dx_num = lml.utils.numerical_gradient_layer(lambda x : relu.forward(x), test_input, dout) dx = relu.backward(dout) print('ReLU dx error:', np.max(lml.utils.relative_error(dx, dx_num))) """ Explanation: ReLU (Rectified Linear Unit) Layer Layers/ReLULayer.py End of explanation """ sigmoid = lml.layers.SigmoidLayer(10) test_input = np.random.randn(50, 10) dout = np.random.randn(50, 10) _ = sigmoid.forward(test_input) dx_num = lml.utils.numerical_gradient_layer(lambda x : sigmoid.forward(x), test_input, dout) dx = sigmoid.backward(dout) print('Sigmoid dx error:', np.max(lml.utils.relative_error(dx, dx_num))) """ Explanation: Sigmoid Layer Layers/SigmoidLayer.py End of explanation """ softmax = 
lml.layers.SoftmaxLossLayer(10) test_scores = np.random.randn(50, 10) test_classes = np.random.randint(1, 10, 50) _, dx = softmax.loss(test_scores, test_classes) dx_num = lml.utils.numerical_gradient(lambda x : softmax.loss(x, test_classes)[0], test_scores) print('Softmax backprop error:', np.max(lml.utils.relative_error(dx, dx_num))) """ Explanation: Softmax Loss Layer Layers/SoftmaxLossLayer.py End of explanation """ svm = lml.layers.SVMLossLayer(10) test_scores = np.random.randn(50, 10) test_classes = np.random.randint(1, 10, 50) _, dx = svm.loss(test_scores, test_classes) dx_num = lml.utils.numerical_gradient(lambda x : svm.loss(x, test_classes)[0], test_scores) print('SVM backprop error:', np.max(lml.utils.relative_error(dx, dx_num))) """ Explanation: SVM Loss Layer Layers/SVMLossLayer.py End of explanation """ opts = { 'input_dim' : 10, 'data_type' : np.float64 } nn = lml.NeuralNetwork(opts) nn.add_layer('Affine', {'neurons':10, 'weight_scale':5e-2}) nn.add_layer('ReLU', {}) nn.add_layer('Affine', {'neurons':10, 'weight_scale':5e-2}) nn.add_layer('SoftmaxLoss', {}) test_scores = np.random.randn(20, 10) test_classes = np.random.randint(1, 10, 20) loss, dx = nn.backward(test_scores, test_classes) print('With regularization off:') f = lambda _: nn.backward(test_scores, test_classes)[0] d_b1_num = lml.utils.numerical_gradient(f, nn.layers[0].b, accuracy=1e-8) d_W1_num = lml.utils.numerical_gradient(f, nn.layers[0].W, accuracy=1e-8) print('Weight 1 error:', np.max(lml.utils.relative_error(nn.layers[0].dW, d_W1_num))) print('Bias 1 error:', np.max(lml.utils.relative_error(nn.layers[0].db, d_b1_num))) d_b2_num = lml.utils.numerical_gradient(f, nn.layers[2].b, accuracy=1e-8) d_W2_num = lml.utils.numerical_gradient(f, nn.layers[2].W, accuracy=1e-8) print('Weight 2 error:', np.max(lml.utils.relative_error(nn.layers[2].dW, d_W2_num))) print('Bias 2 error:', np.max(lml.utils.relative_error(nn.layers[2].db, d_b2_num))) print('With regularization at lambda = 1.0:') f = lambda _: nn.backward(test_scores, test_classes, reg_param=1.0)[0] d_b1_num = lml.utils.numerical_gradient(f, nn.layers[0].b, accuracy=1e-8) d_W1_num = lml.utils.numerical_gradient(f, nn.layers[0].W, accuracy=1e-8) print('Weight 1 error:', np.max(lml.utils.relative_error(nn.layers[0].dW, d_W1_num))) print('Bias 1 error:', np.max(lml.utils.relative_error(nn.layers[0].db, d_b1_num))) d_b2_num = lml.utils.numerical_gradient(f, nn.layers[2].b, accuracy=1e-8) d_W2_num = lml.utils.numerical_gradient(f, nn.layers[2].W, accuracy=1e-8) print('Weight 2 error:', np.max(lml.utils.relative_error(nn.layers[2].dW, d_W2_num))) print('Bias 2 error:', np.max(lml.utils.relative_error(nn.layers[2].db, d_b2_num))) """ Explanation: Tanh Layer Layers/TanhLayer.py tanh = lml.layers.TanhLayer(10) test_input = np.random.randn(50, 10) dout = np.random.randn(50, 10) _ = tanh.forward(test_input) dx_num = lml.utils.numerical_gradient_layer(lambda x : tanh.forward(x), test_input, dout) dx = tanh.backward(dout) print('Tanh dx error:', np.max(lml.utils.relative_error(dx, dx_num))) Full Model Gradient Checks Two Layer Network This is a gradient check for a simple example network with the following architecture: Affine, ReLU, Affine, Softmax End of explanation """ opts = { 'input_dim' : 10, 'data_type' : np.float64, 'init_scheme' : 'xavier' } nn = lml.NeuralNetwork(opts) nn.add_layer('Affine', {'neurons':10}) nn.add_layer('Batchnorm', {'decay':0.9}) nn.add_layer('PReLU', {}) nn.add_layer('Dropout', {'dropout_param':0.85, 'seed':5684}) nn.add_layer('Affine', 
{'neurons':10}) nn.add_layer('Batchnorm', {'decay':0.7}) nn.add_layer('PReLU', {}) nn.add_layer('Dropout', {'dropout_param':0.90, 'seed':5684}) nn.add_layer('Affine', {'neurons':10}) nn.add_layer('Batchnorm', {'decay':0.8}) nn.add_layer('PReLU', {}) nn.add_layer('Dropout', {'dropout_param':0.95, 'seed':5684}) nn.add_layer('SoftmaxLoss', {}) test_scores = np.random.randn(20, 10) test_classes = np.random.randint(1, 10, 20) loss, dx = nn.backward(test_scores, test_classes) f = lambda _: nn.backward(test_scores, test_classes, reg_param=0.7)[0] d_b1_num = lml.utils.numerical_gradient(f, nn.layers[0].b, accuracy=1e-8) d_W1_num = lml.utils.numerical_gradient(f, nn.layers[0].W, accuracy=1e-8) print('Weight 1 error:', np.max(lml.utils.relative_error(nn.layers[0].dW, d_W1_num))) print('Bias 1 error:', np.max(lml.utils.relative_error(nn.layers[0].db, d_b1_num))) d_gamma1_num = lml.utils.numerical_gradient(f, nn.layers[1].gamma, accuracy=1e-8) d_beta1_num = lml.utils.numerical_gradient(f, nn.layers[1].beta, accuracy=1e-8) print('Gamma 1 error:', np.max(lml.utils.relative_error(nn.layers[1].dgamma, d_gamma1_num))) print('Beta 1 error:', np.max(lml.utils.relative_error(nn.layers[1].dbeta, d_beta1_num))) d_r1_num = lml.utils.numerical_gradient(f, nn.layers[2].W, accuracy=1e-8) print('Rectifier 1 error:', np.max(lml.utils.relative_error(nn.layers[2].dW, d_r1_num))) d_b1_num = lml.utils.numerical_gradient(f, nn.layers[4].b, accuracy=1e-8) d_W1_num = lml.utils.numerical_gradient(f, nn.layers[4].W, accuracy=1e-8) print('Weight 2 error:', np.max(lml.utils.relative_error(nn.layers[4].dW, d_W1_num))) print('Bias 2 error:', np.max(lml.utils.relative_error(nn.layers[4].db, d_b1_num))) d_gamma2_num = lml.utils.numerical_gradient(f, nn.layers[5].gamma, accuracy=1e-8) d_beta2_num = lml.utils.numerical_gradient(f, nn.layers[5].beta, accuracy=1e-8) print('Gamma 2 error:', np.max(lml.utils.relative_error(nn.layers[5].dgamma, d_gamma2_num))) print('Beta 2 error:', np.max(lml.utils.relative_error(nn.layers[5].dbeta, d_beta2_num))) d_r2_num = lml.utils.numerical_gradient(f, nn.layers[6].W, accuracy=1e-8) print('Rectifier 2 error:', np.max(lml.utils.relative_error(nn.layers[6].dW, d_r2_num))) d_b1_num = lml.utils.numerical_gradient(f, nn.layers[8].b, accuracy=1e-8) d_W1_num = lml.utils.numerical_gradient(f, nn.layers[8].W, accuracy=1e-8) print('Weight 3 error:', np.max(lml.utils.relative_error(nn.layers[8].dW, d_W1_num))) print('Bias 3 error:', np.max(lml.utils.relative_error(nn.layers[8].db, d_b1_num))) d_gamma3_num = lml.utils.numerical_gradient(f, nn.layers[9].gamma, accuracy=1e-8) d_beta3_num = lml.utils.numerical_gradient(f, nn.layers[9].beta, accuracy=1e-8) print('Gamma 3 error:', np.max(lml.utils.relative_error(nn.layers[9].dgamma, d_gamma3_num))) print('Beta 3 error:', np.max(lml.utils.relative_error(nn.layers[9].dbeta, d_beta3_num))) d_r3_num = lml.utils.numerical_gradient(f, nn.layers[10].W, accuracy=1e-8) print('Rectifier 3 error:', np.max(lml.utils.relative_error(nn.layers[10].dW, d_r3_num))) """ Explanation: Multilayer Fully Connected Network with Augmentations End of explanation """
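The per-parameter checks above repeat the same three lines many times, so a small helper keeps further checks readable. The sketch only reuses lml.utils.numerical_gradient and lml.utils.relative_error from this repository; the check_param name is ours.

```python
import numpy as np
import LearnyMcLearnface as lml

def check_param(loss_fn, param, analytic_grad, label, accuracy=1e-8):
    """Compare an analytic gradient with a numerical estimate and print the error."""
    numerical_grad = lml.utils.numerical_gradient(loss_fn, param, accuracy=accuracy)
    error = np.max(lml.utils.relative_error(analytic_grad, numerical_grad))
    print(label, 'error:', error)

# Example usage with the deep network checked in the previous cell:
f = lambda _: nn.backward(test_scores, test_classes, reg_param=0.7)[0]
check_param(f, nn.layers[0].W, nn.layers[0].dW, 'Weight 1')
check_param(f, nn.layers[0].b, nn.layers[0].db, 'Bias 1')
```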
eford/rebound
ipython_examples/OrbitPlot.ipynb
gpl-3.0
import rebound sim = rebound.Simulation() sim.add(m=1) sim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98) sim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14) sim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1) sim.add(a=-2.7, e=1.4, f=-1.5,omega=-0.7) # hyperbolic orbit """ Explanation: Orbit Plot REBOUND comes with a simple way to plot instantaneous orbits of planetary systems. To show how this works, let's setup a test simulation with 4 planets. End of explanation """ %matplotlib inline fig = rebound.OrbitPlot(sim) """ Explanation: To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument. End of explanation """ fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, periastron=True) fig = rebound.OrbitPlot(sim, unitlabel="[AU]", periastron=True, lw=2) """ Explanation: Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!). End of explanation """ from IPython.display import display, clear_output import matplotlib.pyplot as plt sim.move_to_com() for i in range(3): sim.integrate(sim.t+0.31) fig = rebound.OrbitPlot(sim,color=True,unitlabel="[AU]",lim=2.) display(fig) plt.close(fig) clear_output(wait=True) """ Explanation: Note that all orbits are draw with respect to the center of mass of all interior particles. This coordinate system is known as Jacobi coordinates. It requires that the particles are sorted by ascending semi-major axis within the REBOUND simulation's particle array. From within iPython/Jupyter one can also call the OrbitPlot routine in a loop, thus making an animation as one steps through a simulation. This is a nice way of keeping track of what is going on in a simulation without having to wait until the end. To do that we need to import the display and clear_output function from iPython first. We'll also need access to the clear function of matplotlib. Then, we run a loop, updating the figure as we go along. End of explanation """ fig = rebound.OrbitPlot(sim,slices=True,color=True,unitlabel="[AU]",lim=2.,limz=0.36) """ Explanation: To get an idea of the three dimensional distribution of orbits, use the slices=True option. This will plot the orbits three times, from different perspectives. You can adjust the dimensions in the z direction using the limz keyword. End of explanation """
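Since OrbitPlot hands back an ordinary matplotlib figure (we already pass it to display and plt.close above), saving a snapshot of the current orbits to disk is a one-liner; the filename and dpi below are arbitrary choices.

```python
fig = rebound.OrbitPlot(sim, slices=True, color=True, unitlabel="[AU]",
                        lim=2., limz=0.36)
fig.savefig("orbits_slices.png", dpi=150)  # write the current orbit snapshot to disk
```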
computational-class/cjc2016
code/12.topic-models-with-turicreate.ipynb
mit
import turicreate as tc """ Explanation: Topic Modeling Using Turicreate 王成军 wangchengjun@nju.edu.cn 计算传播网 http://computational-communication.com End of explanation """ sf = tc.SFrame.read_csv("/Users/datalab/bigdata/cjc/w15", header=False) sf """ Explanation: Download Data: <del>http://select.cs.cmu.edu/code/graphlab/datasets/wikipedia/wikipedia_raw/w15 End of explanation """ dir(sf['X1']) bow = sf['X1']._count_words() type(sf['X1']) type(bow) bow.dict_has_any_keys(['limited']) bow.dict_values()[0][:20] sf sf['bow'] = bow sf type(sf['bow']) len(sf['bow']) list(sf['bow'][0].items())[:3] sf['tfidf'] = tc.text_analytics.tf_idf(sf['X1']) sf list(sf['tfidf'][0].items())[:5] """ Explanation: Transformations https://dato.com/learn/userguide/text/analysis.html End of explanation """ docs = sf['bow'].dict_trim_by_values(2) docs = docs.dict_trim_by_keys( tc.text_analytics.stop_words(), exclude=True) """ Explanation: Text cleaning End of explanation """ help(tc.topic_model.create) help(tc.text_analytics.random_split) train, test = tc.text_analytics.random_split(docs, .8) m = tc.topic_model.create(train, num_topics=100, # number of topics num_iterations=100, # algorithm parameters alpha=None, beta=.1) # hyperparameters results = m.evaluate(test) print(results['perplexity']) m m.get_topics() help(m.get_topics) topics = m.get_topics(num_words=10).unstack(['word','score'], \ new_column_name='topic_words')['topic_words'].apply(lambda x: x.keys()) for topic in topics: print(topic) help(m) def print_topics(m): topics = m.get_topics(num_words=5) topics = topics.unstack(['word','score'], new_column_name='topic_words')['topic_words'] topics = topics.apply(lambda x: x.keys()) for topic in topics: print(topic) print_topics(m) """ Explanation: Topic modeling End of explanation """ dir(m) m.vocabulary m.topics m2 = tc.topic_model.create(docs, num_topics=100, initial_topics=m.topics) """ Explanation: pred = m.predict(another_data) pred = m.predict(another_data, output_type='probabilities') Initializing from other models End of explanation """ associations = tc.SFrame() associations['word'] = ['recognition'] associations['topic'] = [0] m2 = tc.topic_model.create(docs, num_topics=20, num_iterations=50, associations=associations, verbose=False) m2.get_topics(num_words=10) print_topics(m2) """ Explanation: Seeding the model with prior knowledge End of explanation """
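The prediction step is only quoted in passing above, so here is a hedged sketch of assigning topics to the held-out documents with the trained model; check help(m.predict) in your Turi Create version for the exact output options (the markdown above also mentions an output_type argument for per-topic probabilities).

```python
# Most-likely topic id for each held-out document
# (`test` was created by random_split earlier in this notebook).
pred = m.predict(test)
print(pred[:5])
```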
quantumlib/ReCirq
docs/quantum_chess/quantum_chess_rest_api.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 Google End of explanation """ !pip install -q git+https://github.com/quantumlib/ReCirq/ import recirq import recirq.quantum_chess.ascii_board as ab b = ab.AsciiBoard() b.reset() print(b) """ Explanation: Quantum Chess REST API <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/experiments/quantum_chess/quantum_chess_rest_api"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/quantum_chess/quantum_chess_rest_api.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/quantum_chess/quantum_chess_rest_api.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/quantum_chess/quantum_chess_rest_api.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> Quantum Chess is a variant of chess that gives players access to extra moves which allow them to create superposition. All moves are applied to the game state via unitary evolution, allowing players to experience effects like superposition, entanglement, and interference. This project provides a limited implementation of the full Quantum Chess move set, executed on a set of qubits representing squares. The full Quantum Chess application requires an API for the chess UI to communicate with an external backend for move processing and calculation. In this notebook we will: * Set up an ascii board representation of a Quantum Chess game running on a Cirq simualtor. * Explore how to interact with the ascii board in interactive mode and by batching moves. * Implement the functionality of the Quantum Chess REST API * Start a simple server that serves REST endpoints, which could be used to hook up an instance of Quantum Chess to our Cirq implementation. For more information on how to implement Quantum Chess moves in Cirq, including qubit mapping and error correction, check out (this other notebook) First, install the Quantum Chess package from recirq. End of explanation """ from recirq.quantum_chess.move import Move from recirq.quantum_chess.enums import MoveType, MoveVariant m = Move( source="b1", target="a3", target2="c3", move_type=MoveType.SPLIT_JUMP, move_variant=MoveVariant.BASIC, ) b.reset() r = b.apply(m) print(b) """ Explanation: It is possible to play the game in interactive mode, by applying moves to the board. Split the knight on b1 to a3 and c3. 
End of explanation """ from recirq.quantum_chess.quantum_board import CirqBoard from recirq.quantum_chess.bit_utils import bit_to_square, xy_to_bit from recirq.quantum_chess.move import to_rank global_board = CirqBoard(1) def print_game(board): board.print_debug_log() print("\n") print(board) print("\n\n") probs = global_board.get_probability_distribution() print_game(global_board) """ Explanation: In Quantum Chess, a move can be uniquely defined by up to 3 squares, a type, and a variant. Move types can take any value from the following set: { JUMP, SLIDE, SPLIT_JUMP, SPLIT_SLIDE, MERGE_JUMP, MERGE_SLIDE, PAWN_STEP, PAWN_TWO_STEP, PAWN_CAPTURE, PAWN_EP, KS_CASTLE, QS_CASTLE } Jump type moves indicate there is no path to consider, like when knights move. Slide type moves must consider the squares along the sliding path. The Split versions are 3-qubit operations that are designed to put a piece in superposition on two different targets. The Merge versions are just the inverse of a Split. Quantum Chess introduces the concept of a move variant. The variant of a move is determined by the state of the target square, or squares. Move variants can take any value from the following set: {BASIC, CAPTURE, EXCLUDED} A Basic variant is a move where the target square is unoccupied. A Capture variant is a move where the target square is occupied by a piece that can be captured by the piece being moved. An Excluded variant is a move where the target is occupied by a piece that cannot be captured. This can occur if a target is occupied by a same-color piece in superposition. In both capture and excluded variants, a measurement will be performed. To learn more about move types, variants, and measurements, please see this paper. The ascii board is a convenience that can be used for testing the project imports. The Quantum Chess REST API defines the interface, which is used by the Quantum Chess Engine to assign an external resource to handle the quantum state of the game. This state encodes only the "occupancy" of each square on the board. Each square is mapped to a single qubit, where the state |1> corresponds to the square being occupied by a piece, and |0> is unoccupied. All piece type information and rules checking are handled classically within the Quantum Chess Engine. When implementing the API, we only need to use the CirqBoard. The following code shows how to initialize a CirqBoard with a single "piece" in square a1. End of explanation """ def init(board, init_basis_state): board.with_state(init_basis_state) probs = board.get_probability_distribution() print_game(board) return { "probabilities": probs, "empty": board.get_empty_squares_bitboard(), "full": board.get_full_squares_bitboard(), } r = init(global_board, 0xFFFF00000000FFFF) """ Explanation: The REST API The Quantum Chess REST API defines an interface for REST endpoints that the Quantum Chess Engine can be directed to use when interacting with the quantum state of the game. The API declares an interface for three functions: * init * do_move * undo_move All three endpoints must return a json object with the following values: * probabilities: an array of 64 floating point numbers representing the probability of each square being occupied. Array indices are mapped to board squares starting from a1, and increasing along rows to h8. <center> <img src='./images/chess_board_indices.png' width="300" > </center> empty_bitboard: A bitboard with bits set to 1 for all squares known to be empty, i.e. 0% chance of being occupied.
full_bitboard: A bitboard with bits set to 1 for all squares known to be occupied, i.e. 100% chance of being occupied. A bitboard is a 64-bit integer, where each bit corresponds to a square on the chess board. The bitboard is encoded in little endian form, with the least significant bit corresponding to a1, and increasing along rows up to h8 in the most significant bit. <center> <img src='./images/bitboard_order.png' width="700" > </center> Implement init The init function is used to initialize a quantum state to some classical starting position. It has the following code signature. init(init_basis_state) : { probabilities, empty_bitboard, full_bitboard } The single argument, init_basis_state, is a bitboard that represents the initial classical state of the board, i.e. which squares have a piece on them. The return value is a json object with three fields: probabilities, empty_bitboard, and full_bitboard. The following code defines an implementation of init that prints out the probability distribution of the initialized board, and returns the appropriate json. End of explanation """ from recirq.quantum_chess.move import Move from recirq.quantum_chess.enums import MoveType, MoveVariant # Helper function for creating a split move from json values def get_split_move(move_json): return Move( move_json["square1"], move_json["square2"], target2=move_json["square3"], move_type=MoveType(move_json["type"]), move_variant=MoveVariant(move_json["variant"]), ) # Helper function for creating a merge move from json values def get_merge_move(move_json): return Move( move_json["square1"], move_json["square3"], source2=move_json["square2"], move_type=MoveType(move_json["type"]), move_variant=MoveVariant(move_json["variant"]), ) # Helper function for creating a standard move from json values def get_standard_move(move_json): return Move( move_json["square1"], move_json["square2"], move_type=MoveType(move_json["type"]), move_variant=MoveVariant(move_json["variant"]), ) def do_move(board, move): board.clear_debug_log() r = board.do_move(move) probs = board.get_probability_distribution() print_game(board) return { "result": r, "probabilities": probs, "empty": board.get_empty_squares_bitboard(), "full": board.get_full_squares_bitboard(), } move_json = { "square1": "b1", "square2": "a3", "square3": "c3", "type": MoveType.SPLIT_JUMP, "variant": MoveVariant.BASIC, } split_b1_a3_c3 = get_split_move(move_json) r = init(global_board, 0xFFFF00000000FFFF) r = do_move(global_board, split_b1_a3_c3) """ Explanation: Implement do_move The do_move function is used to apply a specific unitary to the qubits which correspond to the squares involved in the move. It has the following code signature do_move( move ) : { probabilities, empty_bitboard, full_bitboard } It takes a single argument, move, which is a json object with the following fields: * square1: integer index of the first square. * square2: integer index of the second square. * square3: integer index of the third square, only used for split and merge moves. 
* type: enumerated type of move with the following possible values NULL_TYPE = 0, UNSPECIFIED_STANDARD = 1, JUMP = 2, SLIDE = 3, SPLIT_JUMP = 4, SPLIT_SLIDE = 5, MERGE_JUMP = 6, MERGE_SLIDE = 7, PAWN_STEP = 8, PAWN_TWO_STEP = 9, PAWN_CAPTURE = 10, PAWN_EP = 11, KS_CASTLE = 12, QS_CASTLE = 13 * variant: enumerated variant of move with the following possible values UNSPECIFIED = 0, BASIC = 1, EXCLUDED = 2, CAPTURE = 3 The return value is a json object with three fields: probabilities, empty_bitboard, and full_bitboard. The following code defines some helper functions to create Moves, which are used to apply specific unitaries to the qubits represented in the CirqBoard, and an implementation of do_move that will print the probability distribution after applying the move to the board. End of explanation """ from recirq.quantum_chess.enums import ErrorMitigation from cirq import DensityMatrixSimulator, google from cirq.contrib.noise_models import DepolarizingNoiseModel NOISY_SAMPLER = DensityMatrixSimulator( noise=DepolarizingNoiseModel(depol_prob=0.004) ) noisy_board = CirqBoard( 0, sampler=NOISY_SAMPLER, device=google.Sycamore, error_mitigation=ErrorMitigation.Correct, noise_mitigation=0.05, ) r = init(noisy_board, 0xFFFF00000000FFFF) r = do_move(noisy_board, split_b1_a3_c3) """ Explanation: Notice that the circuit for the move is printed as well. This is made available in the board debug information. You can also see what happens when initializing the board using a noisy simulator with error mitigation. End of explanation """ def undo_last_move(board): board.clear_debug_log() r = board.undo_last_move() probs = board.get_probability_distribution() print_game(board) return { "result": r, "probabilities": probs, "empty": board.get_empty_squares_bitboard(), "full": board.get_full_squares_bitboard(), } r = init(global_board, 0xFFFF00000000FFFF) r = do_move(global_board, split_b1_a3_c3) r = undo_last_move(global_board) """ Explanation: You may notice that the circuit run discarded some of the returned samples due to error mitigation and post-selection. Implement undo_last_move The undo_last_move function is used to revert the quantum state to the state immediately before the last move that was executed. It has the following code signature. undo_last_move( ) : { probabilities, empty_bitboard, full_bitboard } It takes no arguments, and returns the same json object as the previous endpoints. The following code is an implementation of undo_last_move() that prints the resulting probability distribution. End of explanation """ !pip install -q flask flask_restful flask-ngrok """ Explanation: REST server implementation With the functionality in place, you can define server endpoints and run the server. Use the flask_restful framework to create a simple server that implements these endpoints. Flask-restful allows you to encapsulate the functionality you want in classes that inherit from Resource. You will need to install flask-ngrok to give the server an accessible URL: End of explanation """ from flask import Flask, request, jsonify from flask_restful import Resource, Api from flask_ngrok import run_with_ngrok class Init(Resource): def get(self): return {"about": "Init"} def post(self): print(request.get_json()) n = request.get_json()["init_basis_state"] global_board.clear_debug_log() return init(global_board, int(n)) class DoMove(Resource): def post(self): move_json = request.get_json() t = MoveType(move_json["type"]) # We need to convert square indices to square names.
move_json["square1"] = bit_to_square(move_json["square1"]) move_json["square2"] = bit_to_square(move_json["square2"]) move_json["square3"] = bit_to_square(move_json["square3"]) if t == MoveType.SPLIT_SLIDE or t == MoveType.SPLIT_JUMP: return do_move(global_board, get_split_move(move_json)) elif t == MoveType.MERGE_JUMP or t == MoveType.MERGE_SLIDE: return do_move(global_board, get_merge_move(move_json)) else: return do_move(global_board, get_standard_move(move_json)) class UndoLastMove(Resource): def post(self): return undo_last_move(global_board) app = Flask(__name__) run_with_ngrok(app) api = Api(app) api.add_resource(Init, "/quantumboard/init") api.add_resource(DoMove, "/quantumboard/do_move") api.add_resource(UndoLastMove, "/quantumboard/undo_last_move") @app.route("/") def home(): return "<h1>Running Flask on Google Colab!</h1>" """ Explanation: Define the REST endpoints for the webserver: End of explanation """ # docs_infra: no_execute app.run() """ Explanation: And start the local webserver: End of explanation """
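As a quick smoke test of the server defined above, a client could hit the three endpoints with the requests library. This is only a sketch: the base URL is a placeholder for whatever address flask-ngrok prints when app.run() starts, and the square indices and enum values (b1=1, a3=16, c3=18; type 4 = SPLIT_JUMP, variant 1 = BASIC) are derived from the bitboard indexing and move tables described earlier.
import requests

BASE_URL = "http://127.0.0.1:5000"  # placeholder; replace with the ngrok URL printed by app.run()

# Initialize both back ranks plus pawns (same basis state used above).
resp = requests.post(BASE_URL + "/quantumboard/init",
                     json={"init_basis_state": 0xFFFF00000000FFFF})
print(resp.json()["full"])

# Split the b1 knight to a3 and c3 using the integer encodings from the tables above.
split_move = {"square1": 1, "square2": 16, "square3": 18, "type": 4, "variant": 1}
resp = requests.post(BASE_URL + "/quantumboard/do_move", json=split_move)
print(resp.json()["probabilities"][:24])

# Roll the split back.
resp = requests.post(BASE_URL + "/quantumboard/undo_last_move", json={})
print(resp.json()["full"])
Each response carries the same probabilities, empty, and full fields returned by the underlying board functions.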
ES-DOC/esdoc-jupyterhub
notebooks/ec-earth-consortium/cmip6/models/ec-earth3-aerchem/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-aerchem', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: EC-EARTH-CONSORTIUM Source ID: EC-EARTH3-AERCHEM Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:59 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. 
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
ddebrunner/streamsx.topology
samples/python/topology/notebooks/ViewDemo/ViewDemo.ipynb
apache-2.0
from streamsx.topology.topology import Topology from streamsx.topology import context from some_module import jsonRandomWalk #from streamsx import rest import json import logging # Define topology & submit rw = jsonRandomWalk() top = Topology("myTop") stock_data = top.source(rw) # The view object can be used to retrieve data remotely view = stock_data.view() stock_data.print() """ Explanation: Randomly Generate A Stock Price & View The Data Here, we create an application which is submitted to a remote host, yet we retrieve its data remotely via views. This way, we can graph remote data inside of Jupyter without needing to run the application on the local host. First, we create an application which generates a random stock price by using the jsonRandomWalk class. After we create the stream, we create a view object. This can later be used to retrieve the remote data. End of explanation """ context.submit("DISTRIBUTED", top.graph, username = "streamsadmin", password = "passw0rd") """ Explanation: Submit To Remote Streams Install Then, we submit the application to the default domain. End of explanation """ from streamsx import rest queue = view.start_data_fetch() """ Explanation: Begin Retreiving The Data In A Blocking Queue Using the view object, we can call the start_data_fetch method. This kicks off a background thread which, once per second, queries the remote view REST endpoint and inserts the data into a queue. The queue is returned from start_data_fetch. End of explanation """ for i in iter(queue.get, None): print(i) """ Explanation: Print Data to Screen The queue is a blocking queue, so every time queue.get() is invoked, it will wait until there is more data on the stream. The following is one way of iterating over the queue. End of explanation """ view.stop_data_fetch() """ Explanation: Stop Fetching The Data, Cancelling The Background Thread To stop the background thread from fetching data, invoke the stop_data_fetch method on the view. End of explanation """ %matplotlib inline %matplotlib notebook from streamsx import rest rest.graph_every(view, 'val', 1.0) """ Explanation: Graph The Live Feed Using Matplotlib One of Jupyter strengths is its capacity for data visualization. Here, we can use Matplotlib to interactively update the graph when new data is (or is not) available. End of explanation """
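"""
A small variation on the consumption loop above, shown as a sketch rather than part of the original demo: instead of iterating indefinitely, collect a fixed number of tuples and then cancel the background fetch. It only uses the queue.get() and view.stop_data_fetch() calls already demonstrated; the choice of 10 tuples and the name 'samples' are arbitrary.
"""
samples = []
for _ in range(10):
    # blocks until the next tuple fetched from the remote view is available
    samples.append(queue.get())
view.stop_data_fetch()
print(samples)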
mauriciogtec/PropedeuticoDataScience2017
Alumnos/JuanPabloDeBotton/Tarea1_JuanPabloDeBotton.ipynb
mit
import numpy as np """ Explanation: Tarea 1: Creando una sistema de Álgebra Lineal En esta tarea seran guiados paso a paso en como realizar un sistema de arrays en Python para realizar operaciones de algebra lineal. Pero antes... (FAQ) Como se hace en la realidad? En la practica, se usan paqueterias funcionales ya probadas, en particular numpy, que contiene todas las herramientas necesarias para hacer computo numerico en Python. Por que hacer esta tarea entonces? Python es un lenguage disenado para la programacion orientada a objetos. Al hacer la tarea desarrollaran experiencia en este tipo de programacion que les permitira crear objetos en el futuro cuando lo necesiten, y entender mejor como funciona numpy y en general, todas las herramientas de Python. Ademas, en esta tarea tambien aprenderan la forma de usar numpy simultaneamente. Como comenzar con numpy? En la tarea necesitaremos importar la libreria numpy, que contiene funciones y clases que no son parte de Python basico. Recuerden que Python no es un lenguage de computo cientifico, sino de programacion de proposito general. No esta disenado para hacer algebra lineal, sin embargo, tiene librerias extensas y bien probadas que permiten lograrlo. Anaconda es una distribucion de Python que ademas de instalarlo incluye varias librerias de computo cientifico como numpy. Si instalaron Python por separado deberan tambien instalar numpy manualmente. Antes de comenzar la tarea deberan poder correr: End of explanation """ x = [1,2,3] y = [4,5,6] x + y """ Explanation: Lo que el codigo anterior hace es asociar al nombre np todas las herramientas de la libreria numpy. Ahora podremos llamar funciones de numpy como np.&lt;numpy_fun&gt;. El nombre np es opcional, pueden cambiarlo pero necesitaran ese nombre para acceder a las funciones de numpy como &lt;new_name&gt;.&lt;numpy_fun&gt;. Otra opcion es solo inlcuir import numpy, en cuya caso las funciones se llaman como numpy.&lt;numpy_fun&gt;. Para saber mas del sistema de modulos pueden revisar la liga https://docs.python.org/2/tutorial/modules.html I. Creando una clase Array Python incluye nativo el uso de listas (e.g. x = [1,2,3]). El problema es que las listas no son herramientas de computo numerico, Python ni siquiera entiende una suma de ellas. De hecho, la suma la entiende como concatenacion: End of explanation """ B = np.array([[1,2,3], [4,5,6]]) # habiendo corrido import numpy as np """ Explanation: Vamos a construir una clase Array que incluye a las matrices y a los vectores. Desde el punto de vista computacional, un vector es una matriz de una columna. En clase vimos que conviene pensar a las matrices como transformacion de vectores, sin embargo, desde el punto de vista computacional, como la regla de suma y multiplicacion es similar, conviene pensarlos ambos como arrays, que es el nombre tradicional en programacion Computacionalmente, que es un array? Tecnicamente, es una lista de listas, todas del mismo tamano, cada uno representando una fila (fila o columna es optativo, haremos filas porque asi lo hace numpy, pero yo previero columnas). 
Por ejemplo, la lista de listas [[1,2,3],[4,5,6]] Corresponde a la matriz $$ \begin{bmatrix} 1 & 2 & 3 \ 4 & 5 & 6 \end{bmatrix} $$ The numpy way End of explanation """ B + 2*B # Python sabe sumar y multiplicar arrays como algebra lineal """ Explanation: Es posible sumar matrices y multiplicarlas por escalares End of explanation """ np.matmul(B.transpose(), B) # B^t*B """ Explanation: Las matrices de numpy se pueden multiplicar con la funcion matmul dentro de numpy End of explanation """ B[1,1] """ Explanation: Los arrays the numpy pueden accesarse con indices y slices Una entrada especifica: End of explanation """ B[1,:] """ Explanation: Una fila entera: End of explanation """ B[:,2] """ Explanation: Una columna entera: End of explanation """ B[0:2,0:2] """ Explanation: Un subbloque (notar que un slice n:m es n,n+1,...,m-1 End of explanation """ B.shape """ Explanation: En numpy podemos saber la dimension de un array con el campo shape de numpy End of explanation """ vec = np.array([1,2,3]) print(vec) """ Explanation: Numpy es listo manejando listas simples como vectores End of explanation """ class Array: "Una clase minima para algebra lineal" def __init__(self, list_of_rows): "Constructor" self.data = list_of_rows self.shape = (len(list_of_rows), len(list_of_rows[0])) A = Array([[1,2,3], [4,5,6]]) A.__dict__ # el campo escondido __dict__ permite acceder a las propiedades de clase de un objeto A.data A.shape """ Explanation: Comenzando desde cero... End of explanation """ Array([[1,2,3], [4,5,6]]) print(Array([[1,2,3], [4,5,6]])) np.array([[1,2,3], [4,5,6]]) print(np.array([[1,2,3], [4,5,6]])) """ Explanation: El campo data de un Array almacena la lista de listas del array. Necesitamos implementar algunos metodos para que sea funcional como una clase de algebra lineal. Un metodo para imprimir una matriz de forma mas agradable Validador. Un metodo para validar que la lista de listas sea valida (columnas del mismo tamano y que las listas interiores sean numericas Indexing Hacer sentido a expresiones A[i,j] Iniciar matriz vacia de ceros Este metodos es muy util para preacolar espacio para guardar nuevas matrices Transposicion B.transpose() Suma A + B Multiplicacion escalar y matricial 2 * A y A*B Vectores (Opcional) Con esto seria posible hacer algebra lineal Metodos especiales de clase... Para hacer esto es posible usar metodos especiales de clase __getitem, __setitem__, __add__, __mul__, __str__. Teoricamente es posible hacer todo sin estos metodos especiales, pero, por ejemplo, es mucho mas agradable escribir A[i,j] que A.get(i,j) o A.setitem(i,j,newval) que A[i,j] = newval. 1. Un metodo para imprimir mejor... Necesitamos agregar un metodo de impresion. Noten que un array de numpy se imprime bonito comparado con el nuestro End of explanation """ class TestClass: def __init__(self): pass # this means do nothing in Python def say_hi(self): print("Hey, I am just a normal method saying hi!") def __repr__(self): return "I am the special class method REPRESENTING a TestClass without printing" def __str__(self): return "I am the special class method for explicitly PRINTING a TestClass object" x = TestClass() x.say_hi() x print(x) """ Explanation: Por que estas diferencias? Python secretamente busca un metodo llamado __repr__ cuando un objeto es llamado sin imprimir explicitamente, y __str__ cuando se imprime con print explicitamente. 
Por ejemplo: End of explanation """ class Array: "Una clase minima para algebra lineal" def __init__(self, list_of_rows): "Constructor y validador" # obtener dimensiones self.data = list_of_rows nrow = len(list_of_rows) # ___caso vector: redimensionar correctamente if not isinstance(list_of_rows[0], list): nrow = 1 self.data = [[x] for x in list_of_rows] # ahora las columnas deben estar bien aunque sea un vector ncol = len(self.data[0]) self.shape = (nrow, ncol) # validar tamano correcto de filas if any([len(r) != ncol for r in self.data]): raise Exception("Las filas deben ser del mismo tamano") # Ejercicio 1 def __repr__(self): str2print = "Array" for i in range(len(self.data)): if(i==0): str2print += str(self.data[i]) + "\n" if(i>0): str2print += " " + str(self.data[i]) + "\n" return str2print def __str__(self): str2print = "" for i in range(len(self.data)): str2print += str(self.data[i]) + "\n" return str2print #Ejercicio2 def __getitem__(self, idx): return self.data[idx[0]][idx[1]] def __setitem__(self, idx, valor): self.data[idx[0]][idx[1]] = valor # Ejercicio 3 def zeros(x, y): array_de_ceros = Array([[0 for col in range(y)] for row in range(x)]) return array_de_ceros def eye(x): array_eye = Array([[0 for col in range(x)] for row in range(x)]) for i in range(x): for j in range(x): if i == j: array_eye[i,j] = 1 return array_eye # Ejercicio 4 def transpose(self): #Obtener dimensiones num_row = len(self.data) num_col = len(self.data[0]) #Crear matriz receptora mat_transpuesta = Array([[0 for col in range(num_row)] for row in range(num_col)]) #Transponer for i in range(num_row): for j in range(num_col): mat_transpuesta[j,i] = self.data[i][j] return mat_transpuesta def __add__(self, other): "Hora de sumar" if isinstance(other, Array): if self.shape != other.shape: raise Exception("Las dimensiones son distintas!") rows, cols = self.shape newArray = Array([[0. for c in range(cols)] for r in range(rows)]) for r in range(rows): for c in range(cols): newArray.data[r][c] = self.data[r][c] + other.data[r][c] return newArray elif isinstance(2, (int, float, complex)): # en caso de que el lado derecho sea solo un numero rows, cols = self.shape newArray = Array([[0. 
for c in range(cols)] for r in range(rows)]) for r in range(rows): for c in range(cols): newArray.data[r][c] = self.data[r][c] + other return newArray else: return NotImplemented # es un tipo de error particular usado en estos metodos #Ejercicio 5 ##No me salió :( #Ejercicio 6 def __mul__(self, other): if isinstance(other, Array): #Validar las dimensiones if self.shape[1] != other.shape[0]: raise Exception("Las matrices no son compatibles!") #Obtener las dimensiones num_rowsA = self.shape[0] num_rowsB = other.shape[0] num_colsB = other.shape[1] #Crear matriz receptora newArray = Array([[0 for col in range(num_colsB)] for row in range(num_rowsA)]) #Multiplicar for i in range(num_rowsA): for j in range(num_colsB): for k in range(num_rowsB): newArray[i,j] = newArray[i,j] + self.data[i][k] * other.data[k][j] return newArray #Matriz, entero elif isinstance(other, (int, float, complex)): #Obtener las dimensiones rows, cols = self.shape #Crear matriz receptora newArray = Array([[0 for col in range(cols)] for row in range(rows)]) #Multiplicar for row in range(rows): for col in range(cols): newArray.data[row][col] = self.data[row][col] * other return newArray else: return NotImplemented def __rmul__(self, other): if isinstance(other, (int, float, complex)): rows, cols = self.shape newArray = Array([[0 for col in range(cols)] for row in range(rows)]) for row in range(rows): for col in range(cols): newArray.data[row][col] = self.data[row][col] * other return newArray else: return NotImplemented """ Explanation: Ejercicios End of explanation """ X = Array([[1,2,3,4,5],[6,7,8,9,10],[11,12,13,14,15]]) X print(X) X[0,2] X X[0,0] = 10 X Array.zeros(5,5) Array.eye(4) X.transpose() B = Array.eye(5) B X*B """ Explanation: Prueba de las clases End of explanation """
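"""
A quick cross-check that could be appended to the tests above: compare the hand-written Array operations against numpy on the same data. This is only a sketch; it relies on the Array class defined above and on the 'import numpy as np' from the start of the notebook, and the small 2x2 matrices are arbitrary.
"""
M = Array([[1, 2], [3, 4]])
N = Array([[5, 6], [7, 8]])
M_np = np.array([[1, 2], [3, 4]])
N_np = np.array([[5, 6], [7, 8]])

# the matrix product and the transpose should agree with numpy entry by entry
print((M * N).data)        # expected: [[19, 22], [43, 50]]
print(M_np.dot(N_np))
print(M.transpose().data)  # expected: [[1, 3], [2, 4]]
print(M_np.T)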
antongrin/EasyMig
EasyMig_v3-interact2.ipynb
apache-2.0
# -*- coding: utf-8 -*- """ Created on Fri Feb 12 13:21:45 2016 @author: GrinevskiyAS """ from __future__ import division import numpy as np from numpy import sin,cos,tan,pi,sqrt import matplotlib as mpl import matplotlib.cm as cm import matplotlib.pyplot as plt from ipywidgets import interact, interactive, fixed import ipywidgets as widgets %matplotlib inline font = {'family': 'Arial', 'weight': 'normal', 'size':14} mpl.rc('font', **font) mpl.rc('figure', figsize=(9, 7)) #mpl.rc({'axes.facecolor':[0.97,0.97,0.98],"axes.edgecolor":[0.7,0.7,0.7],"grid.linewidth": 1, # "axes.titlesize":16,"xtick.labelsize":20,"ytick.labelsize":14}) """ Explanation: 0. Before start OK, to begin we need to import some standart Python modules End of explanation """ #This would be the size of each grid cell (X is the spatial coordinate, T is two-way time) xstep=5 tstep=5 #size of the whole grid xmax = 320 tmax = 220 #that's the arrays of x and t xarray = np.arange(0, xmax, xstep).astype(float) tarray = np.arange(0, tmax, tstep).astype(float) #now fimally we created a 2D array img, which is now all zeros, but later we will add some amplitudes there img=np.zeros((len(xarray), len(tarray))) """ Explanation: 1. Setup First, let us setup the working area. End of explanation """ plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2]) """ Explanation: Let's show our all-zero image End of explanation """ class Hyperbola: def __init__(self, xarray, tarray, x0, t0, v=2): ###input parameters define a difractor's position (x0,t0), P-wave velocity of homogeneous subsurface, and x- and t-arrays to compute traveltimes on. ### self.x=xarray self.x0=x0 self.t0=t0 self.v=v #compute traveltimes self.t=sqrt(t0**2 + (2*(xarray-x0)/v)**2) #obtain some grid parameters xstep=xarray[1]-xarray[0] tbegin=tarray[0] tend=tarray[-1] tstep=tarray[1]-tarray[0] #delete t's and x's for samples where t exceeds maxt self.x=self.x[ (self.t>=tbegin) & (self.t <= tend) ] self.t=self.t[ (self.t>=tbegin) & (self.t <= tend) ] self.imgind=((self.x-xarray[0])/xstep).astype(int) #compute amplitudes' fading according to geometrical spreading self.amp = 1/(self.t/self.t0) self.grid_resample(xarray, tarray) def grid_resample(self, xarray, tarray): # that's a function that computes at which 'cells' of image should we place the hyperbola tend=tarray[-1] tstep=tarray[1]-tarray[0] self.xind=((self.x-xarray[0])/xstep).astype(int) #X cells numbers self.tind=np.round((self.t-tarray[0])/tstep).astype(int) #T cells numbers self.tind=self.tind[self.tind*tstep<=tarray[-1]] #delete T's exceeding max.T self.tgrid=tarray[self.tind] # get 'gridded' T-values self.coord=np.vstack((self.xind,tarray[self.tind])) def add_to_img(self, img, wavelet): # puts the hyperbola into the right cells of image with a given wavelet maxind=np.size(img,1) wavlen=np.floor(len(wavelet)/2).astype(int) self.imgind=self.imgind[self.tind < maxind-wavlen-1] self.amp = self.amp[self.tind < maxind-wavlen-1] self.tind=self.tind[self.tind < maxind-wavlen-1] ind_begin=self.tind-wavlen for i,sample in enumerate(wavelet): img[self.imgind,ind_begin+i]=img[self.imgind,ind_begin+i]+sample*self.amp return img """ Explanation: 2. Main class definition What we are now going to do is create a class named Hyperbola Each object of this class is capable of computing traveltimes to a certain subsurface point (diffractor) and plotting this point response (hyperbola) on a grid How? to more clearly define a class? 
probably change to a function? End of explanation """ Hyp_test = Hyperbola(xarray, tarray, x0 = 100, t0 = 30, v = 2) #Create a fugure and add axes to it fgr_test1 = plt.figure(figsize=(7,5), facecolor='w') ax_test1 = fgr_test1.add_subplot(111) #Now plot Hyp_test's parameters: X vs T ax_test1.plot(Hyp_test.x, Hyp_test.t, 'r', lw = 2) #and their 'gridded' equivalents ax_test1.plot(Hyp_test.x, Hyp_test.tgrid, ls='none', marker='o', ms=6, mfc=[0,0.5,1],mec='none') #Some commands to add gridlines, change the directon of T axis and move x axis to top ax_test1.set_ylim(tarray[-1],tarray[0]) ax_test1.xaxis.set_ticks_position('top') ax_test1.grid(True, alpha = 0.1, ls='-',lw=.5) ax_test1.set_xlabel('X, m') ax_test1.set_ylabel('T, ms') ax_test1.xaxis.set_label_position('top') plt.show() """ Explanation: For testing purposes, let's create an object named Hyp_test and view its parameters End of explanation """ point_diff_x0 = [100, 150, 210] point_diff_t0 = [100, 50, 70] plt.scatter(point_diff_x0,point_diff_t0, c='r',s=70) plt.xlim(0, xmax) plt.ylim(tmax, 0) plt.gca().set_xlabel('X, m') plt.gca().set_ylabel('T, ms') plt.gca().xaxis.set_ticks_position('top') plt.gca().xaxis.set_label_position('top') plt.gca().grid(True, alpha = 0.1, ls='-',lw=.5) """ Explanation: 3. Creating the model and 'forward modelling' OK, now let's define a subsurface model. For the sake of simplicity, the model will consist of two types of objects: 1. Point diffractor in a homogeneous medium * defined by their coordinates $(x_0, t_0)$ in data domain. 2. Plane reflecting surface * defined by their end points $(x_1, t_1)$ and $(x_2, t_2)$, also in data domain. We will be able to add any number of these objects to image. Let's start by adding three point diffractors: End of explanation """ hyps=[] for x0,t0 in zip(point_diff_x0,point_diff_t0): hyp_i = Hyperbola(xarray, tarray, x0, t0, v=2) hyps.append(hyp_i) """ Explanation: Next step is computing traveltimes for these subsurface diffractors. This is done by creating an instance of Hyperbola class for every diffractor. End of explanation """ wav1 = np.array([-1,2,-1]) with plt.xkcd(): plt.axhline(0,c='k') plt.stem((np.arange(len(wav1))-np.floor(len(wav1)/2)).astype(int) ,wav1) plt.gca().set_xlim(-2*len(wav1), 2*len(wav1)) plt.gca().set_ylim(np.min(wav1)-1, np.max(wav1)+1) for hyp_i in hyps: hyp_i.add_to_img(img,wav1) plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2]) plt.gca().xaxis.set_ticks_position('top') plt.gca().grid(ls=':', alpha=0.25, lw=1, c='w' ) """ Explanation: ~~Next step is computing Green's functions for these subsurface diffractors. To do this, we need to setup a wavelet.~~ Of course, we are going to create an extremely simple wavelet. 
End of explanation """ class Line: def __init__(self, xmin, xmax, tmin, tmax, xarray, tarray): self.xmin=xmin self.xmax=xmax self.tmin=tmin self.tmax=tmax xstep=xarray[1]-xarray[0] tstep=tarray[1]-tarray[0] xmin=xmin-np.mod(xmin,xstep) xmax=xmax-np.mod(xmax,xstep) tmin=tmin-np.mod(tmin,tstep) tmax=tmax-np.mod(tmax,tstep) self.x = np.arange(xmin,xmax+xstep,xstep) self.t = tmin+(tmax-tmin)*(self.x-xmin)/(xmax-xmin) self.imgind=((self.x-xarray[0])/xstep).astype(int) self.tind=((self.t-tarray[0])/tstep).astype(int) def add_to_img(self, img, wavelet): maxind=np.size(img,1) wavlen=np.floor(len(wavelet)/2).astype(int) self.imgind=self.imgind[self.tind < maxind-1] self.tind=self.tind[self.tind < maxind-1] ind_begin=self.tind-wavlen for i,sample in enumerate(wavelet): img[self.imgind,ind_begin+i]=img[self.imgind,ind_begin+i]+sample return img """ Explanation: Define a Line class End of explanation """ line1=Line(100,250,50,150,xarray,tarray) img=line1.add_to_img(img, [-1,2,-1]) line2=Line(40,270,175,100,xarray,tarray) img=line2.add_to_img(img, [-1,2,-1]) plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2]) plt.gca().xaxis.set_ticks_position('top') plt.gca().grid(ls=':', alpha=0.25, lw=1, c='w' ) """ Explanation: Create a line and add it to image End of explanation """ def migrate(img,v,aper,xarray,tarray): imgmig=np.zeros_like(img) xstep=xarray[1]-xarray[0] # print 'начинаем миграцию' # print 'апертура {0}, скорость {1},'.format(aper, v) # print '\n xarray: от {0} до {1} с шагом {2},'.format(xarray[0], xarray[-1], xstep) # print '\n tarray: от {0} до {1} с шагом {2},'.format(tarray[0], tarray[-1], tarray[1]-tarray[0]) # for x0 in xarray[(xarray>xarray[0]+aper) & (xarray<xarray[-1]-aper)]: for x0 in xarray: for t0 in tarray[1:-1]: # print "t0 = {0}, x0 = {1}".format(t0,x0) xmig=xarray[(x0-aper<=xarray) & (xarray<=x0+aper)] # print 'xmig = от', xmig[0],' до ', xmig[-1], ' отсчётов ', len(xmig) hi=Hyperbola(xmig,tarray,x0,t0,v) # print 'hi.x: от ', hi.x[0], ' до ', hi.x[-1] migind_start = hi.x[0]/xstep migind_stop = (hi.x[-1]+xstep)/xstep hi.imgind=np.arange(migind_start, migind_stop).astype(int) # si=np.sum(img[hi.imgind,hi.tind]) si=np.mean(img[hi.imgind,hi.tind]*hi.amp) # si=np.mean(img[hi.imgind,hi.tind]) imgmig[(x0/xstep).astype(int),(t0/tstep).astype(int)]=si # if ( (t0==3 and x0==10) or (t0==7 and x0==17) or (t0==11 and x0==12) ): # if ( (t0==8 and x0==20)): # ax_data.plot(hi.x,hi.t,c='m',lw=3,alpha=0.8) # ax_data.plot(hi.x0,hi.t0,marker='H', mfc='r', mec='m',ms=5) # for xi in xmig: # ax_data.plot([xi,hi.x0],[0,hi.t0],c='#AFFF94',lw=1.5,alpha=1) # return imgmig """ Explanation: Excellent. The image now is pretty messy, so we need to migrate it and see what we can achieve 4. 
Migration definition End of explanation """ vmig = 2 aper = 200 res = migrate(img, vmig, aper, xarray, tarray) plt.imshow(res.T,interpolation='none',vmin=-2,vmax=2,cmap=cm.Greys, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2]) #plt.imshow(res.T,cmap=cm.Greys,vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2]) #f_migv = plt.figure() def migshow(vmig_i, aper_i, gain_i, interp): res_i = migrate(img, vmig_i, aper_i, xarray, tarray) if interp: interp_style = 'bilinear' else: interp_style = 'none' plt.imshow(res_i.T,interpolation=interp_style,vmin=-gain_i,vmax=gain_i,cmap=cm.Greys, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2]) plt.title('Vmig = '+str(vmig_i)) plt.show() interact(migshow, vmig_i = widgets.FloatSlider(min = 1.0,max = 3.0, step = 0.01, value=2.0,continuous_update=False,description='Migration velocity: '), aper_i = widgets.IntSlider(min = 10,max = 500, step = 1, value=200,continuous_update=False,description='Migration aperture: '), gain_i = widgets.FloatSlider(min = 0.0,max = 5.0, step = 0.1, value=2.0,continuous_update=False,description='Gain: '), interp = widgets.Checkbox(value=True, description='interpolate')) #interact(migrate, img=fixed(img), v = widgets.IntSlider(min = 1.0,max = 3.0, step = 0.1, value=2), aper=fixed(aper), xarray=fixed(xarray), tarray=fixed(tarray)) """ Explanation: 5. Migration application End of explanation """
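"""
A non-interactive alternative to the widget above, useful when static output is needed (for example when exporting the notebook): loop over a few trial velocities and plot the migrated images side by side. The velocities and the aperture of 200 are arbitrary choices; everything else reuses the migrate() function and the grid variables defined earlier.
"""
trial_velocities = [1.6, 2.0, 2.4]
fig, axes = plt.subplots(1, len(trial_velocities), figsize=(15, 4))
for ax, v_trial in zip(axes, trial_velocities):
    res_v = migrate(img, v_trial, 200, xarray, tarray)
    ax.imshow(res_v.T, interpolation='none', vmin=-2, vmax=2, cmap=cm.Greys,
              extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2,
                      tarray[-1]+tstep/2, tarray[0]-tstep/2])
    ax.set_title('Vmig = ' + str(v_trial))
plt.show()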
xdnian/pyml
code/bonus/softmax-regression.ipynb
mit
%load_ext watermark %watermark -a '' -u -d -v -p matplotlib,numpy,scipy # to install watermark just uncomment the following line: #%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py %matplotlib inline """ Explanation: Sebastian Raschka, 2016 https://github.com/1iyiwei/pyml Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s). End of explanation """ import numpy as np y = np.array([0, 1, 2, 2]) """ Explanation: Bonus Material - Softmax Regression Softmax Regression (synonyms: Multinomial Logistic, Maximum Entropy Classifier, or just Multi-class Logistic Regression) is a generalization of logistic regression that we can use for multi-class classification (under the assumption that the classes are mutually exclusive). In contrast, we use the (standard) Logistic Regression model in binary classification tasks. Below is a schematic of a Logistic Regression model that we discussed in Chapter 3. In Softmax Regression (SMR), we replace the sigmoid logistic function by the so-called softmax function $\phi_{softmax}(\cdot)$. $$P(y=j \mid z^{(i)}) = \phi_{softmax}(z^{(i)}) = \frac{e^{z^{(i)}}}{\sum_{j=0}^{k} e^{z_{k}^{(i)}}},$$ where we define the net input z as $$z = w_1x_1 + ... + w_mx_m + b= \sum_{l=0}^{m} w_l x_l + b= \mathbf{w}^T\mathbf{x} + b.$$ (w is the weight vector, $\mathbf{x}$ is the feature vector of 1 training sample, and $b$ is the bias unit.) Now, this softmax function computes the probability that this training sample $\mathbf{x}^{(i)}$ belongs to class $j$ given the weight and net input $z^{(i)}$. So, we compute the probability $p(y = j \mid \mathbf{x^{(i)}; w}_j)$ for each class label in $j = 1, \ldots, k.$. Note the normalization term in the denominator which causes these class probabilities to sum up to one. To illustrate the concept of softmax, let us walk through a concrete example. Let's assume we have a training set consisting of 4 samples from 3 different classes (0, 1, and 2) $x_0 \rightarrow \text{class }0$ $x_1 \rightarrow \text{class }1$ $x_2 \rightarrow \text{class }2$ $x_3 \rightarrow \text{class }2$ End of explanation """ y_enc = (np.arange(np.max(y) + 1) == y[:, None]).astype(float) print('one-hot encoding:\n', y_enc) """ Explanation: First, we want to encode the class labels into a format that we can more easily work with; we apply one-hot encoding: End of explanation """ X = np.array([[0.1, 0.5], [1.1, 2.3], [-1.1, -2.3], [-1.5, -2.5]]) W = np.array([[0.1, 0.2, 0.3], [0.1, 0.2, 0.3]]) bias = np.array([0.01, 0.1, 0.1]) print('Inputs X:\n', X) print('\nWeights W:\n', W) print('\nbias:\n', bias) """ Explanation: A sample that belongs to class 0 (the first row) has a 1 in the first cell, a sample that belongs to class 2 has a 1 in the second cell of its row, and so forth. Next, let us define the feature matrix of our 4 training samples. Here, we assume that our dataset consists of 2 features; thus, we create a 4x2 dimensional matrix of our samples and features. Similarly, we create a 2x3 dimensional weight matrix (one row per feature and one column for each class). 
End of explanation """ X = np.array([[0.1, 0.5], [1.1, 2.3], [-1.1, -2.3], [-1.5, -2.5]]) W = np.array([[0.1, 0.2, 0.3], [0.1, 0.2, 0.3]]) bias = np.array([0.01, 0.1, 0.1]) print('Inputs X:\n', X) print('\nWeights W:\n', W) print('\nbias:\n', bias) def net_input(X, W, b): return (X.dot(W) + b) net_in = net_input(X, W, bias) print('net input:\n', net_in) """ Explanation: To compute the net input, we multiply the 4x2 matrix feature matrix X with the 2x3 (n_features x n_classes) weight matrix W, which yields a 4x3 output matrix (n_samples x n_classes) to which we then add the bias unit: $$\mathbf{Z} = \mathbf{X}\mathbf{W} + \mathbf{b}.$$ End of explanation """ def softmax(z): return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T smax = softmax(net_in) print('softmax:\n', smax) """ Explanation: Now, it's time to compute the softmax activation that we discussed earlier: $$P(y=j \mid z^{(i)}) = \phi_{softmax}(z^{(i)}) = \frac{e^{z^{(i)}}}{\sum_{j=0}^{k} e^{z_{k}^{(i)}}}.$$ End of explanation """ def to_classlabel(z): return z.argmax(axis=1) print('predicted class labels: ', to_classlabel(smax)) """ Explanation: As we can see, the values for each sample (row) nicely sum up to 1 now. E.g., we can say that the first sample [ 0.29450637 0.34216758 0.36332605] has a 29.45% probability to belong to class 0. Now, in order to turn these probabilities back into class labels, we could simply take the argmax-index position of each row: [[ 0.29450637 0.34216758 0.36332605] -> 2 [ 0.21290077 0.32728332 0.45981591] -> 2 [ 0.42860913 0.33380113 0.23758974] -> 0 [ 0.44941979 0.32962558 0.22095463]] -> 0 End of explanation """ def cross_entropy(output, y_target): return - np.sum(np.log(output) * (y_target), axis=1) xent = cross_entropy(smax, y_enc) print('Cross Entropy:', xent) def cost(output, y_target): return np.mean(cross_entropy(output, y_target)) J_cost = cost(smax, y_enc) print('Cost: ', J_cost) """ Explanation: As we can see, our predictions are terribly wrong, since the correct class labels are [0, 1, 2, 2]. Now, in order to train our logistic model (e.g., via an optimization algorithm such as gradient descent), we need to define a cost function $J(\cdot)$ that we want to minimize: $$J(\mathbf{W}; \mathbf{b}) = \frac{1}{n} \sum_{i=1}^{n} H(T_i, O_i),$$ which is the average of all cross-entropies over our $n$ training samples. The cross-entropy function is defined as $$H(T_i, O_i) = -\sum_m T_i \cdot log(O_i).$$ Here the $T$ stands for "target" (i.e., the true class labels) and the $O$ stands for output -- the computed probability via softmax; not the predicted class label. End of explanation """ # Sebastian Raschka 2016 # Implementation of the mulitnomial logistic regression algorithm for # classification. # Author: Sebastian Raschka <sebastianraschka.com> # # License: BSD 3 clause import numpy as np from time import time #from .._base import _BaseClassifier #from .._base import _BaseMultiClass class SoftmaxRegression(object): """Softmax regression classifier. Parameters ------------ eta : float (default: 0.01) Learning rate (between 0.0 and 1.0) epochs : int (default: 50) Passes over the training dataset. Prior to each epoch, the dataset is shuffled if `minibatches > 1` to prevent cycles in stochastic gradient descent. l2 : float Regularization parameter for L2 regularization. No regularization if l2=0.0. minibatches : int (default: 1) The number of minibatches for gradient-based optimization. 
If 1: Gradient Descent learning If len(y): Stochastic Gradient Descent (SGD) online learning If 1 < minibatches < len(y): SGD Minibatch learning n_classes : int (default: None) A positive integer to declare the number of class labels if not all class labels are present in a partial training set. Gets the number of class labels automatically if None. random_seed : int (default: None) Set random state for shuffling and initializing the weights. Attributes ----------- w_ : 2d-array, shape={n_features, 1} Model weights after fitting. b_ : 1d-array, shape={1,} Bias unit after fitting. cost_ : list List of floats, the average cross_entropy for each epoch. """ def __init__(self, eta=0.01, epochs=50, l2=0.0, minibatches=1, n_classes=None, random_seed=None): self.eta = eta self.epochs = epochs self.l2 = l2 self.minibatches = minibatches self.n_classes = n_classes self.random_seed = random_seed def _fit(self, X, y, init_params=True): if init_params: if self.n_classes is None: self.n_classes = np.max(y) + 1 self._n_features = X.shape[1] self.b_, self.w_ = self._init_params( weights_shape=(self._n_features, self.n_classes), bias_shape=(self.n_classes,), random_seed=self.random_seed) self.cost_ = [] y_enc = self._one_hot(y=y, n_labels=self.n_classes, dtype=np.float) for i in range(self.epochs): for idx in self._yield_minibatches_idx( n_batches=self.minibatches, data_ary=y, shuffle=True): # givens: # w_ -> n_feat x n_classes # b_ -> n_classes # net_input, softmax and diff -> n_samples x n_classes: net = self._net_input(X[idx], self.w_, self.b_) softm = self._softmax(net) diff = softm - y_enc[idx] mse = np.mean(diff, axis=0) # gradient -> n_features x n_classes grad = np.dot(X[idx].T, diff) # update in opp. direction of the cost gradient self.w_ -= (self.eta * grad + self.eta * self.l2 * self.w_) self.b_ -= (self.eta * np.sum(diff, axis=0)) # compute cost of the whole epoch net = self._net_input(X, self.w_, self.b_) softm = self._softmax(net) cross_ent = self._cross_entropy(output=softm, y_target=y_enc) cost = self._cost(cross_ent) self.cost_.append(cost) return self def fit(self, X, y, init_params=True): """Learn model from training data. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target values. init_params : bool (default: True) Re-initializes model parametersprior to fitting. Set False to continue training with weights from a previous model fitting. Returns ------- self : object """ if self.random_seed is not None: np.random.seed(self.random_seed) self._fit(X=X, y=y, init_params=init_params) self._is_fitted = True return self def _predict(self, X): probas = self.predict_proba(X) return self._to_classlabels(probas) def predict(self, X): """Predict targets from X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ---------- target_values : array-like, shape = [n_samples] Predicted target values. """ if not self._is_fitted: raise AttributeError('Model is not fitted, yet.') return self._predict(X) def predict_proba(self, X): """Predict class probabilities of X from the net input. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. 
Returns ---------- Class probabilties : array-like, shape= [n_samples, n_classes] """ net = self._net_input(X, self.w_, self.b_) softm = self._softmax(net) return softm def _net_input(self, X, W, b): return (X.dot(W) + b) def _softmax(self, z): return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T def _cross_entropy(self, output, y_target): return - np.sum(np.log(output) * (y_target), axis=1) def _cost(self, cross_entropy): L2_term = self.l2 * np.sum(self.w_ ** 2) cross_entropy = cross_entropy + L2_term return 0.5 * np.mean(cross_entropy) def _to_classlabels(self, z): return z.argmax(axis=1) def _init_params(self, weights_shape, bias_shape=(1,), dtype='float64', scale=0.01, random_seed=None): """Initialize weight coefficients.""" if random_seed: np.random.seed(random_seed) w = np.random.normal(loc=0.0, scale=scale, size=weights_shape) b = np.zeros(shape=bias_shape) return b.astype(dtype), w.astype(dtype) def _one_hot(self, y, n_labels, dtype): """Returns a matrix where each sample in y is represented as a row, and each column represents the class label in the one-hot encoding scheme. Example: y = np.array([0, 1, 2, 3, 4, 2]) mc = _BaseMultiClass() mc._one_hot(y=y, n_labels=5, dtype='float') np.array([[1., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.], [0., 0., 0., 1., 0.], [0., 0., 0., 0., 1.], [0., 0., 1., 0., 0.]]) """ mat = np.zeros((len(y), n_labels)) for i, val in enumerate(y): mat[i, val] = 1 return mat.astype(dtype) def _yield_minibatches_idx(self, n_batches, data_ary, shuffle=True): indices = np.arange(data_ary.shape[0]) if shuffle: indices = np.random.permutation(indices) if n_batches > 1: remainder = data_ary.shape[0] % n_batches if remainder: minis = np.array_split(indices[:-remainder], n_batches) minis[-1] = np.concatenate((minis[-1], indices[-remainder:]), axis=0) else: minis = np.array_split(indices, n_batches) else: minis = (indices,) for idx_batch in minis: yield idx_batch def _shuffle_arrays(self, arrays): """Shuffle arrays in unison.""" r = np.random.permutation(len(arrays[0])) return [ary[r] for ary in arrays] """ Explanation: In order to learn our softmax model -- determining the weight coefficients -- via gradient descent, we then need to compute the derivative $$\nabla \mathbf{w}_j \, J(\mathbf{W}; \mathbf{b}).$$ I don't want to walk through the tedious details here, but this cost derivative turns out to be simply: $$\nabla \mathbf{w}j \, J(\mathbf{W}; \mathbf{b}) = \frac{1}{n} \sum^{n}{i=0} \big[\mathbf{x}^{(i)}\ \big(O_i - T_i \big) \big]$$ We can then use the cost derivate to update the weights in opposite direction of the cost gradient with learning rate $\eta$: $$\mathbf{w}_j := \mathbf{w}_j - \eta \nabla \mathbf{w}_j \, J(\mathbf{W}; \mathbf{b})$$ for each class $$j \in {0, 1, ..., k}$$ (note that $\mathbf{w}_j$ is the weight vector for the class $y=j$), and we update the bias units $$\mathbf{b}j := \mathbf{b}_j - \eta \bigg[ \frac{1}{n} \sum^{n}{i=0} \big(O_i - T_i \big) \bigg].$$ As a penalty against complexity, an approach to reduce the variance of our model and decrease the degree of overfitting by adding additional bias, we can further add a regularization term such as the L2 term with the regularization parameter $\lambda$: L2: $\frac{\lambda}{2} ||\mathbf{w}||_{2}^{2}$, where $$||\mathbf{w}||{2}^{2} = \sum^{m}{l=0} \sum^{k}{j=0} w{i, j}$$ so that our cost function becomes $$J(\mathbf{W}; \mathbf{b}) = \frac{1}{n} \sum_{i=1}^{n} H(T_i, O_i) + \frac{\lambda}{2} ||\mathbf{w}||_{2}^{2}$$ and we define the "regularized" weight update as 
$$\mathbf{w}_j := \mathbf{w}_j - \eta \big[\nabla \mathbf{w}_j \, J(\mathbf{W}) + \lambda \mathbf{w}_j \big].$$ (Please note that we don't regularize the bias term.) SoftmaxRegression Code Bringing the concepts together, we could come up with an implementation as follows: End of explanation """ from mlxtend.data import iris_data from mlxtend.evaluate import plot_decision_regions import matplotlib.pyplot as plt # Loading Data X, y = iris_data() X = X[:, [0, 3]] # sepal length and petal width # standardize X[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std() X[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std() lr = SoftmaxRegression(eta=0.01, epochs=10, minibatches=1, random_seed=0) lr.fit(X, y) plot_decision_regions(X, y, clf=lr) plt.title('Softmax Regression - Gradient Descent') plt.show() plt.plot(range(len(lr.cost_)), lr.cost_) plt.xlabel('Iterations') plt.ylabel('Cost') plt.show() """ Explanation: Example 1 - Gradient Descent End of explanation """ lr.epochs = 800 lr.fit(X, y, init_params=False) plot_decision_regions(X, y, clf=lr) plt.title('Softmax Regression - Stochastic Gradient Descent') plt.show() plt.plot(range(len(lr.cost_)), lr.cost_) plt.xlabel('Iterations') plt.ylabel('Cost') plt.show() """ Explanation: Continue training for another 800 epochs by calling the fit method with init_params=False. End of explanation """ y_pred = lr.predict(X) print('Last 3 Class Labels: %s' % y_pred[-3:]) """ Explanation: Predicting Class Labels End of explanation """ y_pred = lr.predict_proba(X) print('Last 3 Class Labels:\n %s' % y_pred[-3:]) """ Explanation: Predicting Class Probabilities End of explanation """ from mlxtend.data import iris_data from mlxtend.evaluate import plot_decision_regions from mlxtend.classifier import SoftmaxRegression import matplotlib.pyplot as plt # Loading Data X, y = iris_data() X = X[:, [0, 3]] # sepal length and petal width # standardize X[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std() X[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std() lr = SoftmaxRegression(eta=0.05, epochs=200, minibatches=len(y), random_seed=0) lr.fit(X, y) plot_decision_regions(X, y, clf=lr) plt.title('Softmax Regression - Stochastic Gradient Descent') plt.show() plt.plot(range(len(lr.cost_)), lr.cost_) plt.xlabel('Iterations') plt.ylabel('Cost') plt.show() """ Explanation: Example 2 - Stochastic Gradient Descent End of explanation """
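"""
To connect the gradient formula above with code, here is a minimal numpy sketch that evaluates it for the toy example. X_toy simply repeats the 4x2 feature matrix defined near the beginning (the name X was later reused for the Iris data), while smax and y_enc are the softmax outputs and one-hot targets computed earlier. It only illustrates the shapes involved and is not a replacement for the SoftmaxRegression implementation.
"""
X_toy = np.array([[0.1, 0.5],
                  [1.1, 2.3],
                  [-1.1, -2.3],
                  [-1.5, -2.5]])

# dJ/dW has shape (n_features, n_classes); dJ/db has shape (n_classes,)
grad_W = X_toy.T.dot(smax - y_enc) / X_toy.shape[0]
grad_b = np.mean(smax - y_enc, axis=0)

print('gradient w.r.t. W:\n', grad_W)
print('gradient w.r.t. bias:\n', grad_b)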
hunterherrin/phys202-2015-work
assignments/assignment03/NumpyEx04.ipynb
mit
import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns """ Explanation: Numpy Exercise 4 Imports End of explanation """ import networkx as nx K_5=nx.complete_graph(5) nx.draw(K_5) """ Explanation: Complete graph Laplacian In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules. A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node. Here is $K_5$: End of explanation """ def complete_deg(n): """Return the integer valued degree matrix D for the complete graph K_n.""" z=np.zeros((n,n), dtype=int) np.fill_diagonal(z,(n-1)) return z D = complete_deg(5) assert D.shape==(5,5) assert D.dtype==np.dtype(int) assert np.all(D.diagonal()==4*np.ones(5)) assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int)) """ Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple. The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy. End of explanation """ def complete_adj(n): """Return the integer valued adjacency matrix A for the complete graph K_n.""" u = np.zeros((n,n), dtype=int) u = u + 1 np.fill_diagonal(u,0) return u A = complete_adj(5) assert A.shape==(5,5) assert A.dtype==np.dtype(int) assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int)) """ Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy. End of explanation """ def Laplacian(n): return complete_deg(n) - complete_adj(n) for n in range(1,10): print(np.linalg.eigvals(Laplacian(n))) """ Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. End of explanation """
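"""
One way to state the conjecture explicitly and test it numerically: for K_n the Laplacian spectrum should consist of a single 0 together with the eigenvalue n repeated n-1 times. The check below reuses the Laplacian() function defined above; np.allclose is used because eigvals returns floating point values, and starting at n=2 just skips the trivial one-node graph.
"""
for n in range(2, 10):
    eigs = np.sort(np.linalg.eigvals(Laplacian(n)).real)
    expected = np.array([0] + [n] * (n - 1))
    print(n, np.allclose(eigs, expected))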
BinRoot/TensorFlow-Book
ch04_classification/Concept03_logistic2d.ipynb
mit
%matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt learning_rate = 0.1 training_epochs = 2000 """ Explanation: Ch 04: Concept 03 Logistic regression in higher dimensions Set up the imports and hyper-parameters End of explanation """ x1_label1 = np.random.normal(3, 1, 1000) x2_label1 = np.random.normal(2, 1, 1000) x1_label2 = np.random.normal(7, 1, 1000) x2_label2 = np.random.normal(6, 1, 1000) x1s = np.append(x1_label1, x1_label2) x2s = np.append(x2_label1, x2_label2) ys = np.asarray([0.] * len(x1_label1) + [1.] * len(x1_label2)) """ Explanation: Define positive and negative to classify 2D data points: End of explanation """ X1 = tf.placeholder(tf.float32, shape=(None,), name="x1") X2 = tf.placeholder(tf.float32, shape=(None,), name="x2") Y = tf.placeholder(tf.float32, shape=(None,), name="y") w = tf.Variable([0., 0., 0.], name="w", trainable=True) y_model = tf.sigmoid(-(w[2] * X2 + w[1] * X1 + w[0])) cost = tf.reduce_mean(-tf.log(y_model * Y + (1 - y_model) * (1 - Y))) train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) """ Explanation: Define placeholders, variables, model, and the training op: End of explanation """ with tf.Session() as sess: sess.run(tf.global_variables_initializer()) prev_err = 0 for epoch in range(training_epochs): err, _ = sess.run([cost, train_op], {X1: x1s, X2: x2s, Y: ys}) if epoch % 100 == 0: print(epoch, err) if abs(prev_err - err) < 0.0001: break prev_err = err w_val = sess.run(w, {X1: x1s, X2: x2s, Y: ys}) """ Explanation: Train the model on the data in a session: End of explanation """ x1_boundary, x2_boundary = [], [] with tf.Session() as sess: for x1_test in np.linspace(0, 10, 20): for x2_test in np.linspace(0, 10, 20): z = sess.run(tf.sigmoid(-x2_test*w_val[2] - x1_test*w_val[1] - w_val[0])) if abs(z - 0.5) < 0.05: x1_boundary.append(x1_test) x2_boundary.append(x2_test) """ Explanation: Here's one hacky, but simple, way to figure out the decision boundary of the classifier: End of explanation """ plt.scatter(x1_boundary, x2_boundary, c='b', marker='o', s=20) plt.scatter(x1_label1, x2_label1, c='r', marker='x', s=20) plt.scatter(x1_label2, x2_label2, c='g', marker='1', s=20) plt.show() """ Explanation: Ok, enough code. Let's see some a pretty plot: End of explanation """
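"""
A less hacky alternative to the grid search above, shown as a sketch: the model output equals 0.5 exactly where w[0] + w[1]*x1 + w[2]*x2 = 0, so the decision boundary can be drawn directly from the trained w_val (assuming w_val[2] is nonzero, which it will be for a sensible fit on this data).
"""
x1_line = np.linspace(0, 10, 100)
x2_line = -(w_val[0] + w_val[1] * x1_line) / w_val[2]

plt.plot(x1_line, x2_line, 'b-', label='decision boundary')
plt.scatter(x1_label1, x2_label1, c='r', marker='x', s=20)
plt.scatter(x1_label2, x2_label2, c='g', marker='1', s=20)
plt.legend()
plt.show()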
tpin3694/tpin3694.github.io
machine-learning/convert_pandas_categorical_column_into_integers_for_scikit-learn.ipynb
mit
# Import required packages from sklearn import preprocessing import pandas as pd """ Explanation: Title: Convert Pandas Categorical Data For Scikit-Learn Slug: convert_pandas_categorical_column_into_integers_for_scikit-learn Summary: Convert Pandas Categorical Column Into Integers For Scikit-Learn Date: 2016-11-30 12:00 Category: Machine Learning Tags: Preprocessing Structured Data Authors: Chris Albon Preliminaries End of explanation """ raw_data = {'patient': [1, 1, 1, 2, 2], 'obs': [1, 2, 3, 1, 2], 'treatment': [0, 1, 0, 1, 0], 'score': ['strong', 'weak', 'normal', 'weak', 'strong']} df = pd.DataFrame(raw_data, columns = ['patient', 'obs', 'treatment', 'score']) """ Explanation: Create DataFrame End of explanation """ # Create a label (category) encoder object le = preprocessing.LabelEncoder() # Fit the encoder to the pandas column le.fit(df['score']) """ Explanation: Fit The Label Encoder End of explanation """ # View the labels (if you want) list(le.classes_) """ Explanation: View The Labels End of explanation """ # Apply the fitted encoder to the pandas column le.transform(df['score']) """ Explanation: Transform Categories Into Integers End of explanation """ # Convert some integers into their category names list(le.inverse_transform([2, 2, 1])) """ Explanation: Transform Integers Into Categories End of explanation """
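"""
Two shortcuts worth knowing, added here as an aside rather than as part of the original recipe: LabelEncoder's fit_transform performs the fit and the transform in a single call, and pandas itself can produce integer codes through the 'category' dtype. For these string labels both orderings are alphabetical, so the two encodings happen to match; the new column names are arbitrary.
"""
# scikit-learn: fit and transform in one step, stored as a new column
df['score_encoded'] = preprocessing.LabelEncoder().fit_transform(df['score'])

# pandas-only alternative
df['score_codes'] = df['score'].astype('category').cat.codes

df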
lwcook/horsetail-matching
notebooks/Gradients.ipynb
mit
import numpy import matplotlib.pyplot as plt from horsetailmatching import UniformParameter, IntervalParameter, HorsetailMatching from horsetailmatching.demoproblems import TP1, TP2 """ Explanation: In this notebook we look at how to use the gradient of the horsetail matching metric to speed up optimizations (in terms of number of evaluations of the quantity of interest). End of explanation """ u1 = UniformParameter(lower_bound=-1, upper_bound=1) u2 = UniformParameter(lower_bound=-1, upper_bound=1) input_uncertainties = [u1, u2] """ Explanation: First we will look at the purely probabilistic case and a simple test problem. We set up the uncertain parameters and create the horsetail matching object as usual. End of explanation """ def fun_qjac(x, u): return TP1(x, u, jac=True) # Returns both qoi and its gradient def fun_q(x, u): return TP1(x, u, jac=False) # Returns just the qoi def fun_jac(x, u): return TP1(x, u, jac=True)[1] # Returns just the gradient theHM = HorsetailMatching(fun_qjac, input_uncertainties, jac=True, method='kernel', kernel_bandwidth=0.001, samples_prob=2000, integration_points=numpy.linspace(-10, 100, 5000)) theHM = HorsetailMatching(fun_q, input_uncertainties, jac=fun_jac, method='empirical', samples_prob=2000) print(theHM.evalMetric([1,1])) """ Explanation: Horsetail matching uses the same syntax for specifying a gradient as the scipy.minimize function: through the 'jac' argument. If 'jac' is True, then horsetail matching expects the qoi function to also return the jacobian of the qoi (the gradient with respect to the design variables). Alternatively 'jac' is a fuction that takes two inputs (the values of the design variables and uncertainties), and returns the gradient. The following code demonstrates these alternatives: End of explanation """ from scipy.optimize import minimize solution = minimize(theHM.evalMetric, x0=[1,1], method='BFGS', jac=True) print(solution) (x1, y1, t1), (x2, y2, t2), CDFs = theHM.getHorsetail() for (x, y) in CDFs: plt.plot(x, y, c='grey', lw=0.5) plt.plot(x1, y1, 'r') plt.plot(t1, y1, 'k--') plt.xlim([-1, 5]) plt.ylim([0, 1]) plt.xlabel('Quantity of Interest') plt.show() """ Explanation: The gradient can be evaluated using either the 'empirical' or 'kernel' based methods, however the 'empirical' method can sometimes give discontinuous gradients and so in general the 'kernel' based method is preferred. Note that when we are using kernels to evaluate the horsetail plot (with the method 'kernel'), it is important to provide integration points that cover the range of values of q that designs visited in the optimization might reach. Integrations points far beyond the range of samples are not evaluated and so this range can be made to be large without taking a computational penalty. Additionally, here we specified the kernel_bandwidth which is fixed throughout an optimization. If this is not specified, Scott's rule is used on the samples from the initial design to determine the bandwidth. 
Now we can use this in a gradient based optimizer: End of explanation """ def fun_qjac(x, u): return TP2(x, u, jac=True) # Returns both qoi and its gradient u1 = UniformParameter() u2 = IntervalParameter() theHM = HorsetailMatching(fun_qjac, u1, u2, jac=True, method='kernel', samples_prob=500, samples_int=50, integration_points=numpy.linspace(-20, 100, 3000), verbose=True) solution = minimize(theHM.evalMetric, x0=[1, 1], method='BFGS', jac=True) print(solution) """ Explanation: Once again the optimizer has found the optimum where the CDF is a step function, but this time in fewer iterations. We can also use gradients for optimization under mixed uncertainties in exactly the same way. The example below performs the optimization of TP2 like in the mixed uncertainties tutorial, but this time using gradients. Note that we turn on the verbosity so we can see what the horsetail matching object is doing at each design point. End of explanation """ upper, lower, CDFs = theHM.getHorsetail() (q1, h1, t1) = upper (q2, h2, t2) = lower for CDF in CDFs: plt.plot(CDF[0], CDF[1], c='grey', lw=0.05) plt.plot(q1, h1, 'r') plt.plot(q2, h2, 'r') plt.plot(t1, h1, 'k--') plt.plot(t2, h2, 'k--') plt.xlim([0, 15]) plt.ylim([0, 1]) plt.xlabel('Quantity of Interest') plt.show() """ Explanation: To plot the optimum solution... End of explanation """
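"""
When experimenting with gradient-based optimizers it can be useful to record the designs visited at each iteration. scipy's minimize accepts a callback for exactly this; the sketch below simply re-runs the mixed-uncertainty problem set up above with such a callback. The history list and the record_design helper are local names introduced here for illustration, not part of the horsetail matching API.
"""
history = []

def record_design(xk):
    # called by minimize once per iteration with the current design point
    history.append(numpy.array(xk))

solution = minimize(theHM.evalMetric, x0=[1, 1], method='BFGS', jac=True,
                    callback=record_design)
print('designs visited:')
for xk in history:
    print(xk)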
aapeebles/tibertraining
Markdown.ipynb
mit
Header 1 ======== Header 2 -------- """ Explanation: Markdown What is it? Markdown is a markup language with plain text formatting, designed so that it can be converted to HTML. Markdown can be used to create rich text using a plain text editor. Why should I care? Markdown is your key to formatting the text you provide for this site. By learning a few intuitive rules you’ll be able to ensure your text is formatted with headings, list, quotes etc, without writing any HTML. Markdown Examples This is a paragraph. This is a paragraph. End of explanation """ __Headings__: Use #s followed by a blank space for notebook titles and section headings: # title ## major headings ### subheadings #### 4th level subheadings # Header 1 ## Header 2 ### Header 3 #### Header 4 ##### Header 5 ###### Header 6 """ Explanation: Header 1 Header 2 End of explanation """ __Indenting:__ Use a greater than sign (>) and then a space, then type the text. Everything is indented until the next carriage return. > *Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus. Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.* """ Explanation: Headings: Use #s followed by a blank space for notebook titles and section headings: title major headings subheadings 4th level subheadings Header 1 Header 2 Header 3 Header 4 Header 5 Header 6 End of explanation """ > ## This is a header. > 1. This is the first list item. > 2. This is the second list item. > > Here's some example code: > > Markdown.generate(); """ Explanation: Indenting: Use a greater than sign (>) and then a space, then type the text. Everything is indented until the next carriage return. Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus. Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus. End of explanation """ __Bullets__: Use the dash sign (- ) with two spaces after it or a space, a dash, and a space ( - ), to create a circular bullet. To create a sub bullet, use a tab followed a dash and two spaces. You can also use an asterisk instead of a dash, and it works the same. - <font color=red>Red</font> - This is a sub bullet for red. * <font color=green>Green</font> * <font color=blue>Blue</font> """ Explanation: This is a header. This is the first list item. This is the second list item. Here's some example code: Markdown.generate(); End of explanation """ __Numbered lists__: Start with 1. followed by a space, then it starts numbering for you. Start each line with some number and a period, then a space. Tab to indent to get subnumbering. 1. Buy flour and salt 1. Mix together with water 1. Bake 1. Or go to the store and buy a cake! """ Explanation: Bullets: Use the dash sign (- ) with two spaces after it or a space, a dash, and a space ( - ), to create a circular bullet. To create a sub bullet, use a tab followed a dash and two spaces. You can also use an asterisk instead of a dash, and it works the same. - <font color=red>Red</font> - This is a sub bullet for red. * <font color=green>Green</font> * <font color=blue>Blue</font> End of explanation """ __Internal links__: To link to a section, you need to add an ID for it right above the section title. Use this code: <a id="section_ID"></a> This is [an example](http://datahub.io/) link. This is [an example] [ok] reference-style link. [ok]: https://okfn.org/ """ Explanation: Numbered lists: Start with 1. followed by a space, then it starts numbering for you. 
Start each line with some number and a period, then a space. Tab to indent to get subnumbering. Buy flour and salt Mix together with water Bake Or go to the store and buy a cake! End of explanation """ Graphics: You can use only graphics that are hosted on the web. You can't add captions for graphics at this time. Use this code: <img src="url.gif" alt="Alt text that describes the graphic" title="Title text" /> <img src="https://i.makeagif.com/media/11-28-2015/NjrA18.gif" alt="Jupiter" title="Jupiter Spinning" /> """ Explanation: Internal links: To link to a section, you need to add an ID for it right above the section title. Use this code: <a id="section_ID"></a> This is an example link. This is an example reference-style link. End of explanation """ **External Links:** referencing directly: [Visit GitHub!](www.github.com) or indectly: [Really visit Github!][a good site] [a good site]:www.github.com """ Explanation: Graphics: You can use only graphics that are hosted on the web. You can't add captions for graphics at this time. Use this code: <img src="url.gif" alt="Alt text that describes the graphic" title="Title text" /> <img src="https://i.makeagif.com/media/11-28-2015/NjrA18.gif" alt="Jupiter" title="Jupiter Spinning" /> End of explanation """ You can even reference code within a markdown cell Use the `printf()` function. """ Explanation: External Links referencing directly: Visit GitHub! or indectly: Really visit Github! End of explanation """ **References:** 1. http://opendatahandbook.org/contribute/markdown-examples/ 1.https://datascience.ibm.com/blog/markdown-for-jupyter-notebooks-cheatsheet/ """ Explanation: You can even reference code within a markdown cell Use the printf() function. You can include math functions to explain your reasoning by wrapping dollar signs around it: $$c = \sqrt{a^2 + b^2}$$ You can include math functions to explain your reasoning by wrapping dollar signs around it: $$c = \sqrt{a^2 + b^2}$$ End of explanation """
andrenatal/DeepSpeech
DeepSpeech.ipynb
mpl-2.0
import os import time import json import datetime import tempfile import subprocess import numpy as np from math import ceil from xdg import BaseDirectory as xdg import tensorflow as tf from util.log import merge_logs from util.gpu import get_available_gpus from util.shared_lib import check_cupti from util.text import sparse_tensor_value_to_texts, wers from tensorflow.python.ops import ctc_ops from tensorflow.contrib.session_bundle import exporter ds_importer = os.environ.get('ds_importer', 'ted') ds_dataset_path = os.environ.get('ds_dataset_path', os.path.join('./data', ds_importer)) import importlib ds_importer_module = importlib.import_module('util.importers.%s' % ds_importer) from util.website import maybe_publish do_fulltrace = bool(len(os.environ.get('ds_do_fulltrace', ''))) if do_fulltrace: check_cupti() """ Explanation: Introduction In this notebook we will reproduce the results of Deep Speech: Scaling up end-to-end speech recognition. The core of the system is a bidirectional recurrent neural network (BRNN) trained to ingest speech spectrograms and generate English text transcriptions. Let a single utterance $x$ and label $y$ be sampled from a training set $S = {(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), . . .}$. Each utterance, $x^{(i)}$ is a time-series of length $T^{(i)}$ where every time-slice is a vector of audio features, $x^{(i)}t$ where $t=1,\ldots,T^{(i)}$. We use MFCC as our features; so $x^{(i)}{t,p}$ denotes the $p$-th MFCC feature in the audio frame at time $t$. The goal of our BRNN is to convert an input sequence $x$ into a sequence of character probabilities for the transcription $y$, with $\hat{y}_t =\mathbb{P}(c_t \mid x)$, where $c_t \in {a,b,c, . . . , z, space, apostrophe, blank}$. (The significance of $blank$ will be explained below.) Our BRNN model is composed of $5$ layers of hidden units. For an input $x$, the hidden units at layer $l$ are denoted $h^{(l)}$ with the convention that $h^{(0)}$ is the input. The first three layers are not recurrent. For the first layer, at each time $t$, the output depends on the MFCC frame $x_t$ along with a context of $C$ frames on each side. (We typically use $C \in {5, 7, 9}$ for our experiments.) The remaining non-recurrent layers operate on independent data for each time step. Thus, for each time $t$, the first $3$ layers are computed by: $$h^{(l)}_t = g(W^{(l)} h^{(l-1)}_t + b^{(l)})$$ where $g(z) = \min{\max{0, z}, 20}$ is a clipped rectified-linear (ReLu) activation function and $W^{(l)}$, $b^{(l)}$ are the weight matrix and bias parameters for layer $l$. The fourth layer is a bidirectional recurrent layer[1]. This layer includes two sets of hidden units: a set with forward recurrence, $h^{(f)}$, and a set with backward recurrence $h^{(b)}$: $$h^{(f)}t = g(W^{(4)} h^{(3)}_t + W^{(f)}_r h^{(f)}{t-1} + b^{(4)})$$ $$h^{(b)}t = g(W^{(4)} h^{(3)}_t + W^{(b)}_r h^{(b)}{t+1} + b^{(4)})$$ Note that $h^{(f)}$ must be computed sequentially from $t = 1$ to $t = T^{(i)}$ for the $i$-th utterance, while the units $h^{(b)}$ must be computed sequentially in reverse from $t = T^{(i)}$ to $t = 1$. The fifth (non-recurrent) layer takes both the forward and backward units as inputs $$h^{(5)} = g(W^{(5)} h^{(4)} + b^{(5)})$$ where $h^{(4)} = h^{(f)} + h^{(b)}$. 
The output layer consists of standard logits that correspond to the predicted character probabilities for each time slice $t$ and character $k$ in the alphabet: $$h^{(6)}_{t,k} = \hat{y}_{t,k} = (W^{(6)} h^{(5)}_t)_k + b^{(6)}_k$$ Here $b^{(6)}_k$ denotes the $k$-th bias and $(W^{(6)} h^{(5)}_t)_k$ the $k$-th element of the matrix product. Once we have computed a prediction for $\hat{y}_{t,k}$, we compute the CTC loss[2] $\cal{L}(\hat{y}, y)$ to measure the error in prediction. During training, we can evaluate the gradient $\nabla \cal{L}(\hat{y}, y)$ with respect to the network outputs given the ground-truth character sequence $y$. From this point, computing the gradient with respect to all of the model parameters may be done via back-propagation through the rest of the network. We use the Adam method for training[3]. The complete BRNN model is illustrated in the figure below. Preliminaries Imports Here we first import all of the packages we require to implement the DeepSpeech BRNN. End of explanation """ learning_rate = float(os.environ.get('ds_learning_rate', 0.001)) # TODO: Determine a reasonable value for this beta1 = float(os.environ.get('ds_beta1', 0.9)) # TODO: Determine a reasonable value for this beta2 = float(os.environ.get('ds_beta2', 0.999)) # TODO: Determine a reasonable value for this epsilon = float(os.environ.get('ds_epsilon', 1e-8)) # TODO: Determine a reasonable value for this training_iters = int(os.environ.get('ds_training_iters', 15)) # TODO: Determine a reasonable value for this batch_size = int(os.environ.get('ds_batch_size', 64)) # TODO: Determine a reasonable value for this display_step = int(os.environ.get('ds_display_step', 1)) # TODO: Determine a reasonable value for this validation_step = int(os.environ.get('ds_validation_step', 1)) # TODO: Determine a reasonable value for this checkpoint_step = int(os.environ.get('ds_checkpoint_step', 5)) # TODO: Determine a reasonable value for this checkpoint_dir = os.environ.get('ds_checkpoint_dir', xdg.save_data_path('deepspeech')) export_dir = os.environ.get('ds_export_dir', None) export_version = 1 use_warpctc = bool(len(os.environ.get('ds_use_warpctc', ''))) default_stddev = float(os.environ.get('ds_default_stddev', 0.1)) for var in ['b1', 'h1', 'b2', 'h2', 'b3', 'h3', 'b5', 'h5', 'b6', 'h6']: locals()['%s_stddev' % var] = float(os.environ.get('ds_%s_stddev' % var, default_stddev)) """ Explanation: Global Constants Next we introduce several constants used in the algorithm below. 
In particular, we define * learning_rate - The learning rate we will employ in Adam optimizer[3] * training_iters - The number of iterations we will train for * batch_size - The number of elements in a batch * display_step - The number of epochs we cycle through before displaying progress * checkpoint_step - The number of epochs we cycle through before checkpointing the model * checkpoint_dir - The directory in which checkpoints are stored * export_dir - The directory in which exported models are stored * export_version - The version number of the exported model * default_stddev - The default standard deviation to use when initialising weights and biases * [bh][12356]_stddev - Individual standard deviations to use when initialising particular weights and biases End of explanation """ dropout_rate = float(os.environ.get('ds_dropout_rate', 0.05)) # TODO: Validate this is a reasonable value # This global placeholder will be used for all dropout definitions dropout_rate_placeholder = tf.placeholder(tf.float32) # The feed_dict used for training employs the given dropout_rate feed_dict_train = { dropout_rate_placeholder: dropout_rate } # While the feed_dict used for validation, test and train progress reporting employs zero dropout feed_dict = { dropout_rate_placeholder: 0.0 } """ Explanation: Note that we use the Adam optimizer[3] instead of Nesterov’s Accelerated Gradient [4] used in the original DeepSpeech paper, as, at the time of writing, TensorFlow does not have an implementation of Nesterov’s Accelerated Gradient [4]. As we will also employ dropout on the feedforward layers of the network, we need to define a parameter dropout_rate that keeps track of the dropout rate for these layers End of explanation """ relu_clip = int(os.environ.get('ds_relu_clip', 20)) # TODO: Validate this is a reasonable value """ Explanation: One more constant required of the non-recurrant layers is the clipping value of the ReLU. We capture that in the value of the variable relu_clip End of explanation """ n_input = 26 # TODO: Determine this programatically from the sample rate """ Explanation: Geometric Constants Now we will introduce several constants related to the geometry of the network. The network views each speech sample as a sequence of time-slices $x^{(i)}_t$ of length $T^{(i)}$. As the speech samples vary in length, we know that $T^{(i)}$ need not equal $T^{(j)}$ for $i \ne j$. For each batch, BRNN in TensorFlow needs to know n_steps which is the maximum $T^{(i)}$ for the batch. Each of the at maximum n_steps vectors is a vector of MFCC features of a time-slice of the speech sample. We will make the number of MFCC features dependent upon the sample rate of the data set. Generically, if the sample rate is 8kHz we use 13 features. If the sample rate is 16kHz we use 26 features... We capture the dimension of these vectors, equivalently the number of MFCC features, in the variable n_input End of explanation """ n_context = 9 # TODO: Determine the optimal value using a validation data set """ Explanation: As previously mentioned, the BRNN is not simply fed the MFCC features of a given time-slice. It is fed, in addition, a context of $C \in {5, 7, 9}$ frames on either side of the frame in question. 
The number of frames in this context is captured in the variable n_context End of explanation """ n_hidden_1 = n_input + 2*n_input*n_context # Note: This value was not specified in the original paper n_hidden_2 = n_input + 2*n_input*n_context # Note: This value was not specified in the original paper n_hidden_5 = n_input + 2*n_input*n_context # Note: This value was not specified in the original paper """ Explanation: where n_hidden_1 is the number of units in the first layer, n_hidden_2 the number of units in the second, and n_hidden_5 the number in the fifth. We haven't forgotten about the third or sixth layer. We will define their unit count below. An LSTM BRNN consists of a pair of LSTM RNN's. One LSTM RNN that works "forward in time" <img src="images/LSTM3-chain.png" alt="LSTM" width="800"> and a second LSTM RNN that works "backwards in time" <img src="images/LSTM3-chain.png" alt="LSTM" width="800"> The dimension of the cell state, the upper line connecting subsequent LSTM units, is independent of the input dimension and the same for both the forward and backward LSTM RNN. Hence, we are free to choose the dimension of this cell state independent of the input dimension. We capture the cell state dimension in the variable n_cell_dim. End of explanation """ n_cell_dim = n_input + 2*n_input*n_context # TODO: Is this a reasonable value """ Explanation: The number of units in the third layer, which feeds into the LSTM, is determined by n_cell_dim as follows End of explanation """ n_hidden_3 = 2 * n_cell_dim """ Explanation: Next, we introduce an additional variable n_character which holds the number of characters in the target language plus one, for the $blank$. For English it is the cardinality of the set $\{a,b,c, \ldots , z, space, apostrophe, blank\}$ we referred to earlier. End of explanation """ n_character = 29 # TODO: Determine if this should be extended with other punctuation """ Explanation: The number of units in the sixth layer is determined by n_character as follows End of explanation """ n_hidden_6 = n_character """ Explanation: Graph Creation Next we concern ourselves with graph creation. However, before we do so we must introduce a utility function variable_on_cpu() used to create a variable in CPU memory. 
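Before building the graph, it can also help to sanity-check the layer widths implied by the geometric constants above. The following quick check is a reader aid rather than part of the original notebook; it simply plugs in the default values defined earlier (n_input = 26, n_context = 9) with plain Python:

```python
# Sanity check of the geometry implied by the constants defined above.
# Plain Python only; no TensorFlow required.
n_input = 26     # MFCC features per time-slice (default for 16kHz audio above)
n_context = 9    # frames of context on each side of the current frame

window_width = n_input + 2 * n_input * n_context
print("features fed to the first layer per time-slice: %d" % window_width)  # 494

# n_hidden_1, n_hidden_2, n_hidden_5 and n_cell_dim all reuse this width,
# while n_hidden_3, the layer feeding the LSTM, is defined as twice the
# cell dimension.
n_cell_dim = window_width
n_hidden_3 = 2 * n_cell_dim
print("n_hidden_3: %d" % n_hidden_3)                                         # 988
```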
End of explanation """ def variable_on_cpu(name, shape, initializer): # Use the /cpu:0 device for scoped operations with tf.device('/cpu:0'): # Create or get the appropriately named variable var = tf.get_variable(name=name, shape=shape, initializer=initializer) return var """ Explanation: That done, we will define the learned variables, the weights and biases, within the method BiRNN() which also constructs the neural network. 
The variables named hn, where n is an integer, hold the learned weight variables. The variables named bn, where n is an integer, hold the learned bias variables. In particular, the first variable h1 holds the learned weight matrix that converts an input vector of dimension n_input + 2*n_input*n_context to a vector of dimension n_hidden_1. Similarly, the second variable h2 holds the weight matrix converting an input vector of dimension n_hidden_1 to one of dimension n_hidden_2. The variables h3, h5, and h6 are similar. Likewise, the biases, b1, b2..., hold the biases for the various layers. That said let us introduce the method BiRNN() that takes a batch of data batch_x and performs inference upon it. End of explanation """ def calculate_accuracy_and_loss(batch_set): # Obtain the next batch of data batch_x, batch_seq_len, batch_y = batch_set.next_batch() # Calculate the logits of the batch using BiRNN logits = BiRNN(batch_x, tf.to_int64(batch_seq_len)) # Compute the CTC loss if use_warpctc: total_loss = tf.contrib.warpctc.warp_ctc_loss(logits, batch_y, batch_seq_len) else: total_loss = ctc_ops.ctc_loss(logits, batch_y, batch_seq_len) # Calculate the average loss across the batch avg_loss = tf.reduce_mean(total_loss) # Beam search decode the batch decoded, _ = ctc_ops.ctc_beam_search_decoder(logits, batch_seq_len) # Compute the edit (Levenshtein) distance distance = tf.edit_distance(tf.cast(decoded[0], tf.int32), batch_y) # Compute the accuracy accuracy = tf.reduce_mean(distance) # Return results to the caller return total_loss, avg_loss, accuracy, decoded, batch_y """ Explanation: The first few lines of the function BiRNN python def BiRNN(batch_x, seq_length): # Input shape: [batch_size, n_steps, n_input + 2*n_input*n_context] batch_x_shape = tf.shape(batch_x) # Permute n_steps and batch_size batch_x = tf.transpose(batch_x, [1, 0, 2]) # Reshape to prepare input for first layer batch_x = tf.reshape(batch_x, [-1, n_input + 2*n_input*n_context]) ... reshape batch_x which has shape [batch_size, n_steps, n_input + 2*n_input*n_context] initially, to a tensor with shape [n_steps*batch_size, n_input + 2*n_input*n_context]. This is done to prepare the batch for input into the first layer which expects a tensor of rank 2. The next few lines of BiRNN python #Hidden layer with clipped RELU activation and dropout b1 = variable_on_cpu('b1', [n_hidden_1], tf.random_normal_initializer()) h1 = variable_on_cpu('h1', [n_input + 2*n_input*n_context, n_hidden_1], tf.random_normal_initializer()) layer_1 = tf.minimum(tf.nn.relu(tf.add(tf.matmul(batch_x, h1), b1)), relu_clip) layer_1 = tf.nn.dropout(layer_1, (1.0 - dropout_rate_placeholder)) ... pass batch_x through the first layer of the non-recurrent neural network, then applies dropout to the result. 
The next few lines do the same thing, but for the second and third layers python #Hidden layer with clipped RELU activation and dropout b2 = variable_on_cpu('b2', [n_hidden_2], tf.random_normal_initializer()) h2 = variable_on_cpu('h2', [n_hidden_1, n_hidden_2], tf.random_normal_initializer()) layer_2 = tf.minimum(tf.nn.relu(tf.add(tf.matmul(layer_1, h2), b2)), relu_clip) layer_2 = tf.nn.dropout(layer_2, (1.0 - dropout_rate_placeholder)) #Hidden layer with clipped RELU activation and dropout b3 = variable_on_cpu('b3', [n_hidden_3], tf.random_normal_initializer()) h3 = variable_on_cpu('h3', [n_hidden_2, n_hidden_3], tf.random_normal_initializer()) layer_3 = tf.minimum(tf.nn.relu(tf.add(tf.matmul(layer_2, h3), b3)), relu_clip) layer_3 = tf.nn.dropout(layer_3, (1.0 - dropout_rate_placeholder)) Next we create the forward and backward LSTM units python # Define lstm cells with tensorflow # Forward direction cell lstm_fw_cell = tf.nn.rnn_cell.BasicLSTMCell(n_cell_dim, forget_bias=1.0) # Backward direction cell lstm_bw_cell = tf.nn.rnn_cell.BasicLSTMCell(n_cell_dim, forget_bias=1.0) both of which have inputs of length n_cell_dim and bias 1.0 for the forget gate of the LSTM. The next line of the funtion BiRNN does a bit more data preparation. python # Reshape data because rnn cell expects shape [max_time, batch_size, input_size] layer_3 = tf.reshape(layer_3, [-1, batch_x_shape[0], n_hidden_3]) It reshapes layer_3 in to [n_steps, batch_size, 2*n_cell_dim] as the LSTM BRNN expects its input to be of shape [max_time, batch_size, input_size]. The next line of BiRNN python # Get lstm cell output outputs, output_states = tf.nn.bidirectional_dynamic_rnn(cell_fw=lstm_fw_cell, cell_bw=lstm_bw_cell, inputs=layer_3, dtype=tf.float32, time_major=True, sequence_length=seq_length) feeds layer_3 to the LSTM BRNN cell and obtains the LSTM BRNN output. The next lines convert outputs from two rank two tensors into a single rank two tensor in preparation for passing it to the next neural network layer python # Reshape outputs from two tensors each of shape [n_steps, batch_size, n_cell_dim] # to a single tensor of shape [n_steps*batch_size, 2*n_cell_dim] outputs = tf.concat(2, outputs) outputs = tf.reshape(outputs, [-1, 2*n_cell_dim]) The next couple of lines feed outputs to the fifth hidden layer python #Hidden layer with clipped RELU activation and dropout b5 = variable_on_cpu('b5', [n_hidden_5], tf.random_normal_initializer()) h5 = variable_on_cpu('h5', [(2 * n_cell_dim), n_hidden_5], tf.random_normal_initializer()) layer_5 = tf.minimum(tf.nn.relu(tf.add(tf.matmul(outputs, h5), b5)), relu_clip) layer_5 = tf.nn.dropout(layer_5, (1.0 - dropout_rate_placeholder)) The next line of BiRNN python #Hidden layer of logits b6 = variable_on_cpu('b6', [n_hidden_6], tf.random_normal_initializer()) h6 = variable_on_cpu('h6', [n_hidden_5, n_hidden_6], tf.random_normal_initializer()) layer_6 = tf.add(tf.matmul(layer_5, h6), b6) Applies the weight matrix h6 and bias b6 to the output of layer_5 creating n_classes dimensional vectors, the logits. The next lines of BiRNN python # Reshape layer_6 from a tensor of shape [n_steps*batch_size, n_hidden_6] # to a tensor of shape [n_steps, batch_size, n_hidden_6] layer_6 = tf.reshape(layer_6, [-1, batch_x_shape[0], n_hidden_6]) reshapes layer_6 to the slightly more useful shape [n_steps, batch_size, n_hidden_6]. Note, that this differs from the input in that it is time-major. 
The final line of BiRNN returns layer_6 python # Return layer_6 # Output shape: [n_steps, batch_size, n_input + 2*n_input*n_context] return layer_6 Accuracy and Loss In accord with Deep Speech: Scaling up end-to-end speech recognition, the loss function used by our network should be the CTC loss function[2]. Conveniently, this loss function is implemented in TensorFlow. Thus, we can simply make use of this implementation to define our loss. To do so we introduce a utility function calculate_accuracy_and_loss() that beam search decodes a mini-batch and calculates the loss and accuracy. Next to total and average loss it returns the accuracy, the decoded result and the batch's original Y. End of explanation """ def create_optimizer(): optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1, beta2=beta2, epsilon=epsilon) return optimizer """ Explanation: The first lines of calculate_accuracy_and_loss() python def calculate_accuracy_and_loss(batch_set): # Obtain the next batch of data batch_x, batch_seq_len, batch_y = batch_set.next_batch() simply obtain the next mini-batch of data. The next line python # Calculate the logits from the BiRNN logits = BiRNN(batch_x, batch_seq_len) calls BiRNN() with a batch of data and does inference on the batch. The next few lines ```python # Compute the CTC loss total_loss = ctc_ops.ctc_loss(logits, batch_y, batch_seq_len) # Calculate the average loss across the batch avg_loss = tf.reduce_mean(total_loss) `` calculate the average loss using tensor flow'sctc_loss` operator. The next lines first beam decode the batch and then compute the accuracy on base of the Levenshtein distance between the decoded batch and the batch's original Y. ```python # Beam search decode the batch decoded, _ = ctc_ops.ctc_beam_search_decoder(logits, batch_seq_len) # Compute the edit (Levenshtein) distance distance = tf.edit_distance(tf.cast(decoded[0], tf.int32), batch_y) # Compute the accuracy accuracy = tf.reduce_mean(distance) ``` Finally, the total_loss, avg_loss, accuracy, the decoded batch and the original batch_y are returned to the caller python # Return results to the caller return total_loss, avg_loss, accuracy, decoded, batch_y Parallel Optimization Next we will implement optimization of the DeepSpeech model across GPU's on a single host. This parallel optimization can take on various forms. For example one can use asynchronous updates of the model, synchronous updates of the model, or some combination of the two. Asynchronous Parallel Optimization In asynchronous parallel optimization, for example, one places the model initially in CPU memory. Then each of the $G$ GPU's obtains a mini-batch of data along with the current model parameters. Using this mini-batch each GPU then computes the gradients for all model parameters and sends these gradients back to the CPU when the GPU is done with its mini-batch. The CPU then asynchronously updates the model parameters whenever it recieves a set of gradients from a GPU. Asynchronous parallel optimization has several advantages and several disadvantages. One large advantage is throughput. No GPU will every be waiting idle. When a GPU is done processing a mini-batch, it can immediately obtain the next mini-batch to process. It never has to wait on other GPU's to finish their mini-batch. However, this means that the model updates will also be asynchronous which can have problems. For example, one may have model parameters $W$ on the CPU and send mini-batch $n$ to GPU 1 and send mini-batch $n+1$ to GPU 2. 
As processing is asynchronous, GPU 2 may finish before GPU 1 and thus update the CPU's model parameters $W$ with its gradients $\Delta W_{n+1}(W)$, where the subscript $n+1$ identifies the mini-batch and the argument $W$ the location at which the gradient was evaluated. This results in the new model parameters $$W + \Delta W_{n+1}(W).$$ Next GPU 1 could finish with its mini-batch and update the parameters to $$W + \Delta W_{n+1}(W) + \Delta W_{n}(W).$$ The problem with this is that $\Delta W_{n}(W)$ is evaluated at $W$ and not $W + \Delta W_{n+1}(W)$. Hence, the direction of the gradient $\Delta W_{n}(W)$ is slightly incorrect as it is evaluated at the wrong location. This can be counteracted through synchronous updates of the model, but this is also problematic. Synchronous Optimization Synchronous optimization solves the problem we saw above. In synchronous optimization, one places the model initially in CPU memory. Then one of the $G$ GPU's is given a mini-batch of data along with the current model parameters. Using the mini-batch the GPU computes the gradients for all model parameters and sends the gradients back to the CPU. The CPU then updates the model parameters and starts the process of sending out the next mini-batch. As one can readily see, synchronous optimization does not have the problem we found in the last section, that of incorrect gradients. However, synchronous optimization can only make use of a single GPU at a time. So, when we have a multi-GPU setup, $G > 1$, all but one of the GPU's will remain idle, which is unacceptable. However, there is a third alternative which combines the advantages of asynchronous and synchronous optimization. Hybrid Parallel Optimization Hybrid parallel optimization combines most of the benefits of asynchronous and synchronous optimization. It allows for multiple GPU's to be used, but does not suffer from the incorrect gradient problem exhibited by asynchronous optimization. In hybrid parallel optimization one places the model initially in CPU memory. Then, as in asynchronous optimization, each of the $G$ GPU's obtains a mini-batch of data along with the current model parameters. Using the mini-batch each of the GPU's then computes the gradients for all model parameters and sends these gradients back to the CPU. Now, in contrast to asynchronous optimization, the CPU waits until each GPU is finished with its mini-batch, then takes the mean of all the gradients from the $G$ GPU's and updates the model with this mean gradient. <img src="images/Parallelism.png" alt="LSTM" width="600"> Hybrid parallel optimization has several advantages and few disadvantages. As in asynchronous parallel optimization, hybrid parallel optimization allows one to use multiple GPU's in parallel. Furthermore, unlike asynchronous parallel optimization, the incorrect gradient problem is not present here. In fact, hybrid parallel optimization performs as if one is working with a single mini-batch which is $G$ times the size of a mini-batch handled by a single GPU. However, hybrid parallel optimization is not perfect. If one GPU is slower than all the others in completing its mini-batch, all other GPU's will have to sit idle until this straggler finishes with its mini-batch. This hurts throughput. But, if all GPU's are of the same make and model, this problem should be minimized. So, relatively speaking, hybrid parallel optimization seems to have more advantages and fewer disadvantages as compared to both asynchronous and synchronous optimization. 
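To make the stale-gradient argument above concrete, here is a small numerical sketch. It is not from the original notebook and uses a toy one-dimensional quadratic loss rather than the real network, but it shows the three update rules side by side: the asynchronous update applies a gradient that was computed at the old parameters, the fully sequential (synchronous) update re-evaluates the gradient after each step, and the hybrid rule averages the two tower gradients and applies a single update.

```python
# Toy quadratic loss L(w) = 0.5 * w**2, so grad(w) = w.  Illustration only.
def grad(w):
    return w

w0, lr = 1.0, 0.1

# Asynchronous: GPU 2's update lands first; GPU 1's gradient was still
# computed at the stale parameters w0, i.e. at the "wrong location".
w_after_gpu2 = w0 - lr * grad(w0)
w_async = w_after_gpu2 - lr * grad(w0)

# Synchronous (one GPU at a time): the second gradient is re-evaluated
# at the already-updated parameters.
w_sync = w_after_gpu2 - lr * grad(w_after_gpu2)

# Hybrid: wait for both towers, average their gradients, apply one update,
# which behaves like a single mini-batch of twice the size.
w_hybrid = w0 - lr * (grad(w0) + grad(w0)) / 2.0

print("asynchronous:", w_async)   # 0.8  (moved further, using stale information)
print("synchronous :", w_sync)    # 0.81
print("hybrid      :", w_hybrid)  # 0.9
```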
So, we will, for our work, use this hybrid model. Adam Optimization In contrast to Deep Speech: Scaling up end-to-end speech recognition, in which Nesterov's Accelerated Gradient Descent was used, we will use the Adam method for optimization[3], because, generally, it requires less fine-tuning. End of explanation """ def create_optimizer(): optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1, beta2=beta2, epsilon=epsilon) return optimizer """ Explanation: Towers In order to properly make use of multiple GPU's, one must introduce new abstractions, not present when using a single GPU, that facilitate the multi-GPU use case. In particular, one must introduce a means to isolate the inference and gradient calculations on the various GPU's. The abstraction we introduce for this purpose is called a 'tower'. A tower is specified by two properties: * Scope - A scope, as provided by tf.name_scope(), is a means to isolate the operations within a tower. For example, all operations within "tower 0" could have their name prefixed with tower_0/. * Device - A hardware device, as provided by tf.device(), on which all operations within the tower execute. For example, all operations of "tower 0" could execute on the first GPU tf.device('/gpu:0'). As we are introducing one tower for each GPU, first we must determine how many GPU's are available End of explanation """ # Get a list of the available gpu's ['/gpu:0', '/gpu:1'...] available_devices = get_available_gpus() # If there are no GPU's use the CPU if 0 == len(available_devices): available_devices = ['/cpu:0'] """ Explanation: With this preliminary step out of the way, we can introduce, for each GPU, a tower for whose batch we calculate the CTC decodings decoded, the (total) loss against the outcome (Y) total_loss, the loss averaged over the whole batch avg_loss, the optimization gradient (computed on the basis of the
averaged loss) and the accuracy of the outcome averaged over the whole batch accuracy and retain the original labels (Y). decoded, labels, the optimization gradient, total_loss and avg_loss are collected into the respectful arrays tower_decodings, tower_labels, tower_gradients, tower_total_losses, tower_avg_losses (dimension 0 being the tower). Finally this new method get_tower_results() will return those tower arrays plus (based on accuracy) the averaged accuracy value accross all towers avg_accuracy. End of explanation """ def average_gradients(tower_gradients): # List of average gradients to return to the caller average_grads = [] # Loop over gradient/variable pairs from all towers for grad_and_vars in zip(*tower_gradients): # Introduce grads to store the gradients for the current variable grads = [] # Loop over the gradients for the current variable for g, _ in grad_and_vars: # Add 0 dimension to the gradients to represent the tower. expanded_g = tf.expand_dims(g, 0) # Append on a 'tower' dimension which we will average over below. grads.append(expanded_g) # Average over the 'tower' dimension grad = tf.concat(0, grads) grad = tf.reduce_mean(grad, 0) # Create a gradient/variable tuple for the current variable with its average gradient grad_and_var = (grad, grad_and_vars[0][1]) # Add the current tuple to average_grads average_grads.append(grad_and_var) #Return result to caller return average_grads """ Explanation: Next we want to average the gradients obtained from the GPU's. We compute the average the gradients obtained from the GPU's for each variable in the function average_gradients() End of explanation """ def apply_gradients(optimizer, average_grads): apply_gradient_op = optimizer.apply_gradients(average_grads) return apply_gradient_op """ Explanation: Note also that this code acts as a syncronization point as it requires all GPU's to be finished with their mini-batch before it can run to completion. Now next we introduce a function to apply the averaged gradients to update the model's paramaters on the CPU End of explanation """ def log_variable(variable, gradient=None): name = variable.name mean = tf.reduce_mean(variable) tf.scalar_summary(name + '/mean', mean) tf.scalar_summary(name + '/sttdev', tf.sqrt(tf.reduce_mean(tf.square(variable - mean)))) tf.scalar_summary(name + '/max', tf.reduce_max(variable)) tf.scalar_summary(name + '/min', tf.reduce_min(variable)) tf.histogram_summary(name, variable) if gradient is not None: if isinstance(gradient, tf.IndexedSlices): grad_values = gradient.values else: grad_values = gradient if grad_values is not None: tf.histogram_summary(name + "/gradients", grad_values) """ Explanation: Logging We introduce a function for logging a tensor variable's current state. It logs scalar values for the mean, standard deviation, minimum and maximum. Furthermore it logs a histogram of its state and (if given) of an optimization gradient. End of explanation """ def log_grads_and_vars(grads_and_vars): for gradient, variable in grads_and_vars: log_variable(variable, gradient=gradient) """ Explanation: Let's also introduce a helper function for logging collections of gradient/variable tuples. 
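As a brief aside before wiring up the remaining logging, the per-variable averaging performed by average_gradients() above boils down to very simple arithmetic. The sketch below is a reader aid, not part of the original notebook: it uses NumPy arrays and placeholder variable names ('h1', 'b1') where the real code carries TensorFlow gradient tensors and tf.Variable objects.

```python
import numpy as np

# Gradient/variable pairs as produced by two towers (toy numbers).
tower_gradients = [
    [(np.array([0.2, -0.4]), 'h1'), (np.array([0.1]), 'b1')],  # tower 0
    [(np.array([0.4, -0.2]), 'h1'), (np.array([0.3]), 'b1')],  # tower 1
]

average_grads = []
for grad_and_vars in zip(*tower_gradients):           # iterate per variable
    grads = np.stack([g for g, _ in grad_and_vars])   # shape: [n_towers, ...]
    average_grads.append((grads.mean(axis=0), grad_and_vars[0][1]))

print(average_grads)  # [(array([ 0.3, -0.3]), 'h1'), (array([0.2]), 'b1')]
```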
End of explanation """ logs_dir = os.environ.get('ds_logs_dir', 'logs') log_dir = '%s/%s' % (logs_dir, time.strftime("%Y%m%d-%H%M%S")) def get_git_revision_hash(): return subprocess.check_output(['git', 'rev-parse', 'HEAD']).strip() def get_git_branch(): return subprocess.check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD']).strip() """ Explanation: Finally we define the top directory for all logs and our current log sub-directory of it. We also add some log helpers. End of explanation """ def decode_batch(data_set): # Get gradients for each tower (Runs across all GPU's) tower_decodings, tower_labels, _, tower_total_losses, _, _ = get_tower_results(data_set) return tower_decodings, tower_labels, tower_total_losses """ Explanation: Test and Validation First we need a helper method to create a normal forward calculation without optimization, dropouts and special reporting. End of explanation """ def calculate_wer(session, tower_decodings, tower_labels, tower_total_losses): originals = [] results = [] losses = [] # Normalization tower_decodings = [j for i in tower_decodings for j in i] # Iterating over the towers for i in range(len(tower_decodings)): decoded, labels, loss = session.run([tower_decodings[i], tower_labels[i], tower_total_losses[i]], feed_dict) originals.extend(sparse_tensor_value_to_texts(labels)) results.extend(sparse_tensor_value_to_texts(decoded)) losses.extend(loss) # Pairwise calculation of all rates rates, mean = wers(originals, results) return zip(originals, results, rates, losses), mean """ Explanation: To report progress and to get an idea of the current state of the model, we create a method that calculates the word error rate (WER) out of (tower) decodings and their respective (original) labels. It will return an array of WER result tuples, each consisting of the original text (text version of X), the resulting text (Y), the calculated WER and the total loss for that item, plus the mean WER across all tuples. End of explanation """ def print_wer_report(caption, mean, items=[]): print print "#" * 80 print "%s WER: %f" % (caption, mean) if len(items) > 0: # Filter out all items with WER=0 items = [a for a in items if a[2] > 0] # Order the remaining items by their loss (lowest loss on top) items.sort(key=lambda a: a[3]) # Take only the first 10 items items = items[:10] # Order this top ten items by their WER (lowest WER on top) items.sort(key=lambda a: a[2]) for a in items: print "-" * 80 print " - WER: %f" % a[2] print " - loss: %f" % a[3] print " - source: \"%s\"" % a[0] print " - result: \"%s\"" % a[1] print "#" * 80 print """ Explanation: Let's introduce a routine to print a WER report under a given caption. It prints the given mean WER plus summaries of the top ten lowest loss items of the given array of WER result tuples (only items with WER=0 and ordered by their WER). End of explanation """ def calculate_and_print_wer_report(session, caption, tower_decodings, tower_labels, tower_total_losses, show_ranked=True): items, mean = calculate_wer(session, tower_decodings, tower_labels, tower_total_losses) if show_ranked: print_wer_report(caption, mean, items=items) else: print_wer_report(caption, mean) return items, mean """ Explanation: Plus a convenience method to calculate and print the WER report all at once. 
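For readers unfamiliar with the metric itself, the word error rate of a decoded transcription is the word-level edit (Levenshtein) distance to the original, divided by the number of words in the original. The snippet below is a simplified, self-contained stand-in for illustration only; the notebook's own wers() helper lives in util.text and its exact behaviour may differ.

```python
def word_error_rate(original, result):
    """Word-level Levenshtein distance divided by the length of the original."""
    r, h = original.split(), result.split()
    # dynamic-programming table of edit distances between word prefixes
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return float(d[len(r)][len(h)]) / len(r)

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # ~0.167
```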
End of explanation """ def calculate_and_print_wer_report(session, caption, tower_decodings, tower_labels, tower_total_losses, show_ranked=True): items, mean = calculate_wer(session, tower_decodings, tower_labels, tower_total_losses) if show_ranked: print_wer_report(caption, mean, items=items) else: print_wer_report(caption, mean) return items, mean """ Explanation: Training Now, as we have prepared all the required operators and methods, we can create the method which trains the network. 
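Before diving into the full function, the outline below summarises the phases of train(); it is a reading aid only (comments, not runnable logic) and simply mirrors the code that follows.

```python
# Outline of train() below (reading aid only):
#
# 1. Build the graph:
#    - create_optimizer()
#    - get_tower_results(data_sets.train, optimizer) across all devices
#    - average_gradients(...) and apply_gradients(...) for the model update
#    - decode_batch(data_sets.dev) prepared for the validation step
# 2. Set up bookkeeping: tf.train.Saver, summary writer, variable
#    initialisation, queue runner threads.
# 3. For each epoch, for each batch: run apply_gradient_op, accumulate the
#    average accuracy, and log summaries for the current step.
# 4. Every display_step epochs print progress and a training WER report;
#    every validation_step epochs print a validation WER report;
#    every checkpoint_step epochs (and on the last epoch) save a checkpoint.
# 5. Return the most recent training and validation WER.
```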
End of explanation """ # Define CPU as device on which the muti-gpu training is orchestrated with tf.device('/cpu:0'): # Obtain all the data, defaulting to TED LIUM data_sets = ds_importer_module.read_data_sets(ds_dataset_path, batch_size, n_input, n_context) # Create session in which to execute session = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)) # Take start time for time measurement time_started = datetime.datetime.utcnow() # Train the network last_train_wer, last_validation_wer = train(session, data_sets) # Take final time for time measurement time_finished = datetime.datetime.utcnow() # Calculate duration in seconds duration = time_finished - time_started duration = duration.days * 86400 + duration.seconds """ Explanation: As everything is prepared, we are now able to do the training. End of explanation """ # Define CPU as device on which the muti-gpu testing is orchestrated with tf.device('/cpu:0'): # Test network test_decodings, test_labels, test_total_losses = decode_batch(data_sets.test) _, test_wer = calculate_and_print_wer_report(session, "Test", test_decodings, test_labels, test_total_losses) """ Explanation: Now the trained model is tested using an unbiased test set. End of explanation """ # Don't export a model if no export directory has been set if export_dir: with tf.device('/cpu:0'): tf.reset_default_graph() session = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)) # Run inference # Replace the dropout placeholder with a constant dropout_rate_placeholder = tf.constant(0.0) # Input tensor will be of shape [batch_size, n_steps, n_input + 2*n_input*n_context] input_tensor = tf.placeholder(tf.float32, [None, None, n_input + 2*n_input*n_context]) # Calculate input sequence length. This is done by tiling n_steps, batch_size times. # If there are multiple sequences, it is assumed they are padded with zeros to be of # the same length. n_items = tf.slice(tf.shape(input_tensor), [0], [1]) n_steps = tf.slice(tf.shape(input_tensor), [1], [1]) seq_length = tf.tile(n_steps, n_items) # Calculate the logits of the batch using BiRNN logits = BiRNN(input_tensor, tf.to_int64(seq_length)) # Beam search decode the batch decoded, _ = ctc_ops.ctc_beam_search_decoder(logits, seq_length) decoded = tf.convert_to_tensor( [tf.sparse_tensor_to_dense(sparse_tensor) for sparse_tensor in decoded]) # TODO: Transform the decoded output to a string # Create a saver and exporter using variables from the above newly created graph saver = tf.train.Saver(tf.all_variables()) model_exporter = exporter.Exporter(saver) # Restore variables from training checkpoint # TODO: This restores the most recent checkpoint, but if we use validation to counterract # over-fitting, we may want to restore an earlier checkpoint. checkpoint = tf.train.get_checkpoint_state(checkpoint_dir) saver.restore(session, checkpoint.model_checkpoint_path) print 'Restored checkpoint at training epoch %d' % (int(checkpoint.model_checkpoint_path.split('-')[1]) + 1) # Initialise the model exporter and export the model model_exporter.init(session.graph.as_graph_def(), named_graph_signatures = { 'inputs': exporter.generic_signature( { 'input': input_tensor }), 'outputs': exporter.generic_signature( { 'outputs': decoded})}) model_exporter.export(export_dir, tf.constant(export_version), session) print 'Model exported at %s' % (export_dir) """ Explanation: Finally, we restore the trained variables into a simpler graph that we can export for serving. 
End of explanation """ with open('%s/%s' % (log_dir, 'hyper.json'), 'w') as dump_file: json.dump({ \ 'context': { \ 'time_started': time_started.isoformat(), \ 'time_finished': time_finished.isoformat(), \ 'git_hash': get_git_revision_hash(), \ 'git_branch': get_git_branch() \ }, \ 'parameters': { \ 'learning_rate': learning_rate, \ 'beta1': beta1, \ 'beta2': beta2, \ 'epsilon': epsilon, \ 'training_iters': training_iters, \ 'batch_size': batch_size, \ 'validation_step': validation_step, \ 'dropout_rate': dropout_rate, \ 'relu_clip': relu_clip, \ 'n_input': n_input, \ 'n_context': n_context, \ 'n_hidden_1': n_hidden_1, \ 'n_hidden_2': n_hidden_2, \ 'n_hidden_3': n_hidden_3, \ 'n_hidden_5': n_hidden_5, \ 'n_hidden_6': n_hidden_6, \ 'n_cell_dim': n_cell_dim, \ 'n_character': n_character, \ 'total_batches_train': data_sets.train.total_batches, \ 'total_batches_validation': data_sets.dev.total_batches, \ 'total_batches_test': data_sets.test.total_batches, \ 'data_set': { \ 'name': ds_importer \ }, \ }, \ 'results': { \ 'duration': duration, \ 'last_train_wer': last_train_wer, \ 'last_validation_wer': last_validation_wer, \ 'test_wer': test_wer \ } \ }, dump_file, sort_keys=True, indent = 4) """ Explanation: Logging Hyper Parameters and Results Now, as training and test are done, we persist the results alongside with the involved hyper parameters for further reporting. End of explanation """ merge_logs(logs_dir) maybe_publish() """ Explanation: Let's also re-populate a central JS file, that contains all the dumps at once. End of explanation """
harrisonpim/bookworm
03 - Visualising and Analysing Networks.ipynb
mit
from bookworm import * %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') plt.rcParams['figure.figsize'] = (12,9) import pandas as pd import numpy as np book = load_book('data/raw/hp_philosophers_stone.txt') characters = extract_character_names(book) sequences = get_sentence_sequences(book) df = find_connections(sequences, characters) cooccurence = calculate_cooccurence(df) """ Explanation: < 02 - Character Building | Home | 04 - Time and Chronology > Visualising and Analysing Networks By now you should have a decent understanding of how bookworm assembles a list of character relationships and assesses their strength. The real point of this project, though, is to give the user a tactile, intuitive view of the network of characters and how they interact. This notebook should cover the methods I've used to achieve that. Let's start by importing all the usual stuff and loading in the Harry Potter network: End of explanation """ import networkx as nx interaction_df = get_interaction_df(cooccurence, threshold=2) interaction_df.sample(5) """ Explanation: Visualisation with NetworkX NetworkX is a very nice python library which is built to handle graphs and networks. We can load our data into a NetworkX Graph object by building up a table of character interactions as follows: End of explanation """ G = nx.from_pandas_dataframe(interaction_df, source='source', target='target') """ Explanation: get_interaction_df() is defined in bookworm/build_network.py, and works by searching through the provided cooccurence matrix for interactions with strength above a specified threshold. We can load that interaction dataframe into a NetworkX Graph using the super simple from_pandas_dataframe() function: End of explanation """ nx.draw_spring(G, with_labels=True) """ Explanation: And, just as easily, visualise it with draw_spring(), where spring is a reference to the idea that edges in the network are treated like physical springs, with elasticity/compressability related to the weights of the connections: End of explanation """ pd.Series(nx.pagerank(G)).sort_values(ascending=False)[:5] a, b = nx.hits(G) pd.Series(a).sort_values(ascending=False)[:5] """ Explanation: Very nice... ish. There's more that could be done to clean up the visualisation and make it pretty, but it's fine for now. One of the nicest things about NetworkX is all of its builtin network analysis functionality. For example, we can use pagerank or hits to give us the most 'important' or 'central' nodes in the network. These algorithms were originally developed to analyse linked networks of websites, but they can just as easily be applied to stations in transport networks, streets in cities, similar products in ecommerce systems, friends in social circles, or connected characters in books. End of explanation """ list(nx.enumerate_all_cliques(G))[-1] """ Explanation: We can ask NetworkX for cliques in the graph, which are especially relevant to social networks like this. enumerate_all_cliques() gives us a massive list of all the cliques it finds - we'll just return the last one because it's most illustrative of what a clique is in this context... End of explanation """ comms = nx.communicability(G) print(comms["('Vernon ',)"]["('Dumbledore ',)"]) print(comms["('Harry ',)"]["('Hermione ',)"]) """ Explanation: It's isolated the people who appear in the book at Number 4, Privet Drive. Fun! 
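As a self-contained aside (not part of the original notebook), it can help to see what those centrality numbers mean on a graph small enough to reason about by hand; the toy network below has one obvious hub, and PageRank duly ranks it first.

```python
import networkx as nx

# A five-node toy "social network": one hub connected to everyone,
# plus a single edge between two of the peripheral characters.
toy = nx.Graph()
toy.add_edges_from([("hub", "a"), ("hub", "b"), ("hub", "c"),
                    ("hub", "d"), ("a", "b")])

# The hub dominates PageRank because every random walk keeps returning to it.
print(sorted(nx.pagerank(toy).items(), key=lambda kv: -kv[1]))
```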
We can do stuff like illustrate the communicability of one character with another - we would expect that characters which don't spend much time together in the book would have a harder time communicating with one another than those who spend a lot of time together, illustrated by a smaller communicability value: End of explanation """ nx.dijkstra_path(G, source="('Hedwig ',)", target="('Flamel ',)") """ Explanation: Similarly, we can use NetworkX's implementation of classic pathfinding algorithms like Dijkstra's algorithm and A* to return paths between characters. For example, if Hedwig was interested in getting to know Nicolas Flamel, and wanted to do so with as few new introductions as possible along the way, these are the shoulders she would need to tap on for introductions: End of explanation """ nodes = [{"id": str(id), "group": 1} for id in set(interaction_df['source'])] links = interaction_df.to_dict(orient='records') d3_dict = {'nodes': nodes, 'links': links} """ Explanation: Pathfinding is clearly an application that is more suited to transport networks etc, but it's still interesting to see it applied here... There's an anecdote which gets passed around about a young South Korean computer scientist in academia who wanted to rise to the top of his field as quickly as possible. By developing a network of the academics in his field and the people they had published with, he was able to quickly work out which authors were most influential, and the path of introductions and coauthorship that he would need to take from his own, weak position in the network to publishing papers with the most influential academics and becoming a central node himself. I have no idea whether the anecdote is true or not, but it's a nice story, and illustrative of where and why this stuff might be useful to think about. Applying it to owls and alchemists is fun, but it can be useful in the real world too... All of this stuff dates back to the 1730s and the origins of graph theory, with Euler and the Seven Bridges of Konigsberg. It's a subject worth reading about if you haven't already - it's fascinating, and the world opens up to you in entirely new ways when you develop some intuition around when and where networks appear in nature and how they can be analysed. Clever applications of graph theory are absolutely key to the success of companies like Google, Facebook, and Amazon. More dynamic visualisations with d3.js The thing above is fast and fun, and allows us to run a load of interesting algorithms over the network, but it all feels very static... The point of this project is to visualise these networks in a way which gives the user an intuitive sense of the relationships between characters. We can get closer to that intuitive, touchy-feely sense of the network by putting together a force directed graph with d3.js, like the one by Mike Bostock (the creator of d3) shown here. Bostock is visualising the boring old Les Mis dataset - we're going to feed d3 our freshly made Harry Potter one. First we need to set up the data structure which the d3 script requires. End of explanation """ import json with open('bookworm/d3/bookworm.json', 'w') as fp: json.dump(d3_dict, fp) """ Explanation: We can write that dictionary out to a .json file in the project's d3 directory using the json package: End of explanation """ %%bash ls bookworm/d3/ """ Explanation: Jupyter notebooks allow us to run commands in other languages, so we'll use bash to do a few things from here on. 
For example, we can list the files in the d3 directory: End of explanation """ %%bash cat bookworm/d3/index.html """ Explanation: or print out one of those files: End of explanation """ %%bash cd bookworm/d3/ python -m http.server """ Explanation: The next cell can be used to set up a locally hosted version of that d3.js script. It's a super-simple, two-line bash script which uses python's builtin http.server module to run the javascript visualisation code in the browser on your machine. We dumped our graph data into a file called 'bookworm.json' in one of the cells above - that file can now be processed by 'index.html' (printed above), which displays the data using the d3.js javascript library. End of explanation """
daviddesancho/BestMSM
example/fourstate/fourstate_tpt.ipynb
gpl-2.0
%matplotlib inline import matplotlib.pyplot as plt import fourstate import itertools import networkx as nx import numpy as np import operator bhs = fourstate.FourState() """ Explanation: Transition path theory tests In what follows we are going to look at a simple four state model to better understand some fundamental results of transition path theory. The essential reference for this work is a paper by Berezhkovskii, Hummer and Szabo (J. Chem. Phys., 2009). However, there are other important references for alternative formulations of the same results, by Vanden Eijnden et al (J. Stat. Phys., 2006, Multiscale Model. Simul.,2009 and Proc. Natl. Acad. Sci. U.S.A., 2009). Model system Here we focus in a simple four state model described in the Berezhkovskii-Hummer-Szabo (BHS) paper. It consists of two end states (folded, $F$, and unfolded, $U$), connected by two intermediates $I_1$ and $I_2$. In particular, we define an instance of this simple model in order to get some numerical results. Below we show a graph representation of the model taken directly from the BHS paper (Figure 2). <img src="files/fourstate.png"> The model is itself described in the Fourstate class of the fourstate module. So first of all we create an instance of that class: End of explanation """ fig, ax = plt.subplots() ax.bar([0.5,1.5,2.5], -1./bhs.evals[1:], width=1) ax.set_xlabel(r'Eigenvalue', fontsize=16) ax.set_ylabel(r'$\tau_i$', fontsize=18) ax.set_xlim([0,4]) plt.show() """ Explanation: By doing this we get a few variables initialized. First, a symmetric transition count matrix, $\mathbf{N}$, where we see that the most frequent transitions are those within metastable states (corresponding to the terms in the diagonal $N_{ii}$). Non-diagonal transitions are much less frequent (i.e. $N_{ij}<<N_{ii}$ for all $i\neq j$). Then we get the transition matrix $\mathbf{T}$, whose diagonal elements are close to 1, as corresponds to a system with high metastability (i.e. high probability of the system remaining where it was). We can also construct a rate matrix, $\mathbf{K}$. From it we obtain eigenvalues ($\lambda_i$) and corresponding eigenvectors ($\Psi_i$). The latter allow for estimating equilibrium probabilities (note that $U$ and $F$ have the largest populations). The eigenvalues are sorted by value, with the first eigenvalue ($\lambda_0$) being zero, as corresponds to a system with a unique stationary distribution. All other eigenvalues are negative, and they are characteristic of a two-state like system as there is a considerable time-scale separation between the slowest mode ($\lambda_1$, corresponding to a relaxation time of $\tau_1=-1/\lambda_1$) and the other two ($\lambda_2$ and $\lambda_3$), as shown below. End of explanation """ bhs.run_commit() """ Explanation: Committors and fluxes Next we calculate the committors and fluxes for this four state model. For this we define two end states, so that we estimate the flux between folded ($F$) and unfolded ($U$). The values of the committor or $p_{fold}$ are defined to be 1 and 0 for $U$ and $F$, respectively, and using the Berezhkovskii-Hummer-Szabo (BHS) method we calculate the committors for the rest of the states. 
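For readers who want to see the linear algebra behind a committor calculation, here is a self-contained sketch on a toy rate matrix. It is not the implementation inside bhs.run_commit() (those details live in the fourstate module and are not shown here); it only illustrates the standard construction, using the same boundary convention as stated above ($p_{fold}=1$ for $U$ and $0$ for $F$): for every intermediate state the flow-balance condition $\sum_j k_{ij}(p_j - p_i) = 0$ is solved as a small linear system.

```python
import numpy as np

# Toy committor (pfold) calculation for a chain U <-> I1 <-> I2 <-> F.
# k[i, j] is the rate from state i to state j (made-up numbers, row convention).
U, I1, I2, F = 0, 1, 2, 3
k = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.5, 0.0, 2.0, 0.5],
              [0.1, 1.0, 0.0, 3.0],
              [0.0, 0.2, 1.0, 0.0]])

p = np.zeros(4)
p[U], p[F] = 1.0, 0.0            # boundary conditions, as in the text above

inter = [I1, I2]                 # states for which we actually solve
A = np.zeros((len(inter), len(inter)))
b = np.zeros(len(inter))
for row, i in enumerate(inter):
    A[row, row] = -k[i].sum()                       # -(total rate out of i)
    for col, j in enumerate(inter):
        if j != i:
            A[row, col] += k[i, j]
    b[row] = -(k[i, U] * p[U] + k[i, F] * p[F])     # boundary contributions

p[inter] = np.linalg.solve(A, b)
print(p)   # decreases monotonically from p(U)=1 to p(F)=0
```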
End of explanation """ print " j J_j(<-) J_j(->)" print " - -------- --------" for i in [1,2]: print "%2i %10.4e %10.4e"%(i, np.sum([bhs.J[i,x] for x in range(4) if bhs.pfold[x] < bhs.pfold[i]]),\ np.sum([bhs.J[x,i] for x in range(4) if bhs.pfold[x] > bhs.pfold[i]])) """ Explanation: We also obtain the flux matrix, $\mathbf{J}$, containing local fluxes ($J_{ji}=J_{i\rightarrow j}$) for the different edges in the network. The signs represent the direction of the transition: positive for those fluxes going from low to high $p_{fold}$ and negative for those going from high to low $p_{fold}$. For example, for intermediate $I_1$ (second column) we see that the transitions to $I_2$ and $F$ have a positive flux (i.e. flux goes from low to high $p_{fold}$). A property of flux conservation that must be fulfilled is that the flux into one state is the same as the flux out of that state, $J_j=\sum_{p_{fold}(i)<p_{fold}(j)}J_{i\rightarrow j}=\sum_{p_{fold}(i)>p_{fold}(j)}J_{j\rightarrow i}$. We check for this property for states $I_1$ and $I_2$. End of explanation """ import tpt_functions Jnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold, \ bhs.sum_flux, [3], [0]) JpathG = nx.DiGraph(Jpath.transpose()) print Jnode print Jpath """ Explanation: Paths through the network Another important bit in transition path theory is the possibility of identifying paths through the network. The advantage of a simple case like the one we are looking at is that we can enumerate all those paths and check how much flux each of them carry. For example, the contribution of one given path $U\rightarrow I_1\rightarrow I_2\rightarrow F$ to the total flux is given by $J_{U\rightarrow I_1\rightarrow I_2\rightarrow F}=J_{U \rightarrow I_1}(J_{I_1 \rightarrow I_2}/J_{I_1})(J_{I_2 \rightarrow F}/J_{I_2})$. In the BHS paper, simple rules are defined for calculating the length of a given edge in the network. These rules are implemented in the gen_path_lengths function. End of explanation """ tot_flux = 0 paths = {} k = 0 for path in nx.all_simple_paths(JpathG, 0, 3): paths[k] ={} paths[k]['path'] = path f = bhs.J[path[1],path[0]] print "%2i -> %2i: %10.4e "%(path[0], path[1], \ bhs.J[path[1],path[0]]) for i in range(2, len(path)): print "%2i -> %2i: %10.4e %10.4e"%(path[i-1], path[i], \ bhs.J[path[i],path[i-1]], Jnode[path[i-1]]) f *= bhs.J[path[i],path[i-1]]/Jnode[path[i-1]] tot_flux += f paths[k]['flux'] = f print " J(path) = %10.4e"%f print k+=1 print " Commulative flux: %10.4e"%tot_flux """ Explanation: We can exhaustively enumerate the paths and check whether the fluxes add up to the total flux. End of explanation """ sorted_paths = sorted(paths.items(), key=operator.itemgetter(1)) sorted_paths.reverse() k = 1 for path in sorted_paths: print k, ':', path[1]['path'], ':', 'flux = %g'%path[1]['flux'] k +=1 """ Explanation: So indeed the cumulative flux is equal to the total flux we estimated before. 
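As a by-hand illustration of the per-path formula quoted above, the numbers below are made up (they are not the model's actual fluxes, which come from bhs.J and Jnode), but they show how a path's contribution is assembled from the local fluxes and the total flux through each intermediate node.

```python
# Made-up local fluxes for the path U -> I1 -> I2 -> F (illustration only).
J_U_I1  = 3.0e-4          # flux U -> I1
J_I1_I2 = 2.0e-4          # flux I1 -> I2
J_I1_F  = 1.0e-4          # flux I1 -> F
J_I2_F  = 5.0e-4          # flux I2 -> F

J_I1 = J_I1_I2 + J_I1_F   # total productive flux out of I1
J_I2 = J_I2_F             # total productive flux out of I2 (single exit here)

path_flux = J_U_I1 * (J_I1_I2 / J_I1) * (J_I2_F / J_I2)
print("J(U -> I1 -> I2 -> F) = %.4e" % path_flux)   # 2.0000e-04
```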
Below we print the sorted paths for furu End of explanation """ while True: Jnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold, \ bhs.sum_flux, [3], [0]) # generate nx graph from matrix JpathG = nx.DiGraph(Jpath.transpose()) # find shortest path try: path = nx.dijkstra_path(JpathG, 0, 3) pathlength = nx.dijkstra_path_length(JpathG, 0, 3) print " shortest path:", path, pathlength except nx.NetworkXNoPath: print " No path for %g -> %g\n Stopping here"%(0, 3) break # calculate contribution to flux f = bhs.J[path[1],path[0]] print "%2i -> %2i: %10.4e "%(path[0], path[1], bhs.J[path[1],path[0]]) path_fluxes = [f] for j in range(2, len(path)): i = j - 1 print "%2i -> %2i: %10.4e %10.4e"%(path[i], path[j], \ bhs.J[path[j],path[i]], \ bhs.J[path[j],path[i]]/Jnode[path[i]]) f *= bhs.J[path[j],path[i]]/Jnode[path[i]] path_fluxes.append(bhs.J[path[j],path[i]]) # find bottleneck ib = np.argmin(path_fluxes) print "bottleneck: %2i -> %2i"%(path[ib],path[ib+1]) # remove flux from edges for j in range(1,len(path)): i = j - 1 bhs.J[path[j],path[i]] -= f # numerically there may be some leftover flux in bottleneck bhs.J[path[ib+1],path[ib]] = 0. bhs.sum_flux -= f print ' flux from path ', path, ': %10.4e'%f print ' fluxes', path_fluxes print ' leftover flux: %10.4e\n'%bhs.sum_flux """ Explanation: Highest flux paths One of the great things of using TPT is that it allows for visualizing the highest flux paths. In general we cannot just enumerate all the paths, so we resort to Dijkstra's algorithm to find the highest flux path. The problem with this is that the algorithm does not find the second highest flux path. So once identified, we must remove the flux from one path, so that the next highest flux path can be found by the algorithm. An algorithm for doing this was elegantly proposed by Metzner, Schütte and Vanden Eijnden. Now we implement it for the model system. End of explanation """
christophebertrand/ada-epfl
HW02-Data_from_the_Web/master_data_analysis.ipynb
mit
all_data = pd.read_csv('all_data.csv', usecols=['Civilité', 'Nom_Prénom', 'title', 'periode_acad', 'periode_pedago','Orientation_Master', 'Spécialisation', 'Filière_opt.', 'Mineur', 'Statut', 'Type_Echange', 'Ecole_Echange', 'No_Sciper']) all_data.sort_values(by='No_Sciper', axis=0).head(10) len(all_data) """ Explanation: Master Data Obtain all the data for the Bachelor students, starting from 2007. Keep only the students for which you have an entry for both Bachelor semestre 1 and Bachelor semestre 6. Compute how many months it took each student to go from the first to the sixth semester. Partition the data between male and female students, and compute the average -- is the difference in average statistically significant? 2) Perform a similar operation to what described above, this time for Master students. Notice that this data is more tricky, as there are many missing records in the IS-Academia database. Therefore, try to guess how much time a master student spent at EPFL by at least checking the distance in months between Master semestre 1 and Master semestre 2. If the Mineur field is not empty, the student should also appear registered in Master semestre 3. Last but not the least, don't forget to check if the student has an entry also in the Projet Master tables. Once you can handle well this data, compute the "average stay at EPFL" for master students. Now extract all the students with a Spécialisation and compute the "average stay" per each category of that attribute -- compared to the general average, can you find any specialization for which the difference in average is statistically significant? Read the data from csv End of explanation """ all_data['periode_pedago'].unique() all_data['title'].unique() """ Explanation: Clean the data End of explanation """ all_data[all_data['periode_pedago'].isin(['Semestre printemps', 'Semestre automne'])]['title'].unique() """ Explanation: checkout what the Semester printemps and automne are End of explanation """ master_periode_pedago = ['Master semestre 1', 'Master semestre 2', 'Projet Master printemps', 'Master semestre 3', 'Projet Master automne'] master_data = all_data[all_data['periode_pedago'].isin(master_periode_pedago)] """ Explanation: This are the students exchange (students from other universities in exchange at EPFL) and students doing the Passerelle HES. Since the exchange students don't do a full master at EPFL we ignore them. Also we won't count students in Passerelle HES as beeing in the Master just yet because they have to succeed the passerelle to optain a master. So if they do a master they are inscribed in Master anyways. 
End of explanation """ master_data.dropna(axis=1, how='all', inplace=True) """ Explanation: remove the columns with only NaN (in case there is one) End of explanation """ master_data.rename(columns={'Civilité': 'Sex', 'Nom_Prénom': 'Name', 'Spécialisation': 'Specialisation'}, inplace=True) master_data['periode_pedago'].unique() """ Explanation: Rename the columns (remove the é and give shorter names) End of explanation """ master_data['periode_acad'].unique() # store the year of the entry def start_year(student): return int (student['periode_acad'].split('-')[0]) master_data['start_year'] = master_data.apply(start_year, axis=1) master_data['end_year'] = master_data.apply(lambda st: start_year(st)+1, axis=1) # make the indicator columns new_cols_map = { 'Master1': ['Master semestre 1'], 'Master2': ['Master semestre 2'], 'Master3': ['Master semestre 3'], 'Project_Master' : ['Projet Master printemps', 'Projet Master automne'] } for (new_col_name, match_list) in new_cols_map.items(): master_data[new_col_name] = master_data.apply(lambda student: student['periode_pedago'] in match_list, axis=1) # show the new master_data (sample randomly to keep it managable but still informative) master_data.sample(n = 10, axis=0, replace=False) """ Explanation: add some columns that make the use of the data easier later - store the start and end year of each entry ('2008-2009' -> 2008) and '2008-2009' -> 2009 - make a column for master1, master2, master3 and and project master and indicate if done it (true and false) End of explanation """ # find all students that have not done the 'master 1' semester -> have not studied enough to finish the master. grouped = master_data.groupby(by='No_Sciper') no_master_1 = pd.DataFrame(columns=['Civilité', 'Nom_Prénom', 'Orientation_Bachelor', 'Orientation_Master', 'Spécialisation', 'Filière_opt.', 'Mineur', 'Statut', 'Type_Echange', 'Ecole_Echange', 'No_Sciper', 'title', 'periode_acad', 'periode_pedago']) for scip, group in grouped: if (group.periode_pedago != 'Master semestre 1').all(): no_master_1 = pd.concat([no_master_1, group]) len(no_master_1.No_Sciper.unique()) # all the students that already studied in the '2007-2008' year: df_2006 = no_master_1[no_master_1.periode_acad == '2007-2008'] len(df_2006.No_Sciper.unique()) """ Explanation: We will remove all students that did certainly not finish the master. That is, we remove students that: - Have not done the Master 1 semestre. (There are not a lot of them (see later) and it is too cumbersome to track them and check if they eventually passed or not) - Did only 1 semester. (They did not pass) - Have a mineur or spec and did less than 3 semesters. (They also did not pass) - Are registered in the current semester (have not passed (yet)) We also remove students that already have been studing in 2007 (the start of our data) because we can not tell how long they already have studied before. End of explanation """ sciper_to_remove = no_master_1['No_Sciper'].unique() master_data = master_data[~master_data['No_Sciper'].isin(sciper_to_remove)] """ Explanation: remove the found students. The others will be removed later. 
End of explanation """ study_now = pd.DataFrame(columns=['Civilité', 'Nom_Prénom', 'Orientation_Bachelor', 'Orientation_Master', 'Spécialisation', 'Filière_opt.', 'Mineur', 'Statut', 'Type_Echange', 'Ecole_Echange', 'No_Sciper', 'title', 'periode_acad', 'periode_pedago']) for scip, group in grouped: if (group.periode_acad == '2016-2017').any(): study_now = pd.concat([study_now, group]) len(study_now.No_Sciper.unique()) """ Explanation: Find all the students that are registered in the current semester and filter them. End of explanation """ sciper_to_remove = study_now['No_Sciper'].unique() master_data = master_data[~master_data['No_Sciper'].isin(sciper_to_remove)] """ Explanation: ... and remove them to End of explanation """ def group_master_data(grouped_entries): # check that there are no two students with the same sciper number and different names or sex. must_be_unique_list = ['Sex', 'Name'] #No_Scyper also but we group by it -> unique by construction for unique_col in must_be_unique_list: if(len(grouped_entries[unique_col].unique()) > 1): raise ValueError('Two students of different '+unique_col+' with same No_Sciper') #aggregate the cols first_entry = grouped_entries.head(1) df_map = { 'No_Sciper' : first_entry['No_Sciper'].values[0], 'Name' : first_entry['Name'].values[0], 'Sex' : first_entry['Sex'].values[0], 'Specialisation' : grouped_entries['Specialisation'].dropna().unique(), # all the spcs the student was inscribed to 'Mineur' : grouped_entries['Mineur'].dropna().unique(),# all the minors the student was inscribed to 'first_year': grouped_entries['start_year'].min(), # smallest start year entry 'last_year' : grouped_entries['end_year'].max(), # highest year a studend appears #some brainfuck lines ;) 'first_semestre' : grouped_entries.sort_values(by=['start_year', 'periode_pedago'], axis=0, ascending=True)['periode_pedago'].values[0], # eg master1 'last_semestre' : grouped_entries.sort_values(by=['end_year', 'periode_pedago'], axis=0, ascending=False)['periode_pedago'].values[0], # the name of the last semester (eg. 
master 3) 'semesters_done' : grouped_entries.sort_values(by=['end_year'])['periode_pedago'].values, 'nombre_semestres' : len(grouped_entries), # how many different semesters the student did at epfl 'project_master' : grouped_entries['Project_Master'].sum() > 0 # if student did the master project } # if the student did any minor or spec df_map['mineur_or_spe'] = len(df_map['Specialisation']) + len(df_map['Mineur']) > 0 # True if the student has not (yet) finished the master df_map['to_remove'] = df_map['nombre_semestres'] <= 1 or (not df_map['mineur_or_spe'] and df_map['nombre_semestres'] <= 2) # if there are two spe, take the latest one if len(df_map['Specialisation']) > 1: df_map['Specialisation'] = grouped_entries[grouped_entries['end_year'] == df_map['last_year']]['Specialisation'].values[0] elif len(df_map['Specialisation']) == 1: df_map['Specialisation'] = df_map['Specialisation'][0] # take the spe out of the array # set correct NaNs if len(df_map['Mineur']) == 0: df_map['Mineur'] = np.nan if len(df_map['Specialisation']) == 0: df_map['Specialisation'] = np.nan # make Dataframe for (k, v) in df_map.items(): df_map[k] = [v] return pd.DataFrame.from_dict(df_map) grouped_master = master_data.groupby(by='No_Sciper', as_index=False, sort=True).apply(group_master_data) grouped_master.head() """ Explanation: Groupby and aggregate by student End of explanation """ students_to_remove = grouped_master[grouped_master['to_remove']] len(students_to_remove) grouped_master = grouped_master[~grouped_master['to_remove']] """ Explanation: remove the students that have not finished their master (described above) End of explanation """ grouped_master.set_index('No_Sciper', inplace=True) len(grouped_master) """ Explanation: Set the Sciper number as index End of explanation """ master_epfl = grouped_master.copy() plt.hist(master_epfl['nombre_semestres'], bins=8, range=[1, 9]) master_epfl.describe() """ Explanation: How many month did it take each student First without any further cleaning and thinking End of explanation """ def add_semester_if_no_PM(row): t = row.nombre_semestres if(not row.project_master): return t + 1 else: return t master_added_sem = master_epfl.copy() master_added_sem['nombre_semestres'] = master_epfl.apply(add_semester_if_no_PM, axis=1) master_added_sem.describe() """ Explanation: The mean of the number of semesters is around 3.6 semesters. However not all students are registered in the the Master Project (missing data?), but in the plot above we assume everyone finished their master. So we add to all students that are not inscribed in a Master Project one semester: End of explanation """ plt.hist(master_added_sem['nombre_semestres'], bins=8, range=[1, 9]) """ Explanation: It made a big difference. But that seems reasonable since most students do a Mineur/Spe which makes already (at least) 4 semesters (MP included) End of explanation """ master_with_spe = master_added_sem[~pd.isnull(master_added_sem['Specialisation'])] master_with_spe.describe() """ Explanation: And per specialisation We will take the 'added_semester' data since we think it is more accurate. Note that the average number of semesters is only 0.3 higher than the one of all students. 
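As a side note, the assignment statement above asks for the "average stay at EPFL", which is more naturally expressed in months; a rough conversion (assuming one pedagogic semester corresponds to about six months, which is an assumption rather than something recorded in the data) would be:

# rough conversion of the average stay from semesters to months (assumes ~6 months per semester)
average_stay_months = master_added_sem['nombre_semestres'].mean() * 6
print("Average stay at EPFL: %.1f months" % average_stay_months)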
End of explanation """ # how many percent of students do a specialisation 100/len(master_added_sem) * len(master_with_spe) # how many percent of students do a minor 100/len(master_added_sem) * len(master_added_sem[~pd.isnull(master_added_sem['Mineur'])]) """ Explanation: almost 50% of all students do a minor or a specialisation: End of explanation """ # The different specialisations master_with_spe['Specialisation'].unique() """ Explanation: What are the different specialisations and how popular are they? End of explanation """ sns.countplot(y="Specialisation", data=master_with_spe); """ Explanation: A nice plot showing that there is a huge difference in the number of people taking the different Specialisations End of explanation """ ax = sns.barplot(x='nombre_semestres', y='Specialisation', data=master_with_spe); plt.axvline(master_with_spe['nombre_semestres'].mean(), color='b', linestyle='dashed', linewidth=2) plt.axvline(master_added_sem['nombre_semestres'].mean(), color='r', linestyle='dashed', linewidth=2) """ Explanation: In the time taken to colmplete their studies they don't differ a lot. (The red dotted line is the averarge of all students, the blue line the average of the students with specialisations, the black bar is the confidence interval) End of explanation """ def calc_diff_from_average(col, average): return (lambda row: row[col] - average) master_with_spe['diff_from_average'] = master_with_spe.apply(calc_diff_from_average('nombre_semestres', master_added_sem['nombre_semestres'].mean()), axis=1) sns.barplot(x='diff_from_average', y='Specialisation', data=master_with_spe); """ Explanation: Compared to all master students, it seems that only the first two specs actually inply a longer stay at EPFL (their confidence interval just barely don't touch the general mean) End of explanation """ import scipy.stats as stats """ Explanation: Test if the difference in average is statistically significant for each spec End of explanation """ specialisation = master_with_spe.Specialisation.unique() for spec in specialisation: print(spec) data_spec = master_with_spe[master_with_spe.Specialisation == spec] print (stats.ttest_ind(a = data_spec.nombre_semestres, b= master_added_sem.nombre_semestres, equal_var=False)) print('\n') """ Explanation: We want to see if the difference of the average number of semesters for a particular specialisation and for all the master students in informatique are statistically significant with a threshold of 95% We use a Welch's T-Test (which does not assume equal population variance): it measures whether the average value differs significantly across samples. End of explanation """ Female = master_added_sem[master_added_sem.Sex == 'Madame'] Female.describe() Male = master_added_sem[master_added_sem.Sex == 'Monsieur'] Male.describe() """ Explanation: Only Signals, Images and Interfaces and Internet computing have a pvalue < 0.05 (pvalue = 0.044 and 0.028 resp.) thus we can reject the null hypothesis and tell that the difference is significant. 
For all other specialisations we cannot reject the null hypothesis of identical average scores (pvalue > 0.05), which means that for those specialisations we cannot say the difference in averages is statistically significant.
Female vs Male (Optional)
Except for the number of students, male and female students are quite similar
End of explanation
"""
stats.ttest_ind(a = Female.nombre_semestres, b= Male.nombre_semestres, equal_var=False)
"""
Explanation: We want to see if the difference in the average number of semesters between female and male students is statistically significant with a threshold of 95%
We use a Welch's T-Test (which does not assume equal population variance): it measures whether the average value differs significantly across samples.
End of explanation
"""
sns.regplot(x=Male.last_year, y=Male.nombre_semestres, marker="o",ci=95, label='Male', scatter_kws={'s':100})
ax =sns.regplot(x=Female.last_year, y=Female.nombre_semestres, marker="d", ci= 95, label='Female', color='r')
ax.legend(loc="best")
"""
Explanation: Since the pvalue is > 0.05, we cannot reject the null hypothesis of identical average scores, which means we cannot say that the difference in averages is statistically significant; in fact the two groups are quite similar (pvalue close to 1).
Also, over time not much has changed. The female students might show a small upwards trend, but the confidence interval is quite big.
End of explanation
"""
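For reference, the Welch statistic that scipy reports above can be computed by hand. The snippet below is a self-contained sketch using synthetic samples (x and y are made up and stand in for the two groups being compared; they are not the EPFL data):

import numpy as np
from scipy import stats

def welch_t(a, b):
    # Welch's t statistic and Welch-Satterthwaite degrees of freedom
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    dof = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, dof

x = np.random.normal(3.5, 1.0, size=40)
y = np.random.normal(3.6, 1.0, size=400)
t, dof = welch_t(x, y)
print(t, 2 * stats.t.sf(abs(t), dof))          # manual statistic and two-sided p-value
print(stats.ttest_ind(x, y, equal_var=False))  # should agree with the manual computation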
kongjy/hyperAFM
Notebooks/multiple regression_1-varun.ipynb
mit
len(Amatrix[0])
# falling back to simple linear regression using only the a values (Amatrix), because of an error from the .fit function
from sklearn import linear_model
regr=linear_model.LinearRegression()  # performing the simple linear regression
regr.fit(a[0].reshape(len(a),1),yactual.reshape(len(yactual),1))
"""
Explanation: In the above cell, I have used the first element of the array for calculating the 'yactual' value.
End of explanation
"""
plt.scatter(yactual.reshape(len(yactual),1),a[0].reshape(len(yactual),1))
plt.plot([0,2],[0,23],lw=4,color='red')  # the line Y=2a+b+9c
plt.show()
"""
Explanation: The .fit function raises an error saying that its first argument must be at most two-dimensional. When I try to pass in all three matrices A, B, C, it complains that the first argument is four-dimensional, which I couldn't resolve.
Hence, to see how it works out for a single matrix, I have used the fit function on that matrix alone.
End of explanation
"""
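The four-dimensional error above usually means the predictors were passed as separate (or stacked) higher-dimensional arrays, while sklearn expects a single design matrix of shape (n_samples, n_features). The sketch below shows one way to stack three predictors column-wise; the names a, b, c, the random values and the shapes are assumptions for illustration (the original Amatrix construction is not shown), and the target follows the commented line Y = 2a + b + 9c:

import numpy as np
from sklearn import linear_model

n = 100
a = np.random.rand(n)                   # hypothetical 1-D predictors
b = np.random.rand(n)
c = np.random.rand(n)
yactual = 2 * a + b + 9 * c             # the relationship mentioned in the plot comment

X = np.column_stack([a, b, c])          # design matrix of shape (n_samples, n_features) = (100, 3)
regr = linear_model.LinearRegression()
regr.fit(X, yactual)
print(regr.coef_, regr.intercept_)      # should recover approximately [2, 1, 9] and 0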
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session08/Day1/OOP_problem.ipynb
mit
import numpy as np import matplotlib.pyplot as plt %matplotlib notebook """ Explanation: Building a Digital Orrery An exercise in Object Oriented Programming Version 0.1 It is your goal in this exercise to construct a Digital Orrery. An orrery is a mechanical model of the Solar System. Here, we will generalize this to anything that is mechanically similar to the solar system: a collection of things bound gravitationally. <img src="https://upload.wikimedia.org/wikipedia/commons/4/48/Grand_orrery_in_Putnam_Gallery%2C_2009-11-24.jpg" alt="Orrery" width="600"/> (image: wikimedia) By J. S. Oishi (Bates College) End of explanation """ class MyFirstClass(): pass class MySecondClass(): pass """ Explanation: Problem 1) Building a basic set of objects Our first task is to map our problem onto a set of objects that we instantiate (that is, make instances of) in order to solve our problem. Let's outline the scope of our problem. A solar system exists in a Universe; here we can ignore the gravitational perturbation on the Solar System from the rest of the Universe. Our model will consist of a small number of bodies containing mass. It might also contain bodies without mass, so called "test particles". The problem to be solved numerically is the gravitational N-body problem, $$\ddot{\mathbf{r}}i = -G\sum{i \ne j} \frac{m_j \mathbf{r}{ij}}{r{ij}^3},$$ where $\mathbf{r}_{ij} \equiv \mathbf{r_i} - \mathbf{r_j}$. This task itself can be broken into two components: the force calculation the ODE integrator to advance $\mathbf{r}_i$ and $\dot{\mathbf{r}}_i$ forward in time Problem 1a In disucssion with a classmate, sketch out a set of classes that you will need to complete this project. Don't worry about things like numerical integrators yet. Also, sketch out interfaces (start with the constructor), but don't worry about writing code right now. Once you're done, find me and I'll give you the minimal list of objects. End of explanation """ r0 = np.array([0,0,0]) rdot0 = np.array([0,0,0]) b = Body(1,r0, rdot0) """ Explanation: Problem 1b Wire them up! Now that you have the list, try them out. Python makes use of duck typing, you should too. That is, if your object has a mass m, a position r and a velocity rdot, it is a Body. End of explanation """ # good luck! """ Explanation: Problem 2 Now, we code the numerical algorithms. We're going to do the most simple things possible: a brute force ("direct N-Body" if you're feeling fancy) force calculation, and a leapfrog time integrator. The leapfrog scheme is an explicit, second order scheme given by $$r_{i+1} = r_{i} + v_{i} \Delta t + \frac{\Delta t^2}{2} a_{i}$$ $$v_{i+1} = v_{i} + \frac{\Delta t}{2} (a_{i} + a_{i+1}),$$ where $\Delta t$ is the time step (which we'll just keep constant), and the subscript refers to the iteration number $i$. Note that this scheme requires a force update in between calculating $r_{i+1}$ and $v_{i+1}$. Problem 2a Write a method that implements the force integrator. Test it on simple cases: * two equal 1 $M_\odot$ objects in your universe, 1 AU apart * a $1\ M_\odot$ object and a $1\ M_{\oplus}$ object, 1 AU apart Problem 2b Write the leapfrog integration as a method in the Universe class. Test it on one particle with no force (what should it do?) Problem 2c Wire it all up! Try a 3-body calculation of the Earth-Sun-Moon system. Try the Earth-Jupiter-Sun system! 
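For reference, here is a standalone transcription of the two numerical pieces described above (the direct force sum and the leapfrog update). It deliberately ignores the Body/Universe class design from Problem 1, so treat it as a sketch for checking your own methods rather than the intended solution:

import numpy as np

G = 6.674e-11  # SI units; any consistent unit system works

def accelerations(masses, positions):
    # brute-force ("direct N-body") sum: a_i = -G * sum_{j != i} m_j r_ij / |r_ij|^3
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = positions[i] - positions[j]
            acc[i] -= G * masses[j] * rij / np.linalg.norm(rij) ** 3
    return acc

def leapfrog_step(masses, r, v, a, dt):
    # matches the update equations above: drift with the old acceleration,
    # recompute the forces, then kick with the average of old and new accelerations
    r_new = r + v * dt + 0.5 * a * dt ** 2
    a_new = accelerations(masses, r_new)
    v_new = v + 0.5 * (a + a_new) * dt
    return r_new, v_new, a_new

# minimal two-body check (roughly Sun + Earth, SI units), stepping one day at a time
m = np.array([1.989e30, 5.972e24])
r = np.array([[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 0.0], [0.0, 2.978e4, 0.0]])
a = accelerations(m, r)
r, v, a = leapfrog_step(m, r, v, a, dt=86400.0)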
Challenge Problem Construct a visualization method for the Universe class Read about the Fast Multipole Method (FMM) here and implement one for the force calculation End of explanation """
CalPolyPat/phys202-2015-work
assignments/assignment05/InteractEx01.ipynb
mit
%matplotlib inline from matplotlib import pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display """ Explanation: Interact Exercise 01 Import End of explanation """ def print_sum(a, b): print(a+b) """ Explanation: Interact basics Write a print_sum function that prints the sum of its arguments a and b. End of explanation """ interact(print_sum, a=(-10,10,.1), b=(-8, 8, 2)) assert True # leave this for grading the print_sum exercise """ Explanation: Use the interact function to interact with the print_sum function. a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1 b should be an integer slider the interval [-8, 8] with step sizes of 2. End of explanation """ def print_string(s, length=False): if length==True: print("%s has length %d" %(s, len(s))) else: print(s) """ Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True. End of explanation """ interact(print_string, s = "Hello World", length = False) assert True # leave this for grading the print_string exercise """ Explanation: Use the interact function to interact with the print_string function. s should be a textbox with the initial value "Hello World!". length should be a checkbox with an initial value of True. End of explanation """
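As a side note, the fixed helper imported at the top can pin an argument so that no widget is created for it; a small sketch reusing the print_string function defined above:

# `length` is held constant here, so only the textbox for `s` is displayed
interact(print_string, s="Hello World!", length=fixed(True))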
nathanielng/machine-learning
perceptron/logistic-regression.ipynb
apache-2.0
import numpy as np import matplotlib.pyplot as plt from scipy.optimize import minimize from numpy.random import permutation from sympy import var, diff, exp, latex, factor, log, simplify from IPython.display import display, Math, Latex %matplotlib inline """ Explanation: The Linear Model II <hr> linear classification | classification error | perceptron learning algorithm, pocket algorithm, ... linear regression | squared error | pseudo-inverse, ... third linear model (logistic regression) | cross-entropy error | gradient descent, ... nonlinear transforms <hr> 1. The Logistic Regression Linear Model 1.1 Hypothesis Functions In the case of linear models, inputs are combined linearly using weights, and summed into a signal, $s$: $$s = \sum\limits_{i=0}^d w_i x_i$$ Next, the signal passes through a function, given by: Linear classification: $h\left(\mathbf{x}\right) = \text{sign}\left(s\right)$ Linear regression: $h\left(\mathbf{x}\right) = s$ Logistic regression: $h\left(\mathbf{x}\right) = \theta\left(s\right)$ For logistic regression, we use a "soft threshold", by choosing a logistic function, $\theta$, that has a sigmoidal shape. The sigmoidal function can take on various forms, such as the following: $$\theta\left(s\right) = \frac{e^s}{1+e^s}$$ This model implements a probability that has a genuine probability interpretation. 1.2 Likelihood Measure and Probabilistic Connotations The likelihood of a dataset, $\mathcal{D} = \left(\mathbf{x_1},y_1\right), \dots, \left(\mathbf{x_N},y_N\right)$, that we wish to maximize is given by: $$\prod\limits_{n=1}^N P\left(y_n | \mathbf{x_n}\right) = \prod\limits_{n=1}^N \theta\left(y_n \mathbf{w^T x_n}\right)$$ It is possible to derive an error measure (that would maximise the above likelihood measure), which has a probabilistic connotation, and is called the in-sample "cross-entropy" error. It is based on assuming the hypothesis (of the logistic regression function) as the target function: $$E_{in}\left(\mathbf{w}\right) = \frac{1}{N}\sum\limits_{n=1}^N \ln\left[1 + \exp\left(-y_n \mathbf{w^T x_n}\right)\right]$$ $$E_{in}\left(\mathbf{w}\right) = \frac{1}{N}\sum\limits_{n=1}^N e\left[ h\left(\mathbf{x_n}\right), y_n \right]$$ While the above does not have a closed form solution, it is a convex function and therefore we can find the weights corresponding to the minimum of the above error measure using various techniques. Such techniques include gradient descent (and its variations, such as stochastic gradient descent and batch gradient descent) and there are others which make use of second order derivatives (such as the conjugate gradient method) or Hessians. 
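As a concrete illustration of these definitions (a sketch with made-up numbers, independent of the data generated later in the notebook):

import numpy as np

def theta(s):
    # logistic function theta(s) = e^s / (1 + e^s), written in a numerically friendlier form
    return 1.0 / (1.0 + np.exp(-s))

def cross_entropy_error(w, X, y):
    # E_in(w) = mean_n ln(1 + exp(-y_n w.x_n)); X has shape (N, d+1), y entries are +/-1
    return np.mean(np.log(1.0 + np.exp(-y * X.dot(w))))

X = np.array([[1.0, 0.5, -1.2],
              [1.0, -0.3, 0.8]])   # two made-up points with x_0 = 1
y = np.array([1, -1])
w = np.zeros(3)
print(theta(0.0), cross_entropy_error(w, X, y))   # 0.5 and ln(2)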
1.3 Libraries Used End of explanation """ var('x y w') logistic_cost = log(1 + exp(-y*w*x)) display(Math(latex(logistic_cost))) logistic_grad = logistic_cost.diff(w) display(Math(latex(logistic_grad))) display(Math(latex(simplify(logistic_grad)))) """ Explanation: 1.4 Gradient Descent for Logistic Regression 1.4.1 Gradient of the Cost Function - Derivation (using Sympy) The Python package, sympy, can be used to obtain the form for the gradient of the cost function in logistic regression: End of explanation """ def in_sample_err(N, sigma = 0.1, d = 8): return (sigma**2)*(1 - (d+1)/N) N_arr = [10, 25, 100, 500, 1000] err = [ in_sample_err(N) for N in N_arr ] for i in range(len(N_arr)): print("N = {:4}, E_in = {}".format(N_arr[i],err[i])) """ Explanation: 1.4.2 Gradient Descent Algorithm The gradient descent algorithm is a means to find the minimum of a function, starting from some initial weight, $\mathbf{w}()$. The weights are adjusted at each iteration, by moving them in the direction of the steepest descent ($\nabla E_{in}$). A learning rate, $\eta$, is used to scale the gradient, $\nabla E_{in}$. $$\mathbf{w}(t+1) = \mathbf{w}(t) - \eta\nabla E_{in}$$ For the case of logistic regression, the gradient of the error measure with respect to the weights, is calculated as: $$\nabla E_{in}\left(\mathbf{w}\right) = -\frac{1}{N}\sum\limits_{n=1}^N \frac{y_n\mathbf{x_N}}{1 + \exp\left(y_n \mathbf{w^T}(t)\mathbf{x_n}\right)}$$ 2. Linear Regression Error with Noisy Targets 2.1 Effect of Sample Size on In-Sample Errors Consider a noisy target, $y=\mathbf{w^{*T}x} + \epsilon$ where $\epsilon$ is a noise term with zero mean and variance, $\sigma^2$ The in-sample error on a training set, $\mathcal{D}$, $$\mathbb{E}\mathcal{D}\left[E{in}\left(\mathbf{w_{lin}}\right)\right] = \sigma^2\left(1 - \frac{d+1}{N}\right)$$ End of explanation """ result = minimize(lambda x: (0.008-in_sample_err(x))**2, x0=[20.0], tol=1e-11) if result.success is True: N = result.x[0] print("N = {}".format(N)) print("err({}) = {}".format(int(N),in_sample_err(int(N)))) print("err({}) = {}".format(int(N+1),in_sample_err(int(N+1)))) """ Explanation: Here, we can see that, for a noisy target, as the number of examples, $N$, increases, the in-sample error also increases. End of explanation """ def add_nonlinear_features(X): N = X.shape[0] X = np.hstack((X,np.zeros((N,3)))) X[:,3] = X[:,1]*X[:,2] X[:,4] = X[:,1]**2 X[:,5] = X[:,2]**2 return(X) def plot_data_nonlinear(fig,plot_id,w_arr,w_colors,titles): p = 2.0 x1 = np.linspace(-p,p,100) x2 = np.linspace(-p,p,100) X1,X2 = np.meshgrid(x1,x2) X1X2 = X1*X2 X1_sq= X1**2 X2_sq= X2**2 for i,w in enumerate(w_arr): Y = w[0] + w[1]*X1 + w[2]*X2 + w[3]*X1X2 + \ w[4]*X1_sq + w[5]*X2_sq ax = fig.add_subplot(plot_id[i]) cp0 = ax.contour(X1,X2,Y,1,linewidth=4, levels=[0.0], colors=w_colors[i]) ax.clabel(cp0, inline=True, fontsize=14) #cp1 = ax.contour(X1,X2,Y,N=1,linewidth=4, levels=[-1.0, 1.0], # linestyles='dashed', colors=w_colors[i], alpha=0.3) cp1 = ax.contourf(X1,X2,Y,1,linewidth=4, linestyles='dashed', alpha=0.8) ax.clabel(cp1, inline=True, fontsize=14) plt.colorbar(cp1) ax.set_title(titles[i]) #ax.set_axis_off() #ax.axis('off') ax.axes.xaxis.set_ticks([]) ax.axes.yaxis.set_ticks([]) """ Explanation: If we desire an in-sample error of not more than 0.008, then the maximum number of examples we should have is 44. 3. 
Non-linear Transforms 3.1 Background Consider the linear transform $z_i = \phi_i\left(\mathbf{x}\right)$ or $\mathbf{z} = \Phi\left(\mathbf{x}\right)$, with the following mapping: $$\mathbf{x} = \left(x_0, x_1, \dots, x_d\right) \rightarrow \mathbf{z} = \left(z_0, z_1, \dots, z_{\tilde d}\right)$$ The final hypothesis, $\mathcal{X}$ space is: $$g\left(\mathbf{x}\right) = \mathbf{\tilde w^T} \Phi\left(\mathbf{x}\right)$$ $$g\left(\mathbf{x}\right) = \left(w_0, w_1, w_2\right) \left(\begin{array}{c}1\x_1^2\x_2^2\end{array}\right) = w_0 + w_1 x_1^2 + w_2 x_2^2$$ The non-linear transforms are implemented in the subroutine add_nonlinear_features() below. The contour plots corresponding to the non-linear transforms are implemented in plot_data_nonlinear(). End of explanation """ w1 = np.array([ 1, 0, 0, 0, 0.0, 1.0]) w2 = np.array([ 1, 0, 0, 0, 1.0, 0.0]) w3 = np.array([ 1, 0, 0, 0, 1.0, 1.0]) w4 = np.array([ 1, 0, 0, 0,-1.0, 1.0]) w5 = np.array([ 1, 0, 0, 0, 1.0,-1.0]) w_arr = [w1,w2,w3,w4,w5] w_colors = ['red','orange','green','blue','black'] titles = ['(a) $w_1$ = 0, $w_2$ > 0', '(b) $w_1$ > 0, $w_2$ = 0', '(c) $w_1$ > 0, $w_2$ > 0', '(d) $w_1$ < 0, $w_2$ > 0', '(e) $w_1$ > 0, $w_2$ < 0'] plot_id_arr = [ 231, 232, 233, 234, 235 ] fig = plt.figure(figsize=(12,7)) plot_data_nonlinear(fig,plot_id_arr,w_arr,w_colors,titles) """ Explanation: Here we wish to consider the effects of the sign of the weights $\tilde w_1, \tilde w_2$ on the decision boundary. For simplicity, we choose the weights from [-1, 0, 1], as similar shapes would be obtained if the set of weights were scaled to something like [-2, 0, 2]. End of explanation """ var('u v') expr = (u*exp(v) -2*v*exp(-u))**2 display(Math(latex(expr))) """ Explanation: In the second last example, $\tilde w_1 <0, \tilde w_2 > 0$, (with $x_0 = 1$), we have: $$\mathbf{x} = \left(1, x_1, x_2\right) \rightarrow \mathbf{z} = \left(1, x_1^2, x_2^2\right)$$ $$g\left(\mathbf{x}\right) = 1 - x_1^2 + x_2^2$$ 4. Gradient Descent 4.1 Gradient Descent Example Using Sympy This example provides a demonstration of how the package sympy can be used to find the gradient of an arbitrary function, and perform gradient descent to the minimum of the function. Our arbitrary function in this case is: $$E\left(u,v\right) = \left(ue^v -2ve^{-u}\right)^2$$ End of explanation """ derivative_u = expr.diff(u) display(Math(latex(derivative_u))) display(Math(latex(factor(derivative_u)))) """ Explanation: The partial derivative of the function, $E$, with respect to $u$ is: End of explanation """ derivative_v = expr.diff(v) display(Math(latex(derivative_v))) display(Math(latex(factor(derivative_v)))) """ Explanation: The partial derivative of the function, $E$, with respect to $v$ is: End of explanation """ def err(uv): u = uv[0] v = uv[1] ev = np.exp(v) e_u= np.exp(-u) return (u*ev - 2.0*v*e_u)**2 def err_gradient(uv): u = uv[0] v = uv[1] ev = np.exp(v) e_u= np.exp(-u) return np.array([ 2.0*(ev + 2.0*v*e_u)*(u*ev - 2.0*v*e_u), 2.0*(u*ev - 2.0*e_u)*(u*ev - 2.0*v*e_u) ]) def err_gradient2(uv): du = derivative_u.subs(u,uv[0]).subs(v,uv[1]).evalf() dv = derivative_v.subs(u,uv[0]).subs(v,uv[1]).evalf() return np.array([ du, dv ], dtype=float) """ Explanation: Next, the functions to implement the gradient descent are implemented as follows. In the first case, err_gradient(), the derivatives are specified in the code. 
In the second case, err_gradient2(), the derivatives are calculated using sympy + evalf: End of explanation """ def gradient_descent(x0, err, d_err, eta=0.1): x = x0 for i in range(20): e = err(x) de = d_err(x) print("%2d: x = (%8.5f, %8.5f) | err' = (%8.4f, %8.4f) | err = %.3e" % (i,x[0],x[1],de[0],de[1],e)) if e < 1e-14: break x = x - eta*de def coordinate_descent(x0, err, d_err, eta=0.1): x = x0 for i in range(15): # Step 1: Move along the u-coordinate e = err(x) de = d_err(x) print("%2d: x = (%8.5f, %8.5f) | err' = (%8.4f, --------) | err = %.3e" % (i,x[0],x[1],de[0],e)) x[0] = x[0] - eta*de[0] if e < 1e-14: break # Step 2: Move along the v-coordinate e = err(x) de = d_err(x) print("%2d: x = (%8.5f, %8.5f) | err' = (--------, %8.4f) | err = %.3e" % (i,x[0],x[1],de[1],e)) x[1] = x[1] - eta*de[1] if e < 1e-14: break x0 = np.array([1.0,1.0]) gradient_descent(x0=x0, err=err, d_err=err_gradient) gradient_descent(x0=x0, err=err, d_err=err_gradient2) """ Explanation: To follow the gradient to the function minimum, we can either use $\nabla E$ in the gradient descent approach, or we can alternate between the individual derivatives, $\frac{\partial E}{\partial u}$ and $\frac{\partial E}{\partial v}$ in the coordinate descent approach. End of explanation """ err_fn = lambda x: (x[0]*np.exp([1]) - 2.0*x[1]*np.exp(-x[0]))**2 result = minimize(err_fn, x0=np.array([1.0,1.0]), tol=1e-5, method='CG') if result.success is True: x = result.x print("x = {}".format(x)) print("f = {}".format(result.fun)) print("evalf = {}".format(expr.subs(u,x[0]).subs(v,x[1]).evalf())) """ Explanation: Here, we can see that in both approaches of gradient descent above, it takes about 10 iterations to get the error below $10^{-14}$. For comparison, an attempt to find the roots of the minimum via scipy.optimize.minimize was made, but it yielded a different result. This could be due to the fact that another method was used (in this case, conjugate gradient). At the moment, scipy.optimize.minimize, does not appear to have a gradient descent implementation. End of explanation """ x0 = np.array([1.0,1.0]) coordinate_descent(x0=x0, err=err, d_err=err_gradient) x0 = np.array([1.0,1.0]) coordinate_descent(x0=x0, err=err, d_err=err_gradient2) """ Explanation: 4.2 Coordinate Descent Using the coordinate descent approach, the error minimization takes place more slowly. Even after 15 iterations, the error remains at only ~0.15, regardless of implementation. End of explanation """ def generate_data(n,seed=None): if seed is not None: np.random.seed(seed) x0 = np.ones(n) x1 = np.random.uniform(low=-1,high=1,size=(2,n)) return np.vstack((x0,x1)).T def get_random_line(seed=None): X = generate_data(2,seed=seed) x = X[:,1] y = X[:,2] m = (y[1]-y[0])/(x[1]-x[0]) c = y[0] - m*x[0] return np.array([-c,-m,1]) def draw_line(ax,w,marker='g--',label=None): m = -w[1]/w[2] c = -w[0]/w[2] x = np.linspace(-1,1,20) y = m*x + c if label is None: ax.plot(x,y,marker) else: ax.plot(x,y,marker,label=label) def get_hypothesis(X,w): h=np.dot(X,w) return np.sign(h).astype(int) """ Explanation: 5. Logistic Regression 5.1 Creating a target function For simplicity, we choose a target function, $f$, to be a 0/1 probability. For visualization purposes, we choose the domain of interest to be in 2 dimensions, and choose $\mathbf{x}$ to be picked uniformly from the region $\mathcal{X}=\left[-1,1\right] \times \left[-1,1\right]$, where $\times$ denotes the Cartesian Product. 
A random line is created, and to ensure that it falls within the region of interest, it is created from two random points, $(x_0,y_0)$ and $(x_1,y_1)$ which are generated within $\mathcal{X}$. The equation for this line in slope-intercept form and in the hypothesis / weights can be shown to be: Slope-Intercept Form $$m = - \frac{w_1}{w_2}, c = - \frac{w_0}{w_2}$$ Hypothesis Weights Form $$\mathbf{w} = \left(-c,-m,1\right)$$ End of explanation """ def plot_data(fig,plot_id,X,y=None,w_arr=None,my_x=None,title=None): ax = fig.add_subplot(plot_id) if y is None: ax.plot(X[:,1],X[:,2],'gx') else: ax.plot(X[y > 0,1],X[y > 0,2],'b+',label='Positive (+)') ax.plot(X[y < 0,1],X[y < 0,2],'ro',label='Negative (-)') ax.set_xlim(-1,1) ax.set_ylim(-1,1) ax.grid(True) if w_arr is not None: if isinstance(w_arr,list) is not True: w_arr=[w_arr] for i,w in enumerate(w_arr): if i==0: draw_line(ax,w,'g-',label='Theoretical') else: draw_line(ax,w,'g--') if my_x is not None: ax.plot([my_x[0]],[my_x[1]],'kx',markersize=10) if title is not None: ax.set_title(title) ax.legend(loc='best',frameon=True) def create_dataset(N,make_plot=True,seed=None): X = generate_data(N,seed=seed) w_theoretical = get_random_line() y = get_hypothesis(X,w_theoretical) if make_plot is True: fig = plt.figure(figsize=(7,5)) plot_data(fig,111,X,y,w_theoretical,title="Initial Dataset") return X,y,w_theoretical """ Explanation: 5.2 Plotting the Data End of explanation """ N = 100 X,y,w_theoretical = create_dataset(N=N,make_plot=True,seed=127) """ Explanation: We choose 100 training points at random from $\mathcal{X}$ and record the outputs, $y_n$, for each of the points, $\mathbf{x_n}$. End of explanation """ w = w_theoretical def cross_entropy(y_i,w,x): return np.log(1 + np.exp(-y_i*np.dot(x,w))) def gradient(y_i,w,x): return -y_i*x/(1+np.exp(y_i*np.dot(x,w))) assert np.allclose(cross_entropy(y[0],w,X[0,:]),np.log(1 + np.exp(-y[0]*np.dot(X[0,:],w)))) assert np.allclose(gradient(y[0],w,X[0,:]),-y[0]*X[0,:]/(1+np.exp(y[0]*np.dot(X[0,:],w)))) np.mean(cross_entropy(y,w,X)) np.set_printoptions(precision=4) assert np.linalg.norm(np.array([1.0, 2.0, 3.0])) == np.sqrt(1**2 + 2**2 + 3**2) def run_simulation(N=100,eta=0.01,make_plot=None,w0 = np.array([0,0,0],dtype=float)): X = generate_data(N) w_theoretical = get_random_line() y = get_hypothesis(X,w_theoretical) w_arr = [] w_arr2= [] e_arr = [] w = w0 h = get_hypothesis(X,w) assert y.dtype == h.dtype for t_epoch in range(1000): w_epoch = w for i,p in enumerate(permutation(N)): grad = gradient(y[p],w,X[p,:]) w = w - eta*grad; w_arr2.append(w) #Estimate out-of-sample error by re-generating data X_out = generate_data(N) h = get_hypothesis(X_out,w_theoretical) misclassified = np.mean(h != y) #E_out = np.mean(cross_entropy(y,w,X)) E_out = np.mean(cross_entropy(h,w,X_out)) delta_w = np.linalg.norm(w - w_epoch) w_arr.append(w) e_arr.append(E_out) #if t_epoch % 20 == 0: # print("epoch{:4}: miss={}, delta_w={}, E_out={}, w={}".format( # t_epoch, misclassified, np.round(delta_w,5), E_out, w)) if delta_w < 0.01: break print("Epochs = {}, E_out = {}, w = {}".format(t_epoch, E_out, w)) if make_plot is not None: fig = plt.figure(figsize=(7,5)) plot_data(fig,111,X,y,[w_theoretical,w],title="Converged") return e_arr, np.array(w_arr), X, y, np.array(w_arr2) """ Explanation: 5.3 Gradient Descent The gradient descent algorithm adjust the weights in the direction of the 'steepest descent' ($\nabla E_{in}$), with the adjustment of a learning rate, $\eta$: $$\mathbf{w}(t+1) = \mathbf{w}(t) - \eta\nabla E_{in}$$ We thus need 
to know the gradient of the error measure with respect to the weights, i.e.: $$\nabla E_{in}\left(\mathbf{w}\right) = -\frac{1}{N}\sum\limits_{n=1}^N \frac{y_n\mathbf{x_N}}{1 + \exp\left(y_n \mathbf{w^T}(t)\mathbf{x_n}\right)}$$ $$E_{in}\left(\mathbf{w}\right) = \frac{1}{N}\sum\limits_{n=1}^N \ln\left[1 + \exp\left(-y_n \mathbf{w^T x_n}\right)\right]$$ End of explanation """ t_arr = [] e_arr = [] w_arr = [] for n in range(50): e, w, _, _, _ = run_simulation() t_arr.append(len(e)-1) #Should I subtract 1 here? e_arr.append(e[-1]) w_arr.append(w[-1]) """ Explanation: Due to the randomness of starting with different target functions each time, we run stochastic gradient descent multiple times and consider the statistics in terms of the average number of epochs and the average out-of-sample errors. End of explanation """ print("<E_out> = {}".format(np.mean(e_arr))) print("<Epochs> = {}".format(np.mean(t_arr))) """ Explanation: The average out of sample error and the average number of epochs from the multiple runs above are: End of explanation """ def normalize_weights(w_arr): # You can't normalize the weights as this changes the cross entropy. w_arr[:,1] = w_arr[:,1] / w_arr[:,0] w_arr[:,2] = w_arr[:,2] / w_arr[:,0] w_arr[:,0] = 1.0 return w_arr def calculate_J(w0,w1,w2,X,y): J = np.zeros((w1.size,w2.size)) for j in range(w1.size): for i in range(w2.size): W = np.array([w0, w1[j], w2[i]]) J[i,j] = np.mean(cross_entropy(y,W,X)) return J def get_WJ(w_arr,X,y,n=100): w_arr = np.array(w_arr) w1_min = np.min(w_arr[:,1]) w2_min = np.min(w_arr[:,2]) w1_max = np.max(w_arr[:,1]) w2_max = np.max(w_arr[:,2]) sp = 10.0 w0 = w_arr[-1,0] # take a 2D slice through the final value of w_0 in the 3D space [w0,w1,w2] w1 = np.linspace(w1_min-sp,w1_max+sp,n) w2 = np.linspace(w2_min-sp,w2_max+sp,n) W1, W2 = np.meshgrid(w1,w2) J = calculate_J(w0,w1,w2,X,y) return w_arr,w1,w2,W1,W2,J from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm def visualise_SGD_3D(e_arr,w_arr,w_arr2,X,y,epoch_interval,elevation=30,azimuth=75): w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y) w0 = w_arr[-1,0] # take a 2D slice through the final value of w_0 in the 3D space [w0,w1,w2] z_arr = [ np.mean(cross_entropy(y,[w0,w_i[1],w_i[2]],X)) for w_i in w_arr ] z_arr2 = [ np.mean(cross_entropy(y,[w0,w_i[1],w_i[2]],X)) for w_i in w_arr2 ] fig = plt.figure(figsize=(14,10)) ax = fig.gca(projection='3d') surf = ax.plot_surface(W1,W2,J, rstride=10, cstride=10, cmap=cm.coolwarm, linewidth=0.3, antialiased=True, alpha=0.9) #, zorder=3) ax.set_xlabel(r'$w_1$', fontsize=18) ax.set_ylabel(r'$w_2$', fontsize=18) ax.set_zlabel(r'$E_{in}$', fontsize=18) ax.plot(w_arr[:,1],w_arr[:,2],z_arr,'k-',lw=0.8,label="Stochastic Gradient Descent (SGD)") ax.plot(w_arr2[:,1],w_arr2[:,2],z_arr2,'k-',lw=1.8,alpha=0.3,label="SGD within epochs") ax.plot(w_arr[::epoch_interval,1],w_arr[::epoch_interval,2],z_arr[::epoch_interval], 'ko',markersize=7,label=r"Intervals of $n$ Epochs") ax.scatter([w_arr[-1,1]],[w_arr[-1,2]],[z_arr[-1]], c='r', s=250, marker='x', lw=3); #fig.colorbar(surf, shrink=0.5, aspect=12) ax.legend(loc='best',frameon=False) ax.axes.xaxis.set_ticklabels([]) ax.axes.yaxis.set_ticklabels([]) ax.axes.zaxis.set_ticklabels([]) ax.view_init(elev=elevation, azim=azimuth) def visualise_SGD_contour(e_arr,w_arr,w_arr2,X,y,epoch_interval): w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y) fig = plt.figure(figsize=(12,8)) ax = fig.gca() CS = plt.contour(W1,W2,J,20) #plt.clabel(CS, inline=1, fontsize=10) ax.set_xlabel(r'$w_1$', fontsize=18) ax.set_ylabel(r'$w_2$', fontsize=18) 
ax.plot(w_arr[:,1],w_arr[:,2],'k-',lw=0.8,label="Stochastic Gradient Descent (SGD)") ax.plot(w_arr2[:,1],w_arr2[:,2],'k-',lw=1.8,alpha=0.3,label="SGD within epochs") ax.plot(w_arr[::epoch_interval,1],w_arr[::epoch_interval,2], 'ko',markersize=7,label=r"Intervals of $n$ Epochs") ax.scatter([w_arr[-1,1]],[w_arr[-1,2]], c='r', s=150, marker='x', lw=3); ax.legend(loc='best',frameon=False) ax.axes.xaxis.set_ticklabels([]) ax.axes.yaxis.set_ticklabels([]) plt.title(r'$E_{in}$', fontsize=16); def plot_epochs(e_arr,w_arr,X,y,epoch_interval): w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y) E_in = [ np.mean(cross_entropy(y,w_i,X)) for w_i in w_arr ] epoch = np.array(range(len(e_arr))) fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(211) ax.set_ylabel(r'Error', fontsize=16) ax.plot(epoch,e_arr,c='g',markersize=1,marker='+',lw=1,alpha=0.8,label=r'$E_{out}$') #ax.scatter(epoch[::epoch_interval],e_arr[::epoch_interval],c='g',s=20,marker='o',lw=3,alpha=0.8) ax.plot(epoch,E_in,c='k',linestyle='--',label=r'$E_{in}$') ax.legend(loc='best',frameon=False, fontsize=16) ax.set_title('"Cross Entropy" Error', fontsize=16); ax.axes.xaxis.set_ticklabels([]) ax.axes.yaxis.set_ticklabels([]) ax.grid(True) ax = fig.add_subplot(212) ax.set_xlabel(r'Epoch', fontsize=16) ax.set_ylabel(r'Error', fontsize=16) ax.loglog(epoch,e_arr,c='g',markersize=1,marker='+',lw=1,alpha=0.8,label=r'$E_{out}$') ax.loglog(epoch,E_in,c='k',linestyle='--',label=r'$E_{in}$') #ax.loglog(epoch[::epoch_interval],e_arr[::epoch_interval],c='g',markersize=8,marker='o',lw=3,alpha=0.8,ls='None') ax.legend(loc='best',frameon=False, fontsize=16) ax.axes.xaxis.set_ticklabels([]) ax.axes.yaxis.set_ticklabels([]) ax.grid(True) np.random.seed(12345) e_arr, w_arr, X, y, w_arr2 = run_simulation(N=15,eta=0.8,w0=np.array([2.0, 10.0, -20.0])) visualise_SGD_3D(e_arr,w_arr,w_arr2,X,y,epoch_interval=100) visualise_SGD_contour(e_arr,w_arr,w_arr2,X,y,epoch_interval=100) plot_epochs(e_arr,w_arr,X,y,epoch_interval=100) """ Explanation: 5.4 Gradient Descent Visualization End of explanation """ var('y_n w_i x_n') expr = exp(-y_n * w_i * x_n) d_expr = expr.diff(w_i) display(Math(latex(d_expr))) expr = -y_n * w_i * x_n d_expr = expr.diff(w_i) display(Math(latex(d_expr))) expr = (y_n - w_i * x_n)**2 d_expr = simplify(expr.diff(w_i)) display(Math(latex(d_expr))) expr = log(1+exp(-y_n * w_i * x_n)) d_expr = simplify(expr.diff(w_i)) display(Math(latex(d_expr))) w_final = np.array(w_arr)[-1,:] e_a = np.mean(np.exp(-y*np.dot(X,w_final))) e_b = np.mean(-y*np.dot(X,w_final)) e_c = np.mean((y - np.dot(X,w_final))**2) e_d = np.mean(np.log(1 + np.exp(-y*np.dot(X,w_final)))) e_e = -y*np.dot(X,w_final); e_e[e_e > 0] = 0; e_e = np.mean(e_e) print("(a) e_n(w) = {}".format(e_a)) print("(b) e_n(w) = {}".format(e_b)) print("(c) e_n(w) = {}".format(e_c)) print("(d) e_n(w) = {}".format(e_d)) print("(e) e_n(w) = {}".format(e_e)) """ Explanation: 5.5 Stochastic Gradient Descent vs Perceptron Learning Algorithm "Consider that you are picking a point at random out of the $N$ points. In PLA, you see if it is misclassified then update using the PLA rule if it is and not update if it isn't. In SGD, you take the gradient of the error on that point w.r.t. $\mathbf{w}$ and update accordingly. Which of the 5 error functions would make these equivalent? 
(a): $e_n\left(\mathbf{w}\right) = \exp\left(-y_n \mathbf{w^T x_n}\right)$ (b): $e_n\left(\mathbf{w}\right) = -y_n \mathbf{w^T x_n}$ (c): $e_n\left(\mathbf{w}\right) = \left(y_n - \mathbf{w^T x_n}\right)^2$ (d): $e_n\left(\mathbf{w}\right) = \ln\left[1 + \exp\left(-y_n \mathbf{w^T x_n}\right)\right]$ (e): $e_n\left(\mathbf{w}\right) = -\min\left(0, y_n \mathbf{w^T x_n}\right)$ Answer: (e) Notes: an attempt to evaluate the gradients of the above functions using sympy was carried out as follows (the final expression, which contains the function min was excluded): End of explanation """ def my_err_fn(y,W,X): #e = np.exp(-y*np.dot(X,W)) # e_a #e = -y*np.dot(X,W) # e_b #e = (y - np.dot(X,W))**2 # e_c e = np.log(1 + np.exp(-y*np.dot(X,W))) # e_d #e = -y*np.dot(X,W); e[e > 0] = 0 # e_e return np.mean(e) def calculate_J(w0,w1,w2,X,y,my_err_fn): J = np.zeros((w1.size,w2.size)) for j in range(w1.size): for i in range(w2.size): W = np.array([w0, w1[j], w2[i]]) J[i,j] = my_err_fn(y,W,X) return J def get_WJ(w_arr,X,y,my_err_fn,n=100): w_arr = np.array(w_arr) w1_min = np.min(w_arr[:,1]) w2_min = np.min(w_arr[:,2]) w1_max = np.max(w_arr[:,1]) w2_max = np.max(w_arr[:,2]) sp = 10.0 w0 = w_arr[-1,0] # take a 2D slice through the final value of w_0 in the 3D space [w0,w1,w2] w1 = np.linspace(w1_min-sp,w1_max+sp,n) w2 = np.linspace(w2_min-sp,w2_max+sp,n) W1, W2 = np.meshgrid(w1,w2) J = calculate_J(w0,w1,w2,X,y,my_err_fn) return w_arr,w1,w2,W1,W2,J def visualise_SGD_contour2(e_arr,w_arr,X,y,my_err_fn): w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y,my_err_fn) fig = plt.figure(figsize=(10,7)) ax = fig.gca() CS = plt.contour(W1,W2,J,20) plt.clabel(CS, inline=1, fontsize=10) ax.set_xlabel(r'$w_1$', fontsize=18) ax.set_ylabel(r'$w_2$', fontsize=18) ax.plot(w_arr[:,1],w_arr[:,2],'k-',label="Gradient Descent") ax.plot(w_arr[::100,1],w_arr[::100,2],'ko',markersize=7,label=r"Intervals of $n$ Epochs") ax.scatter([w_arr[-1,1]],[w_arr[-1,2]], c='r', s=150, marker='x', lw=3); ax.legend(loc='best',frameon=False) ax.axes.xaxis.set_ticklabels([]) ax.axes.yaxis.set_ticklabels([]) plt.title(r'$E_{in}$', fontsize=16) np.random.seed(12345) e_arr, w_arr, X, y, w_arr2 = run_simulation(N=300,eta=0.15) visualise_SGD_contour2(e_arr,w_arr,X,y,my_err_fn) """ Explanation: An attempt was also made to visualize the gradient descent algorithm when performed on the various error functions. End of explanation """
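Returning to the comparison in section 5.5: error (e) reproduces the perceptron rule because its gradient is zero on correctly classified points and -y_n x_n on misclassified ones, so a single SGD step with eta = 1 is a PLA step. A small sketch (the borderline case y w.x = 0 is where the two definitions below differ, and it is ignored here):

import numpy as np

def sgd_step_e(w, x, y, eta=1.0):
    # SGD on e_n(w) = -min(0, y * w.x): the gradient is -y*x only when y * w.x < 0
    if y * np.dot(w, x) < 0:
        return w + eta * y * x
    return w

def pla_step(w, x, y):
    # classic perceptron update on a misclassified point (includes the y * w.x == 0 tie)
    if y * np.dot(w, x) <= 0:
        return w + y * x
    return w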
mne-tools/mne-tools.github.io
0.18/_downloads/c3c186a71be1cfa94a34ecef5331099f/plot_brainstorm_phantom_elekta.ipynb
bsd-3-clause
# sphinx_gallery_thumbnail_number = 9 # Authors: Eric Larson <larson.eric.d@gmail.com> # # License: BSD (3-clause) import os.path as op import numpy as np import matplotlib.pyplot as plt import mne from mne import find_events, fit_dipole from mne.datasets.brainstorm import bst_phantom_elekta from mne.io import read_raw_fif from mayavi import mlab print(__doc__) """ Explanation: Brainstorm Elekta phantom dataset tutorial Here we compute the evoked from raw for the Brainstorm Elekta phantom tutorial dataset. For comparison, see [1]_ and: https://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta References .. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Computational Intelligence and Neuroscience, vol. 2011, Article ID 879716, 13 pages, 2011. doi:10.1155/2011/879716 End of explanation """ data_path = bst_phantom_elekta.data_path(verbose=True) raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif') raw = read_raw_fif(raw_fname) """ Explanation: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data are read to construct instances of :class:mne.io.Raw. End of explanation """ events = find_events(raw, 'STI201') raw.plot(events=events) raw.info['bads'] = ['MEG1933', 'MEG2421'] """ Explanation: Data channel array consisted of 204 MEG planor gradiometers, 102 axial magnetometers, and 3 stimulus channels. Let's get the events for the phantom, where each dipole (1-32) gets its own event: End of explanation """ raw.plot_psd(tmax=30., average=False) """ Explanation: The data have strong line frequency (60 Hz and harmonics) and cHPI coil noise (five peaks around 300 Hz). Here we plot only out to 60 seconds to save memory: End of explanation """ raw.plot(events=events) """ Explanation: Our phantom produces sinusoidal bursts at 20 Hz: End of explanation """ tmin, tmax = -0.1, 0.1 bmax = -0.05 # Avoid capture filter ringing into baseline event_id = list(range(1, 33)) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, bmax), preload=False) epochs['1'].average().plot(time_unit='s') """ Explanation: Now we epoch our data, average it, and look at the first dipole response. The first peak appears around 3 ms. Because we low-passed at 40 Hz, we can also decimate our data to save memory. End of explanation """ sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08) mne.viz.plot_alignment(epochs.info, subject='sample', show_axes=True, bem=sphere, dig=True, surfaces='inner_skull') """ Explanation: Let's use a sphere head geometry model &lt;ch_forward_spherical_model&gt; and let's see the coordinate alignment and the sphere location. The phantom is properly modeled by a single-shell sphere with origin (0., 0., 0.). End of explanation """ # here we can get away with using method='oas' for speed (faster than "shrunk") # but in general "shrunk" is usually better cov = mne.compute_covariance(epochs, tmax=bmax) mne.viz.plot_evoked_white(epochs['1'].average(), cov) data = [] t_peak = 0.036 # true for Elekta phantom for ii in event_id: # Avoid the first and last trials -- can contain dipole-switching artifacts evoked = epochs[str(ii)][1:-1].average().crop(t_peak, t_peak) data.append(evoked.data[:, 0]) evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.) del epochs dip, residual = fit_dipole(evoked, cov, sphere, n_jobs=1) """ Explanation: Let's do some dipole fits. 
We first compute the noise covariance, then do the fits for each event_id taking the time instant that maximizes the global field power. End of explanation """ fig, axes = plt.subplots(2, 1) evoked.plot(axes=axes) for ax in axes: ax.texts = [] for line in ax.lines: line.set_color('#98df81') residual.plot(axes=axes) """ Explanation: Do a quick visualization of how much variance we explained, putting the data and residuals on the same scale (here the "time points" are the 32 dipole peak values that we fit): End of explanation """ actual_pos, actual_ori = mne.dipole.get_phantom_dipoles() actual_amp = 100. # nAm fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7)) diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1)) print('mean(position error) = %0.1f mm' % (np.mean(diffs),)) ax1.bar(event_id, diffs) ax1.set_xlabel('Dipole index') ax1.set_ylabel('Loc. error (mm)') angles = np.rad2deg(np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1)))) print(u'mean(angle error) = %0.1f°' % (np.mean(angles),)) ax2.bar(event_id, angles) ax2.set_xlabel('Dipole index') ax2.set_ylabel(u'Angle error (°)') amps = actual_amp - dip.amplitude / 1e-9 print('mean(abs amplitude error) = %0.1f nAm' % (np.mean(np.abs(amps)),)) ax3.bar(event_id, amps) ax3.set_xlabel('Dipole index') ax3.set_ylabel('Amplitude error (nAm)') fig.tight_layout() plt.show() """ Explanation: Now we can compare to the actual locations, taking the difference in mm: End of explanation """ def plot_pos_ori(pos, ori, color=(0., 0., 0.), opacity=1.): """Plot dipole positions and orientations in 3D.""" x, y, z = pos.T u, v, w = ori.T mlab.points3d(x, y, z, scale_factor=0.005, opacity=opacity, color=color) q = mlab.quiver3d(x, y, z, u, v, w, scale_factor=0.03, opacity=opacity, color=color, mode='arrow') q.glyph.glyph_source.glyph_source.shaft_radius = 0.02 q.glyph.glyph_source.glyph_source.tip_length = 0.1 q.glyph.glyph_source.glyph_source.tip_radius = 0.05 mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces='inner_skull', coord_frame='head', meg='helmet', show_axes=True) # Plot the position and the orientation of the actual dipole plot_pos_ori(actual_pos, actual_ori, color=(0., 0., 0.), opacity=0.5) # Plot the position and the orientation of the estimated dipole plot_pos_ori(dip.pos, dip.ori, color=(0.2, 1., 0.5)) mlab.view(70, 80, distance=0.5) """ Explanation: Let's plot the positions and the orientations of the actual and the estimated dipoles End of explanation """
olinguyen/self-driving-cars
p2-traffic-sign-classification/Traffic_Signs_Recognition.ipynb
mit
# Load pickled data import pickle import os training_file = "./train.p" testing_file = "./test.p" with open(training_file, mode='rb') as f: train = pickle.load(f) with open(testing_file, mode='rb') as f: test = pickle.load(f) X_train, y_train = train['features'], train['labels'] X_test, y_test = test['features'], test['labels'] ### To start off let's do a basic data summary. import random import numpy as np keep_ratio = 1 image_shape = X_train[0].shape train_idx = np.random.randint(0, X_train.shape[0], size=(X_train.shape[0] * keep_ratio)) n_train = int(X_train.shape[0] * keep_ratio) test_idx = np.random.randint(0, X_test.shape[0], size=(X_test.shape[0] * keep_ratio)) n_test = int(X_test.shape[0] * keep_ratio) X_train = X_train[train_idx] y_train = y_train[train_idx] X_test = X_test[test_idx] y_test = y_test[test_idx] n_classes = y_train.max() + 1 print("Number of training examples =", n_train) print("Number of testing examples =", n_test) print("Image data shape =", image_shape) print("Number of classes =", n_classes) ### Data exploration visualization goes here. ### Feel free to use as many code cells as needed. import matplotlib.pyplot as plt import random %matplotlib inline fig = plt.figure() for i in range(1, 9): a=fig.add_subplot(2,4,i) idx = random.randint(0, n_train) plt.imshow(X_train[idx]) """ Explanation: Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier Step 1: Dataset Exploration The pickled data is a dictionary with 4 key/value pairs: features -> the images pixel values, (width, height, channels) labels -> the label of the traffic sign sizes -> the original width and height of the image, (width, height) coords -> coordinates of a bounding box around the sign in the image, (x1, y1, x2, y2). Based the original image (not the resized version). End of explanation """ ### Preprocess the data here. ### Feel free to use as many code cells as needed. import cv2 import numpy as np from sklearn import preprocessing def rgb_to_grayscale(images, flatten=0): """ images: matrix of RGB images return: flattened grayscale images """ image_shape = images.shape if flatten: return np.average(images, axis=3).reshape(image_shape[0], image_shape[1] * image_shape[2]) else: return np.average(images, axis=3).reshape(image_shape[0], image_shape[1], image_shape[2], 1) def normalize(images, flatten=0): """ images: matrix of grayscale return: mean subtracted, scaled between -1 and 1 """ return images n_train = images.shape[0] if flatten: subtracted_mean = images - np.mean(images, axis=1).reshape(n_train, 1) else: subtracted_mean = images - np.mean(images) return subtracted_mean #return preprocessing.scale(images) #min_max_scaler = preprocessing.MinMaxScaler(feature_range=(-1,1)) #return min_max_scaler.fit_transform(subtracted_mean) """ Explanation: Step 2: Design and Test a Model Architecture The model is trained and tested on the German Traffic Sign Dataset. End of explanation """ ### Generate data additional (if you want to!) ### and split the data into training/validation/testing sets here. ### Feel free to use as many code cells as needed. 
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

"""
X_train_gray = rgb_to_grayscale(X_train)
X_train_normalized = normalize(X_train_gray)

X_test_gray = rgb_to_grayscale(X_test)
test_features = normalize(X_test_gray)
"""

train_features = normalize(X_train)
test_features = normalize(X_test)

encoder = LabelBinarizer()
encoder.fit(y_train)
train_labels = encoder.transform(y_train)
test_labels = encoder.transform(y_test)

train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)

# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
    train_features,
    train_labels,
    test_size=0.05,
    random_state=832289)

import os

pickle_file = 'traffic_signs_preprocessed.pickle'
if not os.path.isfile(pickle_file):
    print('Saving data to pickle file...')
    try:
        with open(pickle_file, 'wb') as pfile:
            pickle.dump(
                {
                    'train_dataset': train_features,
                    'train_labels': train_labels,
                    'valid_dataset': valid_features,
                    'valid_labels': valid_labels,
                    'test_dataset': test_features,
                    'test_labels': test_labels,
                },
                pfile, pickle.HIGHEST_PROTOCOL)
    except Exception as e:
        print('Unable to save data to', pickle_file, ':', e)
        raise

print('Data cached in pickle file.')
"""
Explanation: Question 1
Describe the techniques used to preprocess the data.
Answer:
I initially tried preprocessing the images by converting them to grayscale and flattening them into a single color channel. Additionally, I applied min-max scaling to bring the values between 0 and 1, which ensures that all features (pixels) are treated equally and hence improves the accuracy of the classifier.
I then decided to keep the 3 color channels, as color is important in determining the meaning of a traffic sign. The labels use one-hot encoding, since that is the input the model expects, and were computed with the sklearn module.
End of explanation """ import tensorflow as tf tf.reset_default_graph() def get_weight(shape): return tf.Variable(tf.truncated_normal(shape, stddev=0.1)) def get_bias(shape, constant=1): if constant != 1: return tf.Variable(tf.zeros(shape)) else: return tf.constant(0.1, shape=shape) def get_conv2d(x, W, stride): return tf.nn.conv2d(x, W, [1, stride, stride, 1], padding='SAME') def get_loss(logits, y_true): cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, y_true) #cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, y_true) loss = tf.reduce_mean(cross_entropy) return loss def get_maxpool2d(x, k=2): return tf.nn.max_pool( x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME') def inference(images, keep_prob): n_features = image_shape[0] * image_shape[1] filter_size_width = 5 filter_size_height = 5 color_channels = 3 k_output = [32, 64, 192, 256] learning_rate = 0.001 # conv0 layer : 32 x 32 x 3 with tf.name_scope('conv0'): W_conv0 = get_weight([filter_size_width, filter_size_height, color_channels, k_output[0]]) b_conv0 = get_bias([k_output[0]], constant=0) conv0 = get_conv2d(images, W_conv0, stride=1) h_conv0 = tf.nn.relu(conv0 + b_conv0) h_conv0 = get_maxpool2d(h_conv0, k=2) # conv1 layer : 16 x 16 x 32 with tf.name_scope('conv1'): W_conv1 = get_weight([filter_size_width, filter_size_height, k_output[0], k_output[1]]) b_conv1 = get_bias([k_output[1]]) conv1 = get_conv2d(h_conv0, W_conv1, stride=1) h_conv1 = tf.nn.relu(conv1 + b_conv1) h_conv1 = get_maxpool2d(h_conv1, k=2) # conv2 layer : 8 x 8 x 64 with tf.name_scope('conv2'): W_conv2 = get_weight([filter_size_width, filter_size_height, k_output[1], k_output[2]]) b_conv2 = get_bias([k_output[2]]) conv2 = get_conv2d(h_conv1, W_conv2, stride=1) h_conv2 = tf.nn.relu(conv2 + b_conv2) h_conv2 = get_maxpool2d(h_conv2, k=2) # fc1 layer : 4 x 4 x 192 with tf.name_scope('fc1'): prev_layer_shape = h_conv2.get_shape().as_list() prev_dim = prev_layer_shape[1] * prev_layer_shape[2] * prev_layer_shape[3] W_fc1 = get_weight([prev_dim, 512]) b_fc1 = get_bias([512]) h_conv2_flat = tf.reshape(h_conv2, [-1, prev_dim]) # 1 x 1 x 3072 fc1 = tf.matmul(h_conv2_flat, W_fc1) + b_fc1 fc1 = tf.nn.relu(fc1) # fc2 layer : 1 x 1 x 512 with tf.name_scope('fc2'): W_fc2 = get_weight([512, 256]) b_fc2 = get_bias([256]) fc2 = tf.matmul(fc1, W_fc2) + b_fc2 fc2 = tf.nn.relu(fc2) fc2 = tf.nn.dropout(fc2, keep_prob=keep_prob, seed=66478) # fc3 layer : 1 x 1 x 256 with tf.name_scope('fc3'): W_fc3 = get_weight([256, n_classes]) b_fc3 = get_bias([n_classes]) fc3 = tf.matmul(fc2, W_fc3) + b_fc3 #fc3 = tf.nn.relu(fc3) # 1 x 1 x 43 # L2 regularization for the fully connected parameters. regularizers = (tf.nn.l2_loss(W_fc1) + tf.nn.l2_loss(b_fc1) + tf.nn.l2_loss(W_fc2) + tf.nn.l2_loss(b_fc2) + tf.nn.l2_loss(W_fc3) + tf.nn.l2_loss(b_fc3)) return fc3, regularizers x = tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]]) y = tf.placeholder(tf.float32, [None, n_classes]) keep_prob = tf.placeholder(tf.float32) logits, regularizers = inference(x, keep_prob) ######## testing ######## learning_rate = 0.0001 loss = get_loss(logits, y) # Add the regularization term to the loss. 
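# The 5e-4 factor below sets the regularization strength: it scales the L2 penalty on
# the fully connected weights and biases relative to the cross-entropy term, so larger
# values push those parameters more strongly toward zero.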
loss += 5e-4 * regularizers with tf.name_scope('accuracy'): # Determine if the predictions are correct is_correct_prediction = tf.equal(tf.argmax(tf.nn.softmax(logits), 1), tf.argmax(y, 1)) # Calculate the accuracy of the predictions accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) tf.scalar_summary('accuracy', accuracy) # Add a scalar summary for the snapshot loss. tf.scalar_summary("loss_value", loss) # Create a variable to track the global step. global_step = tf.Variable(0, name='global_step', trainable=False) #optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)#, global_step=global_step) optimizer = tf.train.AdamOptimizer(5e-4).minimize(loss) # Build the summary Tensor based on the TF collection of Summaries. summary = tf.merge_all_summaries() init = tf.initialize_all_variables() # Create a saver for writing training checkpoints. saver = tf.train.Saver() import time training_epochs = 50 batch_size = 100 display_step = 1 log_batch_step = 50 dropout_keep_prob = 0.5 batches = [] loss_batch = [] train_acc_batch = [] valid_acc_batch = [] # Feed dicts for training, validation, and test session train_feed_dict = {x: train_features, y: train_labels, keep_prob: dropout_keep_prob} valid_feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0} test_feed_dict = {x: test_features, y: test_labels, keep_prob: 1.0} log_dir = "data3" # Instantiate a SummaryWriter to output summaries and the Graph. with tf.Session() as sess: summary_writer = tf.train.SummaryWriter(log_dir, sess.graph) #sess.run(init) saver.restore(sess, "data3/checkpoint-14") print("Model restored.") total_batches = int(len(train_features)/batch_size) for epoch in range(training_epochs): start_time = time.time() for i in range(total_batches): batch_start = i * batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] _, l = sess.run( [optimizer, loss], feed_dict={x: batch_features, y: batch_labels, keep_prob: 0.8}) if i % log_batch_step == 0: previous_batch = batches[-1] if batches else 0 batches.append(log_batch_step + previous_batch) training_accuracy = sess.run(accuracy, feed_dict={x: batch_features, y: batch_labels, keep_prob: 1.0}) validation_accuracy = sess.run(accuracy, feed_dict=valid_feed_dict) loss_batch.append(l) train_acc_batch.append(training_accuracy) valid_acc_batch.append(validation_accuracy) duration = time.time() - start_time print("Epoch:", '%04d' % (epoch+1), "Step: %d" % (epoch * batch_size + i), "loss =", \ "{:.9f}".format(l), "Accuracy: %.7f" % (validation_accuracy),"duration = ", duration) summary_str = sess.run(summary, feed_dict=valid_feed_dict) summary_writer.add_summary(summary_str, epoch) summary_writer.flush() checkpoint_file = os.path.join(log_dir, 'checkpoint') saver.save(sess, checkpoint_file, global_step=epoch) # Check accuracy against Validation data validation_accuracy = sess.run(accuracy, feed_dict=valid_feed_dict) print("Validation Accuracy:", validation_accuracy) #test_accuracy = sess.run(accuracy, feed_dict=test_feed_dict) #print("Test Accuracy:", test_accuracy) """ Explanation: Question 2 Describe how you set up the training, validation and testing data for your model. If you generated additional data, why? Answer: The training set was divided into two groups: the training and the validation data. 
Using sklearn's train_test_split, I chose 5% of the total training data to be used for validation, ensuring that we aren't overfitting to the training data. The test set is kept as is. To improve the model, fake data can be generated by manipulating the current training data and applying some image processing to simulate new data. Since traffic signs in the real world can be affected by different lighting conditions, obstructions and many more things, we can fake these conditions by changing the luminosity of the image, adding shadows, rotating the signs, adding noise and cropping the images to have more data.
End of explanation
"""
### Train your model here.
### Feel free to use as many code cells as needed.

# Parameters
"""
Explanation: Question 3
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow
from the classroom.
Answer:
My final architecture has 3 convolutional layers which use a ReLU activation function, each followed by a maxpool with a stride of 2. Spatially, the input image dimensions (width and height) are reduced progressively, while the depth increases. The network then includes 3 fully connected layers, where the final layer has a size of 43 classes. Before the third and final fully connected layer, a dropout component was added with a certain keep probability.
End of explanation
"""
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg
from PIL import Image

#reading in an image
image1 = Image.open('100km.jpg')
image1 = image1.resize((32, 32), Image.ANTIALIAS)
image1 = np.array(image1)

image2 = Image.open('60km.jpg')
image2 = image2.resize((32, 32), Image.ANTIALIAS)
image2 = np.array(image2)

image3 = Image.open('stop-quebec.JPG')
image3 = image3.resize((32, 32), Image.ANTIALIAS)
image3 = np.array(image3)

image4 = Image.open('yield.jpg')
image4 = image4.resize((32, 32), Image.ANTIALIAS)
image4 = np.array(image4)

image5 = Image.open('priority-road.jpg')
image5 = image5.resize((32, 32), Image.ANTIALIAS)
image5 = np.array(image5)

new_images = [image1, image2, image3, image4, image5]
new_labels = [7, 3, 14, 13, 12]

#printing out some stats and plotting
fig = plt.figure()
for i in range(1, 6):
    a = fig.add_subplot(1,5,i)
    a.set_title(str(new_labels[i-1]))
    plt.imshow(new_images[i-1])

new_labels = encoder.transform(new_labels)
new_labels = new_labels.astype(np.float32)

test_feed_dict = {x: new_images, y: new_labels, keep_prob: 1.0}

prediction = tf.argmax(tf.nn.softmax(logits), 1)
with tf.Session() as sess:
    saver.restore(sess, "data3/checkpoint-17")
    print("Model restored.")
    prediction = sess.run(prediction, feed_dict=test_feed_dict)
    print(prediction)
"""
Explanation: Question 4
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer:
Initially, I used the GradientDescentOptimizer and tried various step sizes (0.001 and 0.0001 for example). However, I would hit a plateau around ~3.00 loss and always obtain less than 10% accuracy. Using AdamOptimizer obtained significantly better results: I got past the initial local minimum where I seemed to get stuck with the previous optimizer. One downside of this optimizer is that it is slightly slower to compute. In both cases, I experimented with batch sizes of 50 and 100 (the difference was not thoroughly compared).
It took approximately 20 epochs to obtain a loss of 0.50 and accuracy over 90%. The plots from Tensorboard are attached below.
<img src="http://i.imgur.com/Wbgkihh.png">
Question 5
What approach did you take in coming up with a solution to this problem?
Answer:
To come up with the model architecture, I was inspired by AlexNet, which has proven to be effective at classifying images (see the ImageNet results) and is relatively simple to implement. Since I am limited by my computing resources, I tried to simplify the architecture as much as I could without compromising performance. In the end, my model includes convolutional layers to capture enough complexity in the image data, fully connected layers to finally end up with the class predictions, and pooling to downscale and reduce the dimensions of the images. This model is a simplified version of AlexNet which allowed for reasonable training time given my resources, but still resulted in good performance (> 96%) as seen above. Finally, to prevent overfitting, I added dropout layers with 50% probability and added regularization to penalize weights with higher values.
As mentioned above, I tried to play around with the learning rate to overcome the issues I had. For instance, my learning rate value was too high at first, which caused the loss to diverge (it was reaching Inf/NaN). After lowering the learning rate, I got into a lot of plateaus and local minima. Tweaking the learning rate was enough to get past the difficulties. In the plot above, we can see the past tries where I struggled to improve the loss/accuracy.
Step 3: Test a Model on New Images
Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
"""
fig = plt.figure()
for i in range(1, 5):
    a=fig.add_subplot(3,4,i)
    plt.imshow(test_features[y_test == 7][i])

    a=fig.add_subplot(3,4,i+4)
    plt.imshow(test_features[y_test == 3][i])

    a=fig.add_subplot(3,4,i+8)
    plt.imshow(test_features[y_test == 11][i])
"""
Explanation: Question 6
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It would be helpful to plot the images in the notebook.
Answer:
My algorithm properly classified the stop sign, the yield sign and the priority road sign. However, for speed limit signs, it had a false prediction, thinking it was a "right of way at the next intersection" sign. If we look at the speed limit signs from the training set, it is clear that the speed limit sign shape is different from the ones that I selected online (Quebec road signs). Indeed, the German speed limit signs have a circular shape and a red outline. Without any training images that have square-shaped speed limit signs, it is very likely that the classifier does not predict them properly.
End of explanation
"""
### Run the predictions here.
### Feel free to use as many code cells as needed.
test_feed_dict = {x: test_features, y: test_labels, keep_prob: 1.0}

with tf.Session() as sess:
    saver.restore(sess, "data3/checkpoint-17")
    print("Model restored.")
    validation_accuracy = sess.run(accuracy, feed_dict=test_feed_dict)
    print("Test Accuracy:", validation_accuracy)

#test_feed_dict = {x: test_features, y: test_labels, keep_prob: 1.0}
fig = plt.figure()
for i in range(1, 6):
    a = fig.add_subplot(1,5,i)
    a.set_title(str(np.argmax(test_labels[i+1])))
    plt.imshow(test_features[i+1])

test_feed_dict = {x: test_features[2:7], y: test_labels[2:7], keep_prob: 1.0}
prediction = tf.argmax(tf.nn.softmax(logits), 1)
with tf.Session() as sess:
    saver.restore(sess, "data3/checkpoint-17")
    print("Model restored.")
    prediction = sess.run(prediction, feed_dict=test_feed_dict)
    print(prediction)
"""
Explanation: Below are results from the testing data
End of explanation
"""
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
test_feed_dict = {x: new_images, y: new_labels, keep_prob: 1.0}

probs = tf.nn.softmax(logits)
top_5 = tf.nn.top_k(probs, 5)
with tf.Session() as sess:
    saver.restore(sess, "data3/checkpoint-17")
    print("Model restored.")
    probs = sess.run(top_5, feed_dict=test_feed_dict)
    print(probs)
"""
Explanation: As seen from the output above, the algorithm correctly predicted 4 out of 5 signs, but had trouble with the first sign (true label = 12, or "Priority road"). The image is in fact of poor quality and would be difficult even for a human driver. It is too bright and the content of the sign is unreadable, which explains the incorrect prediction.
Question 7
Is your model able to perform equally well on captured pictures or a live camera stream when compared to testing on the dataset?
Answer: It would depend on the source of the captured images or the live camera stream. Additionally, these images are cropped to contain only the road sign. In a live camera stream there are lots of other objects in view, hence one would need to detect the signs, crop them out, and only pass those to the classifier. The current classifier would perform fairly well on captured pictures or a live camera stream of someone driving in Germany, since that is what the classifier was mostly trained with. However, as seen above, the performance on the testing set was inferior to the training set scores: ~90% accuracy compared to >96% accuracy. When comparing with new images, the captured pictures had an accuracy of 80%, which is close to the test set's score of 89%. As mentioned previously, this would be resolved with a larger training set that goes beyond the German traffic sign dataset (since the captured images were from Quebec, Canada).
End of explanation
"""
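A possible follow-up (added sketch, not from the original notebook): map the class ids returned by tf.nn.top_k back to human-readable names. The column names below assume the usual signnames.csv layout with ClassId and SignName columns; adjust them if your copy differs.
import csv

id_to_name = {}
with open('signnames.csv') as f:
    for row in csv.DictReader(f):
        id_to_name[int(row['ClassId'])] = row['SignName']

for image_idx, (values, indices) in enumerate(zip(probs.values, probs.indices)):
    print('Image %d' % image_idx)
    for p, class_id in zip(values, indices):
        print('  %.3f  %s' % (p, id_to_name.get(int(class_id), 'unknown')))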
rgerkin/sciunit
docs/chapter3.ipynb
mit
import sciunit
"""
Explanation: SciUnit is a framework for validating scientific models by creating experimental-data-driven unit tests.
Chapter 3. Testing with help from the SciUnit standard library
(or back to Chapter 2)
End of explanation
"""
from sciunit.models import ConstModel # One of many dummy models included for illustration.
const_model_37 = ConstModel(37, name="Constant Model 37")
"""
Explanation: In this chapter we will use the same toy model as in Chapter 1, but write a more interesting test with additional features included in SciUnit.
End of explanation
"""
from sciunit.capabilities import ProducesNumber
from sciunit.scores import ZScore # One of many SciUnit score types.
from sciunit.comparators import compute_zscore # A function for computing raw z-scores.
from sciunit import ObservationError # An exception class raised when a test is instantiated
                                     # with an invalid observation.

class MeanTest(sciunit.Test):
    """Tests if the model predicts the same number as the observation."""

    required_capabilities = (ProducesNumber,) # The one capability required for a model to take this test.
    score_type = ZScore # This test's 'judge' method will return a ZScore.

    def validate_observation(self, observation):
        if type(observation) is not dict:
            raise sciunit.ObservationError("Observation must be a python dictionary")
        if 'mean' not in observation:
            raise sciunit.ObservationError("Observation must contain a 'mean' entry")

    def generate_prediction(self, model):
        return model.produce_number() # The model has this method if it inherits from the 'ProducesNumber' capability.

    def compute_score(self, observation, prediction):
        z = compute_zscore(observation, prediction) # Compute a z-score.
        score = self.score_type(z.score) # Returns a ZScore object.
        score.description = ("A z-score corresponding to the normalized location of the observation "
                             "relative to the predicted distribution.")
        return score
"""
Explanation: Now let's write a test that validates the observation and returns a more informative score type.
End of explanation
"""
observation = {'mean':37.8, 'std':2.1}
mean_37_test = MeanTest(observation, name='=37')
"""
Explanation: We've done two new things here:
- The optional validate_observation method checks the observation to make sure that it is the right type, that it has the right attributes, etc. This can be used to ensure that the observation is exactly as the other core test methods expect. If we don't provide the right kind of observation:
python
-> mean_37_test = MeanTest(37, name='=37')
ObservationError: Observation must be a python dictionary
then we get an error. In contrast, this is what our test was looking for:
End of explanation
"""
score = mean_37_test.judge(const_model_37)
"""
Explanation: Instead of returning a BooleanScore, encoding a True/False value, we return a ZScore encoding a more quantitative summary of the relationship between the observation and the prediction.
When we execute the test:
End of explanation
"""
score.summarize()
score.describe()
"""
Explanation: Then we get a more quantitative summary of the results:
End of explanation
"""
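To make the arithmetic behind the ZScore concrete, here is a minimal hand computation (a plain-Python sketch of the quantity involved, not necessarily how sciunit's compute_zscore is implemented) using the observation and model above:
observation = {'mean': 37.8, 'std': 2.1}
prediction = 37  # what ConstModel(37) produces

z = (prediction - observation['mean']) / observation['std']
print(z)  # roughly -0.38: the prediction sits about 0.4 standard deviations below the observed mean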
wdwvt1/bcp
ipynbs/drinking.ipynb
mit
%matplotlib inline from IPython.display import Image Image('./drinking/water_usage_exp1.png') # for y in [w1, w4, w7]: # plt.plot(t, y, 'g') # for y in [w2, w5, w6, w8]: # plt.plot(t, y, 'r') # plt.ylabel('Water remaining (g)') # plt.xlabel('Day') # plt.xticks([i[1] for i in e], ['End Night %s' % i for i in range(len(e))], rotation=90, size=8) # for i in e: # plt.axvspan(i[0], i[1], color='gray', alpha=.3) # plt.show() """ Explanation: Water or lickometer preprocessing Water data from experiment 1 shows significant differences between mice unrelated to the infection status. In the cell below, the mice in green are healthy controls (1,4,7) and the mice in red are the infected mice (2,3,5,8, 6 is excluded because it died). End of explanation """ Image('./drinking/water_usage_example.png') """ Explanation: For a ~25 day experiment, assuming the mouse drinks ~3ml=3g/d, we'd expect about 75g water loss. The top two traces show some loss of 65.35, and 94.3 grams. This is reasonable. The other traces show between 178 and 230g loss. We assume there is some amount of leakage. To test this, we ran the 'lickometers' without mice. The results were: The traces for drinking events throughout the night look pretty uniform. Each drop in the weight appears to consist of a downward spike and then recovery as shown below. This image came from animal 1's water usage. End of explanation """ from IPython.display import Image Image('./drinking/m1_k5_sl200.png') """ Explanation: Methods for extracting drinking events from raw data The machine learning methods that we will use to determine health or sickness status will rely on features extracted from the direct measurements made by the Promethion cage. In the case of drinking, the perfect system would allow us to know if the animal was drinking (and how much) at any given second. In reality, this will be hard because of variability in the signals produced by the water sensors as well as the variability in drinking behaviors. We may have to settle for a method which will detect that a drink has occurred in the last x seconds. To validate our feature extraction methods we have two sources of data (other than our intuitition based on graphing the data): 1) Promethion ethovision software classification. 2) Monitoring mice with a camera in conjunction with recording from the cages. The methods we will use for feature extraction are: 1. K-means clustering 2. Wavelet analysis 3. Spectral analysis 4. HMM's 5. SVM, RF, etc? K-means clustering for feature extraction Our initial attempt to classify events as drinking (so that we can count them, determine how much has been drunk, etc.) will be based on kmeans clustering. In general, we are attempting to build a codebook that has a set of signals (vectors) which span the space of signals recorded by the water sensor. We are particularly interested in finding something that can tell us when a drink has been taken. The codebook we are creating will depend very much on the length of the signal we are processing. If we process a collection of signals that are each 10 seconds long, they will look different in many ways than a collection of signals (from the same data) that are 100 seconds long. At a high level, the input data are traces of water consumption. This means a 1D vector where the ith entry is the amount of water remaining in the cage at time i (more accurately i measurements since the start of the experiment). This data is split at regular intervals into sub-signals of length k. 
For instance, if we measured for 1005 seconds, and wanted to look for different signal types in the range of 10 seconds, we'd split the data into 100 length-10 signals (and discard the 5 seconds at the end). We'd then have 100 independent vectors to run our computations on (i.e. to classify as closest to different members of our codebook).
The scipy.cluster documentation suggests that for kmeans clustering it is essential to normalize the columns of our signals matrix (i.e. the features of each vector) to have unit variance. To do this, we divide each column by its standard deviation. We also subtract the mean of the data from each vector (each row). Tests without this subtraction show that the cluster centroids are picked mainly to accommodate the different data scales that occur. Specifically, the mean water level is ~300g at the start of the experiment and ~100g at the end of the experiment for some mice. The centroids picked when the data are not centered end up being far more determined by the general water level than by the specific signal changes.
Below is an example of using K-means clustering to find the drinking events in a specific signal. This data is from mouse 1, using 5 clusters, 0 regularization value, and a signal length of 100. The top graph shows the actual water trace (in dark blue) and each window of 100 seconds is colored according to which of the k centroids that signal is closest to. You can see that many of the drops which we believe to be drinking events are highlighted in either light blue or orange. This means that they are closer to the light blue and orange centroids than they are to the darker blue centroid (which appears to be the centroid representing flat signals). The bottom graph shows the trace of the centroid vectors (i.e. the means). The green and purple centroid vectors don't show up in the top graph but are found in other parts of the whole trace. Interestingly, the orange and blue centroids (which are classifying the drinking events) appear to have very constant values. The X axis on the bottom graph represents the position in the vector (so blue_centroid[50] is the closest value - in the sense of minimizing least squared error - to v[50] for all v which are in the blue cluster in the entire experiment).
End of explanation
"""
%matplotlib inline
from bcp.feature_extraction import trace_to_signals_matrix
from bcp.stats import centered_moving_average
from bcp.preprocess import (weight_sensor_positive_spikes,
                            smooth_positive_spikes)

import numpy as np
from scipy.cluster.vq import kmeans, vq
import matplotlib.pyplot as plt

w1 = np.load('../data/exp1/Water_1.npy')
t = np.load('../data/exp1/time.npy')

# We are going to use only a small bit of the water trace.
start_ind = 110000
end_ind = 130000

# The signal_length is the length of the vectors we will use for the inputs to k-means.
# K is the number of vector centroids we have.
signal_length = 100
k = 5

# Calculate the vectors for input to kmeans.
m_start_ind = start_ind / signal_length
m_end_ind = end_ind / signal_length

# Kmeans with no prior data smoothing.
w1_sm = trace_to_signals_matrix(w1, signal_length, regularization_value=.001)
cb, _ = kmeans(w1_sm, k)
obs = w1_sm[m_start_ind: m_end_ind]
m, d = vq(obs, cb)

# Kmeans with center moving average applied to data
# The cma_radius indicates the window of the moving average around the given point.
cma_radius = 5
w1_cma = centered_moving_average(w1, cma_radius)
# set the edges of the signals to what they would otherwise be.
# without this, there are significant effects on the set of kmeans. w1_cma[:cma_radius] = w1[:cma_radius] w1_cma[-cma_radius:] = w1[-cma_radius:] w1_cma_sm = trace_to_signals_matrix(w1_cma, signal_length, regularization_value=.001) cb_cma, _ = kmeans(w1_cma_sm, k) obs_cma = w1_cma_sm[m_start_ind: m_end_ind] m_cma, d_cma = vq(obs_cma, cb_cma) # Kmeans with removal of spiking values as illustrated in trace above at roughly t=112000. threshold = .3 spikes = weight_sensor_positive_spikes(w1, t, threshold) backward_window = 10 forward_window = 5 w1_psr = smooth_positive_spikes(w1, spikes, backward_window, forward_window) w1_psr_sm = trace_to_signals_matrix(w1_psr, signal_length, regularization_value=.001) cb_psr, _ = kmeans(w1_psr_sm, k) obs_psr = w1_psr_sm[m_start_ind: m_end_ind] m_psr, d_psr = vq(obs_psr, cb_psr) # Kmeans with removal of spiking values as illustrated in trace above at roughly t=112000 # and cma threshold = .3 spikes = weight_sensor_positive_spikes(w1, t, threshold) backward_window = 10 forward_window = 5 w1_psr = smooth_positive_spikes(w1, spikes, backward_window, forward_window) cma_radius = 5 w1_psr_cma = centered_moving_average(w1_psr, cma_radius) w1_psr_cma[:cma_radius] = w1[:cma_radius] w1_psr_cma[-cma_radius:] = w1[-cma_radius:] w1_psr_cma_sm = trace_to_signals_matrix(w1_psr_cma, signal_length, regularization_value=.001) cb_psr_cma, _ = kmeans(w1_psr_cma_sm, k) obs_psr_cma = w1_psr_cma_sm[m_start_ind: m_end_ind] m_psr_cma, d_psr_cma = vq(obs_psr_cma, cb_psr_cma) # plot results ms = [m, m_cma, m_psr, m_psr_cma] cbs = [cb, cb_cma, cb_psr, cb_psr_cma] ws = [w1, w1_cma, w1_psr, w1_psr_cma] colors = [plt.cm.Paired(i/float(k)) for i in range(k)] f, axarr = plt.subplots(nrows=4, ncols=2, figsize=(25,20)) for i in range(4): axarr[i,0].plot(t[start_ind:end_ind], ws[i][start_ind:end_ind]) for n in range(len(ms[i])): xmin_ind = start_ind + n * signal_length xmax_ind = xmin_ind + signal_length axarr[i, 0].axvspan(t[xmin_ind], t[xmax_ind], color=colors[ms[i][n]], alpha=.2) axarr[i, 0].set_xlim(start_ind, end_ind) axarr[i, 0].set_ylim(319.8, 320.6) for j in range(k): axarr[i, 1].plot(cbs[i][j], lw=2, color=colors[j]) plt.show() # Number in each class in the entire data from mouse 1, and in just the subset we classified. #print np.bincount(vq(_w1, cb)[0], minlength=k) #print np.bincount(m, minlength=k) """ Explanation: Can we improve k-means using noise reduction or moving averages? Below is working code for building k means clustering using a variety of different additional filtration and regularization steps. It is the same data that generated the plot above, but with different parameters. The different approaches for preprocessing are centered moving average - just a window moving average, and positive spike removal - a process which we are using to remove the times when the mouse interacts with the device and causes an apparent spike well above background. 
End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt w1 = np.load('../data/exp1/Water_1.npy') t = np.load('../data/exp1/time.npy') start = 120000 stop = 150000 data = w1[start:stop] fs = np.fft.fft(data) f, axarr = plt.subplots(nrows=3, figsize=(20,10)) axarr[0].plot(t[start:stop], data) axarr[0].set_ylabel('Mouse 1\nwater trace') axarr[0].set_xlabel('Time since start (s)') axarr[1].plot(np.log10(np.abs(fs))) axarr[1].set_ylabel('log10(amplitdues)\nFourier spectrum') axarr[1].set_xlabel('Component') axarr[2].plot(np.log10(np.abs(fs[:100]))) axarr[2].set_ylabel('fs[:100]\nlog10(amplitudes)') axarr[2].set_xlabel('Component') plt.subplots_adjust(hspace=.4) plt.show() print 'Top 10 Components (Magnitudes, Frequencies)' for i in range(10): print '\t'.join(map(str, ['Component %s' % i, np.abs(fs[i])*(2./(stop-start)**.5), i/float(stop-start)])) """ Explanation: FFTs/DFTs Another approach suggested by Katie's earlier work as well as the signal nature of the problem is to use discrete Fourier Transforms on the drinking data to attempt to pull out the drinking events/signal components. This link is an excellent guide to Fourier transforms, and I have used it in conjunction with this video series. The ultimate goal is similar to the k-means above; can we take water traces and detect where the drinking events are occuring? Below is the code that runs a Fourier transform on a water signal. End of explanation """ from IPython.display import Image Image('./drinking/w1_fourier_spectrum.png') # start = 0 # stop = 2342138 # data = w1[start:stop] # fs = np.fft.fft(data) # f, axarr = plt.subplots(nrows=3, figsize=(20,10)) # axarr[0].plot(t[start:stop], data) # axarr[0].set_ylabel('Mouse 1\nwater trace') # axarr[0].set_xlabel('Time since start (s)') # axarr[1].plot(np.log10(np.abs(fs))) # axarr[1].set_ylabel('log10(amplitdues)\nFourier spectrum') # axarr[1].set_xlabel('Component') # axarr[2].plot(np.log10(np.abs(fs[:100]))) # axarr[2].set_ylabel('fs[:100]\nlog10(amplitudes)') # axarr[2].set_xlabel('Component') # plt.subplots_adjust(hspace=.4) # plt.show() # print 'Top 10 Components (Magnitudes, Frequencies)' # for i in range(10): # print '\t'.join(map(str, ['Component %s' % i, np.abs(fs[i])*(2./(stop-start)**.5), i/float(stop-start)])) """ Explanation: Notice that the components are generally decreasing with increasing frequency. This suggests that we mainly have low frequency phenomena (i.e. the general electronic noise in the system). There is symmetry around the 15000th component because of the way Fourier transforms work (the phase is exactly pi off). It is unclear if the Fourier transform can be useful for us in this context. When we conduct the Fourier transform on the entire water 1 trace we get the following image. Katie reported that when she conducted this analysis on feeding data, the highest amplitude component (other than a0) was one with a period of 24h = 1.1574074074074073e-05 Hz. 
For the below data, we find that the order of the most important components is: [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 27, 22, 23, 24, 26, 25, 29, 30, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 48, 47, 50, 49, 51, 54, 52, 53, 56, 58, 57, 60, 59, 55, 61, 62, 63, 64, 65, 66, 67, 69, 68, 70, 71, 72, 73, 81, 74, 78, 76, 75, 77, 80, 79, 83, 85, 84, 82, 86, 89, 87, 88, 90, 94, 92, 93, 91, 96, 95, 98, 97, 99] The frequency to component relationship is: f = (s-1)/(stop-start) f = 1.1574074074074073e-05 f * 2342138 +1 = 28.108078703703704 Note the (s-1) component of the above formula may be an artifact of matlab's 1-based indexing. Check this. Thus, we'd expect the 28th component to be the largest contribution if a signal with a period of 24h was dominant. It is in the top 30, but is not the first. This suggests that either I have done something wrong or the detrending of the overall signal would be required to see the 24h period. End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt from bcp.stats import centered_moving_average w1 = np.load('/Users/wdwvt/Desktop/Sonnenburg/cumnock/bcp/data/exp1/Water_1.npy') t = np.load('/Users/wdwvt/Desktop/Sonnenburg/cumnock/bcp/data/exp1/time.npy') data = w1[120000:150000] cma_data = centered_moving_average(data, 5) f, axarr = plt.subplots(nrows=2) axarr[0].plot(np.log10(np.abs(np.fft.fft(data)))) axarr[1].plot(np.log10(np.abs(np.fft.fft(cma_data)))) plt.show() """ Explanation: Wavelet analysis http://gtwavelet.bme.gatech.edu/wp/kidsA.pdf https://www.eecis.udel.edu/~amer/CISC651/IEEEwavelet.pdf http://jseabold.net/blog/2012/02/23/wavelet-regression-in-python/ https://www.youtube.com/watch?v=gZNm7L96pfY Random notes It has been suggested that a moving average is the best filter for things in the time domain if you don't care about the frequency domain. See this link in particular. It appears to be a very bad idea to pair a centered moving average with an FFT. As the following shows End of explanation """
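As a quick numeric check of the component/frequency bookkeeping above (an added sketch; it assumes the water trace is sampled at 1 Hz, so that stop - start equals the number of samples):
n_samples = 2342138              # length of the full water trace used above
f_24h = 1.0 / (24 * 60 * 60)     # ~1.1574e-05 Hz, i.e. a 24 hour period
print(f_24h * n_samples)         # ~27.1: the zero-based FFT bin nearest a 24 h cycle
print(f_24h * n_samples + 1)     # ~28.1: the 1-based index quoted in the text above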
Python4AstronomersAndParticlePhysicists/PythonWorkshop-ICE
notebooks/14_standard_library.ipynb
mit
%%javascript $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js') """ Explanation: A tour through the python standard library python comes with "batteries included", the standard library is extremely rich and powerfull End of explanation """ import os """ Explanation: <h1 id="tocheading">Table of Contents</h1> <div id="toc"></div> os, os.path: Accessing the operation system End of explanation """ os.getenv('STUFF'), os.getenv('STUFF', default='foo') os.environ['STUFF'] = 'bar' os.getenv('STUFF', default='foo') del os.environ['STUFF'] """ Explanation: environment variables useful for configs End of explanation """ print(os.getcwd()) os.chdir('..') print(os.getcwd()) os.chdir('notebooks') """ Explanation: pwd, cwd End of explanation """ os.makedirs('example', exist_ok=True) fname = 'example/test.txt' with open(fname, 'w') as f: f.write('Hello World\n') stat_res = os.stat(fname) '{:o}'.format(stat_res.st_mode) # use an octal integer os.chmod(fname, 0o600) # equivalent to chmod 666 <filename> stat_res = os.stat(fname) '{:o}'.format(stat_res.st_mode) os.makedirs('example/build') print(os.listdir('example/')) if os.path.exists('example/build'): os.rmdir('example/build') print(os.listdir('example')) if os.path.isfile('example/test.txt'): os.remove('example/test.txt') print(os.listdir('example')) os.path.join('build', 'example', 'test.txt') os.path.splitext('test.txt') """ Explanation: Manipulating files End of explanation """ import itertools import random dirnames = ['foo', 'bar', 'baz'] extensions = ['.txt', '.dat', '.docx', '.xslx', '.fits', '.png'] filenames = [name + ext for name in dirnames for ext in extensions] def create_files_and_subfolders(path, depth=3): n_files = random.randint(1, 5) for i in range(n_files): open(os.path.join(path, random.choice(filenames)), 'w').close() if depth == 0: return n_subdirs = random.randint(1, 3) for i in range(n_subdirs): subdir = os.path.join(path, random.choice(dirnames)) os.makedirs(subdir, exist_ok=True) create_files_and_subfolders(subdir, depth=depth - 1) create_files_and_subfolders('example', 3) # os.walk goes recursivly through all directories and returns files and subdirectories for root, dirs, files in os.walk('example'): print(root) for d in sorted(dirs): print(' ', d) for f in sorted(files): print(' ', f) import shutil shutil.rmtree('example', ignore_errors=True) # rm -rf """ Explanation: Let's create a lot of files End of explanation """ import subprocess as sp result = sp.check_output(['conda', 'list', 'numpy']) print(result) print() print(result.decode()) """ Explanation: subprocess, calling shell commands End of explanation """ url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/00/Crab_Nebula.jpg/480px-Crab_Nebula.jpg' process = sp.run( ['curl', url], stdout=sp.PIPE, stderr=sp.PIPE, ) import matplotlib.pyplot as plt from scipy.misc import imread %matplotlib inline from io import BytesIO # File-Like object in memory img = BytesIO(process.stdout) plt.imshow(imread(img)) """ Explanation: more complex task, provide read stdout End of explanation """ from random import random import time def do_work(): time.sleep(random()) print('hello') time.sleep(1) print('world') for i in range(3): do_work() from threading import Thread threads = [Thread(target=do_work) for i in range(3)] for t in threads: t.start() # block until all threads are done for t in threads: t.join() import random def pi_mc(n): n_circle = 0 for i in range(n): x = random.uniform(-1, 1) y = random.uniform(-1, 1) if (x**2 + y**2) <= 1: 
n_circle += 1 return 4 * n_circle / n from multiprocessing import Pool n_jobs = 100 n_iters = 100000 iterable = [n_iters] * n_jobs print(iterable) with Pool(4) as pool: results = pool.map(pi_mc, iterable) print(sum(results) / len(results)) """ Explanation: threading, multiprocessing: Doing stuff in parallel There are much more advanced libraries for this, e.g. joblib https://pythonhosted.org/joblib/ Python can only run one python statement at a time through one interpreter, even when using multiple threads, only one thread at a time will be executed. This is called the Global Interpreter Lock (GIL). So you only gain in perfomance using threads, when: there are I/O bound operations (Reading files, downloads, waiting on sockets) When you use a lot of c-extensions (like numpy, pandas and basically all the scientific python stack) sleeping For truly parallel operations, you need new python processes, this can be done with the multiprocessing module. End of explanation """ text = '''Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do: once or twice she had peeped into the book her sister was reading, but it had no pictures or conversations in it, `and what is the use of a book,' thought Alice `without pictures or conversation?' So she was considering in her own mind (as well as she could, for the hot day made her feel very sleepy and stupid), whether the pleasure of making a daisy-chain would be worth the trouble of getting up and picking the daisies, when suddenly a White Rabbit with pink eyes ran close by her. There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, `Oh dear! Oh dear! I shall be late!' (when she thought it over afterwards, it occurred to her that she ought to have wondered at this, but at the time it all seemed quite natural); but when the Rabbit actually took a watch out of its waistcoat-pocket, and looked at it, and then hurried on, Alice started to her feet, for it flashed across her mind that she had never before seen a rabbit with either a waistcoat-pocket, or a watch to take out of it, and burning with curiosity, she ran across the field after it, and fortunately was just in time to see it pop down a large rabbit-hole under the hedge. ''' # remove punctuation text = text.translate({ord(c): None for c in ',`;.!:?()\''}) print(text) # solution one, pure python counts = {} for word in text.split(): if word not in counts: counts[word] = 0 counts[word] += 1 for name, count in sorted(counts.items(), key=lambda s: s[1], reverse=True)[:10]: print(name, count) """ Explanation: collections: Useful Containers example: count words End of explanation """ # solution 2, using default dict from collections import defaultdict counts = defaultdict(int) # int() returns 0 for word in text.split(): counts[word] += 1 for name, count in sorted(counts.items(), key=lambda s: s[1], reverse=True)[:10]: print(name, count) # solution 3, Counter from collections import Counter counts = Counter(text.split()) for word, count in counts.most_common(10): print(word, count) color = (1.0, 0, 0, 1.0) ## what is this? RGBA? CMYK? 
Something else from collections import namedtuple RGBAColor = namedtuple('RGBAColor', ['r', 'g', 'b', 'a']) color = RGBAColor(1.0, 0, 0, 1.0) color """ Explanation: collections.defaultdict takes a function that initialises entries End of explanation """ import functools # reduce was a builtin in py2 functools.reduce(lambda v1, v2: v1 + v2, range(100)) newline_print = functools.partial(print, sep='\n') newline_print(*range(5)) def fib(n): if n == 0: return 0 if n in (1, 2): return 1 else: return fib(n - 1) + fib(n - 2) @functools.lru_cache(maxsize=500) def fib_cached(n): if n == 0: return 0 if n in (1, 2): return 1 else: return fib(n - 1) + fib(n - 2) fib(10) print('cached') fib_cached(7) %%timeit fib(15) %%timeit fib_cached(15) """ Explanation: functools: functional programming in python End of explanation """ import re files = ['img001.png', 'img002.png', 'world.txt', 'foo.txt', 'stuff.dat', 'test.xslx'] for f in files: m = re.match('img([0-9]{3}).png', f) if m: print(f, m.groups()) """ Explanation: re, regular expressions End of explanation """ import itertools longer = [1, 2, 3, 4, 5] shorter = ['a', 'b', 'c'] print('{:-^40}'.format(' zip ')) for l, s in zip(longer, shorter): print(l, s) print('{:-^40}'.format(' zip_longest ')) for l, s in itertools.zip_longest(longer, shorter): print(l, s) print('{:-^40}'.format(' zip_longest, with fillvalue')) for l, s in itertools.zip_longest(longer, shorter, fillvalue='z'): print(l, s) list(itertools.permutations('ABC')) list(itertools.combinations('ABC', 2)) """ Explanation: itertools, more iterations End of explanation """ from argparse import ArgumentParser parser = ArgumentParser() parser.add_argument('inputfile') # positional argument parser.add_argument('-o', '--output') # option parser.add_argument('-n', '--number', default=0, type=int) # option with default value and type None args = parser.parse_args(['data.csv']) # of no arguments give, use sys.argv print(args.number, args.inputfile, args.output) args = parser.parse_args(['data.csv', '--number=5', '-o', 'test.csv']) print(args.number, args.inputfile, args.output) """ Explanation: argparse: commandline options Alternatives: click: http://click.pocoo.org/6/ docopt: http://docopt.org/ End of explanation """ a = [1, 2, [4, 5]] b = a b[1] = 3 b[2][1] = 'Hello' print(a) print(b) from copy import copy a = [1, 2, [4, 5]] b = copy(a) b[1] = 3 b[2][1] = 'Hello' print(a) print(b) from copy import deepcopy a = [1, 2, [4, 5]] b = deepcopy(a) b[1] = 3 b[2][1] = 'Hello' print(a) print(b) """ Explanation: copy, copy operations End of explanation """ import tempfile import os print(tempfile.gettempdir()) # file will be deleted when exiting the with block with tempfile.NamedTemporaryFile(prefix='python_course_', suffix='.txt', mode='w') as f: path = f.name f.write('Hello World') print('f exists:', os.path.exists(path)) print('f exists:', os.path.exists(path)) # directory with all contents will be deleted when we exit the with block with tempfile.TemporaryDirectory() as d: print(d) with open(os.path.join(d, 'myfile.txt'), 'w') as f: print(f.name) f.write('Hello World') print('d exists:', os.path.exists(d)) print('d exists:', os.path.exists(d)) """ Explanation: tempfile, Tempory Files and Directories End of explanation """ import random import struct # pack data struct.pack('II', 2, 1024) # pack to unsigned 32bit integers # unpack data struct.unpack('f', b'\xdb\x0f\x49\x40') # 32-bit float # create a binary file, let's add a comment first with open('letsinventourownbinaryformat.dat', 'wb') as f: comment = 
'Here, have this awesome data!'.encode('ascii') f.write(struct.pack('I', len(comment))) f.write(comment) for i in range(1000): x = random.uniform(-1, 1) y = random.uniform(-1, 1) n = random.randint(1, 200 - int(100 * (x**2 + y**2))) f.write(struct.pack('ddI', x, y, n)) # read the file back in xs, ys, ns = [], [], [] with open('letsinventourownbinaryformat.dat', 'rb') as f: comment_length, = struct.unpack('I', f.read(4)) comment = f.read(comment_length).decode('ascii') size = struct.calcsize('ddI') data = f.read(size) while data: x, y, n = struct.unpack('ddI', data) xs.append(x) ys.append(y) ns.append(n) data = f.read(size) print(comment) print(len(xs)) """ Explanation: struct, parsing binary data It's like every other day in the office. Your supervisor does not like standardized file formats. Like .fits, or .hdf or, ("Are you completely insane?") .json or .yaml. Because, you know, they are super hard to read using Fortran 77. So he sends you data in "an easy to read" file format, a custom, proprietary binary blob: First 4 bytes is an unsigned integer, containing the length of the comment string Then N bytes comment encoded using utf-8 utf-8? Are you kidding me? ASCII!!! Then triples of double, double, unsigned int for x, y, n End of explanation """ import smtplib from email.message import EmailMessage from getpass import getpass text = '''Hello Participants, Thanks for attending! Do not forget to provide feedback Cheers, Max ''' msg = EmailMessage() msg.set_content(text) msg['Subject'] = 'Email demonstration at the Python Course' msg['From'] = 'Firstname Surname <email address>' msg['To'] = 'Firstname Surname <email address>, Firstname Surname <email address>' # Send the message via our own SMTP server. s = smtplib.SMTP_SSL(host='server') s.login(input('Username: '), getpass('Enter password: ')) s.send_message( from_addr=msg['From'], to_addrs=msg['To'], msg=msg, ) s.quit() """ Explanation: email, smtplib, getpass End of explanation """
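A possible alternative reader for the same ad-hoc format (added sketch): numpy can pull in all of the fixed-size records at once with a structured dtype. The assert documents the assumption that the native 'ddI' struct layout has no trailing padding, so the two record sizes agree.
import struct

import numpy as np

record_dtype = np.dtype([('x', 'f8'), ('y', 'f8'), ('n', 'u4')])
assert struct.calcsize('ddI') == record_dtype.itemsize

with open('letsinventourownbinaryformat.dat', 'rb') as f:
    comment_length, = struct.unpack('I', f.read(4))
    comment = f.read(comment_length).decode('ascii')
    records = np.fromfile(f, dtype=record_dtype)

print(comment)
print(len(records), records['x'].mean(), records['n'].max())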
darkomen/TFG
medidas/03082015/.ipynb_checkpoints/datos-checkpoint.ipynb
cc0-1.0
#Import the libraries we use
import numpy as np
import pandas as pd
import seaborn as sns

#Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))

#Open the csv file with the sample data
datos = pd.read_csv('03081535.csv')

datos_filtrados = datos[(datos['Diametro X'] >= 1.2) & (datos['Diametro Y'] >= 1.2)]

%pylab inline

#Show a summary of the collected data
datos_filtrados.describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]

#Store in a list the file columns we are going to work with
columns = ['Diametro X', 'Diametro Y', 'RPM t']

#Plot in several charts the information obtained from the test
datos[columns].plot(subplots=True, figsize=(20,20))
"""
Explanation: Analysis of the collected data
Using IPython to analyze and display the data collected during production. The data analyzed are from the bq filament produced on 20 July 2015.
End of explanation
"""
datos.ix[:, "Diametro X [mm]":"Diametro Y [mm]"].plot(figsize=(16,3))
datos.ix[:, "Diametro X [mm]":"Diametro Y [mm]"].boxplot(return_type='axes')
"""
Explanation: We plot both diameters on the same chart
End of explanation
"""
pd.rolling_mean(datos[columns], 50).plot(subplots=True, figsize=(12,12))
"""
Explanation: We plot the rolling mean of the samples
End of explanation
"""
plt.scatter(x=datos['Diametro X [mm]'], y=datos['Diametro Y [mm]'], marker='.')
"""
Explanation: Comparison of Diametro X against Diametro Y to see the filament ratio
End of explanation
"""
datos_filtrados = datos[(datos['Diametro X [mm]'] >= 0.9) & (datos['Diametro Y [mm]'] >= 0.9)]
"""
Explanation: Data filtering
Samples where $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the measurements.
End of explanation
"""
plt.scatter(x=datos_filtrados['Diametro X [mm]'], y=datos_filtrados['Diametro Y [mm]'], marker='.')
"""
Explanation: X/Y representation
End of explanation
"""
ratio = datos_filtrados['Diametro X [mm]']/datos_filtrados['Diametro Y [mm]']
ratio.describe()

rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
"""
Explanation: We analyze the ratio data
End of explanation
"""
Th_u = 1.85
Th_d = 1.65

data_violations = datos[(datos['Diametro X [mm]'] > Th_u) | (datos['Diametro X [mm]'] < Th_d) |
                        (datos['Diametro Y [mm]'] > Th_u) | (datos['Diametro Y [mm]'] < Th_d)]

data_violations.describe()

data_violations.plot(subplots=True, figsize=(12,12))
"""
Explanation: Quality limits
We compute the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation
"""
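A small follow-up (added sketch): report the violations as an explicit count and percentage of all samples, which is what the quality-limits question above asks for.
n_violations = len(data_violations)
n_total = len(datos)
print("Samples outside the quality limits: {} of {} ({:.2f}%)".format(
    n_violations, n_total, 100.0 * n_violations / n_total))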
james-prior/cohpy
20170615-dojo-days-of-months.ipynb
mit
MONTHS_PER_YEAR = 12 # for unknown year def max_month_length(month): """Return maximum number of days for given month. month is zero-based. That is, 0 means January, 11 means December, 12 means January (again) -2 means November (yup, wraps around both ways)""" max_month_lengths = ( 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31) return max_month_lengths[month % len(max_month_lengths)] max_month_length(0) # January max_month_length(13) # February max_month_length(-2) # November def max_days_n_months(n_months, starting_month=None): """Return the maximum number of days in n_months whole consecutive months, optionally starting with starting_month. If starting_month is None or is not specified, return highest value for all possible starting months.""" if starting_month is not None: return sum( max_month_length(month) for month in range(starting_month, starting_month + n_months) ) return max( max_days_n_months(n_months, starting_month) for starting_month in range(MONTHS_PER_YEAR) ) def foo(n): for n_months in range(1, n+1): n_days = max_days_n_months(n_months) yield n_months, n_days n = MONTHS_PER_YEAR # %timeit list(foo(n)) list(foo(n)) """ Explanation: Per R P Herrold's challenge. The hardest part was figuring out what the program was supposed to accomplish. From the email, code documentation, and code itself, I had a very difficult time understanding what the code was trying to accomplish, nevermind how the code was accomplishing it. After studying the code, I made my description below of what the code is to accomplish. It might be different that what the original author envisioned. For each number of consecutive months from 1 to 12 inclusive, calculate the maximum possible number of days those consecutive months could have. (Choose the starting month and year that gives highest possible answer.) Also, figure out what the earliest and lastest starting months are that can yield the maximum number of days. For all my code below, months are numbered starting at 0. That is, 0 means January 11 means December End of explanation """ from collections import defaultdict def max_n_days_for_months(): """Yield tuples of number of consecutive months, maximum number of days for those consecutive months and list of starting months which produce that above maximum for all numbers of consecutive months up to a year.""" for n_months in range(1, MONTHS_PER_YEAR+1): d = defaultdict(list) for starting_month in range(MONTHS_PER_YEAR): n_days = max_days_n_months(n_months, starting_month) d[n_days].append(starting_month) max_n_days = max(d) yield n_months, max_n_days, sorted(d[max_n_days]) # %timeit list(max_n_days_for_months()) list(max_n_days_for_months()) def pretty(): """Yields lines to be absolutely identical to that from days_spanned.sh.""" for selector, name in ((min, '-gt'), (max, '-ge')): yield f'Compare: {name} ' for n_months, max_n_days, months in max_n_days_for_months(): month = selector(months) + 1 if n_months == 1: yield f'month: {month} month spans: {max_n_days} days ' elif n_months == 2: yield f'month: {month} plus following month spans: {max_n_days} days ' else: yield f'month: {month} plus following {n_months - 1} months spans: {max_n_days} days ' yield '' for line in pretty(): print(line) """ Explanation: The maximum number of days in the above output matches that output by days_spanned.sh. Now to add the stuff that keeps track of which starting months can yield those maximum number of days. 
End of explanation """ known_good_output = ''.join(f'{line}\n' for line in pretty()) def pretty(): """Yields lines to be absolutely identical to that from days_spanned.sh.""" for selector, name in ((min, '-gt'), (max, '-ge')): yield f'Compare: {name} ' for n_months, max_n_days, months in max_n_days_for_months(): month = selector(months) + 1 if n_months == 1: duration_prose = f'month' elif n_months == 2: duration_prose = f'plus following month' else: duration_prose = f'plus following {n_months - 1} months' yield f'month: {month} {duration_prose} spans: {max_n_days} days ' yield '' assert known_good_output == ''.join(f'{line}\n' for line in pretty()) """ Explanation: The above output exactly matches the output from days_spanned.sh, but my code is ugly. The code above is not as readable as I like. Later, I polished below. End of explanation """ MONTH_NAMES = ''' January February March April May June July August September October November December '''.split() MONTH_NAMES # derived from https://stackoverflow.com/questions/38981302/converting-a-list-into-comma-separated-string-with-and-before-the-last-item def oxford_comma_join(items, join_word='and'): # print(f'items={items!r} join_word={join_word!r}') items = list(items) if not items: return '' elif len(items) == 1: return items[0] elif len(items) == 2: return f' {join_word} '.join(items) else: return ', '.join(items[:-1]) + f', {join_word} ' + items[-1] test_data = ( # (args for oxford_comma_join, correct output), ((('',),), ''), ((('lonesome term',),), 'lonesome term'), ((('here', 'there'),), 'here and there'), ((('you', 'me', 'I'), 'or'), 'you, me, or I'), ((['here', 'there', 'everywhere'], 'or'), 'here, there, or everywhere'), ) for args, known_good_output in test_data: # print(f'args={args!r}, k={known_good_output!r}, output={oxford_comma_join(*args)!r}') assert oxford_comma_join(*args) == known_good_output import inflect p = inflect.engine() from textwrap import wrap def pretty(): """For number of consecutive months, up to 12, yields sentences that show for each number of consecutive months, the maximum possible number of days in those consecutive months, and for which starting months one can have those maximum possible number of days.""" for n_months, max_n_days, months in max_n_days_for_months(): month_names = (MONTH_NAMES[month] for month in months) yield ( f'{n_months} consecutive {p.plural("month", n_months)} ' f'can have at most {max_n_days} days ' f'if starting in {oxford_comma_join(month_names, "or")}.' ) for sentence in pretty(): for line in wrap(sentence): print(line) def not_so_pretty(): """For number of consecutive months, up to 12, yields sentences that show for each number of consecutive months, the maximum possible number of days in those consecutive months, and for which starting months one can have those maximum possible number of days.""" for n_months, max_n_days, months in max_n_days_for_months(): month_names = (MONTH_NAMES[month][:3] for month in months) yield ( f'{n_months} months ' f'have max {max_n_days} days ' f'starting in {oxford_comma_join(month_names, "or")}.' ) for sentence in not_so_pretty(): print(sentence) """ Explanation: Now to change the output to be more readable to me. End of explanation """
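As an independent sanity check (an added sketch, not part of the original notebook), the same per-start maxima can be reproduced with the standard calendar module by fixing any leap year, so February contributes its maximum 29 days:
import calendar

def max_days_check(n_months, starting_month):
    """starting_month is zero-based, as elsewhere in this notebook."""
    year = 2016  # any leap year works: February then has its maximum 29 days
    return sum(
        calendar.monthrange(year, (starting_month + i) % 12 + 1)[1]
        for i in range(n_months)
    )

assert max_days_check(1, 0) == 31   # January alone
assert max_days_check(2, 6) == 62   # July + August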
# Source: ES-DOC/esdoc-jupyterhub, notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/ocean.ipynb (license: gpl-3.0)
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-aerchem', 'ocean') """ Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: MESSY-CONSORTIUM Source ID: EMAC-2-53-AERCHEM Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:10 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
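# Illustration only (hypothetical values, not part of the generated template):
# the setup cells above would typically be completed like
#   DOC.set_author("Jane Modeller", "jane.modeller@example.org")
#   DOC.set_contributor("John Reviewer", "john.reviewer@example.org")
#   DOC.set_publication_status(1)    # 1 = publish
# and every property cell that follows uses the same two-step pattern:
# DOC.set_id(...) selects the property (do not edit), then DOC.set_value(...)
# records the answer, e.g. a free-text overview for property 1.1:
#   DOC.set_id('cmip6.ocean.key_properties.model_overview')
#   DOC.set_value("Brief overview of the ocean component goes here.")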
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) """ Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) """ Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) """ Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
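# Illustration only, with placeholder numbers (not EMAC-2-53-AERCHEM values), of how
# the numeric properties of section 6 above are answered. INTEGER, FLOAT and BOOLEAN
# properties take bare Python values rather than quoted strings:
#   DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
#   DOC.set_value(40)        # INTEGER
#   DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
#   DOC.set_value(False)     # BOOLEAN
#   DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
#   DOC.set_value(10.0)      # FLOAT, metres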
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. 
Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.2. 
Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) """ Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. 
Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) """ Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. 
Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
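# Note on the eddy coefficient blocks (23.x above, 26.x below): which properties
# need answers depends on the chosen "type". A hypothetical constant-coefficient
# answer (placeholder numbers, not EMAC values) could look like:
#   DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
#   DOC.set_value("Constant")
#   DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
#   DOC.set_value(20000)     # m2/s, INTEGER per the template
# A space-varying choice would instead describe the variation in the
# "variable_coefficient" STRING property; "coeff_background" and "coeff_backscatter"
# are required either way.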
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27.2. 
Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. 
Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.4. Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation """
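For concreteness, a completed property cell follows the same pattern as the templates above: keep the DOC.set_id(...) line and replace the TODO with a DOC.set_value(...) call whose argument is one of the listed valid choices (for ENUMs) or free text (for STRINGs). The sketch below fills in the fresh-water-forcing property purely as an illustration; the chosen value is an assumption, not a description of any particular model.
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE (illustrative only -- pick the choice that matches your model):
DOC.set_value("Freshwater flux")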
samoturk/HUB-machine-learning
ipython/Prediction of diabetes with scikit-learn.ipynb
bsd-3-clause
import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import roc_curve, roc_auc_score, auc, recall_score, accuracy_score, confusion_matrix from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier import seaborn as sns """ Explanation: Prediction of diabetes with scikit-learn Imports End of explanation """ %matplotlib inline sns.set() """ Explanation: Enable plotting End of explanation """ df = pd.read_csv('../data/pima-indians-diabetes-data.csv', index_col=[0]) df.head() df.describe() """ Explanation: About the data Pima Indians Diabetes Database Sources: (a) Original owners: National Institute of Diabetes and Digestive and Kidney Diseases (b) Donor of database: Vincent Sigillito (vgs@aplcen.apl.jhu.edu), Applied Physics Laboratory, The Johns Hopkins University (c) Date received: 9 May 1990 Relevant Information: Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage. ADAP is an adaptive learning routine that generates and executes digital analogs of perceptron-like devices. It is a unique algorithm; see the paper for details. Number of Instances: 768 Number of Attributes: 8 plus class For Each Attribute: (all numeric-valued) Number of times pregnant Plasma glucose concentration a 2 hours in an oral glucose tolerance test Diastolic blood pressure (mm Hg) Triceps skin fold thickness (mm) 2-Hour serum insulin (mu U/ml) Body mass index (weight in kg/(height in m)^2) Diabetes pedigree function Age (years) Class variable (0 or 1) Class Distribution: (class value 1 is interpreted as "tested positive for diabetes") | Class | Number of instances | |-------|---------------------| | 0 | 500 | | 1 | 268 | Brief statistical analysis: | Attribute number:| Mean: | Standard Deviation:| |------------------|-------|--------------------| | 1. | 3.8| 3.4 | | 2. | 120.9| 32.0 | | 3. | 69.1| 19.4 | | 4. | 20.5| 16.0 | | 5. | 79.8| 115.2 | | 6. | 32.0| 7.9 | | 7. | 0.5 | 0.3 | | 8. | 33.2| 11.8 | Load data and have initial look End of explanation """ len(df[df['class'] == 1]), len(df[df['class'] == 0]) sns.pairplot(df, x_vars=['plasma_glucose_c', 'blood_presure', 'BMI'], y_vars=['plasma_glucose_c', 'blood_presure', 'BMI'], hue='class') """ Explanation: Look at class distribution End of explanation """ X = df.drop('class', axis=1).values y = df['class'].values """ Explanation: Data in the table is organized the following way: | Samples | Feature 1 | Feature 2 | ... | Class | |----------|-----------|-----------|-----|-------| | Sample 1 | 12 | 600 | ... | 1 | | Sample 2 | 9 | 932 | ... | 0 | Extract values for the machine learning. X - are features y - class, target value End of explanation """ X_train, X_test, y_train, y_test = train_test_split(X, y) """ Explanation: Split the data in training and test set End of explanation """ clf = RandomForestClassifier(n_estimators=100) clf.fit(X_train, y_train) """ Explanation: Train the model End of explanation """ y_pred = clf.predict(X_test) y_pred_proba = clf.predict_proba(X_test) """ Explanation: Predict y (class) on test set and probabilities that sample belongs to each of two classes. 
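One small clarification worth adding here (a sketch that reuses the variables defined above, not part of the original notebook): predict_proba returns one row per test sample and one column per class, ordered as in clf.classes_, so the probability of class 1 ("tested positive") is the second column. The plotting cells below extract it as y_pred_proba.T[1]; y_pred_proba[:, 1] is equivalent.
print(y_pred_proba.shape)        # expected: (number of test samples, 2)
print(clf.classes_)              # expected: [0 1]
pos_probs = y_pred_proba[:, 1]   # same values as y_pred_proba.T[1] used below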
End of explanation """ def plot_hist(y, scores, title, size=(1.5,1.5)): fig = plt.figure(figsize=size, dpi=80) axes = fig.add_axes([0, 0, 1, 1]) bins = np.linspace(0, 1, 11) axes.hist([x[0] for x in zip(scores, y) if x[1] == 1], bins, alpha=0.5, color= 'b') axes.hist([x[0] for x in zip(scores, y) if x[1] == 0], bins, alpha=0.5, color= 'r') axes.vlines(0.5, 0, np.histogram(scores, bins)[0].max(), color='black', linestyles='--') axes.set_ylim((0, np.histogram(scores, bins)[0].max())) axes.set_xlabel(title) axes.set_ylabel('#') return fig def plot_ROC(observations, probabilities, title="", labels=True, size='auto'): """ Creates ROC plot from observations (y_test) and probabilities (y_pred_proba) title -- title of the plot size -- tuple, size in inch, defaults to 'auto' labels -- toggle display of title and x and y labels and tick labels """ if size == 'auto': fig = plt.figure() else: fig = plt.figure(num=None, figsize=size, dpi=80) axes = fig.add_axes([0, 0, 1, 1]) fpr, tpr, thresholds = roc_curve(observations, probabilities) axes.plot(fpr, tpr) axes.plot([0, 1], [0, 1], 'k--') axes.set_aspect('equal') if labels: axes.set_title(title) axes.set_xlabel('False Positive Rate') axes.set_ylabel('True Positive Rate') else: axes.get_xaxis().set_ticks([]) axes.get_yaxis().set_ticks([]) return fig """ Explanation: Helper functions that facilitate plotting End of explanation """ plot_hist(y_test, y_pred_proba.T[1], 'probability', size=(3,3)); """ Explanation: Plot distribution of probabilities End of explanation """ plot_ROC(y_test, y_pred_proba.T[1], size=(3,3)); print("AUC: %.3f" % roc_auc_score(y_test, y_pred_proba.T[1])) """ Explanation: Plot ROC curve End of explanation """ confusion_matrix(y_test, y_pred) recall_score(y_test, y_pred, pos_label=1) # Low-moderate sensitivity recall_score(y_test, y_pred, pos_label=0) # High specificity """ Explanation: Calculate confusion matrix | | Predicted Positive | Predicted Negative| |----------|-----------|----------| | Positive | TP | FN | | Negative | FP | TN | End of explanation """
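As a cross-check on the two recall_score calls above (a sketch reusing y_test and y_pred from the cells above), the same sensitivity and specificity can be read directly off the confusion matrix. Note that scikit-learn orders the matrix by true label (rows) and predicted label (columns) with label 0 first, so it comes out as [[TN, FP], [FN, TP]] rather than in the layout of the table above.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
sensitivity = tp / (tp + fn)   # same as recall_score(y_test, y_pred, pos_label=1)
specificity = tn / (tn + fp)   # same as recall_score(y_test, y_pred, pos_label=0)
print("sensitivity:", sensitivity, "specificity:", specificity)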
ComputationalModeling/spring-2017-danielak
past-semesters/spring_2016/homework_assignments/Homework_1.ipynb
agpl-3.0
# write any code you need here! # Create additional cells if you need them by using the # 'Insert' menu at the top of the browser window. """ Explanation: Homework #1 This notebook contains the first homework for this class, and is due on Sunday, January 31st, 2016 at 11:59 p.m. Please make sure to get started early, and come by the instructors' office hours if you have any questions. Office hours and locations can be found in the course syllabus. IMPORTANT: While it's fine if you talk to other people in class about this homework - and in fact we encourage it! - you are responsible for creating the solutions for this homework on your own, and each student must submit their own homework assignment. Some links that you may find helpful: Markdown tutorial The matplotlib website The matplotlib figure gallery (this is particularly helpful for getting ideas!) The Pyplot tutorial Your name Put your name here! Section 1: Carbon dioxide Part 1. Consider this: How much carbon dioxide does a square kilometer of forest remove from the Earth's atmosphere each year? And, how does that compare to the amount of carbon dioxide that a car adds to the atmosphere each year? Come up with a simple order-of-magnitude approximation for each of those two questions, and in the cell below this one write a paragraph or two addressing each of the two questions above. What are the factors you need to consider? What range of values might they have? In what way is your estimate limited? (Also, to add a twist: does it matter how old the trees in the forest are, or the car?) Note: if you use a Google search or two to figure out what range of values you might want to use, include links to the relevant web page. You can either just paste the URL, or do something prettier, like this: google!. The syntax for that second one is [google!](http://google.com). put your answer here! Part 2. In the space below, write a Python program to model the answer to both of those questions, and keep track of the answers in a numpy array. Plot your answers to both questions in some convenient way (probably not a scatter plot - look at the matplotlib gallery for inspiration!). Do the answers you get make sense to you? End of explanation """ # Create any Python and Markdown cells you need # to write your letter, do calculations, and make figures # You can add more cells using the 'Insert' menu # Note: you do not actually have to send this letter, but you can if you want! """ Explanation: Section 2: Get the Lead Out, continued As described in the in-class assignment on this subject, you're going to create a letter to send to the Governor's office based on the data analysis you did in class and what you do here. Use the rest of this notebook (starting with the "Your Document to the Governor's Office") to write that letter. Consider this core question: Did water lead levels exceed the EPA's action limits? And if they did, how can we understand how badly it exceeded the limits? Your document should be about 3-4 paragraphs long. You're encouraged to use code and results from your in-class work in your document. And, you should do the following: State your position on whether lead levels exceeded EPA limits. Make it clear what your investigation found. Justify your position with graphics and written analysis to explain why you think what you think. Consider counterarguments. Could someone try to use the same data to arrive at a different conclusion than yours? If they could, explain why you think that position is flawed. Remember: This is real data. 
So, the conclusions you draw matter. These are Flint residents' actual living conditions. You may find other results online, but you still have to do your own analysis to decide whether you agree with their results. Any numerical conclusions you draw should be backed up by your code. If you say the average lead level was below EPA limits, you'll need to be able to back up that claim in your notebook either with graphical evidence or numerical evidence (calculations). Your Letter to the Governor's Office End of explanation """
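One way to make the core question concrete in your letter (a sketch only: the file and column names below are placeholders, not the actual schema of the class dataset) is to compare the 90th percentile of the lead measurements against the EPA Lead and Copper Rule action level of 15 parts per billion, which is exceeded when more than 10% of samples are above 15 ppb.
import numpy as np
import pandas as pd

samples = pd.read_csv("flint_lead_samples.csv")   # placeholder file name
lead_ppb = samples["lead_ppb"].dropna()           # placeholder column name
p90 = np.percentile(lead_ppb, 90)
action_level = 15.0  # ppb, EPA Lead and Copper Rule action level
print("90th percentile: {:.1f} ppb ({})".format(
    p90, "exceeds the action level" if p90 > action_level else "within the action level"))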
tofgarion/lp-visu
lp_visu/lp_visu_ex.ipynb
gpl-3.0
from lp_visu import LPVisu from scipy.optimize import linprog import numpy as np """ Explanation: This is a simple Jupyter Notebook example presenting how to use the LPVisu class. First, import the LPVisu class and the necessary Python packages: End of explanation """ A = [[1.0, 0.0], [1.0, 2.0], [2.0, 1.0]] b = [8.0, 15.0, 18.0] c = [4.0, 3.0] """ Explanation: Define the problem: End of explanation """ x1_bounds = (0, None) x2_bounds = (0, None) x1_gui_bounds = (-1, 16) x2_gui_bounds = (-1, 10) visu = LPVisu(A, b, c, x1_bounds, x2_bounds, x1_gui_bounds, x2_gui_bounds, scale = 0.8, pivot_scale = 2.0, xk = (1, 1), obj = 40) """ Explanation: Define the bounds for the two variables x1 and x2, the GUI bounds and create the visualization object (add a "fake" pivot at (1, 1) and draw the objective function for value 40): End of explanation """ def lp_simple_callback(optimizeResult): """A simple callback function to see what is happening to print each step of the algorithm and to use the visualization. """ print("current iteration: " + str(optimizeResult["nit"])) print("current slack: " + str(optimizeResult["slack"])) print("current solution: " + str(optimizeResult["x"])) print() LPVisu(A, b, c, x1_bounds, x2_bounds, x1_gui_bounds, x2_gui_bounds, scale = 0.8, pivot_scale = 2.0, xk = optimizeResult["x"]) """ Explanation: Define a simple callback function to be called at each step of the linprog simplex algorithm. This callback function must take an OptimizeResult object as a parameter: End of explanation """ res = linprog(-1.0 * np.array(c), A_ub=A, b_ub=b, bounds=(x1_bounds, x2_bounds), callback=lp_simple_callback, options={"disp": True}) print(res) """ Explanation: Solve the problem using the callback function and print the result: End of explanation """
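A note on the -1.0 factor in the cell above (a short sketch reusing A, b, c and the bounds already defined): linprog always minimizes its objective, so maximizing c·x is posed as minimizing (-c)·x, and the maximal value is recovered as -res.fun. For this particular A, b and c the optimum is x = (7, 4) with objective value 40, which is why the objective line for value 40 was drawn earlier.
res_max = linprog(-1.0 * np.array(c), A_ub=A, b_ub=b,
                  bounds=(x1_bounds, x2_bounds))
print("optimal vertex:", res_max.x)        # expected: [7. 4.]
print("maximal objective:", -res_max.fun)  # expected: 40.0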
retnuh/deep-learning
autoencoder/Convolutional_Autoencoder.ipynb
mit
%matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') """ Explanation: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. End of explanation """ learning_rate = 0.01 inputs_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name='inputs') targets_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2, 'same') # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2, 'same') # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(conv3, 2, 2, 'same') # Now 4x4x8 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7)) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14)) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28)) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), strides=(1, 1), padding='same', activation=None) #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.nn.sigmoid(logits) # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) """ Explanation: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. 
First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. End of explanation """ sess = tf.Session() epochs = 20 batch_size = 500 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}\r".format(batch_cost), end='') fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() """ Explanation: Training As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. 
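To make that shape handling explicit, here is a tiny check (added as a sketch, not part of the original notebook): the TensorFlow MNIST loader hands back flattened 784-element vectors, and the convolutional network above expects batches of shape (batch, 28, 28, 1), hence the reshape in the training loop.
batch_imgs = mnist.train.images[:4]
print(batch_imgs.shape)                           # (4, 784)
print(batch_imgs.reshape((-1, 28, 28, 1)).shape)  # (4, 28, 28, 1)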
End of explanation """ learning_rate = 0.01 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 28x28x32 maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2, 'same') # Now 14x14x32 conv2 = tf.layers.conv2d(maxpool1, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 14x14x32 maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2, 'same') # Now 7x7x32 conv3 = tf.layers.conv2d(maxpool2, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 7x7x16 encoded = tf.layers.max_pooling2d(conv3, 2, 2, 'same') # Now 4x4x16 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7)) # Now 7x7x16 conv4 = tf.layers.conv2d(upsample1, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 7x7x16 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14)) # Now 14x14x16 conv5 = tf.layers.conv2d(upsample2, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 14x14x32 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28)) # Now 28x28x32 conv6 = tf.layers.conv2d(upsample3, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu) # Now 28x28x32 logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), strides=(1, 1), padding='same', activation=None) #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.nn.sigmoid(logits) # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) sess = tf.Session() epochs = 100 batch_size = 500 # Sets how much noise we're adding to the MNIST images noise_factor = 0.5 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images from the batch imgs = batch[0].reshape((-1, 28, 28, 1)) # Add random noise to the input images noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # Noisy images as inputs, original images as targets batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}\r".format(batch_cost), end='') print() """ Explanation: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. 
Otherwise the architecture is the same as before. Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([noisy_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) """ Explanation: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. End of explanation """
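A quantitative complement to the visual check above (a sketch that reuses in_imgs, noisy_imgs and reconstructed from the previous cell): comparing mean squared error against the clean images before and after the autoencoder gives a rough number for how much noise was removed.
clean = in_imgs.reshape((10, 28, 28, 1))
mse_noisy = np.mean((noisy_imgs.reshape((10, 28, 28, 1)) - clean) ** 2)
mse_denoised = np.mean((reconstructed - clean) ** 2)
print("MSE of noisy inputs vs clean:    ", mse_noisy)
print("MSE of denoised outputs vs clean:", mse_denoised)  # should be noticeably smaller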
QuantScientist/Deep-Learning-Boot-Camp
day03/2.2 CNN HandsOn - MNIST Dataset.ipynb
mit
import numpy as np import keras from keras.datasets import mnist # Load the datasets (X_train, y_train), (X_test, y_test) = mnist.load_data() """ Explanation: CNN HandsOn with Keras Problem Definition Recognize handwritten digits Data The MNIST database (link) is a database of handwritten digits. The training set has $60,000$ samples. The test set has $10,000$ samples. The digits are size-normalized and centered in a fixed-size image. The data page has a description of how the data was collected. It also reports the benchmarks of various algorithms on the test dataset. Load the data The data is available in the repo's data folder. Let's load that using the keras library. For now, let's load the data and see how it looks. End of explanation """ # What is the type of X_train? # What is the type of y_train? # Find number of observations in training data # Find number of observations in test data # Display first 2 records of X_train # Display the first 10 records of y_train # Find the number of observations for each digit in the y_train dataset # Find the number of observations for each digit in the y_test dataset # What is the dimension of X_train? What does that mean? """ Explanation: Basic data analysis on the dataset End of explanation """ from matplotlib import pyplot import matplotlib as mpl %matplotlib inline # Displaying the first training data fig = pyplot.figure() ax = fig.add_subplot(1,1,1) imgplot = ax.imshow(X_train[0], cmap=mpl.cm.Greys) imgplot.set_interpolation('nearest') ax.xaxis.set_ticks_position('top') ax.yaxis.set_ticks_position('left') pyplot.show() # Let's now display the 11th record """ Explanation: Display Images Let's now display some of the images and see how they look. We will be using the matplotlib library for displaying the images. End of explanation """
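For reference, a sketch of how a few of the questions above might be answered (added here as one possible approach, not part of the original exercise):
print(type(X_train), type(y_train))   # both are numpy arrays
print(X_train.shape, X_test.shape)    # (60000, 28, 28) and (10000, 28, 28)
print(np.bincount(y_train))           # observations per digit in the training labels
print(np.bincount(y_test))            # observations per digit in the test labels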
quantumlib/Cirq
docs/tutorials/google/echoes.ipynb
apache-2.0
try: import cirq except ImportError: !pip install --quiet cirq --pre from typing import Optional, Sequence import matplotlib.pyplot as plt import numpy as np import cirq import cirq_google as cg from cirq.experiments import random_rotations_between_grid_interaction_layers_circuit """ Explanation: Qubit picking with Loschmidt echoes <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/tutorials/google/echoes"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/echoes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/echoes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/echoes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> A Loschmidt echo circuit applies $UU^\dagger$ for some unitary $U$ and measures the probability of the ground state $p(0)$. In the noiseless case, $p(0) = 1$, and the deviation from this value gives some indication to the amount of noise in the processor. In particular, by fitting an exponential decay to the measured ground state probability vs. number of cycles (depth of $U$), we can estimate the gate error per cycle on a set of qubits. By varying this experiment over different qubit configurations, we can select the best configuration (lowest gate error per cycle) to run an experiment on. Disclaimer: The data shown in this tutorial is exemplary and does not reflect the performance of QCS in production. Setup We first install Cirq then import packages we will use. Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre. End of explanation """ # The Google Cloud Project id to use. project_id = '' #@param {type:"string"} processor_id = "" #@param {type:"string"} from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook device_sampler = get_qcs_objects_for_notebook(project_id, processor_id) """ Explanation: Next, we authorize to use the Quantum Computing Service. End of explanation """ def create_loschmidt_echo_circuit( qubits: Sequence[cirq.GridQubit], cycles: int, twoq_gate: cirq.Gate = cirq.FSimGate(np.pi / 4, 0.0), pause: Optional[cirq.Duration] = None, seed: Optional[int] = None, ) -> cirq.Circuit: """Returns a Loschmidt echo circuit using a random unitary U. Args: qubits: Qubits to use. cycles: Depth of random rotations in the forward & reverse unitary. twoq_gate: Two-qubit gate to use. pause: Optional duration to pause for between U and U^\dagger. seed: Seed for circuit generation. """ # Forward (U) operations. forward = random_rotations_between_grid_interaction_layers_circuit( qubits, depth=cycles, two_qubit_op_factory=lambda a, b, _: twoq_gate.on(a, b), pattern=cirq.experiments.GRID_STAGGERED_PATTERN, single_qubit_gates=[cirq.PhasedXPowGate(phase_exponent=p, exponent=0.5) for p in np.arange(-1.0, 1.0, 0.25)], seed=seed ) # Optional pause. 
if pause is not None: wait = cirq.Moment(cirq.WaitGate(pause).on(q) for q in qubits) else: wait = [] # Reverse (U^\dagger) operations. reverse = cirq.inverse(forward) # Measure all qubits. measure = cirq.measure(*qubits, key="z") return forward + wait + reverse + measure """ Explanation: Creating the circuits The function below creates a Loschmidt echo using a random circuit for $U$ on a given set of qubits for a given number of cycles. A pause can be optionally applied after $U$ and before $U^\dagger$. End of explanation """ """See an example circuit.""" circuit = create_loschmidt_echo_circuit( qubits=cirq.GridQubit.square(2), cycles=2, pause=cirq.Duration(nanos=5.0) ) circuit """ Explanation: For example, we visualize a Loschmidt echo circuit below. End of explanation """ """Loschmidt echo benchmark on a simulator.""" # Simulate the circuit. nreps = 20_000 res = cirq.Simulator().run(circuit, repetitions=nreps) # Verify the survival probability is 1.0. ground_state_prob = np.mean(np.sum(res.measurements["z"], axis=1) == 0) print("Survival probability:", ground_state_prob) """ Explanation: As mentioned, without noise all measurements should be $0$s. We verify this below by computing the ground state probability (or survival probability) on a noiseless simulator. End of explanation """ def to_ground_state_prob(result: cirq.Result) -> float: return np.mean(np.sum(result.measurements["z"], axis=1) == 0) """ Explanation: For convenience, we define a helper function to compute the ground state probability from a measurement result below. End of explanation """ """Set parameters for Loschmidt echo benchmark.""" processor_id = "weber" cycle_values = range(0, 80 + 1, 2) pause = None nreps = 20_000 trials = 10 """ Explanation: Running the circuits We now create several Loschmidt echo circuits and run them on the Quantum Engine. The next cell sets various parameters including processor_id, list of cycles (depths) to use, (optional) pause, and the number of repetitions nreps. The trials parameter is the number of independent experiments to perform with the above parameters. End of explanation """ calibration = cg.get_engine_calibration(processor_id=processor_id) metric = "two_qubit_sqrt_iswap_gate_xeb_pauli_error_per_cycle" _ = calibration.heatmap(metric).plot() """ Explanation: We now select several qubit configurations to run the Loschmidt echo experiment on. A good starting point for picking qubits is the calibration data. End of explanation """ """Pick sets of qubits to run Loschmidt echoes on.""" qubit_sets_indices = [ [(4, 7), (4, 8), (5, 8), (5, 7)], [(0, 5), (0, 6), (1, 6), (1, 5)], # From the calibration, we expect this to be the worst configuration. [(2, 6), (2, 7), (3, 7), (3, 6)], [(7, 3), (7, 4), (8, 4), (8, 3)], ] # Convert indices to grid qubits. qubit_sets = [[cirq.GridQubit(*idx) for idx in qubit_indices] for qubit_indices in qubit_sets_indices] """ Explanation: Using this calibration information, we select several candidate sets of qubits to use. Note: We intentionally select one qubit configuration with a high calibration error to show this propagates through in our results. In practice, one would usually want all qubit configurations to have low errors to find the best one. End of explanation """ """Run the Loschmidt echo experiment.""" sampler = cg.get_engine_sampler(processor_id=processor_id, gate_set_name="sqrt_iswap") probs = [] for trial in range(trials): print("\r", f"Status: On trial {trial + 1} / {trials}", end="") # Create the batch of circuits. 
batch = [ create_loschmidt_echo_circuit(qubits, cycles=c, pause=pause, seed=trial) for qubits in qubit_sets for c in cycle_values ] # Run the batch. results = sampler.run_batch(programs=batch, repetitions=nreps) # Determine the ground state probability for each result. probs.append([to_ground_state_prob(*res) for res in results]) """ Explanation: We now run the Loschmidt echo circuits on each candidate set of qubits. End of explanation """ # Average data over trials. avg_probs = np.average(probs, axis=0).reshape(len(qubit_sets), len(cycle_values)) std_probs = np.std(probs, axis=0).reshape(len(qubit_sets), len(cycle_values)) # Plotting. plt.figure(figsize=(7, 5)) for i in range(len(qubit_sets)): plt.errorbar( x=cycle_values, y=avg_probs[i], yerr=std_probs[i], capsize=5, lw=2, label=f"Qubit configuration {i}" ) plt.legend() plt.ylabel("Survival probability") plt.xlabel("Cycle") plt.grid("on"); """ Explanation: Plotting the results Below we plot the average survival probability on each qubit configuration. End of explanation """ """Fit an exponential decay to the collected data.""" from scipy.optimize import curve_fit def fit(cycle, a, f): return a * np.exp((f - 1.0) * cycle) for i in range(len(qubit_sets)): (a, f), _ = curve_fit( fit, xdata=cycle_values, ydata=avg_probs[i], ) print(f"Error/cycle on qubit configuration {i}: {round((1 - f) * 100, 2)}%") """ Explanation: The initial point (at zero cycles) reflects readout error, and the decay rate reflects the gate error per cycle. We fit an exponential to each curve to determine the gate error per cycle below. Note: To ensure good fit parameters are calculated, it is important to collect enough data such that the above curves reach their asymptote. End of explanation """
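As a sanity check on the fitting procedure (a sketch added here, not part of the original experiment), one can generate a synthetic decay with a known error per cycle and confirm that curve_fit recovers it with the same fit function used above.
true_a, true_error = 0.95, 0.01  # assumed values for the synthetic curve
cycles = np.array(cycle_values)
synthetic = true_a * np.exp(-true_error * cycles)
(a_fit, f_fit), _ = curve_fit(fit, xdata=cycles, ydata=synthetic)
print(f"recovered error/cycle: {(1 - f_fit) * 100:.2f}%")  # should be close to 1.00%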
crystalzhaizhai/cs207_yi_zhai
lectures/L2/L2.ipynb
mit
%%bash cd /tmp rm -rf playground #remove if it exists git clone https://github.com/dsondak/playground.git %%bash ls -a /tmp/playground """ Explanation: Lecture 2: Version Control with Git This tutorial is largely based on the repository: git@github.com:rdadolf/git-and-github.git which was created for IACS's ac297r course by Robert Adolf. Most of the credit goes to him. It has been edited for cs109 and for various versions of cs207. Version control is a way of tracking the change history of a project. Even if you have never used a version control tool, you've probably already done it manually: copying and renaming project folders ("paper-1.doc", "paper-2.doc", etc.) is a form of version control. Git is a tool that automates and enhances a lot of the tasks that arise when dealing with larger, longer-living, and collaborative projects. It's also become the common underpinning to many popular online code repositories, GitHub being the most popular. We'll go over the basics of git, but we should point out that a lot of talented people have given git tutorials, and we won't do any better than they have. In fact, if you're interested in learning git deeply and have some time on your hands, I suggest you stop reading this and instead read the Git Book. Scott Chacon and Ben Straub have done a tremendous job, and if you want to understand both the interfaces and the mechanisms behind git, this is the place to start. Table of Contents Lecture 2: Version Control with Git Why should you use version control? Git Basics Common Tasks in the version control of files. Forking a repository Cloning a repository Poking around Making changes Remotes and fetching from them Git habits Why should you use version control? If you ask 10 people, you'll get 10 different answers, but one of the commonalities is that most people don't realize how integral it is to their development process until they've started using it. Still, for the sake of argument, here are some highlights: You can undo anything: Git provides a complete history of every change that has ever been made to your project, timestamped, commented, and attributed. If something breaks, you always have the choice of going back to a previous state. You won't need to keep undo-ing things: One of the advantages of using git properly is that by keeping new changes separate from a stable base, you tend to avoid the massive rollbacks associated with constantly tinkering with a single code. You can identify exactly when and where changes were made: Git allows you to pinpoint when a particular piece of code was changed, so finding what other pieces of code a bug might affect or figuring out why a certain expression was added is easy. Git forces teams to face conflicts directly: On a team-based project, many people are often working with the same code. By having a tool which understands when and where files were changed, it's easy to see when changes might conflict with each other. While it might seem troublesome sometimes to have to deal with conflicts, the alternative&mdash;not knowing there's a conflict&mdash;is much more insidious. Git Basics The first thing to understand about git is that the contents of your project are stored in several different states and forms at any given time. 
If you think about what version control is, this might not be surprising: in order to remember every change that's ever been made, you need to store a record of those changes somewhere, and to be able to handle multiple people changing the same code, you need to have different copies of the project and a way to combine them. You can think about git operating on four different areas: The working directory is what you're currently looking at. When you use an editor to modify a file, the changes are made to the working directory. The staging area is a place to collect a set of changes made to your project. If you have changed three files to fix a bug, you will add all three to the staging area so that you can remember the changes as one historical entity. It is also called the index. You move files from the working directory to the index using the command git add. The local repository is the place where git stores everything you've ever done to your project. Even when you delete a file, a copy is stored in the repo (this is necessary for always being able to undo any change). It's important to note that a local repository doesn't look much at all like your project files or directories. Git has its own way of storing all the information, and if you're curious what it looks like, look in the .git directory in the working directory of your project. Files are moved from the index to the local repository via the command git commit. When working in a team, every member will be working on their own local repository. An upstream repository allows everyone to agree on a single version of history. If two people have made changes on their local repositories, they will combine those changes in the upstream repository. In our case this upstream repository is hosted by github. This need not be the case; SEAS provides git hosting, as do companies like Atlassian (bitbucket). This upstream repository is also called a remote in git parlance. The standard github remote is called the origin: it is the repository which is given a web page on github. One usually moves code from local to remote repositories using git push, and in the other direction using git fetch. You can think of most git operations as moving code or metadata from one of these areas to another. Common Tasks in the version control of files. Forking a repository Forking a repository done on github. On github, go to the url https://github.com/IACS-CS-207/playground. Click the "Fork button on the upper right side. Forking brings a repository into your own namespace. Its really a cloning process (see below), but its done between two "remotes" on the server. In other words it creates a second upstream repository on the server, called the origin. The forking process on github will ask you where you want to fork the repository. Choose your own github id. In my case I will choose @dsondak, as in the screenshot above. In this tutorial, wherever you see dsondak, substitute your own github id. The forking procedure leaves me with my own repository, dsondak/playground. Cloning a repository Now that we have a fork of the IACS-CS-207/playground repository, let's clone it down to our local machines. clone Cloning a repository does two things: it takes a repository from somewhere (usually an upstream repository) and makes a local copy (your new local repository), and it creates the most recent copy of all of the files in the project (your new working directory). This is generally how you will start working on a project for the first time. 
Cloning a repository depends a lot on the type of repository you're using. If you're cloning out of a directory on the machine you're currently on, it's just the path to the &lt;project&gt;.git file. NOTE: From this point on, you will see cells containing code. You should type those commands into your terminal. WARNING! The code in the following cells is always preceeded by a combination of the following commands: 1. %%bash and 2. cd /tmp or cd /tmp/playground. DO NOT type any of those commands into your terminal. They are used only in the notebook environment! End of explanation """ %%bash cd /tmp/playground git log """ Explanation: Poking around We have a nice smelling fresh repository. We'll explore the repo from the Git point of view using Git commands. log Log tells you all the changes that have occured in this project as of now... End of explanation """ %%bash cd /tmp/playground git status """ Explanation: Each one of these "commits" is an SHA hash. It uniquely identifies all actions that have happened to this repository previously. Getting help with commands If you ever need help on a command, you can find the git man pages by hyphenating git and the command name. Try it! Press the spacebar to scoll down and q to quit. status Status is your window into the current state of your project. It can tell you which files you have changed and which files you currently have in your staging area. End of explanation """ %%bash cd /tmp/playground cat .git/config """ Explanation: Pay close attention to the text above. It says we are on the master branch of our local repository, and that this branch is up-to-date with the master branch of the upstream repository or remote named origin. We know this as clone brings down a copy of the remote branch: origin/master represents the local copy of the branch that came from the upstream repository (nicknamed origin in this case). Branches are different, co-existing versions of your project. Here we have encountered two of them, but remember there is a third one in the repository we forked from, and perhaps many more, depending on who else made these forks. We'll have much more to say about branches later in this lecture or the next lecture. Branches represent a snapshot of the project by someone at some particular point in time. In general you will only care about your own branches and those of the "parent" remotes you forked/cloned from. Configuration Information is stored in a special file config, in a hidden folder called .git in your working directory. (The index and the local repository are stored there as well...more on that in a bit.) Note: Hidden files and directories are preceded by a dot. The only way to see them is to type ls -a where the a option tells the ls command to list hidden files and directories. A few special files and directories End of explanation """ %%bash cd /tmp/playground cat .gitignore """ Explanation: Notice that this file tells us about a remote called origin which is simply the github repository we cloned from. So the process of cloning left us with a remote. The file also tells us about a branch called master, which "tracks" a remote branch called master at origin. Finally I set us up with a .gitignore file hidden in the repository folder. It tells us what files to ignore when adding files to the index and comitting to the local repository. We use this file to ignore temporary data files and such when working in our repository. Folders are indicated with a / at the end, in which case, all files in that folder are ignored. 
For example, one of the lines in the .gitignore file is *.so. That line tells Git to ignore all files with the extension .so. Note that the .gitignore file is specialized to the Python language. Note too that when creating a GitHub repo, you are asked if you want to create a .gitignore file. You don't have to create one, but it's a good idea. Of course, you can always add one later if you so desire. End of explanation """ %%bash cd /tmp/playground echo '# Hello world!' > world.md git status """ Explanation: Making changes Ok! Enough poking around. Lets get down to business and add some files into our folder. Now let's say that we want to add a new file to the project. The canonical sequence is "edit&ndash;add&ndash;commit&ndash;push". End of explanation """ %%bash cd /tmp/playground git add world.md git status """ Explanation: We've created a file in the working directory, but it hasn't been staged yet. add When you've made a change to a set of files and are ready to create a commit, the first step is to add all of the changed files to the staging area. That is what add is for. Remember that what you see in the filesystem is your working directory, so the way to see what's in the staging area is with the status command. This also means that if you add something to the staging area and then edit it again, you'll need to add the file to the staging area again if you want to remember the new changes. See the Staging Modified Files section at Git - Recording Changes to the Repository. End of explanation """ %%bash cd /tmp/playground git commit -m "Hello world file to make sure things are working." %%bash cd /tmp/playground git status """ Explanation: Now our file is in the staging area (Index) waiting to be committed. The file is still not even in our local repository. Instead of doing git add world.md you could use git add . in the top level of the repository. This adds all new files and changed files to the index, and is particularly useful if you have created multiple new files. Of course, you should be careful with this because it's a little bit annoying if you decide that you didn't want to add a file. I usually avoid this if I can, but sometimes it's the way to go. commit When you're satisfied with the changes you've added to your staging area, you can commit those changes to your local repository with the commit command. Those changes will have a permanent record in the repository from now on. Every commit has two features you should be aware of: 1. The first is a hash. This is a unique identifier for all of the information about that commit, including the code changes, the timestamp, and the author. We saw this already when we used git log earlier. 2. The second is a commit message. This is text that you can (and should) add to a commit to describe what the changes were. Good commit messages are important! Commit messages are a way of quickly telling your future self and your collaborators what a commit was about. For even a moderately sized project, digging through tens or hundreds of commits to find the change you're looking for is a nightmare without friendly summaries. By convention, commit messages start with a single-line summary, then an empty line, then a more comprehensive description of the changes. This is an okay commit message. The changes are small, and the summary is sufficient to describe what happened. This is better. The summary captures the important information (major shift, direct vs. helper), and the full commit message describes what the high-level changes were. 
This. Don't do this. End of explanation """ %%bash cd /tmp/playground git branch -av """ Explanation: The git commit -m... version is just a way to specify a commit message without opening a text editor. The ipython notebook can't handle text editors. Don't worry, you'll get to use a text editor in your homework. If you use a text editor you just say git commit. Another nice command is to use git commit with the -a option: git commit -a. Note that git commit -a is shorthand to stage and commit a file which is already tracked all at once. It will not stage a file that is not yet tracked! End of explanation """ %%bash cd /tmp/playground git push git status """ Explanation: We see that our branch, "master", has one more commit than the "origin/master" branch, the local copy of the branch that came from the upstream repository (nicknamed "origin" in this case). Let's push the changes. push The push command takes the changes you have made to your local repository and attempts to update a remote repository with them. If you're the only person working with both of these (which is how a solo GitHub project would work), then push should always succeed. End of explanation """ %%bash cd /tmp/playground git remote add course https://github.com/IACS-CS-207/playground.git cat .git/config """ Explanation: You can go to your remote repo and see the changes! Remotes and fetching from them If you're working with other people, then it's possible that they have made changes to the remote repository between the time you first cloned it and now. push will fail. In our particular case of the playground repository, this is not going to happen, since you just cloned it and presumably havent invited anyone to collaborate with you on it. However you can imagine that the original repository IACS-CS-207/playground, which you are now divorced from, has changed, and that you somehow want to pull those changes in. That's where the next two commands come in. remote We have seen so far that our repository has one "remote", or upstream repository, which has been identified with the word origin, as seen in .git/config. We now wish to add another remote, which we shall call course, which points to the original repository we forked from. We want to do this to pull in changes, in case something changed there. This is the kind of thing you will do all the time in this course. Remember, in the first lecture we set up an upstream remote and today we pulled in changes from the upstream remote to your local repository. End of explanation """ %%bash cd /tmp/playground git fetch course """ Explanation: Notice that the master branch only tracks the same branch on the origin remote. We havent set up any connection with the course remote as yet. Now lets figure out how to get changes from an upstream repository, be it our origin upstream that a collaborator has pushed to, or another course remote to which one of the teaching staff has posted a change. fetch Let's say a collaborator has pushed changes to your shared upstream repository while you were editing. Their local repository and the upstream repository now both contain their changes, but your local repository does not. To update your local repository, you run fetch. But what if you've committed changes in the meantime? Does your local repository contain your changes or theirs? The answer is that it contains a record of both, but they are kept separate. Remember that git repositories are not copies of your project files. 
They store all the contents of your files, along with a bunch of metadata, but in its own internal format. Let's say that you and your collaborator both edited the same line of the same file at the same time in different ways. On your respective machines, you both add and commit your different changes, and your collaborator pushes theirs to the upstream repository. When you run fetch, git adds a record of their changes to your local repository alongside your own. These are called branches, and they represent different, coexisting versions of your project. The fetch command adds your collaborator's branch to your local repository, but keeps yours as well. End of explanation """ %%bash cd /tmp/playground git branch -avv """ Explanation: A copy of a new remote branch has been made. To see this, provide the -avv argument to git branch. End of explanation """ %%bash cd /tmp/playground git merge course/master git status """ Explanation: Indeed, the way git works is by creating copies of remote branches locally. Then it just compares to these "copy" branches to see what changes have been made. Sometimes we really do want to merge the changes. In this case, we want to merge the change from remotes/course/master. Eventually, we'll consider a case where you want to simply create another branch yourself and do things on that branch. merge Having multiple branches is fine, but at some point, you'll want to combine the changes that you've made with those made by others. This is called merging. There are two general cases when merging two branches: 1. First, the two branches are different but the changes are in unrelated places 2. Second, the two branches are different and the changes are in the same locations in the same files. The first scenario is easy. Git will simply apply both sets of changes to the appropriate places and put the resulting files into the staging area for you. Then you can commit the changes and push them back to the upstream repository. Your collaborator does the same, and everyone sees everything. The second scenario is more complicated. Let's say the two changes set some variable to different values. Git can't know which is the correct value. One solution would be to simply use the more recent change, but this very easily leads to self-inconsistent programs. A more conservative solution, and the one git uses, is to simply leave the decision to the user. When git detects a conflict that it cannot resolve, merge fails, and git places a modified version of the offending file in your project directory. This is important: the file that git puts into your directory is not actually either of the originals. It is a new file that has special markings around the locations that conflicted. We shall not consider this case yet, but will return to dealing with conflicts soon. Lets merge in the changes from course/master: (The next 2-3 inputs only make sense if IACS-CS-207/playground master is edited in-between.) End of explanation """ %%bash cd /tmp/playground git log -3 """ Explanation: We seem to be ahead of our upstream-tracking repository by 2 commits..why? End of explanation """ %%bash cd /tmp/playground git push git status """ Explanation: Aha: one commit came from the course upstream, and one was a merge commit. In the case you had edited the README.md at the same time and comitted locally, you would have been asked to resolve the conflict in the merge (the second case above). These changes are only on our local repo. We would like to have them on our remote repo. 
Let's push these changes to the origin now.
End of explanation
"""
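%%bash
cd /tmp/playground
# A sketch, not a required step: `git pull` combines the fetch and merge shown above.
# It assumes the /tmp/playground clone and the `course` remote set up earlier still
# exist, and that there are no conflicting local edits.
git pull course master
git log -2
"""
Explanation: A quick aside, offered only as a sketch: the fetch-then-merge sequence we just walked through can usually be collapsed into a single command, git pull, which fetches from the named remote and immediately merges the named branch into your current branch. The cell above assumes the playground clone and the course remote are still in place; if the merge would conflict, pull stops at the merge step, exactly as described in the merge section.
End of explanation
"""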
diogro/ode_examples
Numerical Integration Tutorial.ipynb
mit
%matplotlib inline from numpy import * from matplotlib.pyplot import * # time intervals tt = arange(0, 10, 0.5) # initial condition xx = [0.1] def f(x): return x * (1.-x) # loop over time for t in tt[1:]: xx.append(xx[-1] + 0.5 * f(xx[-1])) # plotting plot(tt, xx, '.-') ta = arange(0, 10, 0.01) plot(ta, 0.1 * exp(ta)/(1+0.1*(exp(ta)-1.))) xlabel('t') ylabel('x') legend(['approximation', 'analytical solution'], loc='best',) """ Explanation: Numerically solving differential equations with python This is a brief description of what numerical integration is and a practical tutorial on how to do it in Python. Software required In order to run this notebook in your own computer, you need to install the following software: python numpy and scipy - python scientific libraries matplotlib - a library for plotting the ipython notebook (now renamed to Jupyter) On Windows and Mac, we recommend installing the Anaconda distribution, which includes all of the above in a single package (among several other libraries), available at http://continuum.io/downloads. On Linux, you can install everything using your distribution's prefered way, e.g.: Debian/Ubuntu: sudo apt-get install python-numpy python-scipy python-matplotlib python-ipython-notebook Fedora: sudo yum install python-numpy python-scipy python-matplotlib python-ipython-notebook Arch: `sudo pacman -S python-numpy python-scipy python-matplotlib jupyter Code snippets shown here can also be copied into a pure text file with .py extension and ran outside the notebook (e.g., in an python or ipython shell). From the web Alternatively, you can use a service that runs notebooks on the cloud, e.g. SageMathCloud or wakari. It is possible to visualize publicly-available notebooks using http://nbviewer.ipython.org, but no computation can be performed (it just shows saved pre-calculated results). How numerical integration works Let's say we have a differential equation that we don't know how (or don't want) to derive its (analytical) solution. We can still find out what the solutions are through numerical integration. So, how dows that work? The idea is to approximate the solution at successive small time intervals, extrapolating the value of the derivative over each interval. For example, let's take the differential equation $$ \frac{dx}{dt} = f(x) = x (1 - x) $$ with an initial value $x_0 = 0.1$ at an initial time $t=0$ (that is, $x(0) = 0.1$). At $t=0$, the derivative $\frac{dx}{dt}$ values $f(0.1) = 0.1 \times (1-0.1) = 0.09$. We pick a small interval step, say, $\Delta t = 0.5$, and assume that that value of the derivative is a good approximation over the whole interval from $t=0$ up to $t=0.5$. This means that in this time $x$ is going to increase by $\frac{dx}{dt} \times \Delta t = 0.09 \times 0.5 = 0.045$. So our approximate solution for $x$ at $t=0.5$ is $x(0) + 0.045 = 0.145$. We can then use this value of $x(0.5)$ to calculate the next point in time, $t=1$. We calculate the derivative at each step, multiply by the time step and add to the previous value of the solution, as in the table below: | $t$ | $x$ | $\frac{dx}{dt}$ | | ---:|---------:|----------:| | 0 | 0.1 | 0.09 | | 0.5 | 0.145 | 0.123975 | | 1.0 | 0.206987 | 0.164144 | | 1.5 | 0.289059 | 0.205504 | | 2.0 | 0.391811 | 0.238295 | Of course, this is terribly tedious to do by hand, so we can write a simple program to do it and plot the solution. Below we compare it to the known analytical solution of this differential equation (the logistic equation). 
Don't worry about the code just yet: there are better and simpler ways to do it! End of explanation """ # everything after a '#' is a comment ## we begin importing libraries we are going to use # import all (*) functions from numpy library, eg array, arange etc. from numpy import * # import all (*) interactive plotting functions, eg plot, xlabel etc. from matplotlib.pyplot import * # import the numerical integrator we will use, odeint() from scipy.integrate import odeint # time steps: an array of values starting from 0 going up to (but # excluding) 10, in steps of 0.01 t = arange(0, 10., 0.01) # parameters r = 2. K = 10. # initial condition x0 = 0.1 # let's define the right-hand side of the differential equation # It must be a function of the dependent variable (x) and of the # time (t), even if time does not appear explicitly # this is how you define a function: def f(x, t, r, K): # in python, there are no curling braces '{}' to start or # end a function, nor any special keyword: the block is defined # by leading spaces (usually 4) # arithmetic is done the same as in other languages: + - * / return r*x*(1-x/K) # call the function that performs the integration # the order of the arguments is as below: the derivative function, # the initial condition, the points where we want the solution, and # a list of parameters x = odeint(f, x0, t, (r, K)) # plot the solution plot(t, x) xlabel('t') # define label of x-axis ylabel('x') # and of y-axis # plot analytical solution # notice that `t` is an array: when you do any arithmetical operation # with an array, it is the same as doing it for each element plot(t, K * x0 * exp(r*t)/(K+x0*(exp(r*t)-1.))) legend(['approximation', 'analytical solution'], loc='best') # draw legend """ Explanation: Why use scientific libraries? The method we just used above is called the Euler method, and is the simplest one available. The problem is that, although it works reasonably well for the differential equation above, in many cases it doesn't perform very well. There are many ways to improve it: in fact, there are many books entirely dedicated to this. Although many math or physics students do learn how to implement more sophisticated methods, the topic is really deep. Luckily, we can rely on the expertise of lots of people to come up with good algorithms that work well in most situations. Then, how... ? We are going to demonstrate how to use scientific libraries to integrate differential equations. Although the specific commands depend on the software, the general procedure is usually the same: define the derivative function (the right hand side of the differential equation) choose a time step or a sequence of times where you want the solution provide the parameters and the initial condition pass the function, time sequence, parameters and initial conditions to a computer routine that runs the integration. A single equation So, let's start with the same equation as above, the logistic equation, now with any parameters for growth rate and carrying capacity: $$ \frac{dx}{dt} = f(x) = r x \left(1 - \frac{x}{K} \right) $$ with $r=2$, $K=10$ and $x(0) = 0.1$. We show how to integrate it using python below, introducing key language syntax as necessary. End of explanation """ # we didn't need to do this again: if the cell above was run already, # the libraries are imported, but we repeat it here for convenience from numpy import * from matplotlib.pyplot import * from scipy.integrate import odeint t = arange(0, 50., 0.01) # parameters r = 2. c = 0.5 e = 0.1 d = 1. 
# initial condition: this is an array now! x0 = array([1., 3.]) # the function still receives only `x`, but it will be an array, not a number def LV(x, t, r, c, e, d): # in python, arrays are numbered from 0, so the first element # is x[0], the second is x[1]. The square brackets `[ ]` define a # list, that is converted to an array using the function `array()`. # Notice that the first entry corresponds to dV/dt and the second to dP/dt return array([ r*x[0] - c * x[0] * x[1], e * c * x[0] * x[1] - d * x[1] ]) # call the function that performs the integration # the order of the arguments is as below: the derivative function, # the initial condition, the points where we want the solution, and # a list of parameters x = odeint(LV, x0, t, (r, c, e, d)) # Now `x` is a 2-dimension array of size 5000 x 2 (5000 time steps by 2 # variables). We can check it like this: print('shape of x:', x.shape) # plot the solution plot(t, x) xlabel('t') # define label of x-axis ylabel('populations') # and of y-axis legend(['V', 'P'], loc='upper right') """ Explanation: We get a much better approximation now, the two curves superimpose each other! Now, what if we wanted to integrate a system of differential equations? Let's take the Lotka-Volterra equations: $$ \begin{aligned} \frac{dV}{dt} &= r V - c V P\ \frac{dP}{dt} &= ec V P - dP \end{aligned}$$ In this case, the variable is no longer a number, but an array [V, P]. We do the same as before, but now x is going to be an array: End of explanation """ # `x[0,0]` is the first value (1st line, 1st column), `x[0,1]` is the value of # the 1st line, 2nd column, which corresponds to the value of P at the initial # time. We plot just this point first to know where we started: plot(x[0,0], x[0,1], 'o') print('Initial condition:', x[0]) # `x[0]` or (equivalently) x[0,:] is the first line, and `x[:,0]` is the first # column. Notice the colon `:` stands for all the values of that axis. We are # going to plot the second column (P) against the first (V): plot(x[:,0], x[:,1]) xlabel('V') ylabel('P') # Let's calculate and plot another solution with a different initial condition x2 = odeint(LV, [10., 4.], t, (r, c, e, d)) plot(x2[:,0], x2[:,1]) plot(x2[0,0], x2[0,1], 'o') """ Explanation: An interesting thing to do here is take a look at the phase space, that is, plot only the dependent variables, without respect to time: End of explanation """ def LV_plot(r=2, c=0.5, e=0.5, d=1): # Time range t = arange(0, 50., 0.01) # Initial conditions x0 = array([1., 3.]) # The function to be integrated def LV(x, t, r, c, e, d): return array([ r*x[0] - c * x[0] * x[1], e * c * x[0] * x[1] - d * x[1] ]) #integrating y = odeint(LV, x0, t, (r, c, e, d)) # ploting: use the function show show(plot(t, y)) """ Explanation: Congratulations: you are now ready to integrate any system of differential equations! (We hope generalizing the above to more than 2 equations won't be very challenging). Exploring parameters with a simple interface IPython’s widgets allow to create user interface (UI) controls for exploring your code interactively. To use this resource you have to run the code below in a computer with all the software required (see first section), plus the library ipywidgets. The interact function provides a quick way to use widgets to explore the parameter space of ODEs. To do this, first create a new function that integrate your ODEs and return a plot. The argumnts of this functions should be the parameters that you want to explore. 
End of explanation
"""
#Libraries to use interact
from __future__ import print_function
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets

#Now call the function to be integrated within interact
interact(LV_plot, r=(0,5.,0.1), c = (0,1,0.1), e = (0,1,0.1), d = (0,5, 0.1))
"""
Explanation: Now call the function you created above within interact. The arguments for the sliders that set each parameter of the equations are (min, max, step).
End of explanation
"""
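# A sketch (not part of the original tutorial): the same Lotka-Volterra system solved
# with scipy's newer `solve_ivp` interface. Two assumptions worth checking: solve_ivp
# is only available in recent scipy releases, and it expects the derivative as f(t, y),
# the opposite argument order from odeint. It chooses its own internal steps and
# returns the solution at the times listed in `t_eval`.
from numpy import array, linspace
from matplotlib.pyplot import plot, xlabel, ylabel, legend
from scipy.integrate import solve_ivp

r, c, e, d = 2., 0.5, 0.1, 1.

def LV_tfirst(t, x):
    # same right-hand side as LV above, with the (t, x) argument order solve_ivp uses
    return array([ r*x[0] - c*x[0]*x[1],
                   e*c*x[0]*x[1] - d*x[1] ])

t_eval = linspace(0, 50., 5000)
sol = solve_ivp(LV_tfirst, (0., 50.), [1., 3.], t_eval=t_eval)

# sol.t holds the times, and the rows of sol.y hold the two populations
plot(sol.t, sol.y[0], sol.t, sol.y[1])
xlabel('t')
ylabel('populations')
legend(['V', 'P'], loc='upper right')
"""
Explanation: If your scipy is recent enough, solve_ivp is an alternative to odeint with a slightly different calling convention. The cell above is only a sketch under that assumption: LV_tfirst and t_eval are names introduced here, the parameter values are the same ones used in the Lotka-Volterra example, and the result object exposes the solution times as sol.t and the two populations as the rows of sol.y.
End of explanation
"""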
ehongdata/Network-Analysis-Made-Simple
4. Cliques, Triangles and Squares (Instructor).ipynb
mit
G = nx.Graph() G.add_nodes_from(['a', 'b', 'c']) G.add_edges_from([('a','b'), ('b', 'c')]) nx.draw(G, with_labels=True) """ Explanation: Cliques, Triangles and Squares Let's pose a problem: If A knows B and B knows C, would it be probable that A knows C as well? In a graph involving just these three individuals, it may look as such: End of explanation """ G.add_node('d') G.add_edge('c', 'd') G.add_edge('d', 'a') nx.draw(G, with_labels=True) """ Explanation: Let's think of another problem: If A knows B, B knows C, C knows D and D knows A, is it likely that A knows C and B knows D? How would this look like? End of explanation """ # Load the network. G = nx.read_gpickle('Synthetic Social Network.pkl') nx.draw(G, with_labels=True) """ Explanation: The set of relationships involving A, B and C, if closed, involves a triangle in the graph. The set of relationships that also include D form a square. You may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other variables, closing triangles is one of the core ideas behind the system. A knows B and B knows C, then A probably knows C as well. If all of the triangles in the two small-scale networks were closed, then the graph would have represented cliques, in which everybody within that subgraph knows one another. In this section, we will attempt to answer the following questions: Can we identify cliques? Can we identify potential cliques that aren't captured by the network? Can we model the probability that two unconnected individuals know one another? As usual, let's start by loading the synthetic network. End of explanation """ # Example code that shouldn't be too hard to follow. def in_triangle(G, node): neighbors1 = G.neighbors(node) neighbors2 = [] for n in neighbors1: neighbors = G.neighbors(n) if node in neighbors2: neighbors2.remove(node) neighbors2.extend(G.neighbors(n)) neighbors3 = [] for n in neighbors2: neighbors = G.neighbors(n) neighbors3.extend(G.neighbors(n)) if node in neighbors3: return True else: return False in_triangle(G, 3) """ Explanation: Cliques In a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not. The core idea is that if a node is present in a triangle, then its neighbors' neighbors' neighbors should include itself. End of explanation """ nx.triangles(G, 3) """ Explanation: In reality, NetworkX already has a function that counts the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice. End of explanation """ # Possible answer def get_triangles(G, node): neighbors = set(G.neighbors(node)) triangle_nodes = set() """ Fill in the rest of the code below. """ for n in neighbors: neighbors2 = set(G.neighbors(n)) neighbors.remove(n) neighbors2.remove(node) triangle_nodes.update(neighbors2.intersection(neighbors)) neighbors.add(n) triangle_nodes.add(node) return triangle_nodes # Verify your answer with the following funciton call. Should return: # {1, 2, 3, 6, 23} get_triangles(G, 3) # Then, draw out those nodes. 
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
neighbors3 = G.neighbors(3)
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
"""
Explanation: Exercise
Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with?
Hint: The neighbor of my neighbor should also be my neighbor, then the three of us are in a triangle relationship.
Hint: Python Sets may be of great use for this problem. https://docs.python.org/2/library/stdtypes.html#set
Verify your answer by drawing out the subgraph composed of those nodes.
End of explanation
"""

# Possible Answer, credit Justin Zabilansky (MIT) for help on this.
def get_open_triangles(G, node):
    """
    There are many ways to represent this. One may choose to represent only the nodes involved in an open triangle; this is not the approach taken here.
    Rather, this code explicitly enumerates every open triangle present.
    """
    open_triangle_nodes = []
    neighbors = set(G.neighbors(node))

    for n in neighbors:
        neighbors2 = set(G.neighbors(n))
        neighbors2.remove(node)
        overlaps = set()
        for n2 in neighbors2:
            if n2 in neighbors:
                overlaps.add(n2)
        difference = neighbors.difference(overlaps)
        difference.remove(n)
        for n2 in difference:
            if set([node, n, n2]) not in open_triangle_nodes:
                open_triangle_nodes.append(set([node, n, n2]))
    return open_triangle_nodes

# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
#     fig = plt.figure(i)
#     nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
"""
Explanation: Friend Recommendation: Open Triangles
Now that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles. Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph.
Exercise
Can you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one?
Hint: You may still want to stick with set operations. Suppose we have the A-B-C triangle. If there are neighbors of C that are also neighbors of B, then those neighbors are in a triangle with B and C; consequently, if there are nodes for which C's neighbors do not overlap with B's neighbors, then those nodes are in an open triangle. The final implementation should include some conditions, and probably won't be as simple as described above.
End of explanation
"""
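# A sketch building on the ideas above (not one of the original exercises): rank
# friend recommendations for a node by counting, for every non-neighbor, how many
# open triangles the two of them would close together, i.e. how many mutual
# friends they share. `recommend_friends` and `top_n` are names introduced here.
from collections import Counter

def recommend_friends(G, node, top_n=5):
    neighbors = set(G.neighbors(node))
    counts = Counter()
    for n in neighbors:
        for n2 in G.neighbors(n):
            if n2 != node and n2 not in neighbors:
                counts[n2] += 1  # one more open triangle: node - n - n2
    return counts.most_common(top_n)

# the five most promising introductions for node 3
recommend_friends(G, 3)
"""
Explanation: Closing the loop on the friend recommendation idea: if closing open triangles is the goal, then the more mutual friends two unconnected people share, the stronger the recommendation. The cell above is a minimal sketch of that heuristic, assuming G is the synthetic social network loaded earlier. It simply counts shared neighbors and returns the top few candidates, a crude but common baseline for this kind of link prediction.
End of explanation
"""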
pdhimal1/AI-Project
Predictor/notebook_predictor.ipynb
mit
%matplotlib inline x_axis = np.arange(0+1, len(historical)+1) plt.plot(x_axis, historical_opening, 'b', x_axis, historical_closing, 'r') plt.xlabel('Day') plt.ylabel('Price ($)') #plt.figure(figsize=(20,10)) plt.title("Stock price: Opening vs Closing") plt.show(); """ Explanation: Plots Opening vs Closing blue - opening red - closing End of explanation """ plt.plot(x_axis, historical_high, 'b', x_axis, historical_low, 'g', x_axis, historical_opening, 'y', x_axis, historical_closing, 'r') plt.xlabel('Day') plt.ylabel('Price ($)') #plt.figure(figsize=(20,10)) plt.show(); """ Explanation: Historical opening, closing, high, low - opening - yellow - closing - red - high - blue - low - green End of explanation """ plt.plot(x_axis,historical_volume, 'g', x_axis, average_volume, 'b') plt.xlabel('Day') plt.ylabel('Volume') plt.show() """ Explanation: Volume vs Average Volume End of explanation """ opening = np.array(historical_opening) volume = np.array(historical_volume) high = np.array(historical_high) low = np.array(historical_low) avg_vol = np.array(average_volume) closing = np.array(historical_closing) """ Explanation: Convert the data collected into numpy arrays End of explanation """ data = np.vstack((opening, high, low, volume, avg_vol)) shape1, shape2 = data.shape data = data.reshape(shape2, shape1) data.shape """ Explanation: Stack the data Reshape (sample_size, #_of_features End of explanation """ opening_price = company.get_open() todays_volume = company.get_volume() high = company.get_days_high() low = company.get_days_low() avg_volume = average_volume[0] print opening_price print todays_volume print high print low print avg_volume #target_pre = np.asarray(closing) #target = np.vstack(target_pre) """ Explanation: Need today's data for the features selected End of explanation """ today =np.array((opening_price, high, low, todays_volume, avg_volume)) """ Explanation: Collect today's data int a numpy array End of explanation """ from sklearn import svm clf = svm.SVR(gamma=0.00001, C=29000) #Fit takes in data (#_samples X #_of_features array), and target(closing - 1 X #_of_Sample_size array) fit = clf.fit(data[:-10],closing[:-10]) predict = clf.predict(today) graph = clf.fit(data, closing).predict(data) date = company.get_trade_datetime() name = cn.find_name(ticker) print name, "[" , ticker, "]" #get company name using the ticker symbol print "\nPredicted [closing] price for ", date[:10], ": $", predict[0] company.refresh() #change = (company.get_price() - company.get_open())/company.get_open() #print change print "current price $", company.get_price() #print "% change today ", print data[:,0].shape print closing.shape print graph plt.scatter(data[:,0], closing, c='k', label='data') plt.hold('on') plt.plot(data[:,0],graph, c='g', label = 'Linear') plt.ylabel('target') plt.xlabel('data') plt.show() """ Explanation: Scikit-learn - Import svm - fit - predict End of explanation """
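# A sketch of a quick sanity check (not part of the original notebook): refit the
# same SVR on everything except the last 10 days, then score it on those held-out
# days. `clf_eval` is a name introduced here; `data` and `closing` are assumed to
# still hold the feature matrix and closing prices built above.
from sklearn import svm
from sklearn.metrics import mean_absolute_error

clf_eval = svm.SVR(gamma=0.00001, C=29000)
clf_eval.fit(data[:-10], closing[:-10])
held_out_pred = clf_eval.predict(data[-10:])

print "mean absolute error on the last 10 held-out days: $", mean_absolute_error(closing[-10:], held_out_pred)
"""
Explanation: Before trusting a single predicted closing price, it helps to know how far off the model tends to be. This cell is a minimal sketch of that check under the assumptions noted in its comments: it mirrors the earlier fit on data[:-10], predicts the 10 most recent closing prices the model never saw, and reports the mean absolute error in dollars.
End of explanation
"""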
tiffanyj41/hermes
notebooks/CF - Bayes, Pearson Correlation, etc.ipynb
apache-2.0
import datetime, time # timestamp is not correct; it is 8 hours ahead print (datetime.datetime.now() - datetime.timedelta(hours=8)).strftime('%Y-%m-%d %H:%M:%S') """ Explanation: Comparing Collaborative Filtering Systems According to studies done by the article "Comparing State-of-the-Art Collaborative Filtering Systems" by Laurent Candillier, Frank Meyer, and Marc Boulle, the best user based approach is based on pearson similarity and 1500 neighbors. The best item based approach is based on probabilistic similarity and 400 neighbors. The best model based approach is using K-means with euclidean distance, 4 clusters and prediction scheme based on the nearest cluster and Bayes model minimizing MAE. Lastly, the best default approach is based on Bayes rule minimizing MAE. We will try to implement the studies done by this article and see if we will achieve the same results. What we have implemented so far: * Bayes * Bayes MAP * Bayes MSE * Bayes MAE * Pearson Correlation (partially) Table of Contents: 0. Last updated 1. Install and import libraries 2. Load dataset 3. Convert dataset to DataFrame (optional) 4. Determine characteristics of data (optional) 5. Splitting the data (optional) 6. Calculate similarities and find nearest neighbors 7. Develop Model 8. Evaluate Metrics 9. Compare different CF systems Explanation of Bayes: * https://www.countbayesie.com/blog/2015/2/18/bayes-theorem-with-lego * http://www.analyticsvidhya.com/blog/2015/09/naive-bayes-explained/ 0. Last updated End of explanation """ import importlib import pip def _install(package): pip.main(['install', package]) def _import(package): importlib.import_module(package) def install_and_import(package): try: _import(package) except ImportError: _install(package) # install PyMC install_and_import("git+git://github.com/pymc-devs/pymc.git") # zip up source_dir located in GitHub remote_url's remote_branch and add it to Spark's source context remote_url = "https://github.com/lab41/hermes.git" remote_branch = "master" source_dir = "src" debug = True # helper functions import os import functools def _list_all_in_dir(dir_path): for path, subdirs, files in os.walk(dir_path): for filename in files: print os.path.join(path, filename) def _zip_dir(srcdir_path, zipfile_handler): try: zipfile_handler.writepy(srcdir_path) finally: zipfile_handler.close() def trackcalls(func): @functools.wraps(func) def wrapper(*args, **kwargs): wrapper.has_been_called = True return func(*args, **kwargs) wrapper.has_been_called = False return wrapper @trackcalls def _add_zipfile_to_sc(zipfile_path): sc.addPyFile(zipfile_path) import git import os import tempfile import shutil import zipfile # create a temporary directory tmpdir_path = tempfile.mkdtemp() if debug: print "temporary directory: %s\n" % tmpdir_path # ensure file is read/write by creator only saved_umask = os.umask(0077) # create a zipfile handler to zip the necessary files ziptmpdir_path = tempfile.mkdtemp() if debug: print "temporary directory for zip file: %s\n" % ziptmpdir_path zipfile_path = ziptmpdir_path + "/hermes_src_2.zip" if debug: print "zip file's path: %s\n" % zipfile_path zipfile_handler = zipfile.PyZipFile(zipfile_path, "w") # make zipfile handler verbose for debugging zipfile_handler.debug = 3 try: # clone "framework" branch from GitHub into temporary directory local_branch = git.Repo.clone_from(remote_url, tmpdir_path, branch=remote_branch) if debug: print "current branch: %s\n" % local_branch.head.ref if debug: print "list all in %s:" % tmpdir_path; _list_all_in_dir(tmpdir_path); 
print "\n" # zip "hermes" directory if debug: print "zipping: %s\n" % os.path.join(tmpdir_path, source_dir) _zip_dir(os.path.join(tmpdir_path, source_dir), zipfile_handler) # check zip file if debug: print "Is zip file %s valid? %s\n" % (zipfile_path, zipfile.is_zipfile(zipfile_path)) # add zip to SparkContext # note: you can only add zip to SparkContext one time if not _add_zipfile_to_sc.has_been_called: if debug: print "add zip file %s into spark context\n" % zipfile_path _add_zipfile_to_sc(zipfile_path) else: if debug: print "zip file %s is already added into spark context; will not re-add\n" % zipfile_path except IOError as e: raise e else: os.remove(zipfile_path) finally: os.umask(saved_umask) shutil.rmtree(tmpdir_path) shutil.rmtree(ziptmpdir_path) # import the required modules from Hermes from src.algorithms import performance_metrics as pm from src.data_prep import movieLens_vectorize as mv from src.utils import save_load as sl # import other modules import os import time class Timer(object): """ To time how long a particular function runs. Example: import Timer with Timer() as t: somefunction() print("somefunction() takes %s seconds" % t.secs) print("somefunction() takes %s milliseconds" % t.msecs) """ def __enter__(self): self.start = time.time() return self def __exit__(self, *args): self.end = time.time() self.secs = self.end - self.start self.msecs = self.secs * 1000 """ Explanation: 1. Install and import libraries End of explanation """ # ratings_json_path # movies_json_path """ Explanation: 2. Load dataset, in this case MovieLens data We are going to use MovieLens's 1M data. End of explanation """ def convert_dataset_to_dataframe(dataset_path): df = sqlCtx.read.json(dataset_path, None) df = df.repartition(sc.defaultParallelism * 3) return df # obtaining ratings dataframe ratingsdf = convert_dataset_to_dataframe(ratings_json_path) # obtaining movies dataframe moviesdf = convert_dataset_to_dataframe(movies_json_path) """ Explanation: 3. Convert dataset to Dataframe Run this only when you load datasets from your home directory. End of explanation """ # extract most commonly used vectors to be used later on # 1. using ratingsdf # a. [(user_id, movie_id, rating)] umr = ratingsdf.map(lambda row: (row.user_id, row.movie_id, row.rating)) # b. [(user_id, movie_id, rating)] where rating >= 3 umr_weighted = umr.filter(lambda (user_id, movie_id, rating): rating >= 3) print "-" * 80 print "format: [(user_id, movie_id, rating)]\n" print "umr:\n", umr.take(2) print "umr_weighted:\n", umr_weighted.take(2) print "-" * 80 print "\nTo identify user-to-user similarity:" print "format: [(movie_id, (user_id, rating))]\n" # c. [(movie_id, (user_id, rating)] -> to identify user-to-user similarity m_ur = ratingsdf.map(lambda row: (row.movie_id, (row.user_id, row.rating))) # d. [(movie_id, (user_id, rating)] where rating >= 3 m_ur_weighted = m_ur.filter(lambda (movie_id, (user_id, rating)): rating >= 3) print "m_ur:\n", m_ur.take(2) print "m_ur_weighted (aka rating >=3):\n", m_ur_weighted.take(2) print "-" * 80 print "\nTo identify movie-to-movie similarity:" print "format: [(user_id, (movie_id, rating))]\n" # e. [(user_id, (movie_id, rating))] -> to identify movie-to-movie similarity u_mr = ratingsdf.map(lambda row: (row.user_id, (row.movie_id, row.rating))) # f. 
[(user_id, (movie_id, rating))] where rating >= 3 u_mr_weighted = u_mr.filter(lambda (user_id, (movie_id, rating)): rating >= 3) print "um_r:\n", u_mr.take(2) print "um_r_weighted (aka rating >=3):\n", u_mr_weighted.take(2) print "-" * 80 # total number of distinct users num_distinct_users = ratingsdf.map(lambda row: row.user_id).distinct().count() num_users = ratingsdf.map(lambda row: row.user_id).count() print "total number of distinct users = ", num_distinct_users print "total number of users = ", num_users # total number of ratings # should be the same as num_users num_ratings = ratingsdf.map(lambda row: row.rating).count() print "total number of ratings = ", num_ratings # total number of distinct movies num_distinct_movies = moviesdf.map(lambda row: row.movie_id).distinct().count() num_movies = moviesdf.map(lambda row: row.movie_id).count() print "total number of distinct movies = ", num_distinct_movies print "total number of movies = ", num_movies # what is the average number of ratings a user rates = number of ratings / number of users # round it to the fourth digit avg_num_ratings_per_user = round(float(num_ratings) / float(num_distinct_users), 4) print "average number of ratings a user rates = ", avg_num_ratings_per_user # what is the average number of ratings a movie receives = number of ratings / number of movies avg_num_ratings_per_movie = round(float(num_ratings) / float(num_distinct_movies), 4) print "average number of ratings a movie receives = ", avg_num_ratings_per_movie # completeness = number of ratings / (number of users * number of movies) completeness = round(float(num_ratings) / (float(num_distinct_users) * float(num_distinct_movies)), 4) print "completeness = ", completeness # mean rating mean_rating = ratingsdf.map(lambda row: row.rating).mean() print "mean rating = ", mean_rating # mean rating per movie # [(movie_id, rating)] movie_rating_pair = ratingsdf.map(lambda row: (row.movie_id, row.rating)) """ combineByKey() requires 3 functions: * createCombiner: first aggregation step for each key -> lambda first_rating: (first_rating, 1) * mergeValue: what to do when a combiner is given a new value -> lambda x, first_rating: x[0] + first_rating, x[1] + 1 -> lambda thisNewRating_thisNumRating, firstRating: thisNewRating + firstRating, thisNumRating + 1 * mergeCombiner: how to merge two combiners -> lambda x, y: (x[0] + y[0], x[1] + y[1]) -> lambda sumRating1_numRating1, sumRating2_numRating2: (sumRating1 + sumRating2, numRating1 + numRating2) """ # [(movie_id, (sum_rating, num_rating))] movie_sumRating_numRating_pair = movie_rating_pair.combineByKey( lambda first_rating: (first_rating, 1), lambda x, first_rating: (x[0] + first_rating, x[1] + 1), lambda x, y: (x[0] + y[0], x[1] + y[1])) # [(movie_id, mean_rating)] movie_meanRating_pair = movie_sumRating_numRating_pair.map(lambda (movie_id, (sum_rating, num_rating)): (movie_id, sum_rating/num_rating)) movie_meanRating_pair.take(3) # meanRating_numRating_pair will be used in plotting in the next cell # where _1 = mean rating of the movie # _2 = number of users who review the movie # [(mean_rating, num_rating)] meanRating_numRating_pair = movie_sumRating_numRating_pair.map(lambda (movie_id, (sum_rating, num_rating)): (sum_rating/num_rating, num_rating)) meanRating_numRating_pair_df = meanRating_numRating_pair.toDF() meanRating_numRating_pair_df.show() # plot mean rating per movie %matplotlib inline import matplotlib.pyplot as plt import pandas as pd meanRating_numRating_pair = movie_sumRating_numRating_pair.map(lambda 
(movie_id, (sum_rating, num_rating)): (sum_rating/num_rating, num_rating)) meanRating_numRating_pair_df = meanRating_numRating_pair.toDF() meanRating_numRating_pair_panda_df = meanRating_numRating_pair_df.toPandas() plot = meanRating_numRating_pair_panda_df.plot( x="_2", \ y="_1", \ kind="hexbin", \ xscale="log", \ cmap="YlGnBu", \ gridsize=12, \ mincnt=1, \ title="Mean vs Number of Reviewers") plot.set_xlabel("Number of Reviewers Per Movie") plot.set_ylabel("Mean Rating Per Movie") plt.show() """ Explanation: 4. Determine characteristics of the MovieLens data (optional) Run this only when you load datasets from your home directory. Format: * ratings = [user_id, movie_id, rating, timestamp] * movies = [movie_id, title, genres] End of explanation """ weights = [0.9, 0.1, 0] seed = 41 # 1. using ratingsdf # a. [(user_id, movie_id, rating)] umr_train, umr_test, umr_validation = umr.randomSplit(weights, seed) # b. [(user_id, movie_id, rating)] where rating >= 3 umr_weighted_train, umr_weighted_test, umr_weighted_validation = umr_weighted.randomSplit(weights, seed) # c. [(movie_id, (user_id, rating)] m_ur_train, m_ur_test, m_ur_validation = m_ur.randomSplit(weights, seed) # d. [(movie_id, (user_id, rating)] where rating >= 3 m_ur_weighted_train, m_ur_weighted_test, m_ur_weighted_validation = m_ur_weighted.randomSplit(weights, seed) # e. [(user_id, (movie_id, rating)] u_mr_train, u_mr_test, u_mr_validation = u_mr.randomSplit(weights, seed) # f. [(user_id, (movie_id, rating)] where rating >= 3 u_mr_weighted_train, u_mr_weighted_test, u_mr_weighted_validation = u_mr_weighted.randomSplit(weights, seed) """ Explanation: This figure shows that the average rating of a movie is actually slightly higher than 3. Hypothesis: * We can safely predict the mean rating after 100 reviews. * After 100 reviews, the average rating is approximately in between 3.0 and 4.0. 5. Splitting the data (optional) Run this only when you load datasets from your home directory. Default split data into: * 90% training * 10% test * 0% validation * seed = 41 Remember that calling randomSplit when you restart the kernel will provide you with a different training, test, and validation data even though the weights and the seed are the same. End of explanation """ # helper functions from scipy.stats import pearsonr import math # filter out duplicate pairs # user-based approach: # input and output: [( movie_id, ((user_id_1, rating_1), (user_id_2, rating_2)) )] # item-based approach: # input and output: [( user_id, ((movie_id_1, rating_1), (movie_id_2, rating_2)) )] def removeDuplicates((key_id, ratings)): (value_id_1, rating_1) = ratings[0] (value_id_2, rating_2) = ratings[1] return value_id_1 < value_id_2 # rearrange so that it will be in the format of pairs # user-based approach: # input: [( movie_id, ((user_id_1, rating_1), (user_id_2, rating_2)) )] # output: [( movie_id, ((user_id_1, user_id_2), (rating_1, rating_2)) )] # item-based approach: # input: [( user_id, ((movie_id_1, movie_id_2), (rating_1, rating2)) )] # output: [( user_id, ((movie_id_1, movie_id_2), (rating_1, rating2)) )] def createPairs((key_id, ratings)): (value_id_1, rating_1) = ratings[0] (value_id_2, rating_2) = ratings[1] return ((value_id_1, value_id_2), (rating_1, rating_2)) # aggregate pairs using combineByKey() instead of groupByKey() # [( test_user_id, train_user_id), (test_rating_1, train_rating_1), (test_rating_2, train_rating_2), ...] 
def aggregatePairs(keyPairs): return keyPairs.combineByKey( lambda firstRatingPair: ((firstRatingPair),), lambda newRatingPair, firstRatingPair: newRatingPair + ((firstRatingPair),), lambda tupleRatingPairs1, tupleRatingPairs2: tupleRatingPairs1 + tupleRatingPairs2) # calculate pearson correlation when you passed in the values of # user-based approach: # input: values of [(user_id_1, user_id_2), ((rating_1, rating_2), (rating_1, rating_2)...)] # output: values of [(user_id_1, user_id_2), (pearson_correlation, num_rating_pairs, p_value)] # item-based approach: # input: values of [(movie_id_1, movie_id_2), ((rating_1, rating_2), (rating_1, rating_2)...)] # output: values of [(movie_id_1, movie_id_2), (pearson_correlation, num_rating_pairs, p_value)] # NOTE: ignore p_value def calculatePearson(ratingPairs): rating1s = [rating1 for (rating1, _) in ratingPairs] rating2s = [rating2 for (_, rating2) in ratingPairs] pearson_correlation, p_value = pearsonr(rating1s, rating2s) return (pearson_correlation, len(ratingPairs)) """ Explanation: 6. Calculate similarity and find nearest neighbors These are the different similarity measurement implemented in the article: * pearson * cosine * constraint pearson: in the case of MovieLens data, it means any ratings greater than 3 (aka positive ratings) * adjusted cosine * probabilistic "When implementing a user- or item-based approach, one may choose: * a similarity measure: pearson, cosine, constraint pearson, adjusted cosine, or probabilistic * a neighborhood size * and how to compute predictions: using a weighted sum of rating values or using a weighted sum of deviations from the mean." Table of Contents: 6.A.1. Calculate Pearson Correlation a. user-based: DONE except for prediction b. item-based 6.A.2. Calculate Weighted Pearson Correlation a. user-based b. item-based 6.A.3. Calculate Pearson Deviation a. user-based b. item-based 6.B.1. Calculate Probabilistic Similarity a. user-based b. item-based 6.B.2. Calculate Probabilistic Deviation a. user-based b. item-based 6.C.1. Calculate Cosine Similarity a. user-based b. item-based 6.C.2. Calculate Adjusted Cosine Similarity a. user-based b. item-based 6.D. Comparing Similarities' Measurement 6.A.1. Calculate Pearson Correlation End of explanation """ #((user_id, movie_id), rating) a = sc.parallelize([ ((1, 2), 3), ((2, 2), 4) ]) #((user_id, movie_id), predicted_rating) b = sc.parallelize([ ((1, 2), 2), ((2, 2), 5) ]) #((user_id, movie_id), (rating, predicted_rating) c = a.join(b) c.collect() # combine test and train together so that # [movie_id, ( (test_user_id, test_rating), (train_user_id, train_rating) )] M_testUR_trainUR = m_ur_test.join(m_ur_train) print M_testUR_trainUR.count() M_testUR_trainUR.take(5) # remove duplicates M_testUR_trainUR = M_testUR_trainUR.filter(removeDuplicates) print M_testUR_trainUR.count() M_testUR_trainUR.take(2) # rearrange so that it will be in the format # [(test_user_id, train_user_id), (test_rating, train_rating)] userPairs = M_testUR_trainUR.map(createPairs) print userPairs.count() userPairs.take(2) # congregate all ratings for each user pair so that it will be in the format of: # [( test_user_id, train_user_id), (test_rating_1, train_rating_1), (test_rating_2, train_rating_2), ...] # instead of using groupByKey(), use combineByKey() instead. 
""" # Implemented using groupByKey(): with Timer() as t: aggUserPairs = userPairs.groupByKey() print "aggregate user pairs approach #1: %s seconds" % t.secs print aggUserPairs.count() aggUserPairs.take(5) ----------------------------------------------------------------- # Output: aggregate user pairs: 0.0353801250458 seconds 10728120 Out[20]: [((1274, 2736), <pyspark.resultiterable.ResultIterable at 0x7f180eb55350>), ((2117, 5393), <pyspark.resultiterable.ResultIterable at 0x7f180eb55510>), ((1422, 3892), <pyspark.resultiterable.ResultIterable at 0x7f180eb55550>), ((1902, 5636), <pyspark.resultiterable.ResultIterable at 0x7f180eb55590>), ((3679, 5555), <pyspark.resultiterable.ResultIterable at 0x7f180eb555d0>)] ----------------------------------------------------------------- output = aggUserPairs.mapValues(lambda iterable: tuple(iterable)) output.take(2) ----------------------------------------------------------------- # Output: [((3848, 4390), ((5.0, 5.0),)), ((897, 2621), ((4.0, 5.0), (4.0, 4.0), (2.0, 2.0)))] ----------------------------------------------------------------- """ with Timer() as t: aggUserPairs = aggregatePairs(userPairs) print "aggregate user pairs: %s seconds" % t.secs print aggUserPairs.count() aggUserPairs.take(2) # calculate pearson correlation to figure out user-to-user similarity in the format of: # [( (test_user_id, train_user_id), (pearson_correlation, num_rating_pairs) )] userPairSimilarities = aggUserPairs.mapValues(calculatePearson) userPairSimilarities.sortByKey() print userPairSimilarities.count() userPairSimilarities.take(5) """ Explanation: 6.A.1. Pearson's User-Based Approach: comparing USER similarities According to the article, this is supposed to be the best user-based approach. End of explanation """ # 1. # a. select neighbors whose similarity correlation is greater than the threshold of 0.5 # b. remove user pairs that do not share a minimum of 5 reviews # output: number of user pairs that passes minPearson = 1692207 # number of user pairs that passes both minPearson and minSimilarReviews = 533407 minPearson = 0.5 minSimilarReviews = 5 userPairPassThreshold = userPairSimilarities.filter( lambda (userPair, (pearson_correlation, num_rating_pairs)): pearson_correlation > minPearson and num_rating_pairs >= minSimilarReviews ) print userPairPassThreshold.count() userPairPassThreshold.take(5) # 2. 
select top n neighbors for each test user from pyspark.rdd import RDD import heapq def takeOrderedByKey(self, topN, sortValueFn=None, ascending=False): def base(a): return [a] def combiner(agg, a): agg.append(a) return getTopN(agg) def merger(x, y): agg = x + y return getTopN(agg) def getTopN(agg): if ascending == True: return heapq.nsmallest(topN, agg, sortValueFn) else: return heapq.nlargest(topN, agg, sortValueFn) return self.combineByKey(base, combiner, merger) # add takeOrderedByKey() function to RDD class RDD.takeOrderedByKey = takeOrderedByKey # convert # [( (test_user_id, train_user_id), (pearson_correlation, num_rating_pairs) )] # to # [( test_user_id, [(test_user_id, train_user_id), (pearson_correlation, num_rating_pairs)] )] # so that you can sort by test_user_id after sorting the highest pearson correlation per test_user_id testU_testUtrainU_sim = userPairPassThreshold.map( lambda ((test_user_id, train_user_id), (pearson_correlation, num_rating_pairs)): (test_user_id, ((test_user_id, train_user_id), (pearson_correlation, num_rating_pairs))) ) print testU_testUtrainU_sim.count() testU_testUtrainU_sim.take(5) # for each test user, take the top N neighbors and ordering with the highest pearson correlation first # [( test_user_id, [(test_user_id, train_user_id), (pearson_correlation, num_rating_pairs)] )] topN = 20 testUserTopNeighbors = testU_testUtrainU_sim.takeOrderedByKey( topN, sortValueFn=lambda ((test_user_id, train_user_id), (pearson_correlation, num_rating_pairs)): (pearson_correlation, num_rating_pairs), ascending=False) # note: testUserTopNeighbors.count() should be less than the number of users print testUserTopNeighbors.count() testUserTopNeighbors.take(5) num_distinct_test_users = m_ur_test.map(lambda (movie_id, (user_id, rating)): user_id).distinct().count() num_distinct_test_users_pass_threshold = userPairPassThreshold.map(lambda ((test_user_id, train_user_id), (pearson_correlation, num_rating_pairs)): test_user_id).distinct().count() num_test_users_in_top_neighbors = testUserTopNeighbors.count() print "num_distinct_test_users = ", num_distinct_test_users print "num_distinct_test_users that passes the threshold check (aka pearson > 0.5, minReviews >= 5) = ", num_distinct_test_users_pass_threshold print "num_test_users in testUserTopNeighbors = ", num_test_users_in_top_neighbors # flattened version, meaning # convert # [( test_user_id, [(test_user_id, train_user_id), (pearson_correlation, num_rating_pairs)] )] # to # [( (test_user_id, train_user_id), (pearson_correlation, num_rating_pairs) )] testUserTopNeighborsFlattened = testUserTopNeighbors.flatMap(lambda (test_user_id, rest): rest) print testUserTopNeighborsFlattened.count() testUserTopNeighborsFlattened.take(5) """ Explanation: find nearest neighbors 1. select neighbors whose similarity correlation is greater than the threshold of 0.5 2. 
select top n neighbors with the highest correlation End of explanation """ # determine mean rating of each test user (aka find M) # output: [(user_id, mean_rating)] # convert to [(user_id, rating)] ur = m_ur.map(lambda (movie_id, (user_id, rating)): (user_id, rating)) # [(user_id, (sum_rating, num_rating))] u_sumRating_numRating = ur.combineByKey( lambda first_rating: (first_rating, 1), lambda x, first_rating: (x[0] + first_rating, x[1] + 1), lambda x, y: (x[0] + y[0], x[1] + y[1])) # [(test_user_id, mean_rating)] u_meanRating = u_sumRating_numRating.map( lambda (user_id, (sum_rating, num_rating)): (user_id, sum_rating/num_rating)) u_meanRating.take(5) # for each movie i, # determine pearson correlation of user a and all other users who rates movie i # determine rating of each user u on movie i - mean rating of user u # testUserTopNeighborsFlattened == [( (test_user_id, train_user_id), (pearson_correlation, num_rating_pairs) )] # M_testUR_trainUR == # [movie_id, ( (test_user_id, test_rating), (train_user_id, train_rating) )] # movie_id, (for every users who rate movie_id, add all pearson correlation * rating of user u on movie i - mean rating of user u) # compute predictions #2 # using a weighted sum of deviations from the mean """ sum of user u has rated(pearson correlation of user a and user u) * (rating of user u on movie i - mean rating of user u) divided by """ """ Explanation: Compute Predictions #1: using weighted sum of rating values !!!!!! TODO !!!!!!! Compute Predictions #2: using a weighted sum of deviations from the mean ``` P = predicted rating of user a on movie i M = mean rating of user a S = sum of ((pearson correlation of user a and each user u who rates movie i) * (rating of each user u on movie i - mean rating of user u)) D = sum of (absolute value of pearson correlation of user a and each user u who rates movie i) P = M + S/D ``` !!!!!! TODO !!!!!!! End of explanation """ # list all ratings in the format: # [user_id, (movie_id, rating)] print u_mr.count() u_mr.take(5) # list all combinations of movies rated by the same user in the format: # [user_id, ( (movie_id_1, rating_1), (movie_id_2, rating_2) )] # this is to find movie's similarity with each other sameUserRatingsCombo = u_mr.join(u_mr) print sameUserRatingsCombo.count() sameUserRatingsCombo.take(5) # filter out duplicate pairs def removeDuplicates((user_id, ratings)): (movie_id_1, rating_1) = ratings[0] (movie_id_2, rating_2) = ratings[1] return movie_id_1 < movie_id_2 sameUserRatingsCombo = sameUserRatingsCombo.filter(removeDuplicates) print sameUserRatingsCombo.count() sameUserRatingsCombo.take(5) # rearrange so that it will be in the format of movie pairs: # [(movie_id_1, movie_id_2), (rating_1, rating2)] def createMoviePairs((user_id, ratings)): (movie_id_1, rating_1) = ratings[0] (movie_id_2, rating_2) = ratings[1] return ((movie_id_1, movie_id_2), (rating_1, rating_2)) moviePairs = sameUserRatingsCombo.map(createMoviePairs) print moviePairs.count() moviePairs.take(5) # congregate all ratings for each movie pair so that it will be in the format of: # [( movie_id_1, movie_id_2), (rating_1, rating_2), (rating_1, rating_2), ...] 
moviePairRatings = moviePairs.groupByKey() print moviePairRatings.count() moviePairRatings.take(5) # calculate pearson correlation approach #1 # using udemy's approach # I prefer approach #2 import math def computePearsonCorrelationCoefficient(ratingPairs): numPairs = 0 if not ratingPairs: return (0, 0) muX = sum(1.*ratingX for (ratingX, _) in ratingPairs)/len(ratingPairs) muY = sum(1.*ratingY for (_, ratingY) in ratingPairs)/len(ratingPairs) cov = sum_sqdev_x = sum_sqdev_y = 0 for ratingX, ratingY in ratingPairs: dev_x = ratingX - muX dev_y = ratingY - muY cov += dev_x * dev_y sum_sqdev_x += dev_x**2 sum_sqdev_y += dev_y**2 numPairs += 1 numerator = cov denominator = math.sqrt(sum_sqdev_x) * math.sqrt(sum_sqdev_y) score = 0 if (denominator): score = (numerator / (float(denominator))) return (score, numPairs) moviePairSimilarities = moviePairRatings.mapValues(computePearsonCorrelationCoefficient).cache() moviePairSimilarities.sortByKey() moviePairSimilarities.take(5) print moviePairRatings.count() print moviePairSimilarities.count() # calculate pearson correlation approach #2 # using scipy # note: you cannot use pyspark.mllib.stat.Statistics's corr() function within the map function from scipy.stats import pearsonr def calculatePearson(ratingPairsPerMoviePairResultIterable): ratingPairsPerMoviePair = tuple(ratingPairsPerMoviePairResultIterable) rating1s = [rating1 for (rating1, _) in ratingPairsPerMoviePair] rating2s = [rating2 for (_, rating2) in ratingPairsPerMoviePair] pearson_correlation, p_value = pearsonr(rating1s, rating2s) return (pearson_correlation, len(ratingPairsPerMoviePair)) moviePairSimilarities2 = moviePairRatings.mapValues(calculatePearson).cache() moviePairSimilarities2.sortByKey() moviePairSimilarities2.take(5) print moviePairRatings.count() print moviePairSimilarities2.count() """ Explanation: 6.A.1. Pearson's Item-Based Approach: comparing MOVIES similarities End of explanation """ # divide movielens data into 10 parts to perform 10-fold cross-validation # training model using 9 parts # test model using last part # results are better when default ratings are based on item information than when they are based on user information # using mean rating is better than using majority rating """ Explanation: 6.A.2. Calculate Constraint Pearson Correlation In the case of MovieLens data, it means any ratings greater than 3 (aka positive ratings). 6.A.2. Pearson's User-Based Approach: comparing USERS similarities This is the same as Pearson's User-Based Approach with the exception that it filters out ratings that are 2 or less. 6.A.2. Constraint Pearson's Item-Based Approach: comparing MOVIES similarities This is the same as Pearson's Item-Based Approach with the exception that it filters out ratings that are 2 or less. 6.B. Calculate Probabilistic Similarity 6.B. Probabilistic's Item-Based Approach: comparing MOVIES similarity According to the article, this is supposed to be the best item-based approach. 6.C.1. Calculate Cosine Similarity 6.C.2. Calculate Adjusted Cosine Similarity 6.D. Comparing Similarities' Measurement Graph user-based approaches using the deviation prediction scheme (MAE) and different neighborhood sizes (K) 7. 
Develop Model Comparing distance measures for model-based approaches using the mean item rating prediction scheme and different number of clusters (K) * Manhattan * Euclidian Comparing prediction schemes for model-based approaches using the euclidian distance and different numbers of clusters (K) * Mean Item * Bayes MAE Comparing different clustering algorithms for model-based approaches using the euclidian distance, the mean item rating prediction scheme, and different numbers of clusters (K) * K-Means * Bisecting * LAC * SSC End of explanation """ from pyspark.mllib.classification import NaiveBayes from pyspark.mllib.regression import LabeledPoint # To use MLlib's Naive Bayes model, it requires the input to be in a format of a LabeledPoint # therefore, convert dataset so that it will be in the following format: # [(rating, (user_id, movie_id))] r_um = ratingsdf.map(lambda row: LabeledPoint(row.rating, (row.user_id, row.movie_id))) # split the data r_um_train, r_um_test, r_um_validation = r_um.randomSplit(weights, seed) # train a Naive Bayes model naiveBayesModel = NaiveBayes.train(r_um_train, lambda_=1.0) # save this Naive Bayes model #naiveBayesModel.save(sc, "NaiveBayes_MovieLens1M_UserUser") # load this Naive Bayes model into the SparkContext #sameNaiveBayesModel = NaiveBayesModel.load(sc, "NaiveBayes_MovieLens1M_UserUser") # make prediction # [((test_user_id, test_movie_id), (predicted_rating, actual_rating))] r_um_predicted = r_um_test.map( lambda p: ( (p.features[0], p.features[1]), (naiveBayesModel.predict(p.features), p.label) ) ) print r_um_predicted.take(5) """ Explanation: 7.A. Implement Bayes What is Bayes? 1. We have a prior belief in A 2. We have a posterior probability X where X is the number of tests it passes 3. Bayesian inference merely uses it to connect prior probabilities P(A) with an updated posterior probabilities P(A|X) P(A|X) = P(X|A) * P(A) / P(X) P(A|X) = Posterior Probability: the posterior probability of class (A, target) given predictor(X, attributes) P(X|A) = Likelihood: the likelihood which is the probability of predictor given class P(A) = Class Prior Probability: prior probability of class P(X) = Predictor Prior Probability: prior probability of predictor Types of Bayes: 1. Maximum A Posteriori (MAP) : predict the most probably rating 2. Mean Squared Error (MSE): compute the weighted sum of ratings that corresponds to minimizing the expectation of MSE 3. Mean Absolute Error (MAE): select the rating that minimizes the expectation of Mean Absolute Error Table of Contents: 7.A. Implement Bayes 7.A.1. Implement Naive Bayes using PySpark: DONE 7.A.2. Implement Naive Bayes using PyMC 7.A.3. Implement Naive Bayes manually: DONE * Implement Bayes MAP: DONE * Implement Bayes MSE: DONE * Implement Bayes MAE: DONE 7.A.1. Implementing Naive Bayes using PySpark It does not support computation for Bayes MAP, MSE, and MAE because it does not provide a probability distribution over labels (aka rating) for the given featureset (aka user_id, movie_id). 
End of explanation """ # test accuracy sameRating = r_um_predicted.filter( lambda ((test_user_id, test_movie_id), (predicted_rating, actual_rating)): predicted_rating == actual_rating) accuracy = 1.0 * sameRating.count() / r_um_test.count() print "accuracy = (predicted_rating == actual_rating)/total_num_ratings = ", accuracy """ Explanation: [((2.0, 593.0), (1.0, 5.0)), ((2.0, 1955.0), (1.0, 4.0)), ((5.0, 3476.0), (1.0, 3.0)), ((5.0, 1093.0), (1.0, 2.0)), ((6.0, 3508.0), (1.0, 3.0))] End of explanation """ # calculate RMSE and MAE # convert into two vectors where # one vector describes the actual ratings in the format [(user_id, movie_id, actual_rating)] # second vector describes the predicted ratings in the format [(user_id, movie_id, predicted_rating)] actual = r_um_predicted.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) predicted = r_um_predicted.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, predicted_rating) ) print "actual:\n", actual.take(5) print "predicted:\n", predicted.take(5) rmse = pm.calculate_rmse_using_rdd(actual, predicted) print "rmse = ", rmse mae = pm.calculate_mae_using_rdd(actual, predicted) print "mae = ", mae """ Explanation: accuracy = (predicted_rating == actual_rating)/total_num_ratings = 0.162442085039 End of explanation """ # determine min and max of ratings minRating = ratingsdf.map(lambda row: row.rating).min() maxRating = ratingsdf.map(lambda row: row.rating).max() print "minRating = ", minRating print "maxRating = ", maxRating """ Explanation: actual: [(7.0, 3793.0, 3.0), (8.0, 2490.0, 2.0), (15.0, 1343.0, 3.0), (16.0, 2713.0, 2.0), (17.0, 457.0, 5.0)] predicted: [(7.0, 3793.0, 1.0), (8.0, 2490.0, 1.0), (15.0, 1343.0, 1.0), (16.0, 2713.0, 1.0), (17.0, 457.0, 1.0)] rmse = 2.26584476437 mae = 1.88503067116 Implementing Naive Bayes using PyMC Implementing Naive Bayes manually Probability of rating r for a given user u on a given item i can be defined as follows: $$P(r|u, i) = \frac{P(r|u) * P(r|i)}{P(r)}*\frac{P(u) * P(i)}{P(u, i)}$$ We make the assumption that this is the same as: $$P(r|u, i) = \frac{P(r|u) * P(r|i)}{P(r)}$$ The last three probabilities P(u), P(i), and P(u, i) can be ignored since they are the same for all users and items. We will compute P(r|u), P(r|i), and P(r) individually before congregating them in a final computation. End of explanation """ # create RDD for the range of ratings # [(1, 2, 3, 4, 5)] rangeOfRatings = sc.parallelize( list(range(int(minRating), int(maxRating + 1))) ) print rangeOfRatings.collect() print rangeOfRatings.count() """ Explanation: Output example: minRating = 1.0 maxRating = 5.0 End of explanation """ # [(user_id, movie_id, rating)] umr = ratingsdf.map(lambda row: (row.user_id, row.movie_id, row.rating)) umr.count() """ Explanation: Output example: [1, 2, 3, 4, 5] 5 End of explanation """ # since we have to determine the probability of rating r for each user_id and movie_id, # we have to create a RDD with [(rating, (user_id, movie_id))] for each rating # ie. 
(rating_1, (user_id, movie_id)), (rating_2, (user_id, movie_id)), ..., (rating_5, (user_id, movie_id)) um = umr.map(lambda (user_id, movie_id, rating): (user_id, movie_id)) rCombo_um = rangeOfRatings.cartesian(um).map(lambda (rating, (user_id, movie_id)): (float(rating), (user_id, movie_id))) print rCombo_um.take(2) print rCombo_um.count() # == umr.count() * 5 """ Explanation: 1000209 End of explanation """ umrCombo = rCombo_um.map(lambda (rating, (user_id, movie_id)): (user_id, movie_id, rating)) print umrCombo.take(2) print umrCombo.count() """ Explanation: [(1.0, (1, 1197)), (1.0, (1, 938))] 5001045 End of explanation """ # since we have to determine the probability of rating r for each user_id and movie_id, # we have to create a RDD with [(rating, (user_id, movie_id))] for each rating # ie. (rating_1, (user_id, movie_id)), (rating_2, (user_id, movie_id)), ..., (rating_5, (user_id, movie_id)) um_test = umr_test.map(lambda (user_id, movie_id, rating): (user_id, movie_id)) rCombo_um_test = rangeOfRatings.cartesian(um_test).map(lambda (rating, (user_id, movie_id)): (float(rating), (user_id, movie_id))) print rCombo_um_test.take(2) print rCombo_um_test.count() # == umr.count() * 5 """ Explanation: [(1, 1197, 1.0), (1, 938, 1.0)] 5001045 End of explanation """ umrCombo_test = rCombo_um_test.map(lambda (rating, (user_id, movie_id)): (user_id, movie_id, rating)) print umrCombo_test.take(2) print umrCombo_test.count() """ Explanation: [(1.0, (2, 593)), (1.0, (2, 1955))] 501170 End of explanation """ # [((user_id, rating), 1)] ur_1 = umr.map(lambda (user_id, movie_id, rating): ((user_id, rating), 1)) ur_1.take(2) """ Explanation: [(2, 593, 1.0), (2, 1955, 1.0)] 501170 Calculating P(r|u) , probability of rating r for user u $$ P(r|u) = { numberOfParticularRatingThatUserGives \over totalNumberOfRatingsThatUserGives }$$ ``` P(r|u) = (number of ratings r that user u gives) / (total number of ratings that user u gives) For example: r == 1 P(r|u) = (number of ratings r == 1 that user u gives) / (total number of ratings that user u gives) ``` End of explanation """ ur_1.count() """ Explanation: [((1, 3.0), 1), ((1, 4.0), 1)] End of explanation """ # [(((user_id, rating_1), 0), ((user_id, rating_2), 0), ..., ((user_id, rating_5), 0))] urCombo_0 = umrCombo.map(lambda (user_id, movie_id, rating): ((user_id, rating), 0)).distinct() #print urCombo_0.sortByKey().collect() print urCombo_0.count() """ Explanation: 1000209 End of explanation """ ur_1Or0 = ur_1.union(urCombo_0) print ur_1Or0.take(2) print ur_1Or0.count() # ur_1Or0.count() == ur_1.count() + urCombo_0.count() # 1000209 + 30200 # 1030409 """ Explanation: 30200 End of explanation """ ur_1Or0.sortByKey().collect() from operator import add # [(user_id, rating), (num_rating)] ur_numRating = ur_1Or0.reduceByKey(add) print ur_numRating.take(2) print ur_numRating.count() """ Explanation: [((1, 3.0), 1), ((1, 4.0), 1)] 1030409 End of explanation """ # [(user_id, (rating, num_rating))] u_r_numRating = ur_numRating.map(lambda ((user_id, rating), num_rating): (user_id, (rating, num_rating))) print u_r_numRating.take(2) print u_r_numRating.count() """ Explanation: [((3577, 5.0), 29), ((1260, 2.0), 13)] 30200 End of explanation """ # [(user_id, total_rating)] u_totalRating = sc.parallelize(umr.map(lambda (user_id, movie_id, rating): (user_id, rating)).countByKey().items()) print u_totalRating.take(2) print u_totalRating.count() """ Explanation: [(3577, (5.0, 29)), (1260, (2.0, 13))] 30200 End of explanation """ # [(user_id, (total_rating, (rating, 
num_rating)))] u_componentsOfProb = u_totalRating.join(u_r_numRating) print u_componentsOfProb.take(2) print u_componentsOfProb.count() """ Explanation: [(1, 53), (2, 129)] 6040 End of explanation """ # [(user_id, rating, probRU)] probRU = u_componentsOfProb.map(lambda (user_id, (total_rating, (rating, num_rating))): (user_id, rating, float(num_rating)/float(total_rating)) ) print probRU.take(2) print probRU.count() """ Explanation: [(2850, (43, (4.0, 12))), (2850, (43, (1.0, 5)))] 30200 End of explanation """ # [((movie_id, rating), 1)] mr_1 = umr.map(lambda (user_id, movie_id, rating): ((movie_id, rating), 1)) mr_1.take(2) """ Explanation: [(2850, 1.0, 0.11627906976744186), (2850, 3.0, 0.18604651162790697)] 30200 Calculating P(r|i) $$ P(r|i) = { numberOfParticularRatingThatItemReceives \over totalNumberOfRatingsThatItemReceives }$$ ``` P(r|i) = (number of ratings r that item i receives) / (total number of ratings that item i receives) For example: r == 1 P(r|i) = (number of ratings r == 1 that movie i receives) / (total number of ratings that movie i receives) ``` End of explanation """ mr_1.count() """ Explanation: [((1197, 3.0), 1), ((938, 4.0), 1)] End of explanation """ # [(((user_id, rating_1), 0), ((user_id, rating_2), 0), ..., ((user_id, rating_5), 0))] mrCombo_0 = umrCombo.map(lambda (user_id, movie_id, rating): ((movie_id, rating), 0)).distinct() #print mrCombo_0.sortByKey().collect() print mrCombo_0.count() """ Explanation: 1000209 End of explanation """ mr_1Or0 = mr_1.union(mrCombo_0) print mr_1Or0.take(2) print mr_1Or0.count() # ur_1Or0.count() == ur_1.count() + urCombo_0.count() # 1000209 + 18530 # 1018739 """ Explanation: 18530 End of explanation """ # [(movie_id, rating), (num_rating)] mr_numRating = mr_1Or0.reduceByKey(add) print mr_numRating.take(2) print mr_numRating.count() """ Explanation: [((1197, 3.0), 1), ((938, 4.0), 1)] 1018739 End of explanation """ # OPTION instead of using union() and then reduceByKey() """ mr_1Or0 = mr_1.reduceByKey(add).rightOuterJoin(mrCombo_0) print mr_1Or0.take(2) print mr_1Or0.count() """ """ [((2001, 5.0), (129, 0)), ((3654, 4.0), (266, 0))] 18530 """ # [(movie_id, (rating, num_rating))] m_r_numRating = mr_numRating.map(lambda ((movie_id, rating), num_rating): (movie_id, (rating, num_rating))) print m_r_numRating.take(2) print m_r_numRating.count() """ Explanation: [((3577, 5.0), 3), ((1260, 2.0), 6)] 18530 End of explanation """ # [(movie_id, total_rating)] m_totalRating = sc.parallelize(umr.map(lambda (user_id, movie_id, rating): (movie_id, rating)).countByKey().items()) print m_totalRating.take(2) print m_totalRating.count() """ Explanation: [(391, (3.0, 18)), (518, (4.0, 22))] 18530 End of explanation """ # [(user_id, (total_rating, (rating, num_rating)))] m_componentsOfProb = m_totalRating.join(m_r_numRating) print m_componentsOfProb.take(2) print m_componentsOfProb.count() """ Explanation: [(1, 2077), (2, 701)] 3706 End of explanation """ # [(movie_id, rating, probRI)] probRI = m_componentsOfProb.map(lambda (movie_id, (total_rating, (rating, num_rating))): (movie_id, rating, float(num_rating)/float(total_rating)) ) print probRI.take(2) print probRI.count() """ Explanation: [(3808, (44, (5.0, 17))), (3808, (44, (3.0, 8)))] 18530 End of explanation """ totalRatings = umr.count() print totalRatings """ Explanation: [(3808, 5.0, 0.38636363636363635), (3808, 4.0, 0.36363636363636365)] 18530 P(r) = numRating / totalRatings ie. 
rating = 1 P(r) = (number of rating == 1) / (total number of ratings) End of explanation """ # [(rating, 1)] r_1 = umr.map(lambda (user_id, movie_id, rating): (rating, 1)) # [(rating, num_rating)] r_numRating = r_1.reduceByKey(add) # [(rating, probR)] probR = r_numRating.mapValues(lambda num_rating: float(num_rating)/float(totalRatings)) probR.take(2) """ Explanation: 1000209 End of explanation """ # add probR to user_id, movie_id, rating components = rCombo_um.join(probR) print components.take(2) print components.count() """ Explanation: [(1.0, 0.05616226208722377), (2.0, 0.1075345252842156)] P(r | a, i) = (P(r|u) * P(r|i) / P(r)) * (P(u) * P(i) / P(u, i)) = P(r|u) * P(r|i) / P(r) End of explanation """ # add probRU to user_id, movie_id, rating, probR tmp_a = components.map(lambda (rating, ((user_id, movie_id), prob_r)): ((user_id, rating), (movie_id, prob_r))) tmp_b = probRU.map(lambda (user_id, rating, prob_ru): ((user_id, rating), prob_ru)) components = tmp_a.join(tmp_b) print components.take(2) print components.count() """ Explanation: [(1.0, ((1, 914), 0.05616226208722377)), (1.0, ((1, 594), 0.05616226208722377))] 5001045 End of explanation """ # add probRI to user_id, movie_id, rating, probR, probRU tmp_a = components.map(lambda ( (user_id, rating), ((movie_id, prob_r), prob_ru) ): ( (movie_id, rating), (user_id, prob_r, prob_ru) ) ) tmp_b = probRI.map(lambda (movie_id, rating, prob_ri): ((movie_id, rating), prob_ri)) components = tmp_a.join(tmp_b) print components.take(2) print components.count() """ Explanation: [((327, 1.0), ((1248, 0.05616226208722377), 0.038135593220338986)), ((327, 1.0), ((1254, 0.05616226208722377), 0.038135593220338986))] 5001045 End of explanation """ # re-format # [((user_id, movie_id, rating), bayes_probability)] componentsReformat = components.map(lambda ((movie_id, rating), ((user_id, prob_r, prob_ru), prob_ri)): ((user_id, movie_id, rating), (prob_r, prob_ru, prob_ri)) ) # calculate bayes probability bayesProb = componentsReformat.mapValues(lambda (prob_r, prob_ru, prob_ri): prob_ru * prob_ri / prob_r) print bayesProb.take(2) """ Explanation: [((1644, 5.0), ((1605, 0.22626271109338147, 0.038381742738589214), 0.056842105263157895)), ((1644, 5.0), ((1451, 0.22626271109338147, 0.3022636484687084), 0.056842105263157895))] 5001045 End of explanation """ print "umr = ", umr.count() print "probR = ", probR.count() print "probRU = ", probRU.count() print "probRI = ", probRI.count() print "bayesProb = ", bayesProb.count() # note: bayesProb.count() = umr.count() * 5 # bayesProb = umr_train * 5 1000209 * 5 # extract only user_id, movie_id in umr_test from bayes_prob # remember that we have to extract the bayes_prob for each rating too # [(user_id, movie_id, rating)] print "umrCombo_test.count() = ", umrCombo_test.count() # [((user_id, movie_id, rating), bayes_prob)] print "bayesProb.count() = ", bayesProb.count() # [((user_id, movie_id), (rating, bayes_prob))] tmp_a = umrCombo_test.map(lambda (user_id, movie_id, rating): ((user_id, movie_id, rating), 1)) tmp_b = bayesProb bayesProb_test = tmp_a.join(tmp_b).map( lambda ((user_id, movie_id, rating), (_, bayes_prob)): ((user_id, movie_id), (rating, bayes_prob))) print bayesProb_test.take(2) print bayesProb_test.count() # == umrCombo_test.count() """ Explanation: [((2168, 135, 4.0), 0.13697613242692502), ((4808, 135, 4.0), 0.12445827900425674)] End of explanation """ # [((user_id, movie_id), [(rating_1, bayes_prob_1), (rating_2, bayes_prob_2), ..., (rating_5, bayes_prob_5)])] um_allBayesProb = 
bayesProb_test.mapValues(lambda value: [value]).reduceByKey(lambda a, b: a + b) print um_allBayesProb.take(2) print um_allBayesProb.count() # == bayesProb_test.count()/5 == umr_test.count() == 100234 """ Explanation: umrCombo_test.count() = 501170 bayesProb.count() = 5001045 [((5522, 2157), (3.0, 0.2209227724078584)), ((5786, 3210), (2.0, 0.08545729298368235))] 501170 End of explanation """ um_allBayesProb = um_allBayesProb.mapValues(lambda value: sorted(value, key=lambda(rating, bayes_prob): rating)) print um_allBayesProb.take(2) print um_allBayesProb.count() """ Explanation: [((4335, 1588), [(3.0, 0.5498862085999521), (1.0, 0.016548382705956422), (2.0, 0.13615664002520045), (4.0, 0.32236074697317796), (5.0, 0.025783030822306607)]), ((4728, 1894), [(5.0, 0.01634723322124617), (3.0, 0.7342812664378788), (4.0, 0.256444827101078), (1.0, 0.005091044955245684), (2.0, 0.1289571243789302)])] 100234 End of explanation """ def calculate_bayes_map(value): # extract the bayes_prob bayesProbList = [x[1] for x in value] # define the argmax, return the index argmax = bayesProbList.index(max(bayesProbList)) return argmax predicted_bayes_map = um_allBayesProb.mapValues(calculate_bayes_map) print predicted_bayes_map.take(2) print predicted_bayes_map.count() """ Explanation: 7.A.1. Implementing Bayes MAP Maximum A Posteriori (MAP) : predict the most probably rating ``` Pai = predicted rating for user a on movie i P(r|a,i) = Naive Bayes that computes the probability of rating r for a given user a on movie i Pai = Argmax(r=1 to 5) P(r|a,i) ``` End of explanation """ # [(test_user_id, test_movie_id), (actual_rating, predicted_rating)] tmp_a = umr_test.map(lambda (user_id, movie_id, rating): ((user_id, movie_id), rating)) tmp_b = predicted_bayes_map um_testBayesMap = tmp_a.join(tmp_b) print um_testBayesMap.take(2) print um_testBayesMap.count() # [(train_user_id, train_movie_id), (actual_rating, predicted_rating)] tmp_a = umr_train.map(lambda (user_id, movie_id, rating): ((user_id, movie_id), rating)) tmp_b = predicted_bayes_map um_trainBayesMap = tmp_a.join(tmp_b) """ Explanation: [((4335, 1588), 0), ((4728, 1894), 2)] 100234 End of explanation """ a, b, c = umr.randomSplit(weights, seed) print a.count() print b.count() print c.count() """ Explanation: [((3491, 3699), (4.0, 0)), ((1120, 1654), (4.0, 0))] 100234 End of explanation """ print a.take(1) print b.take(1) """ Explanation: 900042 100167 0 End of explanation """ # calculate RMSE and MAE # convert into two vectors where # one vector describes the actual ratings in the format [(user_id, movie_id, actual_rating)] # second vector describes the predicted ratings in the format [(user_id, movie_id, predicted_rating)] actual = um_testBayesMap.map( lambda((test_user_id, test_movie_id), (actual_rating, predicted_rating)): (test_user_id, test_movie_id, actual_rating) ) predicted = um_testBayesMap.map( lambda((test_user_id, test_movie_id), (actual_rating, predicted_rating)): (test_user_id, test_movie_id, predicted_rating) ) print "actual:\n", actual.take(5) print "predicted:\n", predicted.take(5) rmse = pm.calculate_rmse_using_rdd(actual, predicted) print "rmse = ", rmse mae = pm.calculate_mae_using_rdd(actual, predicted) print "mae = ", mae """ Explanation: [(2, 1955, 4.0)] [(31, 3591, 3.0)] End of explanation """ # y_test y_test = um_testBayesMap.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) # y_train y_train = um_trainBayesMap.map( lambda((test_user_id, test_movie_id), 
(predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) # y_predicted y_predicted = um_testBayesMap.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, predicted_rating) ) pm_results_bayes_map = pm.get_perform_metrics(y_test, y_train, y_predicted, content_array, sqlCtx) from pprint import pprint pprint(pm_results_bayes_map) """ Explanation: actual: [(3039, 2937, 4), (1810, 3072, 3), (2718, 1610, 3), (5081, 3255, 3), (4448, 52, 3)] predicted: [(366, 3196, 5.0), (1810, 3072, 4.0), (2718, 1610, 2.0), (59, 943, 3.0), (4448, 52, 5.0)] rmse = 1.33451063018 mae = 1.06974538193 End of explanation """ def calculate_bayes_mse(value): predicted = 0. for rating, bayes_prob in value: predicted += rating * bayes_prob return predicted predicted_bayes_mse = um_allBayesProb.mapValues(calculate_bayes_mse) print predicted_bayes_mse.take(2) print predicted_bayes_mse.count() """ Explanation: 7.A.2. Implementing Bayes MSE Mean Squared Error (MSE): compute the weighted sum of ratings that corresponds to minimizing the expectation of MSE ``` Pai = predicted rating for user a on movie i P(r|a,i) = Naive Bayes that computes the probability of rating r for a given user a on movie i Pai = Sum of (r * P(r|a,i)) from r=1 to 5 ``` End of explanation """ # [(test_user_id, test_movie_id), (predicted_rating, actual_rating)] tmp_a = umr_test.map(lambda (user_id, movie_id, rating): ((user_id, movie_id), rating)) tmp_b = predicted_bayes_mse um_testBayesMse = tmp_a.join(tmp_b) print um_testBayesMse.take(2) print um_testBayesMse.count() """ Explanation: [((4335, 1588), 3.3568784305604584), ((4728, 1894), 3.5733645675372854)] 100234 End of explanation """ # calculate RMSE and MAE # convert into two vectors where # one vector describes the actual ratings in the format [(user_id, movie_id, actual_rating)] # second vector describes the predicted ratings in the format [(user_id, movie_id, predicted_rating)] actual = um_testBayesMse.map( lambda((test_user_id, test_movie_id), (actual_rating, predicted_rating)): (test_user_id, test_movie_id, actual_rating) ) predicted = um_testBayesMse.map( lambda((test_user_id, test_movie_id), (actual_rating, predicted_rating)): (test_user_id, test_movie_id, predicted_rating) ) print "actual:\n", actual.take(5) print "predicted:\n", predicted.take(5) rmse = pm.calculate_rmse_using_rdd(actual, predicted) print "rmse = ", rmse mae = pm.calculate_mae_using_rdd(actual, predicted) print "mae = ", mae """ Explanation: [((1120, 1654), (4.0, 3.44226621556054)), ((4439, 3005), (3.0, 3.360422113015879))] 100234 End of explanation """ # y_test y_test = um_testBayesMse.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) # y_train tmp_a = umr_train.map(lambda (user_id, movie_id, rating): ((user_id, movie_id), rating)) tmp_b = predicted_bayes_mse um_trainBayesMse = tmp_a.join(tmp_b) y_train = um_trainBayesMse.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) # y_predicted y_predicted = um_testBayesMse.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, predicted_rating) ) pm_results_bayes_mse = pm.get_perform_metrics(y_test, y_train, y_predicted, content_array, sqlCtx) from pprint import pprint pprint(pm_results_bayes_mse) """ Explanation: actual: [(1120, 1654, 3.44226621556054), (4439, 3005, 3.360422113015879), (4271, 3671, 
2.8494882459525477), (2259, 1213, 7.778718357130783), (1820, 3101, 3.7261784376809395)] predicted: [(1120, 1654, 4.0), (4439, 3005, 3.0), (4271, 3671, 5.0), (2259, 1213, 5.0), (1820, 3101, 5.0)] rmse = 1.17748775303 mae = 0.918588835371 End of explanation """ # TODO: fix this the same as argmax def calculate_bayes_mae(value): sumOfProductList = [] for rating, bayes_prob in value: sumOfProduct = 0. for i in range(1, 6): sumOfProduct += bayes_prob * abs(rating - i) sumOfProductList.append(sumOfProduct) argmin = sumOfProductList.index(min(sumOfProductList)) return argmin predicted_bayes_mae = um_allBayesProb.mapValues(calculate_bayes_mae) print predicted_bayes_mae.take(2) print predicted_bayes_mae.count() """ Explanation: 7.A.3. Implementing Bayes MAE Mean Absolute Error (MAE): select the rating that minimizes the expectation of Mean Absolute Error ``` Pai = predicted rating for user a on movie i P(r|a,i) = Naive Bayes that computes the probability of rating r for a given user a on movie i Pai = Argmin from r=1 to 5(Sum of (P(n|a,i) * |r-n|) from n=1 to 5) ``` End of explanation """ # [(test_user_id, test_movie_id), (predicted_rating, actual_rating)] tmp_a = umr_test.map(lambda (user_id, movie_id, rating): ((user_id, movie_id), rating)) tmp_b = predicted_bayes_map um_testBayesMae = tmp_a.join(tmp_b) print um_testBayesMae.take(2) print um_testBayesMae.count() """ Explanation: [((4335, 1588), 4), ((4728, 1894), 2)] 100234 End of explanation """ # calculate RMSE and MAE from src.algorithms import performance_metrics as pm # convert into two vectors where # one vector describes the actual ratings in the format [(user_id, movie_id, actual_rating)] # second vector describes the predicted ratings in the format [(user_id, movie_id, predicted_rating)] actual = um_testBayesMae.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) predicted = um_testBayesMae.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, predicted_rating) ) print "actual:\n", actual.take(5) print "predicted:\n", predicted.take(5) rmse = pm.calculate_rmse_using_rdd(actual, predicted) print "rmse = ", rmse mae = pm.calculate_mae_using_rdd(actual, predicted) print "mae = ", mae """ Explanation: [((1120, 1654), (4.0, 2)), ((4169, 2723), (3.0, 3))] 100234 End of explanation """ # y_test y_test = um_testBayesMae.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) # y_train tmp_a = umr_train.map(lambda (user_id, movie_id, rating): ((user_id, movie_id), rating)) tmp_b = predicted_bayes_mae um_trainBayesMae = tmp_a.join(tmp_b) y_train = um_trainBayesMae.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, actual_rating) ) # y_predicted y_predicted = um_testBayesMae.map( lambda((test_user_id, test_movie_id), (predicted_rating, actual_rating)): (test_user_id, test_movie_id, predicted_rating) ) pm_results_bayes_mae = get_results(y_test, y_train, y_predicted, content_array, sqlCtx) pprint(pm_results_bayes_mae) """ Explanation: actual: [(1120, 1654, 2), (4169, 2723, 3), (4271, 3671, 3), (2259, 1213, 1), (1820, 3101, 4)] predicted: [(1120, 1654, 4.0), (4439, 3005, 3.0), (4271, 3671, 5.0), (2259, 1213, 5.0), (1820, 3101, 5.0)] rmse = 2.39828498227 mae = 1.95142366862 End of explanation """
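One caveat worth flagging about the manual Bayes code above: `calculate_bayes_map` and `calculate_bayes_mae` return the *position* of the winning entry in the per-pair `[(rating, bayes_prob), ...]` list rather than the rating value itself (the MAE version even carries a `# TODO: fix this the same as argmax` comment), which is why the predicted values range from 0 to 4. A hedged sketch of variants that return an actual rating, assuming the same list structure built in `um_allBayesProb`, could look like this:

def calculate_bayes_map_rating(value):
    # value: [(rating, bayes_prob), ...] for one (user, movie) pair
    # MAP: return the rating with the highest posterior probability
    return max(value, key=lambda pair: pair[1])[0]

def calculate_bayes_mae_rating(value):
    # pick the candidate rating r that minimizes sum_n P(n|u,i) * |r - n|
    candidates = [r for r, _ in value]
    def expected_abs_error(r):
        return sum(prob * abs(r - n) for n, prob in value)
    return min(candidates, key=expected_abs_error)

# usage sketch (hypothetical drop-in replacements for the mapValues calls above):
# predicted_bayes_map = um_allBayesProb.mapValues(calculate_bayes_map_rating)
# predicted_bayes_mae = um_allBayesProb.mapValues(calculate_bayes_mae_rating)

These are illustrative replacements only; swapping them in would require re-running the joins and metric computations shown above.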
MatteusDeloge/opengrid
notebooks/Water Leak Detection.ipynb
apache-2.0
import os import sys import pytz import inspect import numpy as np import pandas as pd import datetime as dt import matplotlib.pyplot as plt import tmpo from opengrid import config from opengrid.library import plotting from opengrid.library import houseprint c=config.Config() %matplotlib inline plt.rcParams['figure.figsize'] = 16,8 # path to data path_to_data = c.get('data', 'folder') if not os.path.exists(path_to_data): raise IOError("Provide your path to the data in your config.ini file. This is a folder containing a 'zip' and 'csv' subfolder.") hp = houseprint.Houseprint() #set start and end date end = pd.Timestamp('2015/1/4') start = end - dt.timedelta(days=60) hp.init_tmpo() def dif_interp(ts, freq='min', start=None, end=None): """ Return a fixed frequency discrete difference time series from an unevenly spaced cumulative time series """ if ts.empty and (start is None or end is None): return ts start = start or ts.index[0] start = start.replace(tzinfo=pytz.utc) end = end or max(start, ts.index[-1]) end = end.replace(tzinfo=pytz.utc) start = min(start, end) newindex = pd.DataFrame([0, 0], index=[start, end]).resample(freq).index if ts.dropna().empty: tsmin = ts.reindex(newindex) else: tsmin = ts.reindex(ts.index + newindex) tsmin = tsmin.interpolate(method='time') tsmin = tsmin.reindex(newindex) return tsmin.diff()*3600/60 df = hp.get_data(sensortype='water', head=start, tail=end).diff() water_sensors = [sensor for sensor in hp.get_sensors('water') if sensor.key in df.columns] print "{} water sensors".format(len(water_sensors)) """ Explanation: This notebook shows step by step how water leaks of different severity can be detected End of explanation """ for sensor in water_sensors: ts = df[sensor.key] if not ts.dropna().empty: plotting.carpet(ts, title=sensor.device.key, zlabel=r'Flow [l/hour]') """ Explanation: The purpose is to automatically detect leaks, undesired high consumption, etc.. so we can warn the user Let's first have a look at the carpet plots in order to see whether we have such leaks, etc... in our database. End of explanation """ for sensor in water_sensors: ts = df[sensor.key] if not ts.dropna().empty: tsday = ts.resample('D', how='sum') tsday.plot(label=sensor.device.key) (tsday*0.+1000.).plot(style='--', lw=3, label='_nolegend_') plt.legend() """ Explanation: Yes, we do! The most obvious is FL03001579 with a more or less constant leak in the first month and later on some very large leaks during several hours. FL03001556 has a moderate leak once and seems to have similar, but less severe leaks later again. Also in FL03001561 there was once a strange (but rather short) issue and later on small, stubborn and irregularly deteriorating leaks of a different kind. So, out of 6 water consumption profiles, there are 3 with possible leaks of different types and severities! This looks a very promising case to detect real issues and show the value of opengrid. So, we would like to detect the following issues: * FL03001579: constant leak in first month and big water leak during several hours on some days (toilet leaking?) * FL03001556: moderate leak once and small water leak during several hours on some days (toilet leaking?) * FL03001561: rather short leak and small, irregularly deteriorating water leak? How could we detect these? 
Let's look first at the daily load curves of each sensor End of explanation """ for sensor in water_sensors: ts = df[sensor.key] if not ts.dropna().empty: plt.figure() for day in pd.date_range(start, end): try: tsday = ts[day.strftime('%Y/%m/%d')].order(ascending=False) * 60. plt.plot(tsday.values/60.) x = np.arange(len(tsday.values)) + 10. plt.plot(x + 100., 500./x**1.5, 'k--') plt.gca().set_yscale('log') plt.ylim(ymin=1/60.) plt.title(sensor.device.key) except: pass """ Explanation: So, the big water leaks of FL03001579 is relatively easy to detect, e.g. by raising an alarm as soon as the daily consumption exceeds 1500 l. However, by that time a lot of water has been wasted already. One could lower the threshold a bit, but below 1000l a false alarm would be raised for FL03001525 on one day. Moreover, the other issues are not detected by such an alarm. Let's try it in a different way. First have a look at the load duration curve, maybe there we could find something usefull for the alarm. End of explanation """ for sensor in water_sensors: ts = df[sensor.key] * 60. if not ts.dropna().empty: tsday = pd.rolling_min(ts, 60) ax = tsday.plot(label=sensor.device.key) (tsday*0.+20.).plot(style='--', lw=3, label='_nolegend_') plt.gca().set_yscale('log') ax.set_ylim(ymin=1) plt.legend() """ Explanation: This way, most of the issues could be detected, but some marginally. For small leaks it may take a full day before the alarm is raised. Maybe we can improve this. A more reliable way may be to look for consecutive minutes with high load. So, let's have a look at the 60 minutes rolling minimum of the load. End of explanation """ for sensor in water_sensors: ts = df[sensor.key] * 60. if not ts.dropna().empty: tsday = pd.rolling_min(ts, 60) - 0.4*pd.rolling_mean(ts, 60) ax = tsday.plot(label=sensor.device.key) (tsday*0.+1.).plot(style='--', lw=3, label='_nolegend_') ax.set_yscale('log') ax.set_ylim(ymin=0.1) plt.legend() """ Explanation: The large leaks are very pronounced and easily detected (remark that this is a logarithmic scale!) one hour after the leak started. But the smaller leaks are still not visible. Remark that a typical characteristic of a leak is that it is more or less constant, and thus the mean is probably pretty close to the minimum. An expected high load typically varies a lot more over one hour and thus its mean is probably a lot higher than its minimum. So, we could exploit this characteristic of leaks by subtracting some fraction of the rolling mean from the rolling minimum. Leaks should then stand out compared to normal loads. End of explanation """
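To close the loop, here is a minimal sketch of how this heuristic could be wrapped into an alarm. It assumes the same minute-resolution `df` (converted to l/hour) built above; the 60-minute window, the 0.4 mean fraction and the 1 l/hour threshold simply mirror the last plot and would need tuning on real installations.

def leak_alarm_times(ts, window=60, mean_fraction=0.4, threshold=1.):
    # leak signal: rolling minimum minus a fraction of the rolling mean (see above)
    signal = pd.rolling_min(ts, window) - mean_fraction * pd.rolling_mean(ts, window)
    # return the timestamps at which an alarm would be raised
    return signal[signal > threshold].index

for sensor in water_sensors:
    alarms = leak_alarm_times(df[sensor.key] * 60.)
    print sensor.device.key, len(alarms), 'alarm minutes'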
EuroPython/ep-tools
notebooks/session_instructions_toPDF.ipynb
mit
%%javascript IPython.OutputArea.auto_scroll_threshold = 99999; //increase max size of output area import json import datetime as dt from operator import itemgetter from collections import OrderedDict from operator import itemgetter from IPython.display import display, HTML from nbconvert.filters.markdown import markdown2html from eptools._utils import DefaultOrderedDict talk_sessions = json.load(open('accepted_talks.json'), encoding='utf-8') talks_admin_url = 'https://ep2016.europython.eu/admin/conference/talk' show = lambda s: display(HTML(s)) def ordinal(n): if 10 <= n % 100 < 20: return str(n) + 'th' else: return str(n) + {1 : 'st', 2 : 'nd', 3 : 'rd'}.get(n % 10, "th") def talk_schedule(start, end): input_format = "%Y-%m-%d %H:%M:%S" output_format_day = "%A, %B" output_format_time = "%H:%M" output_date = lambda d: "{} {} at {}".format(d.strftime(output_format_day), ordinal(int(d.strftime('%d'))), d.strftime(output_format_time)) start_date = dt.datetime.strptime(start, input_format) end_date = dt.datetime.strptime(end , input_format) return output_date(start_date), output_date(end_date) def show_talk(talk, show_duration=True, show_link_to_admin=True): speakers = talk['speakers'] title = talk['title'] abstract = talk['abstract_long'][0] room = talk.get('track_title', '').split(', ')[0] timerange = talk.get('timerange', '').split(';')[0] pdf = '' pdf += '<h2>- {}</h2>\n'.format(title) if show_link_to_admin: pdf += talks_admin_url + '/{}'.format(talk['id']) pdf += '<a href={0}>{0}</a>\n'.format(talk_admin_url) if show_duration: pdf += '{} mins.'.format(talk['duration']) else: pdf += '' timerange = talk['timerange'].split(';')[0] try: start, end = talk_schedule(*timerange.split(', ')) except: start, end = ('', '') if start: pdf += '<p>' pdf += '{} in {}'.format(start, room) if show_duration: pdf += ' ({})'.format(duration) pdf += '</p>\n' #show(schedule) pdf += '<h3><i>{}</i></h2>\n'.format(speakers) pdf += '<p>{}</p>\n'.format(markdown2html(abstract)) pdf += '<br/>\n' return pdf """ Explanation: EuroPython 2016 session instructions to PDF Session Chair Information Before your session Plan to be in the talk room at least 10 minutes before your session. Double check that you have the schedule with you. Walk to your room. Check if there is spare water bottles for the speakers and video adaptors. If something is missing kindly ask the room manager to pick it. Reach for the next speaker. For Each Talk Ask the speaker how they would like to be introduced, and double check that you know how to pronounce their name. Ask the speaker about their Q&A preference (see options below) and how they want to be announced of the last 10 or 5 minutes of talk. Make sure the talk starts on time! At the talk's scheduled start time, get up on stage, get the microphone, make eye contact with the A/V crew to get their go-ahead, the introduce the speaker. Watch the clock. Announce the speaker that their talk is about to finish, they way you agreed. Two minutes before the talk's scheduled end time, if it doesn't seem like the speaker is going to stop on their own, quietly get up and stand to the side of the stage. Be ready to interrupt them if they do not finish on time: "I'm very sorry but our time is almost up.", "Can I suggest continuing in an open space?", or "Let's move on to Q&A.". If the speaker requested Q&A, stand beside the microphone to help moderate (see section below). When the speaker is finished: start the applause, then stand up and thank them. 
After Your Session Go tell the coordinator or anyone in the reception desk that you finished your session. Q&A Speakers might have different preferences on how they want to Q&A session to be. Please ask them. Here go 3 examples: 1. No questions; they will speak for the duration of their talk then leave. 2. Take a session-chair-moderated questions for the last 5 minutes of the timeslot. If You Have Any Problems If you have any problems, concerns, or questions, try to contact someone on the europython-volunteers Telegram channel (http://bit.ly/28QF0FM). Otherwise you can contact: 1. Christian Barra: barrachris@gmail.com / @christianbarra 2. Alexandre Savio: +34630555357 / alexsavio@gmail.com / @alexsavio 3. TODO Add someone else with a phone number 4. Any conference staff member. Moderating Q&A To moderate a Q&A section, please: Step up to the audience mic and say (something to the effect of): "Thank you {speaker}! We've got a few minutes for questions now". Kindly ask the speaker to repeat the question once they heard it. Questions should be kept short. If the discussion is taking too long (2 minutes), kindly ask to discuss after the talk and reach for the following question, if any. End of explanation """ sessions_talks = OrderedDict() # remove the IDs from the talks for name, sess in talk_sessions.items(): sessions_talks[name] = [talk for tid, talk in sess.items()] talks = sessions_talks['talk'] """ Explanation: Filter talks End of explanation """ # add 'start' time for each talk for idx, talk in enumerate(talks): tr = talk['timerange'] if not tr: talk['start'] = dt.datetime.now() else: talk['start'] = dt.datetime.strptime(tr.split(',')[0].strip(), "%Y-%m-%d %H:%M:%S") talks[idx] = talk # add 'session_code' for each talk conference_start = dt.date(2016, 7, 17) first_coffee_start = dt.time(10, 0) lunch_start = dt.time(12, 45) secnd_coffee_start = dt.time(15, 30) close_start = dt.time(18, 0) journee_start_times = [first_coffee_start, lunch_start, secnd_coffee_start, close_start] def get_journee_number(talk_start_time, journee_start_times): for idx, start in enumerate(journee_start_times): if talk_start_time < start: return idx return -1 tracks = ['A1', 'A2', 'Barria 1', 'Barria 2', 'PyCharm Room', ] for idx, talk in enumerate(talks): talk_start = talk['start'].time() talk_room = talk['track_title'].split('[')[0].strip().replace(' ', '_') day_num = (talk['start'].date() - conference_start).days journee_num = get_journee_number(talk['start'].time(), journee_start_times) talk['session'] = str(talk_room) + '_' + str(int(day_num)) + '.' 
+ str(journee_num) talks[idx] = talk """ Explanation: Create start time and session fields in each talk End of explanation """ # sort by and group by session keys = ('start', 'session') sorted_talks = sorted(talks, key=itemgetter(*keys)) talk_sessions = DefaultOrderedDict(list) for talk in sorted_talks: talk_sessions[talk['session']].append(talk) """ Explanation: Group talks by session code End of explanation """ session_texts = OrderedDict() for session, talks in talk_sessions.items(): text = ['<h1>' + session + '</h1>'] for talk in talks: text += [show_talk(talk, show_duration=False, show_link_to_admin=False)] session_texts[session] = '\n'.join(text) """ Explanation: Create the HTML texts for each session End of explanation """ import os import os.path as op import subprocess os.makedirs('session_pdfs', exist_ok=True) def pandoc_html_to_pdf(html_file, out_file, options): cmd = 'pandoc {} {} -o {}'.format(options, html_file, out_file) print(cmd) subprocess.check_call(cmd, shell=True) # pandoc options DIN-A6 # options = ' -V '.join(['-V geometry:paperwidth=6cm', # 'geometry:paperheight=8cm', # 'geometry:width=5.5cm', # 'geometry:height=7.5cm', # 'geometry:left=.25cm', # ]) # pandoc options DIN-A4 options = ' -V '.join(['-V geometry:paperwidth=210mm', 'geometry:paperheight=297mm', 'geometry:left=2cm', 'geometry:top=2cm', 'geometry:bottom=2cm', 'geometry:right=2cm', ]) options += ' --latex-engine=xelatex' for session, text in session_texts.items(): html_file = op.join('session_pdfs', '{}.html'.format(session)) out_file = html_file.replace('.html', '.pdf') ops = open(html_file, mode='w') ops.write(text) ops.close() pandoc_html_to_pdf(html_file, out_file, options) os.remove(html_file) """ Explanation: Export to PDF You need to have pandoc, wkhtmltopdf and xelatex installed in your computer. End of explanation """
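As an optional pre-flight check (just a sketch; `shutil.which` needs Python 3.3+), the external tools mentioned above can be verified before running the export loop, so a missing installation fails early rather than halfway through the sessions:

import shutil

# pandoc does the conversion and xelatex is used through the --latex-engine option above
for tool in ('pandoc', 'xelatex'):
    if shutil.which(tool) is None:
        raise RuntimeError('{} not found on PATH - install it before exporting'.format(tool))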
IsacLira/data-science-cookbook
2016/network-analysis/Centrality.ipynb
mit
import network_analysis_utils as nau
import networkx as nx

# Available functions:
# 
# - nau.facebook_nx_graph(): returns the Networkx graph of the Facebook data
# 
# - nau.random_nx_graph(): returns the random Networkx graph
# 
# - nau.write_btwns_graph(nx_graph, weight_dict, output_filename): 
#       plots the graph to a file according to the weight dictionary obtained from the betweenness centrality
# 
# Note: the 'nx' prefix indicates the use of Networkx, the graph library used in this exercise
"""
Explanation: Exercise 1 - Centrality
In this exercise we will import, generate and compare the centrality of 2 graphs:
- A piece of the Facebook network, collected from the participants of a survey who used the Social Circles application on Facebook (link to the dataset).
- A random graph with the same number of nodes and edges.
End of explanation
"""
# ALGORITHM: Betweenness Centrality
# INPUT: G(V, E)
# OUTPUT: Dictionary(V, Weights)
#
# BEGIN
#    result <- init_empty_betweenness(V)
#
#    FOR EACH vi, vj in V THEN
#
#        IF vi < vj THEN
#
#            shortest_paths <- get_shortest_paths(G, vi, vj)
#
#            contribution <- 1 / SIZE_OF(shortest_paths)
#
#            FOR EACH path IN shortest_paths THEN
#
#                FOR EACH pi IN path IF pi != vi AND pi != vj
#
#                    result[pi] += contribution
#
#    RETURN result
# END

def betweenness_centrality(nx_graph):
    """
    Return the weight dictionary {node: betweenness centrality}
    """
    return {}
"""
Explanation: Now you should write your own function that computes the betweenness centrality for an nx graph:
Hint: the use of Networkx's all_shortest_paths is strongly recommended
Requirement: using Networkx's built-in betweenness centrality function is not allowed
End of explanation
"""
facebook_graph = nau.facebook_nx_graph()
random_graph = nau.random_nx_graph()

facebook_btwns = betweenness_centrality(facebook_graph)
random_btwns = betweenness_centrality(random_graph)
"""
Explanation: Now that our centrality function is ready, we can compute the centralities for the two graphs:
End of explanation
"""
nau.write_btwns_graph(facebook_graph, facebook_btwns, 'facebook_btwns_graph.png')
nau.write_btwns_graph(random_graph, random_btwns, 'random_btwns_graph.png')
"""
Explanation: And now we generate the plots of the graphs for visual analysis:
End of explanation
"""
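For reference, here is one possible solution sketch for the exercise above (kept separate so the exercise can still be attempted first). It follows the pseudocode and the `all_shortest_paths` hint, does not use Networkx's built-in betweenness function, and simply skips node pairs that have no connecting path.

from itertools import combinations

def betweenness_centrality_solution(nx_graph):
    # one possible implementation of the pseudocode above
    result = {node: 0.0 for node in nx_graph.nodes()}
    for vi, vj in combinations(nx_graph.nodes(), 2):
        try:
            shortest_paths = list(nx.all_shortest_paths(nx_graph, vi, vj))
        except nx.NetworkXNoPath:
            continue
        contribution = 1.0 / len(shortest_paths)
        for path in shortest_paths:
            for pi in path:
                if pi != vi and pi != vj:
                    result[pi] += contribution
    return result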
jorisvandenbossche/DS-python-data-analysis
notebooks/python_recap/01-basic.ipynb
bsd-3-clause
# Two general packages import os import sys """ Explanation: Python the basics: datatypes DS Data manipulation, analysis and visualization in Python May/June, 2021 © 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons This notebook is largely based on material of the Python Scientific Lecture Notes (https://scipy-lectures.github.io/), adapted with some exercises. Importing packages Importing packages is always the first thing you do in python, since it offers the functionalities to work with. Different options are available: <span style="color:green">import <i>package-name</i></span> <p> importing all functionalities as such <span style="color:green">from <i>package-name</i> import <i>specific function</i></span> <p> importing a specific function or subset of the package <span style="color:green">from <i>package-name</i> import * </span> <p> importing all definitions and actions of the package (sometimes better than option 1) <span style="color:green">import <i>package-name</i> as <i>short-package-name</i></span> <p> Very good way to keep a good insight in where you use what package import all functionalities as such End of explanation """ an_integer = 3 print(type(an_integer)) an_integer # type casting: converting the integer to a float type float(an_integer) a_float = 0.2 type(a_float) a_complex = 1.5 + 0.5j # get the real or imaginary part of the complex number by using the functions # real and imag on the variable print(type(a_complex), a_complex.real, a_complex.imag) a_boolean = (3 > 4) a_boolean """ Explanation: Basic python datatypes Numerical types Python supports the following numerical, scalar types: * integer * floats * complex * boolean End of explanation """ print (7 * 3.) print (2**10) print (8 % 3) """ Explanation: A Python shell can therefore replace your pocket calculator, with the basic arithmetic operations addition, substraction, division ... are natively implemented +, -, *, /, % (modulo) natively implemented operation| python implementation ----------:| --------------------- addition | + substraction | - multiplication | * division | / modulo | % exponentiation | ** End of explanation """ print(3/2) print(3/2.) print(3.//2.) #integer division """ Explanation: Attention ! End of explanation """ a_list = [2.,'aa', 0.2] a_list # accessing individual object in the list a_list[1] # negative indices are used to count from the back a_list[-1] """ Explanation: Containers Lists A list is an ordered collection of objects, that may have different types. The list container supports slicing, appending, sorting ... Indexing starts at 0 (as in C, C++ or Java), not at 1 (as in Fortran or Matlab)! End of explanation """ another_list = ['first', 'second', 'third', 'fourth', 'fifth'] print(another_list[3:]) print(another_list[:2]) print(another_list[::2]) """ Explanation: Slicing: obtaining sublists of regularly-spaced elements End of explanation """ another_list[3] = 'newFourth' print(another_list) another_list[1:3] = ['newSecond', 'newThird'] print(another_list) """ Explanation: Note that L[start:stop] contains the elements with indices i such as start<= i < stop (i ranging from start to stop-1). Therefore, L[start:stop] has (stop-start) elements. 
Slicing syntax: L[start:stop:stride] all slicing parameters are optional Lists are mutable objects and can be modified End of explanation """ a = ['a', 'b'] b = a b[0] = 1 print(a) """ Explanation: Warning, with views equal to each other, they point to the same point in memory. Changing one of them is also changing the other!! End of explanation """ #dir(list) a_third_list = ['red', 'blue', 'green', 'black', 'white'] # Appending a_third_list.append('pink') a_third_list # Removes and returns the last element a_third_list.pop() a_third_list # Extends the list in-place a_third_list.extend(['pink', 'purple']) a_third_list # Reverse the list a_third_list.reverse() a_third_list # Remove the first occurence of an element a_third_list.remove('white') a_third_list # Sort list a_third_list.sort() a_third_list """ Explanation: List methods: You can always list the available methods in the namespace by using the dir()-command: End of explanation """ a_third_list.count? a_third_list.index?? """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: What happens if you put two question marks behind the command? </div> End of explanation """ a_third_list = ['red', 'blue', 'green', 'black', 'white'] # remove the last two elements a_third_list = a_third_list[:-2] a_third_list """ Explanation: End of explanation """ a_third_list[::-1] """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: Mimick the functioning of the *reverse* command using the appropriate slicing command: </div> End of explanation """ a_list = ['pink', 'orange'] a_concatenated_list = a_third_list + a_list a_concatenated_list """ Explanation: Concatenating lists is just the same as summing both lists: End of explanation """ reverted = a_third_list.reverse() ## comment out the next lines to test the error: #a_concatenated_list = a_third_list + reverted #a_concatenated_list """ Explanation: <div class="alert alert alert-danger"> <b>Note</b>: Why is the following not working? </div> End of explanation """ # Repeating lists a_repeated_list = a_concatenated_list*10 print(a_repeated_list) """ Explanation: The list itself is reversed and no output is returned, so reverted is None, which can not be added to a list End of explanation """ number_list = [1, 2, 3, 4] [i**2 for i in number_list] """ Explanation: List comprehensions List comprehensions are a very powerful functionality. It creates an in-list for-loop option, looping through all the elements of a list and doing an action on it, in a single, readable line. End of explanation """ [i**2 for i in number_list if i>1] [i**2 for i in number_list if i>1] # Let's try multiplying with two on a list of strings: print([i*2 for i in a_repeated_list]) """ Explanation: and with conditional options: End of explanation """ s = 'Never gonna give you up' print(s) s = "never gonna let you down" print(s) s = '''Never gonna run around and desert you''' print(s) s = """Never gonna make you cry, never gonna say goodbye""" print(s) ## pay attention when using apostrophes! - test out the next two lines one at a time #print('Hi, what's up?') #print("Hi, what's up?") """ Explanation: Cool, this works! let's check more about strings: Strings Different string syntaxes (simple, double or triple quotes) End of explanation """ print('''Never gonna tell a lie and hurt you. Never gonna give you up,\tnever gonna let you down Never \ngonna\n run around and\t desert\t you''') """ Explanation: The newline character is \n, and the tab character is \t. 
End of explanation """ a_string = "hello" print(a_string[0]) print(a_string[1:5]) print(a_string[-4:-1:2]) """ Explanation: Strings are collections like lists. Hence they can be indexed and sliced, using the same syntax and rules. End of explanation """ print(u'Hello\u0020World !') """ Explanation: Accents and special characters can also be handled in Unicode strings (see http://docs.python.org/tutorial/introduction.html#unicode-strings). End of explanation """ #a_string[3] = 'q' # uncomment this cell """ Explanation: A string is an immutable object and it is not possible to modify its contents. One may however create new strings from the original one. End of explanation """ #dir(str) # uncomment this cell another_string = "Strawberry-raspBerry pAstry package party" another_string.lower().replace('r', 'l', 7) """ Explanation: We won't introduce all methods on strings, but let's check the namespace and apply a few of them: End of explanation """ print('An integer: %i; a float: %f; another string: %s' % (1, 0.1, 'string')) """ Explanation: String formatting to make the output as wanted can be done as follows: End of explanation """ print('An integer: {}; a float: {}; another string: {}'.format(1, 0.1, 'string')) n_dataset_number = 20 sFilename = 'processing_of_dataset_%d.txt' % n_dataset_number print(sFilename) """ Explanation: The format string print options in python 3 are able to interpret the conversions itself: End of explanation """ [el for el in dir(list) if not el[0]=='_'] """ Explanation: <div class="alert alert alert-success"> <b>Exercise</b>: With the `dir(list)` command, all the methods of the list type are printed. However, we're not interested in the hidden methods. Use a list comprehension to only print the non-hidden methods (methods with no starting or trailing '_'): </div> End of explanation """ sentence = "the quick brown fox jumps over the lazy dog" #split in words and get word lengths [len(word) for word in sentence.split()] """ Explanation: <div class="alert alert alert-success"> <b>Exercise</b>: Given the previous sentence `the quick brown fox jumps over the lazy dog`, split the sentence and put all the word-lengths in a list. </div> End of explanation """ # Always key : value combinations, datatypes can be mixed hourly_wage = {'Jos':10, 'Frida': 9, 'Gaspard': '13', 23 : 3} hourly_wage hourly_wage['Jos'] """ Explanation: Dictionaries A dictionary is basically an efficient table that maps keys to values. It is an unordered container It can be used to conveniently store and retrieve values associated with a name End of explanation """ hourly_wage['Antoinette'] = 15 hourly_wage """ Explanation: Adding an extra element: End of explanation """ hourly_wage.keys() hourly_wage.values() hourly_wage.items() # all combinations in a list # ignore this loop for now, this will be explained later for key, value in hourly_wage.items(): print(key,' earns ', value, '€/hour') """ Explanation: You can get the keys and values separately: End of explanation """ hourly_wage = {'Jos':10, 'Frida': 9, 'Gaspard': '13', 23 : 3} str_key = [] for key in hourly_wage.keys(): str_key.append(str(key)) str_key """ Explanation: <div class="alert alert alert-success"> <b>Exercise</b> Put all keys of the `hourly_wage` dictionary in a list as strings. If they are not yet a string, convert them: </div> End of explanation """ a_tuple = (2, 3, 'aa', [1, 2]) a_tuple a_second_tuple = 2, 3, 'aa', [1,2] a_second_tuple """ Explanation: Tuples Tuples are basically immutable lists. 
The elements of a tuple are written between parentheses, or just separated by commas End of explanation """
QuantCrimAtLeeds/PredictCode
examples/Networks/Case study Chicago/Input data.ipynb
artistic-2.0
%matplotlib inline import matplotlib.pyplot as plt import matplotlib.collections import geopandas as gpd import open_cp.network import open_cp.sources.chicago import open_cp.geometry #data_path = os.path.join("/media", "disk", "Data") data_path = os.path.join("..", "..", "..", "..", "..", "..", "Data") open_cp.sources.chicago.set_data_directory(data_path) """ Explanation: Input data In this notebook we collect the geometry and event datasets we need. End of explanation """ #tiger_path = os.path.join("/media", "disk", "TIGER Data") tiger_path = os.path.join("..", "..", "..", "..", "..", "..", "Data", "TIGER Data") filename = os.path.join(tiger_path, "tl_2016_17031_roads") tiger_frame = gpd.GeoDataFrame.from_file(filename) chicago = tiger_frame.to_crs({"init":"epsg:3528"}) chicago.head() south_side = open_cp.sources.chicago.get_side("South") mask = chicago.geometry.map(lambda x : x.intersects(south_side)) frame = chicago[mask] frame.head() all_nodes = [] for geo in frame.geometry: for pt in geo.coords: all_nodes.append(pt) b = open_cp.network.PlanarGraphNodeOneShot(all_nodes) for geo in frame.geometry: path = list(geo.coords) b.add_path(path) b.remove_duplicate_edges() graph = b.build() reduced = open_cp.network.simple_reduce_graph(graph) graph.number_edges, reduced.number_edges """ Explanation: Geometry We use the TIGER/Line® Shapefiles https://www.census.gov/geo/maps-data/data/tiger-line.html We load this into a geoPandas data frame, and then convert each geometry LINESTRING into nodes and edges in a graph. End of explanation """ filename = open_cp.sources.chicago.get_default_filename() timed_points = open_cp.sources.chicago.load(filename, ["BURGLARY"]) timed_points.number_data_points timed_points = open_cp.geometry.intersect_timed_points(timed_points, south_side) timed_points.number_data_points fig, ax = plt.subplots(figsize=(12,12)) lc = matplotlib.collections.LineCollection(graph.as_lines(), color="black", linewidth=0.5) ax.add_collection(lc) ax.scatter(timed_points.xcoords, timed_points.ycoords) xmin, ymin, xmax, ymax = *timed_points.bounding_box.min, *timed_points.bounding_box.max xd, yd = xmax - xmin, ymax - ymin ax.set(xlim=(xmin-xd/20, xmax+xd/20), ylim=(ymin-yd/20, ymax+yd/20)) None fig, axes = plt.subplots(ncols=2, figsize=(18,8)) for ax in axes: lc = matplotlib.collections.LineCollection(graph.as_lines(), color="black", linewidth=0.5) ax.add_collection(lc) ax.scatter(timed_points.xcoords, timed_points.ycoords) axes[0].set(xlim=[358000, 360000], ylim=[570000, 572000]) axes[1].set(xlim=[362000, 364000], ylim=[565000, 567000]) """ Explanation: Event data CSV file available from https://catalog.data.gov/dataset/crimes-one-year-prior-to-present-e171f This (freely available) file is geocoded so that each event lies on the midpoint of a road. 
End of explanation """ import pickle, lzma with lzma.open("input.pic.xz", "wb") as f: pickle.dump(timed_points, f) with open("input.graph", "wb") as f: f.write(graph.dump_bytes()) """ Explanation: Save for later We'll save using pickle for use in other notebooks End of explanation """ filename = os.path.join(data_path, "chicago_all_old.csv") timed_points = open_cp.sources.chicago.load(filename, ["BURGLARY"], type="all") timed_points.number_data_points timed_points = open_cp.geometry.intersect_timed_points(timed_points, south_side) timed_points.number_data_points with lzma.open("input_old.pic.xz", "wb") as f: pickle.dump(timed_points, f) with open("input_old.graph", "wb") as f: f.write(graph.dump_bytes()) """ Explanation: With old data Older data (which is no longer publically available, sadly) is more accurately geocoded. End of explanation """
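As a minimal sketch, the pickled events can be read back in a follow-up notebook like this (only the points are shown; reloading the `.graph` file requires the matching loader from `open_cp.network`, which is not covered here):

with lzma.open("input.pic.xz", "rb") as f:
    timed_points = pickle.load(f)
timed_points.number_data_points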
ContinualAI/avalanche
notebooks/from-zero-to-hero-tutorial/06_loggers.ipynb
mit
!pip install avalanche-lib==0.2.0 """ Explanation: description: "Logging... logging everywhere! \U0001F52E" Loggers Welcome to the "Logging" tutorial of the "From Zero to Hero" series. In this part we will present the functionalities offered by the Avalanche logging module. End of explanation """ from torch.optim import SGD from torch.nn import CrossEntropyLoss from avalanche.benchmarks.classic import SplitMNIST from avalanche.evaluation.metrics import forgetting_metrics, \ accuracy_metrics, loss_metrics, timing_metrics, cpu_usage_metrics, \ confusion_matrix_metrics, disk_usage_metrics from avalanche.models import SimpleMLP from avalanche.logging import InteractiveLogger, TextLogger, TensorboardLogger, WandBLogger from avalanche.training.plugins import EvaluationPlugin from avalanche.training import Naive benchmark = SplitMNIST(n_experiences=5, return_task_id=False) # MODEL CREATION model = SimpleMLP(num_classes=benchmark.n_classes) # DEFINE THE EVALUATION PLUGIN and LOGGERS # The evaluation plugin manages the metrics computation. # It takes as argument a list of metrics, collectes their results and returns # them to the strategy it is attached to. loggers = [] # log to Tensorboard loggers.append(TensorboardLogger()) # log to text file loggers.append(TextLogger(open('log.txt', 'a'))) # print to stdout loggers.append(InteractiveLogger()) # W&B logger - comment this if you don't have a W&B account loggers.append(WandBLogger(project_name="avalanche", run_name="test")) eval_plugin = EvaluationPlugin( accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True), loss_metrics(minibatch=True, epoch=True, experience=True, stream=True), timing_metrics(epoch=True, epoch_running=True), cpu_usage_metrics(experience=True), forgetting_metrics(experience=True, stream=True), confusion_matrix_metrics(num_classes=benchmark.n_classes, save_image=True, stream=True), disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True), loggers=loggers, benchmark=benchmark ) # CREATE THE STRATEGY INSTANCE (NAIVE) cl_strategy = Naive( model, SGD(model.parameters(), lr=0.001, momentum=0.9), CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100, evaluator=eval_plugin) # TRAINING LOOP print('Starting experiment...') results = [] for experience in benchmark.train_stream: # train returns a dictionary which contains all the metric values res = cl_strategy.train(experience) print('Training completed') print('Computing accuracy on the whole test set') # test also returns a dictionary which contains all the metric values results.append(cl_strategy.eval(benchmark.test_stream)) # need to manually call W&B run end since we are in a notebook import wandb wandb.finish() %load_ext tensorboard %tensorboard --logdir tb_data --port 6066 """ Explanation: 📑 The Logging Module In the previous tutorial we have learned how to evaluate a continual learning algorithm in Avalanche, through different metrics that can be used off-the-shelf via the Evaluation Plugin or stand-alone. However, computing metrics and collecting results, may not be enough at times. While running complex experiments with long waiting times, logging results over-time is fundamental to "babysit" your experiments in real-time, or even understand what went wrong in the aftermath. This is why in Avalanche we decided to put a strong emphasis on logging and provide a number of loggers that can be used with any set of metrics! 
Loggers Avalanche at the moment supports four main Loggers: InteractiveLogger: This logger provides a nice progress bar and displays real-time metrics results in an interactive way (meant for stdout). TextLogger: This logger, mostly intended for file logging, is the plain text version of the InteractiveLogger. Keep in mind that it may be very verbose. TensorboardLogger: It logs all the metrics on Tensorboard in real-time. Perfect for real-time plotting. WandBLogger: It leverages Weights and Biases tools to log metrics and results on a dashboard. It requires a W&B account. In order to keep track of when each metric value has been logged, we leverage two global counters, one for the training phase, one for the evaluation phase. You can see the global counter value reported in the x axis of the logged plots. Each global counter is an ever-increasing value which starts from 0 and it is increased by one each time a training/evaluation iteration is performed (i.e. after each training/evaluation minibatch). The global counters are updated automatically by the strategy. How to use loggers End of explanation """
shngli/Data-Mining-Python
UMSI course recommender/Course database.ipynb
gpl-3.0
import re import math from operator import itemgetter enrolled = {} numstudents = {} numincommon = {} scores = {} titles = {} for line in open("courseenrollment.txt", "r"): line = line.rstrip('\s\r\n') (student, graddate, spec, term, dept, courseno) = line.split('\t') # Create a variable course that consists of the dept. abbreviation # Followed by a space and course number course=dept+' '+courseno if course not in enrolled: enrolled[course] = {student: 1} if student not in enrolled[course]: enrolled[course][student] = 1 if course not in numstudents: numstudents[course] = 0 numstudents[course] += 1 """ Explanation: UMSI course recommender database Author: Chisheng Li 1) Output every pair of courses (Source Course and Target Course) and their Cosine Similarity scores to allpairs.txt End of explanation """ for course1 in enrolled: for course2 in enrolled: # Initialize each value in the 2d dict. if course1 not in numincommon: numincommon[course1] = {course2: 0} if course2 not in numincommon[course1]: numincommon[course1][course2] = 0 for student in enrolled[course2]: if student in enrolled[course1]: # If the same student is enrolled in both courses # Increment the counter for students in common numincommon[course1][course2] += 1 denominator = math.sqrt(numstudents[course1] * numstudents[course2]) # Same initialization as numincommon if course1 not in scores: scores[course1] = {course2: 0} if course2 not in scores[course1]: scores[course1][course2] = 0 scores[course1][course2] = numincommon[course1][course2]/denominator for line in open("coursetitles.txt", "r"): line = line.rstrip('\s\r\n') (course, title) = line.split('\t') # Strip trailing sections "-1" and spaces from the course numbers # and replace underscores with spaces. # Assign the result to the variable "course2" course=course.replace('-1',"") course2 = course.replace("_"," ") # Regex for sequent if/continue si_re = re.compile(r"^SI \d+.*") found_re = re.compile(r"SI 50[01234].*") """ Explanation: This part calculates the cosine similarity: End of explanation """ test = open("allpairs.txt",'w') test.write("Source Course"+"\t"+"Target Course"+"\t"+"Cosine Similarity") for course1 in sorted(scores): i = 1 # skip if course was not sufficiently popular if (numstudents[course1] < 5): continue # skip if the course is not an SI course (use regexp) if (si_re.match(course1) is None): continue # skip if the course is one of the foundations: 500,501,502,503,504 if found_re.match(course1): continue for course2,score in sorted(scores[course1].items(), key=itemgetter(1),reverse = True): # skip if the course (course2) is the one we're asking about (course1) if course2 == course1: continue # skip if (course2) is one of the old foundations: 501,502,503,504 if found_re.match(course2): continue # only consider course2 if the number of students in common is >=1 if(numincommon[course1][course2] < 1): continue # write the data set to pairs.txt test.write("\n%s\t%s\t%s" % (course1,course2,score)) test.close() """ Explanation: Create a new text file called allpairs.txt End of explanation """ import sqlite3 as lite import sys con = None pairs=[] try: con = lite.connect('courseSmilarity.db') cur = con.cursor() cur.execute("select * from courses where score >= 0 and score <= 0.25") query1 = open('cosine0.25.txt','w') query1.write("Source Course" + "\t" + "Target Course") print "%s" % ("Course pairs with cosine similarity score >= 0 and <= 0.25") print "-------------------------------------------------------" print "%s\t%s" % ("Source Course","Target Course") 
for row in cur: t1 = row[1] + row[0] t2 = row[0] + row[1] if t1 in pairs: continue pairs.append(t2) query1.write("\n%s\t%s" %(row[0],row[1])) print "%s\t%s" % (row[0],row[1]) print cur.execute("select * from courses where score > 0.25 and score <= 0.5") query2 = open('cosine0.25-0.5.txt','w') query2.write("Source Course" + "\t" + "Target Course") print "%s" % ("Course pairs with cosine similarity score > 0.25 and <= 0.5") print "-------------------------------------------------------" print "%s\t%s" % ("Source Course","Target Course") for row in cur: t1 = row[1] + row[0] t2 = row[0] + row[1] if t1 in pairs: continue pairs.append(t2) query2.write("\n%s\t%s" %(row[0],row[1])) print "%s\t%s" % (row[0],row[1]) print cur.execute("select * from courses where score > 0.5 and score <= 0.75") query3=open('cosine0.5-0.75.txt','w') query3.write("Source Course" + "\t" + "Target Course") print "%s" % ("Course pairs with cosine similarity score > 0.5 and <= 0.75") print "-------------------------------------------------------" print "%s\t%s" % ("Source Course","Target Course") for row in cur: t1=row[1]+row[0] t2=row[0]+row[1] if t1 in pairs: continue pairs.append(t2) query3.write("\n%s\t%s" %(row[0],row[1])) print "%s\t%s" % (row[0],row[1]) print cur.execute("select * from courses where score > 0.75 and score <= 1") query4=open('cosine0.75-1.txt','w') query4.write("Source Course" + "\t" + "Target Course") print "%s" % ("Course pairs with cosine similarity score > 0.75 and <= 1") print "-------------------------------------------------------" print "%s\t%s" % ("Source Course","Target Course") for row in cur: t1=row[1]+row[0] t2=row[0]+row[1] if t1 in pairs: continue pairs.append(t2) query4.write("\n%s\t%s" %(row[0],row[1])) print "%s\t%s" % (row[0],row[1]) except lite.Error, e: print "Error %s:" % e.args[0] sys.exit(1) finally: if con: con.close() """ Explanation: 2) Create database to store course pairs Create a database classSimilarity.db and create tables for the data in allpairs.txt Create queries to insert the data into classSimilarity.db (the source class, the target class, and the cosine similarity value) Source code: createDB.py 3) Query courseSmilarity.db for courses within certain ranges of cosine similarity Output the course pairs in distinct ranges of cosine similarity: - for values from 0 <= x <= 0.25 (cosine0.25.txt) - for values from 0.25 < x <= 0.5 (cosine0.25-0.5.txt) - for values from 0.5 < x <= 0.75 (cosine0.5-0.75.txt) - for values from 0.75 < x <= 1 (cosine0.75-1.txt) End of explanation """
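The createDB.py script referred to above is not included in this notebook. A minimal sketch of what it might look like is given below; the table name and column names are assumptions inferred from the "select * from courses where score ..." queries used above, not the original source.

# Hypothetical sketch of createDB.py (not the original source).
import sqlite3 as lite

con = lite.connect('courseSmilarity.db')
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS courses (source TEXT, target TEXT, score REAL)")

with open("allpairs.txt") as f:
    f.readline()  # skip the "Source Course\tTarget Course\tCosine Similarity" header
    for line in f:
        source, target, score = line.rstrip("\r\n").split("\t")
        cur.execute("INSERT INTO courses VALUES (?, ?, ?)", (source, target, float(score)))

con.commit()
con.close()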
csdms/pymt
notebooks/cem.ipynb
mit
import numpy as np import matplotlib.pyplot as plt #Some magic that allows us to view images within the notebook. %matplotlib inline """ Explanation: Coastline Evolution Model The Coastline Evolution Model (CEM) addresses predominately sandy, wave-dominated coastlines on time-scales ranging from years to millenia and on spatial scales ranging from kilometers to hundreds of kilometers. Shoreline evolution results from gradients in wave-driven alongshore sediment transport. At its most basic level, the model follows the standard 'one-line' modeling approach, where the cross-shore dimension is collapsed into a single data point. However, the model allows the planview shoreline to take on arbitrary local orientations, and even fold back upon itself, as complex shapes such as capes and spits form under some wave climates (distributions of wave influences from different approach angles). So the model works on a 2D grid. The model has been used to represent varying geology underlying a sandy coastline and shoreface in a simplified manner and enables the simulation of coastline evolution when sediment supply from an eroding shoreface may be constrained. CEM also supports the simulation of human manipulations to coastline evolution through beach nourishment or hard structures. CEM authors & developers: Andrew Ashton, Brad Murray, Jordan Slot, Jaap Nienhuis and others. This version is adapted from a CSDMS teaching notebook, listed below. It has been created by Irina Overeem, October 2019 for a Sedimentary Modeling course. Link to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/cem.ipynb Install command: $ conda install notebook pymt_cem Download local copy of notebook: $ curl -O https://raw.githubusercontent.com/csdms/pymt/master/notebooks/cem.ipynb Key References Ashton, A.D., Murray, B., Arnault, O. 2001. Formation of coastline features by large-scale instabilities induced by high-angle waves, Nature 414. Ashton, A. D., and A. B. Murray (2006), High-angle wave instability and emergent shoreline shapes: 1. Modeling of sand waves, flying spits, and capes, J. Geophys. Res., 111, F04011, doi:10.1029/2005JF000422. Links CEM source code: Look at the files that have deltas in their name. CEM description on CSDMS: Detailed information on the CEM model. Interacting with the Coastline Evolution Model BMI using Python End of explanation """ import pymt.models cem = pymt.models.Cem() """ Explanation: Import the Cem class. In Python, a model with a Basic Model Interface (BMI) will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later! End of explanation """ help(cem) cem.input_var_names cem.output_var_names """ Explanation: Even though we can't run our waves model yet, we can still get some information about it. Some things we can do with our model are to get help, to get the names of the input variables or output variables. End of explanation """ angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity' print("Data type: %s" % cem.get_var_type(angle_name)) print("Units: %s" % cem.get_var_units(angle_name)) print("Grid id: %d" % cem.get_var_grid(angle_name)) print("Number of elements in grid: %d" % cem.get_grid_number_of_nodes(0)) print("Type of grid: %s" % cem.get_grid_type(0)) """ Explanation: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. 
Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is, "sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity" Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not a one). End of explanation """ args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.) cem.initialize(*args) """ Explanation: First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults. End of explanation """ cem.set_value("sea_surface_water_wave__height", 1.5) cem.set_value("sea_surface_water_wave__period", 7.) cem.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.) """ Explanation: Before running the model, let's set a couple input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline. End of explanation """ # list your answers here """ Explanation: Assignment 1 Let's think about the wave conditions that are the input to this CEM model run. For both assignment 1 and 2 it will help to look theory up in the paper by Ashton & Murray 2001, and/or Ashton et al, 2006. How do wave height and wave period determine sediment transport? The relationship between sediment transport and wave height and period is non-linear. What are the implications of this non-linearity for the impact of lots of small ocean storms versus a few extreme storms with much higher wave height? End of explanation """ # discuss wave angle here """ Explanation: Assignment 2 The other important part of the wave conditions that is input to CEM model is under what angle the waves approach the shore. It will help to read the paper by Ashton & Murray 2001, and the longer version by Ashton et al, 2006. Explain why incoming wave angle is an important control? End of explanation """ grid_id = cem.get_var_grid('sea_water__depth') """ Explanation: The CEM model operates on a grid, consisting of a number of rows and colums with values. The main output variable for this model is water depth, or bathymetry. In this case, the CSDMS Standard Name is much shorter: "sea_water__depth" First we find out which of Cem's grids contains water depth. End of explanation """ grid_type = cem.get_grid_type(grid_id) grid_rank = cem.get_grid_ndim(grid_id) print('Type of grid: %s (%dD)' % (grid_type, grid_rank)) """ Explanation: With the grid_id, we can now get information about the grid. For instance, the number of dimension and the type of grid. This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars, or single values. End of explanation """ spacing = np.empty((grid_rank, ), dtype=float) shape = cem.get_grid_shape(grid_id) cem.get_grid_spacing(grid_id, out=spacing) print('The grid has %d rows and %d columns' % (shape[0], shape[1])) print('The spacing between rows is {:f} m and between columns is {:f} m'.format(spacing[0], spacing[1])) """ Explanation: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. 
These methods include: * get_grid_shape * get_grid_spacing * get_grid_origin End of explanation """ z = np.empty(shape, dtype=float) cem.get_value('sea_water__depth', out=z) """ Explanation: Allocate memory for the water depth grid and get the current values from cem. End of explanation """ def plot_coast(spacing, z): import matplotlib.pyplot as plt xmin, xmax = 0., z.shape[1] * spacing[0] * 1e-3 ymin, ymax = 0., z.shape[0] * spacing[1] * 1e-3 plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean') plt.colorbar().ax.set_ylabel('Water Depth (m)') plt.xlabel('Along shore (km)') plt.ylabel('Cross shore (km)') """ Explanation: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on. End of explanation """ plot_coast(spacing, z) """ Explanation: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain to more than 20 m water depth. End of explanation """ #Allocate memory for the sediment discharge array # and set the bedload sediment flux at the coastal cell to some value. qs = np.zeros_like(z) qs[0, 100] = 750 """ Explanation: Right now we have waves coming in but no sediment entering the ocean. To add a sediment source and specify its discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean. End of explanation """ cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate') """ Explanation: The CSDMS Standard Name for this variable is: "land_surface_water_sediment~bedload__mass_flow_rate" You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function get_var_units. End of explanation """ # read in the csv file of bedload measurements in the Rhine River, the Netherlands # these data were collected over different days over a season in 2004, at nearby locations. # plot how river discharge controls bedload; Q (x-axis) and Qb (y-axis) data. # label both axes """ Explanation: Assignment 3 Here, we are introducing a river mouth of one gridcell of 200 by 200m. And we just have specified a bedload flux of 750 kg/s. Is this a realistic incoming value? How much water discharge and slope would you possibly need to transport a bedload flux of that magnitude? End of explanation """ # extrapolate this relationship and calculate how much river discharge, Q, # would be needed to transport the model specification Qb of 1250 kg/s cem.time_step, cem.time_units, cem.time """ Explanation: Assignment 4 The bedload measurements were a combination of very different methods, and taken at different locations (although nearby). The data is quite scattered. But if you would fit a linear regression line through this data, you would find that the river discharge of the Rhine can be related to its bedload transport as: Qb=0.0163*Q End of explanation """ for time in range(3000): cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs) cem.update_until(time) cem.get_value('sea_water__depth', out=z) cem.time cem.get_value('sea_water__depth', out=z) plot_coast(spacing, z) # this code gives you a handle on retrieving the position of the river mouth over time val = np.empty((5, ), dtype=float) cem.get_value("basin_outlet~coastal_center__x_coordinate", val) val / 100. 
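# Note: the bare expression `val / 100.` above only produces a scaled copy for
# inspection; it does not modify `val` in place, so the print that follows
# still shows the raw "basin_outlet~coastal_center__x_coordinate" values
# returned by CEM.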
print(val) """ Explanation: Set the bedload flux and run the model. End of explanation """ # your run description goes here """ Explanation: Assignment 5 Describe what the CEM model has simulated in 3000 timesteps. How far has this wave influenced delta prograded? Recall the R-factor for fluvial dominance (Nienhuis 2015). What would the R-factor be for this simulated system? (smaller then 1, larger then 1)? Motivate. End of explanation """ # introduce a second river here qs[0, 150] = 1500 for time in range(4000): cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs) cem.update_until(time) cem.get_value('sea_water__depth', out=z) plot_coast(spacing, z) """ Explanation: Assignment 6 Let's add another sediment source with a different flux and update the model. remember that the Basic Model Interface allows you to update values and then continue a simulation End of explanation """ qs.fill(0.) for time in range(4500): cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs) cem.update_until(time) cem.get_value('sea_water__depth', out=z) plot_coast(spacing, z) """ Explanation: Here we shut off the sediment supply completely. End of explanation """ import pymt.models cemLR = pymt.models.Cem() args = cemLR.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.) cemLR.initialize(*args) # Here you will have to change the settings to a different wave climate cemLR.set_value("sea_surface_water_wave__height", 1.5) cemLR.set_value("sea_surface_water_wave__period", 7.) cemLR.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.) zLR = np.empty(shape, dtype=float) cemLR.get_value('sea_water__depth', out=zLR) # set your smaller river input here """ Explanation: Assignment 7 Create a new CEM run (remember to create a new cem instance) with a more subdued river influx and higher waves. End of explanation """ # run your new simulation for a similar time as the other first simulation for time in range(3000): cemLR.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qsLR) cemLR.update_until(time) cemLR.get_value('sea_water__depth', out=zLR) # hypothesize how your run output would be different # plot the sea water depth # save out this figure """ Explanation: Assignment 8 End of explanation """ ## initialize CEM instance # set the wave angle # run for 1000 timesteps # plot intermediate output # save out an array of this sea water depth at t=1000 # describe what effect you see. Is it to be expected? #What is the unique theory in the CEM model that drives this behavior? """ Explanation: BONUS Assignment 9 - for graduate students Create a new CEM run (remember to create a new cem instance) that is all similar to your first simulation. In this experiment we will use a different incoming wave angle, and look at its effect without a river input first, 1000 timesteps and then with a river input for another 2000 timsteps. End of explanation """ # your code to introduce new river input goes here # run an additional 2000 timesteps # plot # describe what effect you see. Is it to be expected? # Is this a fluvial-dominated delta or a wave-dominated delta? # Is the delta assymetric? # save out the array of your final sea water depth # calculate the deposition and erosion per gridcell between t=1000 and t=3000 """ Explanation: BONUS Assignment 9 - for graduate students Use the same CEM run that you have just started. 
Keep the incoming wave angle you had specified, and now run the rest of the simulation with a new river input for another 2000 timesteps. 'Place' the river mouth out of center in the grid (although not too close to the grid boundary, as that can give instability problems).
End of explanation
"""
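For the last step of the bonus exercise (deposition and erosion per gridcell between two snapshots), a minimal sketch is given below. It assumes you saved copies of the 'sea_water__depth' array at the two times of interest; z_1000 and z_3000 are placeholder names, not variables defined in this notebook.

# Sketch only: z_1000 and z_3000 are assumed copies of the depth grid
# (e.g. z_1000 = z.copy()) saved at t=1000 and t=3000.
dz = z_1000 - z_3000                      # a decrease in water depth = deposition
deposition = np.where(dz > 0, dz, 0.0)    # metres of sediment gained per cell
erosion = np.where(dz < 0, -dz, 0.0)      # metres of sediment lost per cell
plt.imshow(dz, origin='lower', cmap='RdBu')
plt.colorbar().ax.set_ylabel('Depth change (m): + deposition / - erosion')
plt.show()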
jrg365/gpytorch
examples/02_Scalable_Exact_GPs/KeOps_GP_Regression.ipynb
mit
import math import torch import gpytorch from matplotlib import pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 """ Explanation: GPyTorch Regression With KeOps Introduction KeOps is a recently released software package for fast kernel operations that integrates wih PyTorch. We can use the ability of KeOps to perform efficient kernel matrix multiplies on the GPU to integrate with the rest of GPyTorch. In this tutorial, we'll demonstrate how to integrate the kernel matmuls of KeOps with all of the bells of whistles of GPyTorch, including things like our preconditioning for conjugate gradients. In this notebook, we will train an exact GP on 3droad, which has hundreds of thousands of data points. Together, the highly optimized matmuls of KeOps combined with algorithmic speed improvements like preconditioning allow us to train on a dataset like this in a matter of minutes using only a single GPU. End of explanation """ import urllib.request import os.path from scipy.io import loadmat from math import floor if not os.path.isfile('../3droad.mat'): print('Downloading \'3droad\' UCI dataset...') urllib.request.urlretrieve('https://www.dropbox.com/s/f6ow1i59oqx05pl/3droad.mat?dl=1', '../3droad.mat') data = torch.Tensor(loadmat('../3droad.mat')['data']) import numpy as np N = data.shape[0] # make train/val/test n_train = int(0.8 * N) train_x, train_y = data[:n_train, :-1], data[:n_train, -1] test_x, test_y = data[n_train:, :-1], data[n_train:, -1] # normalize features mean = train_x.mean(dim=-2, keepdim=True) std = train_x.std(dim=-2, keepdim=True) + 1e-6 # prevent dividing by 0 train_x = (train_x - mean) / std test_x = (test_x - mean) / std # normalize labels mean, std = train_y.mean(),train_y.std() train_y = (train_y - mean) / std test_y = (test_y - mean) / std # make continguous train_x, train_y = train_x.contiguous(), train_y.contiguous() test_x, test_y = test_x.contiguous(), test_y.contiguous() output_device = torch.device('cuda:0') train_x, train_y = train_x.to(output_device), train_y.to(output_device) test_x, test_y = test_x.to(output_device), test_y.to(output_device) """ Explanation: Downloading Data We will be using the 3droad UCI dataset which contains a total of 278,319 data points. The next cell will download this dataset from a Google drive and load it. 
End of explanation """ # We will use the simplest form of GP model, exact inference class ExactGPModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood): super(ExactGPModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.keops.MaternKernel(nu=2.5)) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) # initialize likelihood and model likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda() model = ExactGPModel(train_x, train_y, likelihood).cuda() # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) import time training_iter = 50 for i in range(training_iter): start_time = time.time() # Zero gradients from previous iteration optimizer.zero_grad() # Output from model output = model(train_x) # Calc loss and backprop gradients loss = -mll(output, train_y) loss.backward() print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % ( i + 1, training_iter, loss.item(), model.covar_module.base_kernel.lengthscale.item(), model.likelihood.noise.item() )) optimizer.step() print(time.time() - start_time) # Get into evaluation (predictive posterior) mode model.eval() likelihood.eval() # Test points are regularly spaced along [0,1] # Make predictions by feeding model through likelihood with torch.no_grad(), gpytorch.settings.fast_pred_var(): observed_pred = likelihood(model(test_x)) """ Explanation: Using KeOps with a GPyTorch Model Using KeOps with one of our pre built kernels is as straightforward as swapping the kernel out. For example, in the cell below, we copy the simple GP from our basic tutorial notebook, and swap out gpytorch.kernels.MaternKernel for gpytorch.kernels.keops.MaternKernel. End of explanation """ torch.sqrt(torch.mean(torch.pow(observed_pred.mean - test_y, 2))) """ Explanation: Compute RMSE End of explanation """
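If you want that RMSE as a plain number rather than a bare tensor, a small follow-up along these lines works; this is only a sketch and assumes observed_pred and test_y are still in scope from the cells above.

# Report the test RMSE computed above as a scalar.
with torch.no_grad():
    rmse = torch.sqrt(torch.mean((observed_pred.mean - test_y) ** 2))
    print('Test RMSE: %.3f' % rmse.item())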
harmsm/pythonic-science
labs/02_regression/02_model-fitting_key.ipynb
unlicense
def first_order(t,A,k): """ First-order kinetics model. """ return A*(1 - np.exp(-k*t)) def first_order_r(param,t,obs): """ Residuals function for first-order model. """ return first_order(t,param[0],param[1]) - obs def fit_model(t,obs,param_guesses=(1,1)): """ Fit the first-order model. """ fit = scipy.optimize.least_squares(first_order_r, param_guesses, args=(t,obs)) fit_A = fit.x[0] fit_k = fit.x[1] return fit_A, fit_k d = pd.read_csv("data/time_course_0.csv") A, k = fit_model(d.t,d.obs) plt.plot(d.t,d.obs,'o') plt.plot(d.t,first_order(d.t,A,k)) """ Explanation: Parameter uncertainty Just how confident are you in your parameter values, anyway? You measure an observable over time and would like to extract a rate constant $k$ for the process. You are fitting the data using the code below. Each data point was measured independently of the others. The uncertainty in each measured point is normally distributed with a standard deviation of 0.05. How robust are your estimates of $A$ and $k$ given the uncertainty in your measured points? End of explanation """ d = pd.read_csv("data/time_course_0.csv") datasets = [] for i in range(1000): datasets.append(d.obs + np.random.normal(0,0.05,len(d.t))) """ Explanation: Generate 1,000 simulated data sets where each experimental point is drawn from a normal distribution with a mean of d.obs and a standard deviation of 0.05. End of explanation """ A_list = [] k_list = [] for dataset in datasets: A, k = fit_model(d.t,dataset) A_list.append(A) k_list.append(k) plt.hist(A_list) plt.show() plt.hist(k_list) plt.show() """ Explanation: Generate a histogram of possible values of $A$ and $k$ from these simulations. (Hint: try fitting each simulated dataset independently...). End of explanation """ k_list.sort() # Sort from low to high lower = k_list[25] # Lower 25 upper = k_list[1000-25] # Upper 25 plt.hist(k_list,bins=np.arange(0.15,0.35,0.005)) plt.show() print(lower,np.mean(k_list),upper) """ Explanation: What are the 95% confidence intervals on your estimate of $k$? The lower bound is the value of $k$ for which 2.5% of the histogram counts are below the value. The upper bound is the value of $k$ for which 2.5% of the histogram counts are above the value. End of explanation """ d = pd.read_csv("data/time_course_1.csv") A1_list = [] k1_list = [] for i in range(1000): A, k = fit_model(d.t,d.obs + np.random.normal(0,0.05,len(d.t))) A1_list.append(A) k1_list.append(k) plt.hist(k_list,bins=np.arange(0.15,0.35,0.005)) plt.hist(k1_list,bins=np.arange(0.25,0.45,0.005)) plt.show() k1_list.sort() lower1 = k1_list[25] upper1 = k1_list[1000-250] print(lower,np.mean(k_list),upper) print(lower1,np.mean(k1_list),upper1) # 95% confidence intervals do not overlap. Can be distinguished. """ Explanation: You measure the same process under slightly different conditions. These data are stored in data/time_course_1.csv. Is there a statistically significant difference between $k$ from dataset 1 vs. 0? 
End of explanation """ ### CELL FOR CREATING DATA def create_data(t,k,A,out_file="junk.csv",noise=None): write_noise = False if noise == None: noise = 0 elif type(noise) == float: noise = np.random.normal(0,noise,len(t)) else: write_noise = True nosie = noise obs = first_order(t,A,k) + noise plt.plot(t,obs,"o") if not write_noise: d = pd.DataFrame({"t":t,"obs":obs}) else: d = pd.DataFrame({"t":t,"obs":obs,"obs_err":np.abs(noise)}) d.to_csv(out_file) #t = np.arange(0,10,0.25) #create_data(t,0.25,1.0,"data/time_course_0.csv",0.05) #create_data(t,0.35,1.0,"data/time_course_1.csv",0.05) #create_data(t,0.25,1.0,"data/time_course_2.csv",np.random.normal(0,0.05,len(t))) #create_data(t,0.25,1.0,"data/time_course_3.csv",np.random.normal(0,0.05,len(t))) #create_data(t,0.25,1.0,"data/time_course_4.csv",np.random.normal(0,0.05,len(t))) """ Explanation: Bonus: These are a couple of challenge questions for students who are already comfortable fitting and want to extend this basic fitting example. You repeat the experiment again and explicitly measure the variance of each point (meaning some are now 0.1, others 0.3, etc.). These values are stored in data/time_course_2.csv in the obs_err column. Modify your sampling code so you incorporate these point-specific uncertainties. "Global fitting" is a powerful way to squeeze the most information out of experiments. Rather than fitting models to individual experimental replicates and then estimating parameter uncertainty, one can fit all experiments at once to extract a single estimate of the fit parameters. Implement a global fit function that simultaneously fits the data in data/time_course_2.csv, data/time_course_3.csv, and data/time_course_4.csv, then report your estimate and 95% confidence intervals of $k$. Do your confidence intervals change with more data? End of explanation """ # these values ssr_list = [0.05,0.001,0.00095] num_parameters = [2,3,10] num_obs = 10 # should give these weights weights = [8.69e-9,0.9988,1.18e-3] def calc_aic(ssr_list,k_list,num_obs): aic_list = [] for i in range(len(ssr_list)): aic_list.append(num_obs*np.log(ssr_list[i]) + 2*(k_list[i] + 1)) aic_list = np.array(aic_list) delta_list = aic_list - np.min(aic_list) Q = np.exp(-delta_list/2) return Q/np.sum(Q) weights = calc_aic(ssr_list,num_parameters,num_obs) print(weights) """ Explanation: Model selection How do you choose which model to use? The Akaike Information Criterion (AIC) helps you select between competing models. It penalizes models for the number of fittable parameters because adding parameters will almost always make a fit better. $$AIC = -2ln(\hat{L}) + 2k$$ where $\hat{L}$ is the "maximum likelihood" and $k$ is (1 + number of fittable parameters in the model). $\hat{L}$ is proportional to the sum of the squared residuals ($SSR$), so we can write: $$ AIC = nln(SSR) + 2k $$ where $n$ is the number of observations. If we want to compare $N$ different models, we first normalize $AIC$ for each model as $\Delta _{i}$. $$\Delta {i} = AIC{i} - AIC_{min}$$ where $AIC_{min}$ is the best AIC. The support for a given model is given by the Akaike weight ($w$): $$w_{i} = \frac{exp(-\Delta_{i}/2)}{\sum_{j=0}^{j<N} exp(-\Delta_{j}/2)}$$ The weight runs from 0.0 to 1.0, with 1.0 representing the strongest support. For a more thorough description, see here. Create a function called calc_aic that returns the Akaike weight for a list of models. 
calc_aic should take the following arguments: list of ssr for each fit list of num_parameters for each fit number of observations you fit the models to You can check your function with the following lists. End of explanation """ def residuals(params,r,c,f): """ General residuals function. """ return f(r,*params) - c def sed_eq_m(r,c0,M): return c0*np.exp(M*(r**2)/2) def sed_eq_md(r,c0,M,theta): return c0*((theta)*np.exp(M*(r**2)/2) + (1 - theta)*np.exp(2*M*(r**2)/2)) """ Explanation: Real example: what can you learn from your data? Sedimentation equilibrium experiments are used to study molecular assemblies. If spin a solution of molecules in a centrifuge very fast for a very long time, you get an equlibirum distribution of molecules along the length ($r$) of the tube. This is because centrigual force pulls them to the bottom of the tube, while diffusion tends to spread them out over the tube. If you measure the concentration of molecules along the length of the tube $c(r)$, you can then fit a model and back out the molecular weight(s) of the species in solution. The equation describing a simple, non-interacting molecule is below. $c_{0}$ is the concentration of the molecule at $r=0$, $M$ is the molecular weight of the molecule, and $r$ is the position along the tube. $$c(r) = c_{0}exp \Big [ M \Big ( \frac{r^{2}}{2} \Big ) \Big ]$$ You are interested in whether there is any dimer present (a dimer is a pair of interacting molecules). If there is dimer, our model gets more complicated. We have to add a term $\theta$ that describes the fraction of the molecules in dimer versus monomer: $$c(r) = c_{0} \Big ( \theta exp \Big [ M \Big ( \frac{r^{2}}{2} \Big ) \Big ] + (1 - \theta )exp \Big [ 2M \Big ( \frac{r^{2}}{2} \Big ) \Big ] \Big )$$ (I've collapsed in a bunch of constants to simplify the problem. If you want a full discussion of the method, see here, particularly equation 4). Main question: do these data provide evidence for a dimer? Implement a function and residuals function for each of these models. The first model should have the fittable parameters $c_{0}$ and $M$. The second model should have the fittable parameters $c_{0}$, $M$, and $\theta$. End of explanation """ ## CREATE DATA CELL #r = np.linspace(0,1,100) #c0 = 1.0 #M = 2 #theta = 0.95 #md = pd.DataFrame({"r":r,"c":sed_eq_md(r,c0,M,theta)+ np.random.normal(0,0.025,len(r))}) #md.to_csv("dev/md.csv") d = pd.read_csv("data/sed_eq.csv") plt.plot(d.r, d.c,"o") fit_m = scipy.optimize.least_squares(residuals, (1,2), kwargs={"r":d.r, "c":d.c, "f":sed_eq_m}) print("monomer:",fit_m.cost,fit_m.x,len(fit_m.fun)) plt.plot(d.r,sed_eq_m(d.r,fit_m.x[0],fit_m.x[1]),color="red") fit_md = scipy.optimize.least_squares(residuals, (1,2.0,0.95), kwargs={"r":d.r, "c":d.c, "f":sed_eq_md}) print("monomer/dimer",fit_md.cost,fit_md.x,len(fit_md.fun)) plt.plot(d.r,sed_eq_md(d.r,fit_md.x[0],fit_md.x[1],fit_md.x[2]),color="blue") """ Explanation: Fit both models to the data in data/sed_eq.csv. What are your estimates of $c_{0}$, $M$, and $\theta$? Are they the same between the two fits? End of explanation """ calc_aic([0.0211731680092,0.0205784296649],[2,3],100) """ Explanation: Use your calc_aic function on these fits. Which model is supported? Can you conclude there is dimer present? End of explanation """ d = pd.read_csv("data/gaussian.csv") plt.plot(d.x,d.y) """ Explanation: Gaussian One common type of data is a sum of Gaussian functions. 
Among other places, these sort of data pop up in single-molecule work, fluorescence-activated cell sorting, spectroscopy and chromatography. You collect a dataset that looks like the following: End of explanation """ def multi_gaussian(x,means,stds,areas): """ Function calculating multiple gaussians (built from values in means, stds, areas). The number of gaussians is determined by the length of means, stds, and areas. The gaussian functions are calculated at values in array x. """ if len(means) != len(stds) or len(means) != len(areas): err = "means, standard deviations and areas should have the same length!\n" raise ValueError(err) out = np.zeros(len(x),dtype=float) for i in range(len(means)): out += areas[i]*scipy.stats.norm(means[i],stds[i]).pdf(x) return out def multi_gaussian_r(params,x,y): """ Residuals function for multi_guassian. """ params = np.array(params) if params.shape[0] % 3 != 0: err = "num parameters must be divisible by 3\n" raise ValueError(err) means = params[np.arange(0,len(params),3)] stds = params[np.arange(1,len(params),3)] areas = params[np.arange(2,len(params),3)] return multi_gaussian(x,means,stds,areas) - y def fitter(x,y,means_guess,stds_guess,areas_guess): """ Fit an arbitrary number of gaussian functions to x/y data. The number of gaussians that will be fit is determined by the length of means_guess. x: measurement x-values (array) y: measurement y-values (array) means_guess: array of guesses for means for gaussians. length determines number of gaussians stds_guess: array of guesses of standard deviations for gaussians. length must match means_guess areas_guess: array of area guesses for gaussians. length must match means guess. returns: means, stds, areas and fit sum-of-squared-residuals """ # Sanity check if len(means_guess) != len(stds_guess) or len(means_guess) != len(areas_guess): err = "means, standard deviations and areas should have the same length!\n" raise ValueError(err) # Construct an array of parameter guesses by assembling # means, stds, and areas param_guesses = [] for i in range(len(means_guess)): param_guesses.append(means_guess[i]) param_guesses.append(stds_guess[i]) param_guesses.append(areas_guess[i]) param_guesses = np.array(param_guesses) # Fit the multigaussian function fit = scipy.optimize.least_squares(multi_gaussian_r,param_guesses, args=(x,y)) # Disassemble into means, stds, areas means = fit.x[np.arange(0,len(fit.x),3)] stds = fit.x[np.arange(1,len(fit.x),3)] areas = fit.x[np.arange(2,len(fit.x),3)] return means, stds, areas, fit.cost def plot_gaussians(means,stds,areas): """ Plot a collection of gaussians. means: array of means for gaussians. length determines number of gaussians stds: array of standard deviations for gaussians. length must match means_guess areas: array of areas for gaussians. length must match means guess. 
""" plt.plot(d.x,multi_gaussian(d.x,means,stds,areas)) for i in range(len(means)): plt.plot(d.x,multi_gaussian(d.x, [means[i]], [stds[i]], [areas[i]])) ## CREATE DATA #x = np.arange(-10,10,0.1) #means = np.array((-2,0,1.5)) #stds = np.array((0.6,.6,1.5)) #areas = np.array((1,1,1)) #d = pd.DataFrame({"x":np.arange(-10,10,0.1), # "y":multi_gaussian(x,means,stds,areas)+np.random.normal(0,0.01,len(x))}) #d.to_csv("dev/gaussian.csv") d = pd.read_csv("data/gaussian.csv") ssr_list = [] num_params = [] for i in range(1,6): means_guess = np.random.normal(0.1,1,i) #np.ones(i,dtype=float) stds_guess = np.ones(i,dtype=float) areas_guess = np.ones(i,dtype=float) fit_means, fit_stds, fit_areas, ssr = fitter(d.x,d.y,means_guess,stds_guess,areas_guess) plt.plot(d.x,d.y,"o") plot_gaussians(fit_means,fit_stds,fit_areas) plt.show() ssr_list.append(ssr) num_params.append((i+1)*3) len(d.x) print(calc_aic(ssr_list,num_params,len(d.x))) """ Explanation: You find code to analyze this kind of data on the internet. Using the functions below, determine: how many gaussians you can extract from the data in lab-data/gaussian.csv. their means, standard deviations, and areas. PS: you should play with your initial guesses PPS: negative area gaussians are not allowed End of explanation """