Isolines There are two Isoline functions: isochrones and isodistances. In this guide we will use the isochrones function to calculate walking areas by time for each Starbucks store and the isodistances function to calculate the walking area by distance. By definition, isolines are concentric polygons that display equally calculated levels over a given surface area. They are calculated as the intersection areas from the origin point, measured by time in the case of isochrones and by distance in the case of isodistances. Isochrones For isochrones, let's calculate the time ranges of 5, 15 and 30 minutes. These ranges are input in seconds, so they will be 300, 900, and 1800 respectively.
from cartoframes.data.services import Isolines iso_service = Isolines() _, isochrones_dry_metadata = iso_service.isochrones(geo_gdf, [300, 900, 1800], mode='walk', dry_run=True)
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
Remember to always check the quota using the dry_run parameter and the available_quota method before running the service!
print('available {0}, required {1}'.format( iso_service.available_quota(), isochrones_dry_metadata.get('required_quota')) ) isochrones_gdf, isochrones_metadata = iso_service.isochrones(geo_gdf, [300, 900, 1800], mode='walk') isochrones_gdf.head() from cartoframes.viz import Layer, basic_style, basic_legend Layer(isochrones_gdf, basic_style(opacity=0.5), basic_legend('Isochrones'))
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
Isodistances For isodistances, let's calculate the distance ranges of 100, 500 and 1000 meters. These ranges are input directly in meters, so the values are simply 100, 500, and 1000.
_, isodistances_dry_metadata = iso_service.isodistances(geo_gdf, [100, 500, 1000], mode='walk', dry_run=True) print('available {0}, required {1}'.format( iso_service.available_quota(), isodistances_dry_metadata.get('required_quota')) ) isodistances_gdf, isodistances_metadata = iso_service.isodistances(geo_gdf, [100, 500, 1000], mode='walk') isodistances_gdf.head() from cartoframes.viz import Layer, basic_style, basic_legend Layer(isodistances_gdf, basic_style(opacity=0.5), basic_legend('Isodistances'))
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
EXERCISE DATASET Run the colorbot experiment and note the chosen model_dir. Below is the input function definition; we no longer need some of the auxiliary functions. Add this cell and then add the solution to the EXERCISE EXPERIMENT: choose a different model_dir and run the cells. Copy the model_dir of the two models to the same path and run: tensorboard --logdir=path
def get_input_fn(csv_file, batch_size, num_epochs=1, shuffle=True): def _parse(line): # each line: name, red, green, blue # split line items = tf.string_split([line],',').values # get color (r, g, b) color = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0 # split color_name into a sequence of characters color_name = tf.string_split([items[0]], '') length = color_name.indices[-1, 1] + 1 # length = index of last char + 1 color_name = color_name.values return color, color_name, length def input_fn(): # https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data dataset = ( tf.contrib.data.TextLineDataset(csv_file) # reading from the HD .skip(1) # skip header .map(_parse) # parse text to variables .padded_batch(batch_size, padded_shapes=([None], [None], []), padding_values=(0.0, chr(0), tf.cast(0, tf.int64))) .repeat(num_epochs) # repeat dataset the number of epochs ) # for our "manual" test we don't want to shuffle the data if shuffle: dataset = dataset.shuffle(buffer_size=100000) # create iterator color, color_name, length = dataset.make_one_shot_iterator().get_next() features = { COLOR_NAME_KEY: color_name, SEQUENCE_LENGTH_KEY: length, } return features, color return input_fn
code_samples/RNN/colorbot/colorbot_solutions.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
As a result you will see something like: We called the original model "sorted_batch" and the model using the simplified input function "simple_batch". Notice that both models have basically the same loss at the last step, but the "sorted_batch" model runs much faster. Look at the global_step/sec metric, which measures how many steps the model executes per second: since "sorted_batch" has a larger global_step/sec, it trains faster. If you don't believe me, you can switch TensorBoard to compare the models in a "relative" way, which compares the models over time. See the result below. EXERCISE HYPERPARAMETERS This one is more personal; what you see depends on what you change in the model. Below is a very simple example where we just changed the model to use a GRUCell, just in case...
def get_model_fn(rnn_cell_sizes, label_dimension, dnn_layer_sizes=[], optimizer='SGD', learning_rate=0.01): def model_fn(features, labels, mode): color_name = features[COLOR_NAME_KEY] sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], dtype=tf.int32) # int64 -> int32 # ----------- Preparing input -------------------- # Creating a tf constant to hold the map char -> index # this is need to create the sparse tensor and after the one hot encode mapping = tf.constant(CHARACTERS, name="mapping") table = tf.contrib.lookup.index_table_from_tensor(mapping, dtype=tf.string) int_color_name = table.lookup(color_name) # representing colornames with one hot representation color_name_onehot = tf.one_hot(int_color_name, depth=len(CHARACTERS) + 1) # ---------- RNN ------------------- # Each RNN layer will consist of a GRU cell rnn_layers = [tf.nn.rnn_cell.GRUCell(size) for size in rnn_cell_sizes] # Construct the layers multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers) # Runs the RNN model dynamically # more about it at: # https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell, inputs=color_name_onehot, sequence_length=sequence_length, dtype=tf.float32) # Slice to keep only the last cell of the RNN last_activations = rnn_common.select_last_activations(outputs, sequence_length) # ------------ Dense layers ------------------- # Construct dense layers on top of the last cell of the RNN for units in dnn_layer_sizes: last_activations = tf.layers.dense( last_activations, units, activation=tf.nn.relu) # Final dense layer for prediction predictions = tf.layers.dense(last_activations, label_dimension) # ----------- Loss and Optimizer ---------------- loss = None train_op = None if mode != tf.estimator.ModeKeys.PREDICT: loss = tf.losses.mean_squared_error(labels, predictions) if mode == tf.estimator.ModeKeys.TRAIN: train_op = tf.contrib.layers.optimize_loss( loss, tf.contrib.framework.get_global_step(), optimizer=optimizer, learning_rate=learning_rate) return model_fn_lib.EstimatorSpec(mode, predictions=predictions, loss=loss, train_op=train_op) return model_fn
code_samples/RNN/colorbot/colorbot_solutions.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Edit form information
form.myattribute = "myinformation"
test/forms/test/test_testsuite_1234.ipynb
IS-ENES-Data/submission_forms
apache-2.0
Save your form. Your form will be stored (the form name consists of your last name plus your keyword).
form_handler.save_form(sf,"..my comment..") # edit my comment info
test/forms/test/test_testsuite_1234.ipynb
IS-ENES-Data/submission_forms
apache-2.0
Officially submit your form. The form will be submitted to the DKRZ team for processing, and you will also receive a confirmation email with a reference to your online form for future modifications.
form_handler.email_form_info(sf) form_handler.form_submission(sf)
test/forms/test/test_testsuite_1234.ipynb
IS-ENES-Data/submission_forms
apache-2.0
TensorFlow 2 quickstart for experts <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/advanced"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs run directly in the browser - a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page. In Colab, connect to a Python runtime: At the top-right of the menu bar, select *CONNECT*. Run all the notebook code cells: Select *Runtime* > *Run all*. Download and install the TensorFlow 2 package: Note: Upgrade pip to install the TensorFlow 2 package. See the install guide for details. Import TensorFlow into your program:
from __future__ import absolute_import, division, print_function, unicode_literals import tensorflow as tf from tensorflow.keras.layers import Dense, Flatten, Conv2D from tensorflow.keras import Model
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 # Add a channels dimension x_train = x_train[..., tf.newaxis] x_test = x_test[..., tf.newaxis]
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
Use tf.data to batch and shuffle the dataset:
train_ds = tf.data.Dataset.from_tensor_slices( (x_train, y_train)).shuffle(10000).batch(32) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
Create the tf.keras model using the Keras [model subclassing API](https://www.tensorflow.org/guide/keras#model_subclassing):
class MyModel(Model): def __init__(self): super(MyModel, self).__init__() self.conv1 = Conv2D(32, 3, activation='relu') self.flatten = Flatten() self.d1 = Dense(128, activation='relu') self.d2 = Dense(10, activation='softmax') def call(self, x): x = self.conv1(x) x = self.flatten(x) x = self.d1(x) return self.d2(x) # Create an instance of the model model = MyModel()
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
Choose an optimizer and a loss function for training:
loss_object = tf.keras.losses.SparseCategoricalCrossentropy() optimizer = tf.keras.optimizers.Adam()
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
Select metrics to measure the loss and the accuracy of the model. These metrics accumulate values over the epochs and then print the overall result.
train_loss = tf.keras.metrics.Mean(name='train_loss') train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy') test_loss = tf.keras.metrics.Mean(name='test_loss') test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
Use tf.GradientTape to train the model:
@tf.function def train_step(images, labels): with tf.GradientTape() as tape: # training=True is only needed if there are layers with different behavior during training versus inference (e.g. Dropout). predictions = model(images, training=True) loss = loss_object(labels, predictions) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) train_loss(loss) train_accuracy(labels, predictions)
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
Test the model:
@tf.function def test_step(images, labels): # training=False is only needed if there are layers with different behavior during training versus inference (e.g. Dropout). predictions = model(images, training=False) t_loss = loss_object(labels, predictions) test_loss(t_loss) test_accuracy(labels, predictions) EPOCHS = 5 for epoch in range(EPOCHS): # Reset the metrics at the start of the next epoch train_loss.reset_states() train_accuracy.reset_states() test_loss.reset_states() test_accuracy.reset_states() for images, labels in train_ds: train_step(images, labels) for test_images, test_labels in test_ds: test_step(test_images, test_labels) template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}' print(template.format(epoch+1, train_loss.result(), train_accuracy.result()*100, test_loss.result(), test_accuracy.result()*100))
site/pt-br/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
We'll train an autoencoder with these images by flattening them into 784-length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder, with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784-length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
# Size of the encoding layer (the hidden layer) encoding_dim = 32 image_size = mnist.train.images.shape[1] inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs') targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets') # Output of hidden layer encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu) # Output layer logits logits = tf.layers.dense(encoded, image_size, activation=None) # Sigmoid output from decoded = tf.nn.sigmoid(logits, name='output') loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(0.001).minimize(cost)
autoencoder/Simple_Autoencoder_Solution.ipynb
chusine/dlnd
mit
Spinning Symmetric Rigid Body setup: The body's orientation in inertial frame $\mathcal I$ is described by a 3-1-2 $(\psi,\theta,\phi)$ rotation: the body is rotated by angle $\psi$ about $\mathbf e_3$ (creating intermediate frame $\mathcal A$), by an angle $\theta$ about $\mathbf a_1$ (creating intermediate frame $\mathcal B$), and finally spinning about $\mathbf b_2 \equiv \mathbf c_2$ at a rate $\Omega \equiv \dot\phi$, creating body-fixed frame $\mathcal C$. Note that while the body fixed frame is $\mathcal C$, all computations here are performed in $\mathcal B$ frame components. ${}^\mathcal{B}C^\mathcal{A}$:
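The computations below rely on symbols and helper functions (rotMat, difftotalmat, skew, and the diffmap substitution dictionary) defined earlier in the notebook. As a purely hypothetical sketch of what the rotation-matrix helper might look like (the notebook's actual implementation and sign convention may differ):

```python
import sympy as sp

def rotMat(axis, angle):
    """Direction cosine matrix for a frame rotation by `angle` about body axis 1, 2, or 3 (sketch only)."""
    c, s = sp.cos(angle), sp.sin(angle)
    mats = {
        1: sp.Matrix([[1, 0, 0], [0, c, s], [0, -s, c]]),
        2: sp.Matrix([[c, 0, -s], [0, 1, 0], [s, 0, c]]),
        3: sp.Matrix([[c, s, 0], [-s, c, 0], [0, 0, 1]]),
    }
    return mats[axis]

th = sp.symbols('theta')  # placeholder for the nutation angle used as `th` below
```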
bCa = rotMat(1,th);bCa
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
$\left[{}^\mathcal{I}\boldsymbol{\omega}^\mathcal{B}\right]_\mathcal{B}$:
iWb_B = bCa*Matrix([0,0,psid])+ Matrix([thd,0,0]); iWb_B
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
${}^\mathcal{I}\boldsymbol{\omega}^\mathcal{C} = {}^\mathcal{I}\boldsymbol{\omega}^\mathcal{B} + {}^\mathcal{B}\boldsymbol{\omega}^\mathcal{C}$. $\left[{}^\mathcal{I}\boldsymbol{\omega}^\mathcal{C}\right]_\mathcal{B}$:
iWc_B = iWb_B +Matrix([0,Omega,0]); iWc_B
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
$\left[ \mathbb I_G \right]_\mathcal B$:
IG_B = diag(I1,I2,I1);IG_B
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
$\left[{}^\mathcal{I} \mathbf h_G\right]_\mathcal{B}$:
hG_B = IG_B*iWc_B; hG_B
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
$\vphantom{\frac{\mathrm{d}}{\mathrm{d}t}}^\mathcal{I}\frac{\mathrm{d}}{\mathrm{d}t} {}^\mathcal{I} \mathbf h_G = \vphantom{\frac{\mathrm{d}}{\mathrm{d}t}}^\mathcal{B}\frac{\mathrm{d}}{\mathrm{d}t} {}^\mathcal{I} \mathbf h_G + {}^\mathcal{I}\boldsymbol{\omega}^\mathcal{B} \times \mathbf h_G$. $\left[\vphantom{\frac{\mathrm{d}}{\mathrm{d}t}}^\mathcal{I}\frac{\mathrm{d}}{\mathrm{d}t} {}^\mathcal{I} \mathbf h_G\right]_\mathcal{B}$:
dhG_B = difftotalmat(hG_B,t,diffmap) + skew(iWb_B)*hG_B; dhG_B
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
Note that the $\mathbf b_2$ component of ${}^\mathcal{I}\boldsymbol{\omega}^\mathcal{B} \times \mathbf h_G$ is zero:
skew(iWb_B)*hG_B
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
Define $C \triangleq \Omega + \dot\psi\sin\theta$ and substitute into $\left[\vphantom{\frac{\mathrm{d}}{\mathrm{d}t}}^\mathcal{I}\frac{\mathrm{d}}{\mathrm{d}t} {}^\mathcal{I} \mathbf h_G\right]_\mathcal{B}$:
dhG_B_simp = dhG_B.subs(Omega+psid*sin(th),C); dhG_B_simp
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
Assume an external torque generating moment about $G$ of $\mathbf M_G = -M_1\mathbf b_1$:
solve([dhG_B_simp[0] + M1,dhG_B_simp[2]],[thdd,psidd])
Notebooks/Spinning Symmetric Rigid Body.ipynb
dsavransky/MAE4060
mit
Training
batch_size = 100 epochs = 100 samples = [] losses = [] # Only save generator variables saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f)
gan_mnist/Intro_to_GANs_Solution.ipynb
dataewan/deep-learning
mit
Helper function to compute the Bit Error Rate (BER)
# helper function to compute the bit error rate def BER(predictions, labels): return np.mean(1-np.isclose((predictions > 0.5).astype(float), labels))
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
kit-cel/wt
gpl-2.0
Define Parameters Here, we consider the simple AWGN channel. We modulate using a constellation with $M = 2^m$ different symbols. To symbol $i$, we assign the binary representation of $i$ as its bit pattern.
# number of bits assigned to symbol m = 5 # number of symbols M = 2**m EbN0 = 10 # noise standard deviation sigma_n = np.sqrt((1/2/np.log2(M)) * 10**(-EbN0/10))
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
kit-cel/wt
gpl-2.0
Here, we define the parameters of the neural network and the training, generate the validation set, and create a helper set used to plot the decision regions.
# Bit representation of symbols binaries = torch.from_numpy(np.reshape(np.unpackbits(np.uint8(np.arange(0,2**m))), (-1,8))).float().to(device) binaries = binaries[:,(8-m):] # validation set. Training examples are generated on the fly N_valid = 100000 # number of neurons in hidden layers at receiver hidden_neurons_RX_1 = 50 hidden_neurons_RX_2 = 128 hidden_neurons_RX = [hidden_neurons_RX_1, hidden_neurons_RX_2] # Generate Validation Data y_valid = np.random.randint(M,size=N_valid) y_valid_onehot = np.eye(M)[y_valid] y_valid_binary = binaries[y_valid,:].detach().cpu().numpy()
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
kit-cel/wt
gpl-2.0
Define the architecture of the autoencoder, i.e., the neural network. This is the main neural network/autoencoder with transmitter, channel and receiver; the hidden layers of the receiver use the ELU activation function. Note that the final receiver layer applies a sigmoid to output bit probabilities, which are fed to the binary cross-entropy loss (BCELoss) during training.
class Autoencoder(nn.Module): def __init__(self, hidden_neurons_RX): super(Autoencoder, self).__init__() # Define Transmitter Layer: Linear function, M input neurons (symbols), 2 output neurons (real and imaginary part) self.fcT = nn.Linear(M, 2) # Define Receiver Layer: Linear function, 2 input neurons (real and imaginary part), m output neurons (bits) self.fcR1 = nn.Linear(2,hidden_neurons_RX[0]) self.fcR2 = nn.Linear(hidden_neurons_RX[0], hidden_neurons_RX[1]) self.fcR3 = nn.Linear(hidden_neurons_RX[1], m) # Non-linearity (used in transmitter and receiver) self.activation_function = nn.ELU() self.sigmoid = nn.Sigmoid() def forward(self, x): # compute output encoded = self.network_transmitter(x) # compute normalization factor and normalize channel output norm_factor = torch.sqrt(torch.mean(torch.mul(encoded,encoded)) * 2 ) modulated = encoded / norm_factor received = self.channel_model(modulated) bitprob = self.network_receiver(received) return bitprob def network_transmitter(self,batch_labels): return self.fcT(batch_labels) def network_receiver(self,inp): out = self.activation_function(self.fcR1(inp)) out = self.activation_function(self.fcR2(out)) logits = self.sigmoid(self.fcR3(out)) return logits def channel_model(self,modulated): # just add noise, nothing else received = torch.add(modulated, sigma_n*torch.randn(len(modulated),2).to(device)) return received
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
kit-cel/wt
gpl-2.0
Train the NN and evaluate it at the end of each epoch. The idea here is to vary the batch size during training: in the first iterations, we start with a small batch size to rapidly get to a working solution, and the closer we get to the end of the training, the more we increase the batch size. If the batch size is kept small, it may happen that a batch contains no misclassifications, so the training has no incentive to improve. A larger batch will most likely contain errors, and hence there is an incentive to keep training and improving. Here, the data is generated on the fly inside the graph, using PyTorch's random number generation. As PyTorch does not natively support complex numbers (at least in early versions), we replace the complex-number operations in the channel with a simple rotation matrix and treat real and imaginary parts separately. We use the ELU activation function inside the neural network and employ the Adam optimization algorithm. Now, carry out the training itself: first initialize the variables and then loop through the training. The epochs are not defined in the classical way, as we do not have a training set per se; we generate new data on the fly and never reuse it. We change the batch size in each epoch.<br> To get the constellation symbols and the received data, we apply the model after each epoch.
model = Autoencoder(hidden_neurons_RX) model.to(device) loss_fn = nn.BCELoss() # Adam Optimizer optimizer = optim.Adam(model.parameters()) # Training parameters num_epochs = 150 batches_per_epoch = np.linspace(1, 1000, num=num_epochs).astype(int) # Vary batch size during training batch_size_per_epoch = np.linspace(200,5000,num=num_epochs) learning_rate_per_epoch = np.linspace(0.001, 0.00001, num=num_epochs) validation_BERs = np.zeros(num_epochs) validation_received = [] constellations = [] print('Start Training') for epoch in range(num_epochs): batch_labels = torch.empty(int(batch_size_per_epoch[epoch]), device=device) batch_labels_binary = torch.zeros(int(batch_size_per_epoch[epoch]), m, device=device) for step in range(batches_per_epoch[epoch]): # Generate training data: In most cases, you have a dataset and do not generate a training dataset during training loop # sample new mini-batch directory on the GPU (if available) batch_labels.random_(M) batch_labels_onehot = torch.zeros(int(batch_size_per_epoch[epoch]), M, device=device) batch_labels_onehot[range(batch_labels_onehot.shape[0]), batch_labels.long()]=1 batch_labels_binary[range(batch_labels_onehot.shape[0]), :] = binaries[batch_labels.long(),:] # Propagate (training) data through the net NN_output = model(batch_labels_onehot) # compute loss loss = loss_fn(NN_output, batch_labels_binary) # compute gradients loss.backward() # Adapt weights optimizer.step() # reset gradients optimizer.zero_grad() optimizer.param_groups[0]['lr'] = learning_rate_per_epoch[epoch] # compute validation BER out_valid = model(torch.Tensor(y_valid_onehot).to(device)) validation_BERs[epoch] = BER(out_valid.detach().cpu().numpy(), y_valid_binary) print('Validation BER after epoch %d: %f (loss %1.8f)' % (epoch, validation_BERs[epoch], loss.detach().cpu().numpy())) # calculate and store constellation encoded = model.network_transmitter(torch.eye(M).to(device)) norm_factor = torch.sqrt(torch.mean(torch.mul(encoded,encoded)) * 2 ) modulated = encoded / norm_factor constellations.append(modulated.detach().cpu().numpy()) print('Training finished')
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
kit-cel/wt
gpl-2.0
Evaluate results. Plot the decision region and a scatter plot of the validation set. Note that the validation set is only used for computing BERs and plotting; there is no feedback into the training!
cmap = matplotlib.cm.tab20 base = plt.cm.get_cmap(cmap) color_list = base.colors new_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))] # find minimum SER from validation set min_BER_iter = np.argmin(validation_BERs) plt.figure(figsize=(10,8)) font = {'size' : 14} plt.rc('font', **font) plt.rc('text', usetex=True) bin_labels = [np.binary_repr(j).zfill(m) for j in range(2**m)] plt.scatter(constellations[min_BER_iter][:,0], constellations[min_BER_iter][:,1], c=range(M), cmap='tab20',s=50) for i, txt in enumerate(bin_labels): plt.annotate(txt, xy=(constellations[min_BER_iter][i,0], constellations[min_BER_iter][i,1]), xycoords='data', \ xytext=(0, 3), textcoords='offset points', \ ha='center', va='bottom') plt.axis('scaled') plt.xlabel(r'$\Re\{r\}$',fontsize=16) plt.ylabel(r'$\Im\{r\}$',fontsize=16) plt.xlim((-1.7, +1.7)) plt.ylim((-1.7, +1.7)) plt.grid(which='both') plt.title('Constellation with Bit Mapping',fontsize=18) plt.savefig('learning_AWGN_BitAE_EbN0%1.1f_M%d.pdf' % (EbN0,M),bbox_inches='tight')
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
kit-cel/wt
gpl-2.0
Generate animation and save as a gif. (Evaluate results III)
%matplotlib notebook %matplotlib notebook # Generate animation from matplotlib import animation, rc from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs. font = {'size' : 18} plt.rc('font', **font) fig = plt.figure(figsize=(8,6)) ax1 = fig.add_subplot(1,1,1) ax1.axis('scaled') written = False def animate(i): ax1.clear() ax1.scatter(constellations[i][:,0], constellations[i][:,1], c=range(M), cmap='tab20',s=50) for j, txt in enumerate(bin_labels): ax1.annotate(txt, xy=(constellations[i][j,0], constellations[i][j,1]), xycoords='data', \ xytext=(0, 3), textcoords='offset points', \ ha='center', va='bottom', fontsize=12) ax1.set_xlim(( -1.7, +1.7)) ax1.set_ylim(( -1.7, +1.7)) ax1.set_title('Constellation', fontsize=18) ax1.set_xlabel(r'$\Re\{r\}$',fontsize=16) ax1.set_ylabel(r'$\Im\{r\}$',fontsize=16) anim = animation.FuncAnimation(fig, animate, frames=min_BER_iter+1, interval=200, blit=False) fig.show() anim.save('learning_AWGN_BitAE_EbN0%1.1f_M%d.gif' % (EbN0,M), writer=PillowWriter(fps=5))
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
kit-cel/wt
gpl-2.0
Step 3: Complete application code in application/main.py We can set up our server with Python using Flask. Below, we've already built out most of the application for you. The @app.route() decorator defines a function to handle web requests. Let's say our website is www.example.com. With how our @app.route("/") function is defined, our server will render our <a href="application/templates/index.html">index.html</a> file when users go to www.example.com/ (which is the default route for a website). So, when a user pings our server at www.example.com/predict, the request is handled by the function decorated with @app.route("/predict", methods=["POST"]), which makes a prediction. The data that gets sent over the internet isn't a dictionary, but a string like the one below: name1=value1&name2=value2 where name corresponds to the name on the input tag of our HTML form, and the value is what the user entered. Thankfully, Flask makes it easy to transform this string into a dictionary with request.form.to_dict(), but we still need to transform the data into the format our model expects. We've done this with the gender2str and plurality2str utility functions. Ok! Let's set up a web server to take in the form inputs, process them into features, and send these features to our model on Cloud AI Platform to generate predictions to serve back to users. Fill in the TODO comments in <a href="application/main.py">application/main.py</a>. Give it a go first and review the solutions folder if you get stuck. Note: App Engine test configurations have already been set for you in the file <a href="application/app.yaml">application/app.yaml</a>. Review the app.yaml documentation for additional configuration options. Step 4: Deploy application So how do we know that it works? We'll have to deploy our website and find out! Notebooks aren't made for website deployment, so we'll move our operation to the Google Cloud Shell. By default, the shell doesn't have Flask installed, so copy over the following command to install it: python3 -m pip install --user Flask==0.12.1 Next, we'll need to copy our web app to the Cloud Shell. We can use Google Cloud Storage as an intermediary.
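For reference, here is a rough sketch of what the completed /predict handler described in Step 3 might look like. This is not the lab's solution: the feature names and the get_prediction() call are placeholders, while gender2str and plurality2str are the utility functions mentioned above.

```python
# Sketch only -- not the actual application/main.py solution.
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Default route: render the HTML form.
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    # Flask parses "name1=value1&name2=value2" into a dict for us.
    data = request.form.to_dict()
    # Map raw form strings into the encoding the model expects
    # (feature names here are illustrative placeholders).
    features = {
        "is_male": gender2str(data["is_male"]),
        "plurality": plurality2str(data["plurality"]),
        "mother_age": float(data["mother_age"]),
        "gestation_weeks": float(data["gestation_weeks"]),
    }
    prediction = get_prediction(features)  # placeholder: call the model on Cloud AI Platform
    return jsonify({"predicted_weight": prediction})
```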
%%bash gsutil -m rm -r gs://$BUCKET/baby_app gsutil -m cp -r application/ gs://$BUCKET/baby_app
courses/machine_learning/deepdive2/structured/labs/6_serving_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Run the below cell, and copy the output into the Google Cloud Shell
%%bash echo rm -r baby_app/ echo mkdir baby_app/ echo gsutil cp -r gs://$BUCKET/baby_app ./ echo python3 baby_app/main.py
courses/machine_learning/deepdive2/structured/labs/6_serving_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Custom layers <table class="tfo-notebook-buttons" align="left"><td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table> We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.
import tensorflow as tf tfe = tf.contrib.eager tf.enable_eager_execution()
tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
manipopopo/tensorflow
apache-2.0
Layers: common sets of useful operations Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables. Many machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers, either from scratch or as the composition of existing layers. TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
# In the tf.keras.layers package, layers are objects. To construct a layer, # simply construct the object. Most layers take as a first argument the number # of output dimensions / channels. layer = tf.keras.layers.Dense(100) # The number of input dimensions is often unnecessary, as it can be inferred # the first time the layer is used, but it can be provided if you want to # specify it manually, which is useful in some complex models. layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
manipopopo/tensorflow
apache-2.0
The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
# To use a layer, simply call it. layer(tf.zeros([10, 5])) # Layers have many useful methods. For example, you can inspect all variables # in a layer by calling layer.variables. In this case a fully-connected layer # will have variables for weights and biases. layer.variables # The variables are also accessible through nice accessors layer.kernel, layer.bias
tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
manipopopo/tensorflow
apache-2.0
Implementing custom layers The best way to implement your own layer is to extend the tf.keras.layers.Layer class and implement: * __init__ , where you can do all input-independent initialization * build, where you know the shapes of the input tensors and can do the rest of the initialization * call, where you do the forward computation Note that you don't have to wait until build is called to create your variables; you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in __init__ means that the shapes required to create the variables need to be specified explicitly.
class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs def build(self, input_shape): self.kernel = self.add_variable("kernel", shape=[input_shape[-1].value, self.num_outputs]) def call(self, input): return tf.matmul(input, self.kernel) layer = MyDenseLayer(10) print(layer(tf.zeros([10, 5]))) print(layer.variables)
tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
manipopopo/tensorflow
apache-2.0
Overall, code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a GitHub issue or, even better, sending us a pull request! Models: composing layers Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a ResNet is a composition of convolutions, batch normalizations, and a shortcut. The main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model.
class ResnetIdentityBlock(tf.keras.Model): def __init__(self, kernel_size, filters): super(ResnetIdentityBlock, self).__init__(name='') filters1, filters2, filters3 = filters self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1)) self.bn2a = tf.keras.layers.BatchNormalization() self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same') self.bn2b = tf.keras.layers.BatchNormalization() self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1)) self.bn2c = tf.keras.layers.BatchNormalization() def call(self, input_tensor, training=False): x = self.conv2a(input_tensor) x = self.bn2a(x, training=training) x = tf.nn.relu(x) x = self.conv2b(x) x = self.bn2b(x, training=training) x = tf.nn.relu(x) x = self.conv2c(x) x = self.bn2c(x, training=training) x += input_tensor return tf.nn.relu(x) block = ResnetIdentityBlock(1, [1, 2, 3]) print(block(tf.zeros([1, 2, 3, 3]))) print([x.name for x in block.variables])
tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
manipopopo/tensorflow
apache-2.0
Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(2, 1, padding='same'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(3, (1, 1)), tf.keras.layers.BatchNormalization()]) my_seq(tf.zeros([1, 2, 3, 3]))
tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
manipopopo/tensorflow
apache-2.0
retrieve the NBAR and PQ for the spatiotemporal range of interest
#Define which pixel quality artefacts you want removed from the results mask_components = {'cloud_acca':'no_cloud', 'cloud_shadow_acca' :'no_cloud_shadow', 'cloud_shadow_fmask' : 'no_cloud_shadow', 'cloud_fmask' :'no_cloud', 'blue_saturated' : False, 'green_saturated' : False, 'red_saturated' : False, 'nir_saturated' : False, 'swir1_saturated' : False, 'swir2_saturated' : False, 'contiguous':True} #Retrieve the NBAR and PQ data for sensor n sensor_clean = {} for sensor in sensors: #Load the NBAR and corresponding PQ sensor_nbar = dc.load(product= sensor+'_nbar_albers', group_by='solar_day', measurements = bands_of_interest, **query) sensor_pq = dc.load(product= sensor+'_pq_albers', group_by='solar_day', **query) #grab the projection info before masking/sorting crs = sensor_nbar.crs crswkt = sensor_nbar.crs.wkt affine = sensor_nbar.affine #This line is to make sure there's PQ to go with the NBAR sensor_nbar = sensor_nbar.sel(time = sensor_pq.time) #Apply the PQ masks to the NBAR cloud_free = masking.make_mask(sensor_pq, **mask_components) good_data = cloud_free.pixelquality.loc[start_of_epoch:end_of_epoch] sensor_nbar = sensor_nbar.where(good_data) sensor_clean[sensor] = sensor_nbar #Conctanate measurements from the different sensors together nbar_clean = xr.concat(sensor_clean.values(), dim='time') time_sorted = nbar_clean.time.argsort() nbar_clean = nbar_clean.isel(time=time_sorted) nbar_clean.attrs['crs'] = crs nbar_clean.attrs['affine'] = affine #calculate the normalised difference vegetation index (NDVI) all_ndvi_sorted = ((nbar_clean.nir - nbar_clean.red)/(nbar_clean.nir + nbar_clean.red)) print('The number of time slices at this location is '+ str(nbar_clean.red.shape[0]))
notebooks/07_hovmoller_space_time_visualisation.ipynb
data-cube/agdc-v2-examples
apache-2.0
Plotting an image, select a location for extracting the Hovmoller plot. The interactive widget allows you to select a location (x, y coordinates); the plot will then show all of the time series that fall on the same x coordinate.
#select time slice of interest - this is trial and error until you get a decent image time_slice_i = 481 rgb = nbar_clean.isel(time =time_slice_i).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color') #rgb = nbar_clean.isel(time =time_slice).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color') fake_saturation = 4500 clipped_visible = rgb.where(rgb<fake_saturation).fillna(fake_saturation) max_val = clipped_visible.max(['y', 'x']) scaled = (clipped_visible / max_val) #Click on this image to chose the location for time series extraction w = widgets.HTML("Event information appears here when you click on the figure") def callback(event): global x, y x, y = int(event.xdata + 0.5), int(event.ydata + 0.5) w.value = 'X: {}, Y: {}'.format(x,y) fig = plt.figure(figsize =(12,6)) plt.imshow(scaled, interpolation = 'nearest', extent=[scaled.coords['x'].min(), scaled.coords['x'].max(), scaled.coords['y'].min(), scaled.coords['y'].max()]) fig.canvas.mpl_connect('button_press_event', callback) date_ = nbar_clean.time[time_slice_i] plt.title(date_.astype('datetime64[D]')) plt.show() display(w) #this converts the map x coordinate into image x coordinates image_coords = ~affine * (x, y) imagex = int(image_coords[0]) imagey = int(image_coords[1]) #This sets up the NDVI colour ramp and corresponding thresholds ndvi_cmap = mpl.colors.ListedColormap(['blue', '#ffcc66','#ffffcc' , '#ccff66' , '#2eb82e', '#009933' , '#006600']) ndvi_bounds = [-1, 0, 0.1, 0.25, 0.35, 0.5, 0.8, 1] ndvi_norm = mpl.colors.BoundaryNorm(ndvi_bounds, ndvi_cmap.N) #This cell shows the x transect that you've chosen in the context of an NDVI image with a suitable colour ramp fig = plt.figure(figsize=(11.69,4)) plt.plot([0, all_ndvi_sorted.shape[2]], [imagey,imagey], 'r') plt.imshow(all_ndvi_sorted.isel(time = time_slice_i), cmap = ndvi_cmap, norm = ndvi_norm) #Hovmoller plot for the x transect fig = plt.figure(figsize=(11.69,7)) all_ndvi_sorted.isel(#x=[xdim], y=[imagey] ).plot(norm= ndvi_norm, cmap = ndvi_cmap, yincrease = False)
notebooks/07_hovmoller_space_time_visualisation.ipynb
data-cube/agdc-v2-examples
apache-2.0
Fun with MyPETS Table of Contents Motivation Introduction Problem Statement Import packages History of Open Source Movement How to learn STEM (or MyPETS) References Contributors Appendix Motivation <a class="anchor" id="hid_why"></a> Current Choice <img src=http://www.cctechlimited.com/pics/office1.jpg> A New Option The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more. Useful for many tasks Programming Blogging Learning Research Documenting work Collaborating Communicating Publishing results or even Doing homework as a student
HTML("<img src=../images/office-suite.jpg>")
learn_stem/fun_with_mypets.ipynb
wgong/open_source_learning
apache-2.0
Introduction <a class="anchor" id="hid_intro"></a> Problem Statement <a class="anchor" id="hid_problem"></a> Import packages <a class="anchor" id="hid_pkg"></a>
# math function import math # create np array import numpy as np # pandas for data analysis import pandas as pd # plotting import matplotlib.pyplot as plt %matplotlib inline # symbolic math import sympy as sy # html5 from IPython.display import HTML, SVG, YouTubeVideo # widgets from collections import OrderedDict from IPython.display import display, clear_output from ipywidgets import Dropdown # csv file import csv
learn_stem/fun_with_mypets.ipynb
wgong/open_source_learning
apache-2.0
History of Open Source Movement <a class="anchor" id="hid_open_src"></a>
with open('../dataset/open_src_move_v2_1.csv') as csvfile: reader = csv.DictReader(csvfile) table_str = '<table>' table_row = """ <tr><td>{year}</td> <td><img src={picture}></td> <td><table> <tr><td>{person}</td></tr> <tr><td><a target=new href={subject_url}>{subject}</a></td></tr> <tr><td>{history}</td></tr> </table> </td> </tr> """ for row in reader: table_str = table_str + table_row.format(year=row['Year'], \ subject=row['Subject'],\ subject_url=row['SubjectURL'],\ person=row['Person'],\ picture=row['Picture'],\ history=row['History']) table_str = table_str + '</table>' HTML(table_str)
learn_stem/fun_with_mypets.ipynb
wgong/open_source_learning
apache-2.0
How to learn STEM <a class="anchor" id="hid_stem"></a>
HTML("Wen calls it -<br><br><br> <font color=red size=+4>M</font><font color=purple>y</font><font color=blue size=+3>P</font><font color=blue size=+4>E</font><font color=green size=+4>T</font><font color=magenta size=+3>S</font><br>")
learn_stem/fun_with_mypets.ipynb
wgong/open_source_learning
apache-2.0
The MUV dataset is a challenging benchmark in molecular design that consists of 17 different "targets" where there are only a few "active" compounds per target. The goal of working with this dataset is to make a machine learning model which achieves high accuracy on held-out compounds at predicting activity. To get started, let's download the MUV dataset for us to play with.
import os import deepchem as dc current_dir = os.path.dirname(os.path.realpath("__file__")) dataset_file = "medium_muv.csv.gz" full_dataset_file = "muv.csv.gz" # We use a small version of MUV to make online rendering of notebooks easy. Replace with full_dataset_file # In order to run the full version of this notebook dc.utils.download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/%s" % dataset_file, current_dir) dataset = dc.utils.save.load_from_disk(dataset_file) print("Columns of dataset: %s" % str(dataset.columns.values)) print("Number of examples in dataset: %s" % str(dataset.shape[0]))
examples/tutorials/05_Putting_Multitask_Learning_to_Work.ipynb
miaecle/deepchem
mit
Now, let's visualize some compounds from our dataset
from rdkit import Chem from rdkit.Chem import Draw from itertools import islice from IPython.display import Image, display, HTML def display_images(filenames): """Helper to pretty-print images.""" for filename in filenames: display(Image(filename)) def mols_to_pngs(mols, basename="test"): """Helper to write RDKit mols to png files.""" filenames = [] for i, mol in enumerate(mols): filename = "MUV_%s%d.png" % (basename, i) Draw.MolToFile(mol, filename) filenames.append(filename) return filenames num_to_display = 12 molecules = [] for _, data in islice(dataset.iterrows(), num_to_display): molecules.append(Chem.MolFromSmiles(data["smiles"])) display_images(mols_to_pngs(molecules))
examples/tutorials/05_Putting_Multitask_Learning_to_Work.ipynb
miaecle/deepchem
mit
There are 17 datasets total in MUV as we mentioned previously. We're going to train a multitask model that attempts to build a joint model to predict activity across all 17 datasets simultaneously. There's some evidence [2] that multitask training creates more robust models. As fair warning, from my experience, this effect can be quite fragile. Nonetheless, it's a tool worth trying given how easy DeepChem makes it to build these models. To get started towards building our actual model, let's first featurize our data.
MUV_tasks = ['MUV-692', 'MUV-689', 'MUV-846', 'MUV-859', 'MUV-644', 'MUV-548', 'MUV-852', 'MUV-600', 'MUV-810', 'MUV-712', 'MUV-737', 'MUV-858', 'MUV-713', 'MUV-733', 'MUV-652', 'MUV-466', 'MUV-832'] featurizer = dc.feat.CircularFingerprint(size=1024) loader = dc.data.CSVLoader( tasks=MUV_tasks, smiles_field="smiles", featurizer=featurizer) dataset = loader.featurize(dataset_file)
examples/tutorials/05_Putting_Multitask_Learning_to_Work.ipynb
miaecle/deepchem
mit
We'll now want to split our dataset into training, validation, and test sets. We're going to do a simple random split using dc.splits.RandomSplitter. It's worth noting that this will provide overestimates of real generalizability! For better real world estimates of prospective performance, you'll want to use a harder splitter.
splitter = dc.splits.RandomSplitter(dataset_file) train_dataset, valid_dataset, test_dataset = splitter.train_valid_test_split( dataset) #NOTE THE RENAMING: valid_dataset, test_dataset = test_dataset, valid_dataset
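For example, a scaffold-based split typically gives a harder, more realistic estimate, since test molecules are structurally dissimilar from the training molecules. A minimal sketch using DeepChem's ScaffoldSplitter:

```python
# Sketch: a harder split than RandomSplitter for estimating prospective performance.
scaffold_splitter = dc.splits.ScaffoldSplitter()
scaffold_train, scaffold_valid, scaffold_test = scaffold_splitter.train_valid_test_split(dataset)
```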
examples/tutorials/05_Putting_Multitask_Learning_to_Work.ipynb
miaecle/deepchem
mit
Let's now get started building some models! We'll do some simple hyperparameter searching to build a robust model.
import numpy as np import numpy.random params_dict = {"activation": ["relu"], "momentum": [.9], "batch_size": [50], "init": ["glorot_uniform"], "data_shape": [train_dataset.get_data_shape()], "learning_rate": [1e-3], "decay": [1e-6], "nb_epoch": [1], "nesterov": [False], "dropouts": [(.5,)], "nb_layers": [1], "batchnorm": [False], "layer_sizes": [(1000,)], "weight_init_stddevs": [(.1,)], "bias_init_consts": [(1.,)], "penalty": [0.], } n_features = train_dataset.get_data_shape()[0] def model_builder(model_params, model_dir): model = dc.models.MultitaskClassifier( len(MUV_tasks), n_features, **model_params) return model metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean) optimizer = dc.hyper.HyperparamOpt(model_builder) best_dnn, best_hyperparams, all_results = optimizer.hyperparam_search( params_dict, train_dataset, valid_dataset, [], metric)
examples/tutorials/05_Putting_Multitask_Learning_to_Work.ipynb
miaecle/deepchem
mit
Hodrick-Prescott Filter The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$ $$y_t = \tau_t + \zeta_t$$ The components are determined by minimizing the following quadratic loss function $$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp) gdp_decomp = dta[['realgdp']] gdp_decomp["cycle"] = gdp_cycle gdp_decomp["trend"] = gdp_trend fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16); legend = ax.get_legend() legend.prop.set_size(20);
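The call above uses the default smoothing parameter; in statsmodels this is $\lambda = 1600$, the conventional choice for quarterly data. Making it explicit (a sketch, equivalent to the call above):

```python
# lamb is the HP smoothing parameter; 1600 is the conventional value for quarterly
# series, and larger values produce a smoother trend component.
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp, lamb=1600)
```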
examples/notebooks/tsa_filters.ipynb
phobson/statsmodels
bsd-3-clause
Baxter-King approximate band-pass filter: Inflation and Unemployment Explore the hypothesis that inflation and unemployment are counter-cyclical. The Baxter-King filter is intended to explicitly deal with the periodicity of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at frequencies higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average $$y_{t}^{*}=\sum_{k=-K}^{k=K}a_ky_{t-k}$$ where $a_{-k}=a_k$ and $\sum_{k=-K}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2). For completeness, the filter weights are determined as follows $$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$ $$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$ $$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=\pm1,\pm2,\dots,\pm K$$ where $\theta$ is a normalizing constant such that the weights sum to zero: $$\theta=\frac{-\sum_{j=-K}^{K}B_{j}}{2K+1}$$ $$\omega_{1}=\frac{2\pi}{P_{H}}$$ $$\omega_{2}=\frac{2\pi}{P_{L}}$$ $P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles, which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.
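To make the weight formulas concrete, here is a small sketch that computes the $a_j$ directly for the default choices $K=12$, $P_L=6$, $P_H=32$; statsmodels' bkfilter performs this computation internally when we call it below.

```python
import numpy as np

K, P_L, P_H = 12, 6, 32
w1, w2 = 2 * np.pi / P_H, 2 * np.pi / P_L   # cut-off frequencies

j = np.arange(1, K + 1)
B = np.empty(K + 1)
B[0] = (w2 - w1) / np.pi
B[1:] = (np.sin(w2 * j) - np.sin(w1 * j)) / (np.pi * j)

# Normalizing constant theta makes the symmetric weights sum to zero; a_j = B_j + theta.
theta = -(B[0] + 2 * B[1:].sum()) / (2 * K + 1)
a = B + theta
print(a[:4], a[0] + 2 * a[1:].sum())  # second value is ~0 by construction
```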
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl","unemp"]])
examples/notebooks/tsa_filters.ipynb
phobson/statsmodels
bsd-3-clause
We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
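A quick way to see this (a sketch, using the default $K=12$ from the call above):

```python
K = 12  # bkfilter's default truncation lag, appropriate for quarterly data
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl", "unemp"]], low=6, high=32, K=K)
assert len(bk_cycles) == len(dta) - 2 * K  # K observations lost at each end
```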
fig = plt.figure(figsize=(12,10)) ax = fig.add_subplot(111) bk_cycles.plot(ax=ax, style=['r--', 'b-']);
examples/notebooks/tsa_filters.ipynb
phobson/statsmodels
bsd-3-clause
Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as a weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the calculation of the weights in $$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$ for $t=3,4,...,T-2$, where $$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$ $$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{U}},b=\frac{2\pi}{P_{L}}$$ $\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation. The CF filter is appropriate for series that may follow a random walk.
print(sm.tsa.stattools.adfuller(dta['unemp'])[:3]) print(sm.tsa.stattools.adfuller(dta['infl'])[:3]) cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl","unemp"]]) print(cf_cycles.head(10)) fig = plt.figure(figsize=(14,10)) ax = fig.add_subplot(111) cf_cycles.plot(ax=ax, style=['r--','b-']);
examples/notebooks/tsa_filters.ipynb
phobson/statsmodels
bsd-3-clause
Check your work
genos_new == [0, 2, 1, 1, 2]
tutorials/genotypes.ipynb
gastonstat/stat259
mit
Part 2: Sometimes there are errors and the genotype cannot be determined. Adapt your code from above to deal with this problem (in this example missing data is assigned NA for "Not Available").
genos_w_missing = ['AA', 'NA', 'GG', 'AG', 'AG', 'GG', 'NA'] genos_w_missing_new = [] # The missing data should not be converted to a number, but remain 'NA' in the new list
tutorials/genotypes.ipynb
gastonstat/stat259
mit
Check your work
genos_w_missing_new == [0, 'NA', 2, 1, 1, 2, 'NA']
tutorials/genotypes.ipynb
gastonstat/stat259
mit
Main Practice Setup: Open a terminal and run the following commands: ```bash # create a new directory mkdir python-intro cd python-intro # download data file, and ipython notebook curl -O https://raw.githubusercontent.com/gastonstat/stat259/gh-pages/tutorials/genos.txt curl -O https://raw.githubusercontent.com/gastonstat/stat259/gh-pages/tutorials/genotypes.ipynb ``` Data File: The raw data for this practice is in the file genos.txt which contains one column of genotypes (one genotype per row). Each genotype consists of two characters: e.g. 'AA' or 'GG'. In addition, there are some rows that contain missing values denoted as 'NA'. I. Read in the data and store the contents in a list called genos. II. Find out what the different (i.e. unique) values in genos are. III. Calculate the number of occurrences of each genotype, and store the results in a dictionary called geno_counts. Use the following 3 approaches: 1. Use a for loop to count the genotypes (store the result in a dictionary) 2. Get the same counts but this time using the count() method 3. Another alternative is to use Counter from collections IV. Once you've counted the genotypes, make a function get_proportions() that takes geno_counts and returns a dictionary with relative frequencies (i.e. proportions) of genotypes. Also, test your function with the provided assertion. V. Convert the string values in genos into integers ('NA' remains as 'NA') and put them in a new list called numeric_genos: - 'AA' = 0 - 'AG' = 1 - 'GG' = 2 - 'NA' = 'NA' VI. Write the data in numeric_genos to a text file called genos_int.txt VII. Finally, convert your notebook to html (and open it) by running these commands from the shell: ipython nbconvert genotypes.ipynb open genotypes.html
# things to be imported from __future__ import division # if you use python 2.? from collections import Counter
tutorials/genotypes.ipynb
gastonstat/stat259
mit
I. Reading a text file Some refs about Reading Files: File Operations: https://github.com/dlab-berkeley/python-fundamentals/blob/master/cheat-sheets/12-Files.ipynb Reading Text Files: http://www.jarrodmillman.com/rcsds/lectures/reading_text_files.html
# open 'genos.txt' and store values in "genos" # YOUR CODE
tutorials/genotypes.ipynb
gastonstat/stat259
mit
II. Unique Genotypes
# Find the unique values in genos # YOUR CODE
tutorials/genotypes.ipynb
gastonstat/stat259
mit
III. Counting Genotypes a) Using a for loop
# For loop to count occurrences of AA, AG, GG, NA # (store results in dictionary "geno_counts") # YOUR CODE
tutorials/genotypes.ipynb
gastonstat/stat259
mit
b) Using count method
# YOUR CODE
tutorials/genotypes.ipynb
gastonstat/stat259
mit
c) Using Counter from collections
# YOUR CODE
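One possible solution sketch (Counter was imported in the setup cell above):

```python
# Possible solution sketch: Counter does the tallying in one call
geno_counts = Counter(genos)
print(geno_counts)
```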
tutorials/genotypes.ipynb
gastonstat/stat259
mit
IV. Function to Calculate Proportions
# Write a function "get_proportions()"
# Parameters: geno_counts (dictionary)
# Returns: dictionary of proportions
# YOUR CODE

# apply your function:
# get_proportions(geno_counts)


# test for function get_proportions()
def test_get_proportions():
    # We make a fake dictionary
    input_val = {'AA': 2, 'AB': 4, 'BB': 14}
    expected_result = {'AA': 0.1, 'AB': 0.2, 'BB': 0.7}
    # run function
    res = get_proportions(input_val)
    assert res == expected_result

# run the test and see what happens:
# test_get_proportions()
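One possible solution sketch that satisfies the provided test:

```python
# Possible solution sketch: divide each count by the total number of genotypes
def get_proportions(geno_counts):
    total = sum(geno_counts.values())
    return {g: count / total for g, count in geno_counts.items()}

get_proportions(geno_counts)
test_get_proportions()
```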
tutorials/genotypes.ipynb
gastonstat/stat259
mit
V. Converting to numeric genotypes
# convert genotypes: AA = 0, AG = 1, GG = 2, NA = NA
# (create a list called "numeric_genos")
# YOUR CODE
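One possible solution sketch, reusing the lookup-dictionary idea from the warm-up:

```python
# Possible solution sketch: AA -> 0, AG -> 1, GG -> 2, 'NA' stays 'NA'
code = {'AA': 0, 'AG': 1, 'GG': 2}
numeric_genos = [code.get(g, 'NA') for g in genos]
print(numeric_genos[:10])
```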
tutorials/genotypes.ipynb
gastonstat/stat259
mit
VI. Write Numeric Genotypes to a text file
# write values in "numeric_genos" to a file "genos_int.txt"
# YOUR CODE
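One possible solution sketch, writing one value per line:

```python
# Possible solution sketch: one numeric genotype per line
with open('genos_int.txt', 'w') as f:
    for value in numeric_genos:
        f.write('%s\n' % value)
```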
tutorials/genotypes.ipynb
gastonstat/stat259
mit
Now that we have our design, let's generate some synthetic data. We will generate AR1 noise to add to the data; this is not a perfect model of the autocorrelation in fMRI, but it's at least a start towards realistic noise.
from statsmodels.tsa.arima_process import arma_generate_sample

# AR(1) noise plus a scaled copy of the task regressor
ar1_noise=arma_generate_sample([1,0.3],[1,0.],len(regressor))
beta=4
y=regressor.T*beta + ar1_noise
print(y.shape)
plt.plot(y.T)
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Now let's fit the general linear model to these data. We will ignore serial autocorrelation for now.
X=numpy.vstack((regressor.T,numpy.ones(y.shape))).T
plt.imshow(X,interpolation='nearest',cmap='gray')
plt.axis('auto')

# ordinary least squares estimate and fitted values
beta_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(y.T)
y_est=X.dot(beta_hat)
plt.plot(y.T,color='blue')
plt.plot(y_est,color='red',linewidth=2)
print(X.shape)
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Now let's make a function to compute the efficiency of a design, which we will use repeatedly as we generate data and fit the model.
def efficiency_older(X,c=None):
    # efficiency for a specific contrast c (defaults to a vector of ones)
    if c is None:
        c=numpy.ones(X.shape[1])
    else:
        c=numpy.array(c)
    return 1./c.dot(numpy.linalg.inv(X.T.dot(X))).dot(c)

def efficiency(X,c=None):
    """Efficiency over the whole design, removing the intercept (last column).

    The contrast c is accepted for compatibility but is not used here.
    """
    return 1./numpy.trace(numpy.linalg.inv(X[:,:-1].T.dot(X[:,:-1])))
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Now let's write a simulation that creates datasets with varying levels of blockiness, runs the previous function 100 times for each level, and plots mean efficiency. Note that we don't actually need multiple runs for blockiness=1, since that design is exactly the same each time.
nruns=100
blockiness_vals=numpy.arange(0,1.1,0.1)
meaneff_blockiness=numpy.zeros(len(blockiness_vals))

for b in range(len(blockiness_vals)):
    eff=numpy.zeros(nruns)
    for i in range(nruns):
        d_sim,design_sim=create_design_singlecondition(blockiness=blockiness_vals[b])
        regressor_sim,_=compute_regressor(design_sim,'spm',numpy.arange(0,len(d_sim)))
        X=numpy.vstack((regressor_sim.T,numpy.ones(y.shape))).T
        eff[i]=efficiency(X,c=[1,0])
    meaneff_blockiness[b]=numpy.mean(eff)

plt.plot(blockiness_vals,meaneff_blockiness)
plt.xlabel('blockiness')
plt.ylabel('efficiency')
X.shape
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Now let's do a similar simulation looking at the effects of varying block length from 10 to 119 seconds (in steps of 1 second). Since blockiness is 1.0 here, we only need one run per block length.
blocklenvals=numpy.arange(10,120,1)
meaneff_blocklen=numpy.zeros(len(blocklenvals))
sims=[]

for b in range(len(blocklenvals)):
    d_sim,design_sim=create_design_singlecondition(blocklength=blocklenvals[b],blockiness=1.)
    regressor_sim,_=compute_regressor(design_sim,'spm',numpy.arange(0,len(d_sim)))
    X=numpy.vstack((regressor_sim.T,numpy.ones(y.shape))).T
    sims.append(numpy.mean(regressor_sim))
    meaneff_blocklen[b]=efficiency(X,c=[1,0])

plt.plot(blocklenvals,meaneff_blocklen)
plt.xlabel('block length')
plt.ylabel('efficiency')
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Now let's look at the effects of correlation between regressors. We first need to create a function to generate a design with two conditions where we can control the correlation between them.
from mkdesign import create_design_twocondition

d,des1,des2=create_design_twocondition(correlation=1.0)
regressor1,_=compute_regressor(des1,'spm',numpy.arange(0,d.shape[0]))
regressor2,_=compute_regressor(des2,'spm',numpy.arange(0,d.shape[0]))
X=numpy.vstack((regressor1.T,regressor2.T,numpy.ones(y.shape))).T

nruns=100
corr_vals_intended=numpy.arange(-1,1.1,0.1)
corr_vals=numpy.zeros(len(corr_vals_intended))
meaneff_corr=numpy.zeros(len(corr_vals))
sumx=numpy.zeros(len(corr_vals))

for b in range(len(corr_vals_intended)):
    eff=numpy.zeros(nruns)
    corrs=numpy.zeros(nruns)
    for i in range(nruns):
        d_sim,des1_sim,des2_sim=create_design_twocondition(correlation=corr_vals_intended[b])
        regressor1_sim,_=compute_regressor(des1_sim,'spm',numpy.arange(0,d_sim.shape[0]))
        regressor2_sim,_=compute_regressor(des2_sim,'spm',numpy.arange(0,d_sim.shape[0]))
        X=numpy.vstack((regressor1_sim.T,regressor2_sim.T,numpy.ones(y.shape))).T
        # use contrast of first regressor
        eff[i]=efficiency(X,c=[1,0,0])
        corrs[i]=numpy.corrcoef(X.T)[0,1]
    corr_vals[b]=numpy.mean(corrs)
    sumx[b]=numpy.sum(X[:,0])
    meaneff_corr[b]=numpy.mean(eff)

plt.plot(corr_vals,meaneff_corr)
plt.xlabel('mean correlation between regressors')
plt.ylabel('efficiency')
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Now let's look at efficiency of estimation of the shape of the HRF, rather than detection of the activation effect. This requires that we use a finite impulse response (FIR) model.
d,design=create_design_singlecondition(blockiness=0.0)
regressor,_=compute_regressor(design,'fir',numpy.arange(0,len(d)),fir_delays=numpy.arange(0,16))
plt.imshow(regressor[:50,:],interpolation='nearest',cmap='gray')
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Now let's simulate the FIR model, and estimate the variance of the fits.
nruns=100
blockiness_vals=numpy.arange(0,1.1,0.1)
meaneff_fit_blockiness=numpy.zeros(len(blockiness_vals))
meancorr=[]

for b in range(len(blockiness_vals)):
    eff=numpy.zeros(nruns)
    cc=numpy.zeros(nruns)
    for i in range(nruns):
        d_sim,design_sim=create_design_singlecondition(blockiness=blockiness_vals[b])
        regressor_sim,_=compute_regressor(design_sim,'fir',
                                          numpy.arange(0,len(d_sim)),fir_delays=numpy.arange(0,16))
        X=numpy.vstack((regressor_sim.T,numpy.ones(regressor_sim.shape[0]))).T
        eff[i]=efficiency(X)
        cc[i]=numpy.corrcoef(X.T)[0,1]
    meaneff_fit_blockiness[b]=numpy.mean(eff)
    meancorr.append(numpy.mean(cc))

plt.plot(blockiness_vals,meaneff_fit_blockiness)
plt.xlabel('blockiness')
plt.ylabel('efficiency')
plt.plot(blockiness_vals,meancorr)

__Exercise:__ write a function to generate random designs, and then do this a large number of times, each time estimating the efficiency. Then plot the histogram of efficiencies.
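A possible starting point for the exercise (a sketch; it assumes, as in the simulations above, that create_design_singlecondition randomizes event onsets when blockiness is 0):

```python
# Sketch for the exercise: efficiency distribution over random event-related designs
effs = numpy.zeros(500)
for i in range(len(effs)):
    d_r, design_r = create_design_singlecondition(blockiness=0.0)
    regressor_r, _ = compute_regressor(design_r, 'spm', numpy.arange(0, len(d_r)))
    X_r = numpy.vstack((regressor_r.T, numpy.ones(regressor_r.shape[0]))).T
    effs[i] = efficiency(X_r, c=[1, 0])

plt.hist(effs, bins=30)
plt.xlabel('efficiency')
plt.ylabel('count')
```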
analysis/efficiency/DesignEfficiency.ipynb
poldrack/fmri-analysis-vm
mit
Then connect to ChemSpider by creating a ChemSpider instance using your security token:
# Tip: Store your security token as an environment variable to reduce the chance of accidentally sharing it
import os
mytoken = os.environ['CHEMSPIDER_SECURITY_TOKEN']
cs = ChemSpider(security_token=mytoken)
examples/Getting Started.ipynb
mcs07/ChemSpiPy
mit
All your interaction with the ChemSpider database should now happen through this ChemSpider object, cs. Retrieve a Compound Retrieving information about a specific Compound in the ChemSpider database is simple. Let’s get the Compound with ChemSpider ID 2157:
comp = cs.get_compound(2157)
comp
examples/Getting Started.ipynb
mcs07/ChemSpiPy
mit
Now we have a Compound object called comp. We can get various identifiers and calculated properties from this object:
print(comp.molecular_formula)
print(comp.molecular_weight)
print(comp.smiles)
print(comp.common_name)
examples/Getting Started.ipynb
mcs07/ChemSpiPy
mit
Search for a name What if you don’t know the ChemSpider ID of the Compound you want? Instead use the search method:
for result in cs.search('glucose'):
    print(result)
examples/Getting Started.ipynb
mcs07/ChemSpiPy
mit
numpy has a handy polyfit function that lets us construct an nth-degree polynomial model of our data that minimizes squared error. Let's try it with a 4th-degree polynomial:
x = np.array(pageSpeeds)
y = np.array(purchaseAmount)

p4 = np.poly1d(np.polyfit(x, y, 4))
MachineLearning/DataScience-Python3/PolynomialRegression.ipynb
martinggww/lucasenlights
cc0-1.0
We'll visualize our original scatter plot, together with a plot of our predicted values using the polynomial for page speed times ranging from 0-7 seconds:
import matplotlib.pyplot as plt

xp = np.linspace(0, 7, 100)
plt.scatter(x, y)
plt.plot(xp, p4(xp), c='r')
plt.show()
MachineLearning/DataScience-Python3/PolynomialRegression.ipynb
martinggww/lucasenlights
cc0-1.0
Looks pretty good! Let's measure the r-squared error:
from sklearn.metrics import r2_score

r2 = r2_score(y, p4(x))
print(r2)
MachineLearning/DataScience-Python3/PolynomialRegression.ipynb
martinggww/lucasenlights
cc0-1.0
Using the two contig names you sent me, it's simplest to do this:
desired_contigs = ['Contig' + str(x) for x in [1131, 3182, 39106, 110, 5958]]
desired_contigs
scanfasta.ipynb
plumbwj01/Barcoding-Fraxinus
apache-2.0
If you have a genuinely big file then I would do the following:
grab = [c for c in contigs if c.name in desired_contigs]
len(grab)
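If the assembly is too large to hold in memory at all, a streaming pass is another option. This is only a sketch: it assumes Biopython is installed, and the input path data2/sequences.fa is a placeholder for wherever the assembly FASTA actually lives.

```python
from Bio import SeqIO

wanted = set(desired_contigs)
# Stream the FASTA once, keeping only the wanted records
records = (rec for rec in SeqIO.parse('data2/sequences.fa', 'fasta') if rec.id in wanted)
n_written = SeqIO.write(records, 'data2/sequences_desired.fa', 'fasta')
print('kept %d contigs' % n_written)
```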
scanfasta.ipynb
plumbwj01/Barcoding-Fraxinus
apache-2.0
Ya! There are two contigs.
import os
print(os.getcwd())

write_contigs_to_file('data2/sequences_desired.fa', grab)

[c.name for c in grab[:100]]

os.path.realpath('')
scanfasta.ipynb
plumbwj01/Barcoding-Fraxinus
apache-2.0
Informal Methods The most straightforward approach for assessing convergence is based on simply plotting and inspecting traces and histograms of the observed MCMC sample. If the trace of values for each of the stochastics exhibits asymptotic behavior over the last $m$ iterations, this may be satisfactory evidence for convergence.
with bioassay_model:
    bioassay_trace = sample(10000)

traceplot(bioassay_trace[9000:], varnames=['beta'])
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
A similar approach involves plotting a histogram for every set of $k$ iterations (perhaps 50-100) beyond some burn in threshold $n$; if the histograms are not visibly different among the sample intervals, this may be considered some evidence for convergence. Note that such diagnostics should be carried out for each stochastic estimated by the MCMC algorithm, because convergent behavior by one variable does not imply evidence for convergence for other variables in the analysis.
import matplotlib.pyplot as plt

beta_trace = bioassay_trace['beta']

fig, axes = plt.subplots(2, 5, figsize=(14,6))
axes = axes.ravel()
for i in range(10):
    axes[i].hist(beta_trace[500*i:500*(i+1)])
plt.tight_layout()
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
An extension of this approach can be taken when multiple parallel chains are run, rather than just a single, long chain. In this case, the final values of $c$ chains run for $n$ iterations are plotted in a histogram; just as above, this is repeated every $k$ iterations thereafter, and the histograms of the endpoints are plotted again and compared to the previous histogram. This is repeated until consecutive histograms are indistinguishable. Another ad hoc method for detecting lack of convergence is to examine the traces of several MCMC chains initialized with different starting values. Overlaying these traces on the same set of axes should (if convergence has occurred) show each chain tending toward the same equilibrium value, with approximately the same variance. Recall that the tendency for some Markov chains to converge to the true (unknown) value from diverse initial values is called ergodicity. This property is guaranteed by the reversible chains constructed using MCMC, and should be observable using this technique. Again, however, this approach is only a heuristic method, and cannot always detect lack of convergence, even though chains may appear ergodic.
with bioassay_model:
    bioassay_trace = sample(1000, njobs=2, start=[{'alpha':0.5}, {'alpha':5}])

bioassay_trace.get_values('alpha', chains=0)[0]

plt.plot(bioassay_trace.get_values('alpha', chains=0)[:200], 'r--')
plt.plot(bioassay_trace.get_values('alpha', chains=1)[:200], 'k--')
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
A principal reason that evidence from informal techniques cannot guarantee convergence is a phenomenon called metastability. Chains may appear to have converged to the true equilibrium value, displaying excellent qualities by any of the methods described above. However, after some period of stability around this value, the chain may suddenly move to another region of the parameter space. This period of metastability can sometimes be very long, and therefore escape detection by these convergence diagnostics. Unfortunately, there is no statistical technique available for detecting metastability.

Formal Methods

Along with the ad hoc techniques described above, a number of more formal methods exist which are prevalent in the literature. These are considered more formal because they are based on existing statistical methods, such as time series analysis.

PyMC currently includes three formal convergence diagnostic methods. The first, proposed by Geweke (1992), is a time-series approach that compares the mean and variance of segments from the beginning and end of a single chain.

$$z = \frac{\bar{\theta}_a - \bar{\theta}_b}{\sqrt{S_a(0) + S_b(0)}}$$

where $a$ is the early interval and $b$ the late interval, and $S_i(0)$ is the spectral density estimate at zero frequency for chain segment $i$. If the z-scores (theoretically distributed as standard normal variates) of these two segments are similar, it can provide evidence for convergence. PyMC calculates z-scores of the difference between various initial segments along the chain, and the last 50% of the remaining chain. If the chain has converged, the majority of points should fall within 2 standard deviations of zero.

In PyMC, diagnostic z-scores can be obtained by calling the geweke function. It accepts either (1) a single trace, (2) a Node or Stochastic object, or (3) an entire Model object:
from pymc3 import geweke

with bioassay_model:
    tr = sample(2000)
    z = geweke(tr, intervals=15)

plt.scatter(*z['alpha'].T)
plt.hlines([-1,1], 0, 1000, linestyles='dotted')
plt.xlim(0, 1000)
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
The arguments expected are the following:

- x : The trace of a variable.
- first : The fraction of series at the beginning of the trace.
- last : The fraction of series at the end to be compared with the section at the beginning.
- intervals : The number of segments.

Plotting the output displays the scores in series, making it easy to see departures from the standard normal assumption.

A second convergence diagnostic provided by PyMC is the Gelman-Rubin statistic (Gelman and Rubin 1992). This diagnostic uses multiple chains to check for lack of convergence, and is based on the notion that if multiple chains have converged, by definition they should appear very similar to one another; if not, one or more of the chains has failed to converge.

The Gelman-Rubin diagnostic uses an analysis of variance approach to assessing convergence. That is, it calculates both the between-chain variance (B) and within-chain variance (W), and assesses whether they are different enough to worry about convergence. Assuming $m$ chains, each of length $n$, quantities are calculated by:

$$\begin{align}B &= \frac{n}{m-1} \sum_{j=1}^m (\bar{\theta}_{.j} - \bar{\theta}_{..})^2 \\ W &= \frac{1}{m} \sum_{j=1}^m \left[ \frac{1}{n-1} \sum_{i=1}^n (\theta_{ij} - \bar{\theta}_{.j})^2 \right] \end{align}$$

for each scalar estimand $\theta$. Using these values, an estimate of the marginal posterior variance of $\theta$ can be calculated:

$$\hat{\text{Var}}(\theta | y) = \frac{n-1}{n} W + \frac{1}{n} B$$

Assuming $\theta$ was initialized to arbitrary starting points in each chain, this quantity will overestimate the true marginal posterior variance. At the same time, $W$ will tend to underestimate the within-chain variance early in the sampling run. However, in the limit as $n \rightarrow \infty$, both quantities will converge to the true variance of $\theta$. In light of this, the Gelman-Rubin statistic monitors convergence using the ratio:

$$\hat{R} = \sqrt{\frac{\hat{\text{Var}}(\theta | y)}{W}}$$

This is called the potential scale reduction, since it is an estimate of the potential reduction in the scale of $\theta$ as the number of simulations tends to infinity. In practice, we look for values of $\hat{R}$ close to one (say, less than 1.1) to be confident that a particular estimand has converged.

In PyMC, the function gelman_rubin will calculate $\hat{R}$ for each stochastic node in the passed model:
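To make the formulas concrete, here is a small NumPy-only sketch of $\hat{R}$ for equal-length chains (an illustration of the equations above, not the PyMC implementation):

```python
import numpy as np

def rhat(chains):
    # chains: array of shape (m, n) -- m chains, each of length n
    chains = np.asarray(chains)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n / (m - 1.) * np.sum((chain_means - chain_means.mean()) ** 2)
    W = np.mean(chains.var(axis=1, ddof=1))
    var_hat = (n - 1.) / n * W + B / n
    return np.sqrt(var_hat / W)

# two well-mixed fake chains should give a value close to 1
rng = np.random.RandomState(0)
print(rhat(rng.normal(size=(2, 1000))))
```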
from pymc3 import gelman_rubin

gelman_rubin(bioassay_trace)
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
For the best results, each chain should be initialized to highly dispersed starting values for each stochastic node. By default, when calling the forestplot function using nodes with multiple chains, the $\hat{R}$ values will be plotted alongside the posterior intervals.
from pymc3 import forestplot

forestplot(bioassay_trace)
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
Autocorrelation
from pymc3 import autocorrplot

autocorrplot(tr);

bioassay_trace['alpha'].shape

from pymc3 import effective_n

effective_n(bioassay_trace)
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
Goodness of Fit Checking for model convergence is only the first step in the evaluation of MCMC model outputs. It is possible for an entirely unsuitable model to converge, so additional steps are needed to ensure that the estimated model adequately fits the data. One intuitive way of evaluating model fit is to compare model predictions with the observations used to fit the model. In other words, the fitted model can be used to simulate data, and the distribution of the simulated data should resemble the distribution of the actual data. Fortunately, simulating data from the model is a natural component of the Bayesian modelling framework. Recall, from the discussion on imputation of missing data, the posterior predictive distribution: $$p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta$$ Here, $\tilde{y}$ represents some hypothetical new data that would be expected, taking into account the posterior uncertainty in the model parameters. Sampling from the posterior predictive distribution is easy in PyMC. The code looks identical to the corresponding data stochastic, with two modifications: (1) the node should be specified as deterministic and (2) the statistical likelihoods should be replaced by random number generators. Consider the gelman_bioassay example, where deaths are modeled as a binomial random variable for which the probability of death is a logit-linear function of the dose of a particular drug.
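To see what this integral means operationally, here is a minimal NumPy-only sketch for the bioassay setup below; the posterior draws are fabricated for illustration (in practice they would come from the fitted trace, and PyMC can generate these samples for you):

```python
import numpy as np

rng = np.random.RandomState(42)
# Hypothetical posterior draws of the two parameters (stand-ins for trace values)
alpha_samps = rng.normal(0.8, 0.1, size=1000)
beta_samps = rng.normal(7.7, 0.5, size=1000)

dose = np.array([-.86, -.3, -.05, .73])
n = np.array([5, 5, 5, 5])

def invlogit(x):
    return 1. / (1. + np.exp(-x))

# For each posterior draw, simulate a new dataset from the likelihood
theta = invlogit(alpha_samps[:, None] + beta_samps[:, None] * dose)
deaths_rep = rng.binomial(n, theta)   # shape (1000, 4): posterior predictive draws
print(deaths_rep.mean(axis=0))        # compare with the observed [0, 1, 3, 5]
```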
from pymc3 import Normal, Binomial, Deterministic, invlogit

# Samples for each dose level
n = 5 * np.ones(4, dtype=int)
# Log-dose
dose = np.array([-.86, -.3, -.05, .73])

with Model() as model:

    # Logit-linear model parameters
    alpha = Normal('alpha', 0, 0.01)
    beta = Normal('beta', 0, 0.01)

    # Calculate probabilities of death
    theta = Deterministic('theta', invlogit(alpha + beta * dose))

    # Data likelihood
    deaths = Binomial('deaths', n=n, p=theta, observed=[0, 1, 3, 5])
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
The posterior predictive distribution of deaths uses the same functional form as the data likelihood, in this case a binomial stochastic. Here is the corresponding sample from the posterior predictive distribution:
with model:
    deaths_sim = Binomial('deaths_sim', n=n, p=theta, shape=4)
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
Notice that the observed stochastic Binomial has been replaced with a stochastic node that is identical in every respect to deaths, except that its values are not fixed to be the observed data -- they are left to vary according to the values of the fitted parameters. The degree to which simulated data correspond to observations can be evaluated in at least two ways. First, these quantities can simply be compared visually. This allows for a qualitative comparison of model-based replicates and observations. If there is poor fit, the true value of the data may appear in the tails of the histogram of replicated data, while a good fit will tend to show the true data in high-probability regions of the posterior predictive distribution. The Matplot package in PyMC provides an easy way of producing such plots, via the gof_plot function.
with model:
    gof_trace = sample(2000)

from pymc3 import forestplot

forestplot(gof_trace, varnames=['deaths_sim'])
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
Exercise: Meta-analysis of beta blocker effectiveness

Carlin (1992) considers a Bayesian approach to meta-analysis, and includes the following example of 22 trials of beta-blockers to prevent mortality after myocardial infarction.

In a random effects meta-analysis we assume the true effect (on a log-odds scale) $d_i$ in a trial $i$ is drawn from some population distribution. Let $r^C_i$ denote the number of events in the control group in trial $i$, and $r^T_i$ denote events under active treatment in trial $i$. Our model is:

$$\begin{aligned} r^C_i &\sim \text{Binomial}\left(p^C_i, n^C_i\right) \\ r^T_i &\sim \text{Binomial}\left(p^T_i, n^T_i\right) \\ \text{logit}\left(p^C_i\right) &= \mu_i \\ \text{logit}\left(p^T_i\right) &= \mu_i + \delta_i \\ \delta_i &\sim \text{Normal}(d, t) \\ \mu_i &\sim \text{Normal}(m, s) \end{aligned}$$

We want to make inferences about the population effect $d$, and the predictive distribution for the effect $\delta_{\text{new}}$ in a new trial. Build a model to estimate these quantities in PyMC, and (1) use convergence diagnostics to check for convergence and (2) use posterior predictive checks to assess goodness-of-fit.

Here are the data:
r_t_obs = [3, 7, 5, 102, 28, 4, 98, 60, 25, 138, 64, 45, 9, 57, 25, 33, 28, 8, 6, 32, 27, 22]
n_t_obs = [38, 114, 69, 1533, 355, 59, 945, 632, 278, 1916, 873, 263, 291, 858, 154, 207, 251, 151, 174, 209, 391, 680]
r_c_obs = [3, 14, 11, 127, 27, 6, 152, 48, 37, 188, 52, 47, 16, 45, 31, 38, 12, 6, 3, 40, 43, 39]
n_c_obs = [39, 116, 93, 1520, 365, 52, 939, 471, 282, 1921, 583, 266, 293, 883, 147, 213, 122, 154, 134, 218, 364, 674]
N = len(n_c_obs)

# Write your answer here
notebooks/6. Model Checking.ipynb
fonnesbeck/PyMC3_Oslo
cc0-1.0
Class 9: The Solow growth model

The Solow growth model is at the core of modern theories of growth and business cycles. The Solow model is a model of exogenous growth: long-run growth arises in the model as a consequence of exogenous growth in the labor supply and total factor productivity. The Solow model, like many other macroeconomic models, is a time series model.

The Solow model without exogenous growth

For the moment, let's disregard population and total factor productivity growth and assume that equilibrium in a closed economy is described by the following four equations:

\begin{align} Y_t & = A K_t^{\alpha} \tag{1}\\ C_t & = (1-s)Y_t \tag{2}\\ Y_t & = C_t + I_t \tag{3}\\ K_{t+1} & = I_t + ( 1- \delta)K_t \tag{4} \end{align}

Equation (1) is the production function. Equation (2) is the consumption function, where $s$ denotes the exogenously given saving rate. Equation (3) is the aggregate market clearing condition. Finally, Equation (4) is the capital evolution equation, specifying that capital in year $t+1$ is the sum of newly created capital $I_t$ and the capital stock from year $t$ that has not depreciated, $(1-\delta)K_t$.

Combine Equations (1) through (4) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a single-variable recurrence relation for $K_{t+1}$:

\begin{align} K_{t+1} & = sAK_t^{\alpha} + ( 1- \delta)K_t \tag{5} \end{align}

Given an initial value for capital $K_0 >0$, iterate on Equation (5) to compute the value of the capital stock at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (1) through (3).

Simulation

Simulate the Solow growth model for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:

\begin{align} A & = 10\\ \alpha & = 0.35\\ s & = 0.15\\ \delta & = 0.1 \end{align}

Furthermore, suppose that the initial value of capital is:

\begin{align} K_0 & = 20 \end{align}
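As a reference point for the skeleton below, here is one compact way to iterate Equation (5) with the parameter values above (a sketch, not the only possible implementation):

```python
import numpy as np

# Parameters and initial condition from the text
A, alpha, s, delta = 10, 0.35, 0.15, 0.1
T, K0 = 100, 20

# Iterate Equation (5): K_{t+1} = s*A*K_t**alpha + (1 - delta)*K_t
capital = np.zeros(T + 1)
capital[0] = K0
for t in range(T):
    capital[t + 1] = s * A * capital[t]**alpha + (1 - delta) * capital[t]

# Output, consumption, and investment follow from Equations (1)-(3)
output = A * capital**alpha
consumption = (1 - s) * output
investment = s * output

print(capital[0], capital[-1])
```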
# Initialize parameters for the simulation (A, s, T, delta, alpha, K0)

# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0

# Compute all capital values by iterating over t from 0 through T

# Print the value of capital at dates 0 and T

# Store the simulated capital data in a pandas DataFrame called data

# Print the first five rows of the DataFrame

# Create columns in the DataFrame to store computed values of the other endogenous variables

# Print the first row of the DataFrame

# Print the last row of the DataFrame

# Create a 2x2 grid of plots of capital, output, consumption, and investment
fig = plt.figure(figsize=(12,8))

ax = fig.add_subplot(2,2,1)
ax.plot(data['capital'],lw=3)
ax.grid()
ax.set_title('Capital')
winter2017/econ129/python/Econ129_Class_09.ipynb
letsgoexploring/teaching
mit
The Solow model with exogenous population growth

Now, let's suppose that production is a function of the supply of labor $L_t$:

\begin{align} Y_t & = AK_t^{\alpha} L_t^{1-\alpha}\tag{6} \end{align}

The supply of labor grows at an exogenously determined rate $n$, and so its value is determined recursively by a first-order difference equation:

\begin{align} L_{t+1} & = (1+n) L_t \tag{7} \end{align}

The rest of the economy is characterized by the same equations as before:

\begin{align} C_t & = (1-s)Y_t \tag{8}\\ Y_t & = C_t + I_t \tag{9}\\ K_{t+1} & = I_t + ( 1- \delta)K_t \tag{10} \end{align}

Combine Equations (6), (8), (9), and (10) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a recurrence relation specifying $K_{t+1}$ as a function of $K_t$ and $L_t$:

\begin{align} K_{t+1} & = sAK_t^{\alpha}L_t^{1-\alpha} + ( 1- \delta)K_t \tag{11} \end{align}

Given initial values for capital and labor, Equations (7) and (11) can be iterated on to compute the values of the capital stock and labor supply at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (6), (8), (9), and (10).

Simulation

Simulate the Solow growth model with exogenous labor growth for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:

\begin{align} A & = 10\\ \alpha & = 0.35\\ s & = 0.15\\ \delta & = 0.1\\ n & = 0.01 \end{align}

Furthermore, suppose that the initial values of capital and labor are:

\begin{align} K_0 & = 20\\ L_0 & = 1 \end{align}
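Again as a reference point for the skeleton below, a compact sketch of iterating Equations (7) and (11) with the parameter values above:

```python
import numpy as np

# Parameters and initial conditions from the text
A, alpha, s, delta, n = 10, 0.35, 0.15, 0.1, 0.01
T, K0, L0 = 100, 20, 1

labor = np.zeros(T + 1)
capital = np.zeros(T + 1)
labor[0], capital[0] = L0, K0
for t in range(T):
    labor[t + 1] = (1 + n) * labor[t]                                   # Equation (7)
    capital[t + 1] = (s * A * capital[t]**alpha * labor[t]**(1 - alpha)
                      + (1 - delta) * capital[t])                       # Equation (11)

# Per-worker quantities
capital_pw = capital / labor
output_pw = A * capital_pw**alpha    # output per worker implied by Equation (6)

print(capital_pw[0], capital_pw[-1])
```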
# Initialize parameters for the simulation (A, s, T, delta, alpha, n, K0, L0)

# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0

# Compute all labor values by iterating over t from 0 through T

# Plot the simulated labor series

# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0

# Compute all capital values by iterating over t from 0 through T

# Plot the simulated capital series

# Store the simulated capital data in a pandas DataFrame called data_labor

# Print the first five rows of data_labor

# Create columns in the DataFrame to store computed values of the other endogenous variables

# Print the first five rows of data_labor

# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker

# Print the first five rows of data_labor

# Create a 2x2 grid of plots of capital, output, consumption, and investment

# Create a 2x2 grid of plots of capital per worker, output per worker, consumption per worker, and investment per worker
winter2017/econ129/python/Econ129_Class_09.ipynb
letsgoexploring/teaching
mit