Columns: `code` (string, lengths 2.5k–6.36M), `kind` (string, 2 classes), `parsed_code` (string, lengths 0–404k), `quality_prob` (float64, 0–0.98), `learning_prob` (float64, 0.03–1).
## SOLVING PLANNING PROBLEMS ---- ### GRAPHPLAN <br> The GraphPlan algorithm is a popular method of solving classical planning problems. Before we get into the details of the algorithm, let's look at a special data structure called **planning graph**, used to give better heuristic estimates and plays a key role in the GraphPlan algorithm. ### Planning Graph A planning graph is a directed graph organized into levels. Each level contains information about the current state of the knowledge base and the possible state-action links to and from that level. The first level contains the initial state with nodes representing each fluent that holds in that level. This level has state-action links linking each state to valid actions in that state. Each action is linked to all its preconditions and its effect states. Based on these effects, the next level is constructed. The next level contains similarly structured information about the next state. In this way, the graph is expanded using state-action links till we reach a state where all the required goals hold true simultaneously. We can say that we have reached our goal if none of the goal states in the current level are mutually exclusive. This will be explained in detail later. <br> Planning graphs only work for propositional planning problems, hence we need to eliminate all variables by generating all possible substitutions. <br> For example, the planning graph of the `have_cake_and_eat_cake_too` problem might look like this ![title](images/cake_graph.jpg) <br> The black lines indicate links between states and actions. <br> In every planning problem, we are allowed to carry out the `no-op` action, ie, we can choose no action for a particular state. These are called 'Persistence' actions and are represented in the graph by the small square boxes. In technical terms, a persistence action has effects same as its preconditions. This enables us to carry a state to the next level. <br> <br> The gray lines indicate mutual exclusivity. This means that the actions connected bya gray line cannot be taken together. Mutual exclusivity (mutex) occurs in the following cases: 1. **Inconsistent effects**: One action negates the effect of the other. For example, _Eat(Cake)_ and the persistence of _Have(Cake)_ have inconsistent effects because they disagree on the effect _Have(Cake)_ 2. **Interference**: One of the effects of an action is the negation of a precondition of the other. For example, _Eat(Cake)_ interferes with the persistence of _Have(Cake)_ by negating its precondition. 3. **Competing needs**: One of the preconditions of one action is mutually exclusive with a precondition of the other. For example, _Bake(Cake)_ and _Eat(Cake)_ are mutex because they compete on the value of the _Have(Cake)_ precondition. In the module, planning graphs have been implemented using two classes, `Level` which stores data for a particular level and `Graph` which connects multiple levels together. Let's look at the `Level` class. ``` from planning import * from notebook import psource psource(Level) ``` Each level stores the following data 1. The current state of the level in `current_state` 2. Links from an action to its preconditions in `current_action_links` 3. Links from a state to the possible actions in that state in `current_state_links` 4. Links from each action to its effects in `next_action_links` 5. Links from each possible next state from each action in `next_state_links`. This stores the same information as the `current_action_links` of the next level. 6. 
Mutex links in `mutex`. <br> <br> The `find_mutex` method finds the mutex links according to the points given above. <br> The `build` method populates the data structures storing the state and action information. Persistence actions for each clause in the current state are also defined here. The newly created persistence action has the same name as its state, prefixed with a 'P'. Let's now look at the `Graph` class. ``` psource(Graph) ``` The class stores a problem definition in `pddl`, a knowledge base in `kb`, a list of `Level` objects in `levels` and all the possible arguments found in the initial state of the problem in `objects`. <br> The `expand_graph` method generates a new level of the graph. This method is invoked when the goal conditions haven't been met in the current level or the actions that lead to it are mutually exclusive. The `non_mutex_goals` method checks whether the goals in the current state are mutually exclusive. <br> <br> Using these two classes, we can define a planning graph which can either be used to provide reliable heuristics for planning problems or used in the `GraphPlan` algorithm. <br> Let's have a look at the `GraphPlan` class. ``` psource(GraphPlan) ``` Given a planning problem defined as a PlanningProblem, `GraphPlan` creates a planning graph stored in `graph` and expands it till it reaches a state where all its required goals are present simultaneously without mutual exclusivity. <br> Once a goal is found, `extract_solution` is called. This method recursively finds the path to a solution given a planning graph. In the case where `extract_solution` fails to find a solution for a set of goals as a given level, we record the `(level, goals)` pair as a **no-good**. Whenever `extract_solution` is called again with the same level and goals, we can find the recorded no-good and immediately return failure rather than searching again. No-goods are also used in the termination test. <br> The `check_leveloff` method checks if the planning graph for the problem has **levelled-off**, ie, it has the same states, actions and mutex pairs as the previous level. If the graph has already levelled off and we haven't found a solution, there is no point expanding the graph, as it won't lead to anything new. In such a case, we can declare that the planning problem is unsolvable with the given constraints. <br> <br> To summarize, the `GraphPlan` algorithm calls `expand_graph` and tests whether it has reached the goal and if the goals are non-mutex. <br> If so, `extract_solution` is invoked which recursively reconstructs the solution from the planning graph. <br> If not, then we check if our graph has levelled off and continue if it hasn't. Let's solve a few planning problems that we had defined earlier. #### Air cargo problem In accordance with the summary above, we have defined a helper function to carry out `GraphPlan` on the `air_cargo` problem. The function is pretty straightforward. Let's have a look. ``` psource(air_cargo_graphplan) ``` Let's instantiate the problem and find a solution using this helper function. ``` airCargoG = air_cargo_graphplan() airCargoG ``` Each element in the solution is a valid action. The solution is separated into lists for each level. The actions prefixed with a 'P' are persistence actions and can be ignored. They simply carry certain states forward. We have another helper function `linearize` that presents the solution in a more readable format, much like a total-order planner, but it is _not_ a total-order planner. 
``` linearize(airCargoG) ``` Indeed, this is a correct solution. <br> There are similar helper functions for some other planning problems. <br> Let's try solving the spare tire problem. ``` spareTireG = spare_tire_graphplan() linearize(spareTireG) ``` Solution for the cake problem: ``` cakeProblemG = have_cake_and_eat_cake_too_graphplan() linearize(cakeProblemG) ``` Solution for the Sussman Anomaly configuration of three blocks: ``` sussmanAnomalyG = three_block_tower_graphplan() linearize(sussmanAnomalyG) ``` Solution for the socks and shoes problem: ``` socksShoesG = socks_and_shoes_graphplan() linearize(socksShoesG) ```
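As a small, runnable illustration of the first two mutex conditions listed earlier, here is a sketch that uses throwaway `namedtuple` actions rather than the module's `Level` internals (the competing-needs test is omitted because it depends on the mutex links of the previous level):

```python
from collections import namedtuple

# Toy action representation: a name, a set of precondition literals, a set of effect literals
Action = namedtuple('Action', ['name', 'precond', 'effect'])

eat_cake = Action('Eat(Cake)', precond={'Have(Cake)'}, effect={'~Have(Cake)', 'Eaten(Cake)'})
persist_have = Action('P_Have(Cake)', precond={'Have(Cake)'}, effect={'Have(Cake)'})

def negate(literal):
    return literal[1:] if literal.startswith('~') else '~' + literal

def inconsistent_effects(a, b):
    # one action negates an effect of the other
    return any(negate(e) in b.effect for e in a.effect)

def interference(a, b):
    # an effect of one action negates a precondition of the other
    return any(negate(e) in b.precond for e in a.effect) or \
           any(negate(e) in a.precond for e in b.effect)

print(inconsistent_effects(eat_cake, persist_have))  # True: they disagree on Have(Cake)
print(interference(eat_cake, persist_have))          # True: Eat(Cake) negates the persistence precondition
```

Running this prints `True` twice, matching the _Eat(Cake)_ / persistence-of-_Have(Cake)_ example given in the text.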
github_jupyter
from planning import * from notebook import psource psource(Level) psource(Graph) psource(GraphPlan) psource(air_cargo_graphplan) airCargoG = air_cargo_graphplan() airCargoG linearize(airCargoG) spareTireG = spare_tire_graphplan() linearize(spareTireG) cakeProblemG = have_cake_and_eat_cake_too_graphplan() linearize(cakeProblemG) sussmanAnomalyG = three_block_tower_graphplan() linearize(sussmanAnomalyG) socksShoesG = socks_and_shoes_graphplan() linearize(socksShoesG)
0.529507
0.991015
# Deep Learning Tutorial with Keras and Tensorflow <div> <img style="text-align: left" src="imgs/keras-tensorflow-logo.jpg" width="40%" /> <div> ## Get the Materials <img src="imgs/github.jpg" /> ```shell git clone https://github.com/leriomaggio/deep-learning-keras-tensorflow.git ``` --- # Outline at a glance - **Part I**: **Introduction** - Intro to Artificial Neural Networks - Perceptron and MLP - naive pure-Python implementation - fast forward, sgd, backprop - Introduction to Deep Learning Frameworks - Intro to Theano - Intro to Tensorflow - Intro to Keras - Overview and main features - Overview of the `core` layers - Multi-Layer Perceptron and Fully Connected - Examples with `keras.models.Sequential` and `Dense` - Keras Backend - **Part II**: **Supervised Learning ** - Fully Connected Networks and Embeddings - Intro to MNIST Dataset - Hidden Leayer Representation and Embeddings - Convolutional Neural Networks - meaning of convolutional filters - examples from ImageNet - Visualising ConvNets - Advanced CNN - Dropout - MaxPooling - Batch Normalisation - HandsOn: MNIST Dataset - FC and MNIST - CNN and MNIST - Deep Convolutiona Neural Networks with Keras (ref: `keras.applications`) - VGG16 - VGG19 - ResNet50 - Transfer Learning and FineTuning - Hyperparameters Optimisation - **Part III**: **Unsupervised Learning** - AutoEncoders and Embeddings - AutoEncoders and MNIST - word2vec and doc2vec (gensim) with `keras.datasets` - word2vec and CNN - **Part IV**: **Recurrent Neural Networks** - Recurrent Neural Network in Keras - `SimpleRNN`, `LSTM`, `GRU` - LSTM for Sentence Generation - **PartV**: **Additional Materials**: - Custom Layers in Keras - Multi modal Network Topologies with Keras --- # Requirements This tutorial requires the following packages: - Python version 3.5 - Python 3.4 should be fine as well - likely Python 2.7 would be also fine, but *who knows*? :P - `numpy` version 1.10 or later: http://www.numpy.org/ - `scipy` version 0.16 or later: http://www.scipy.org/ - `matplotlib` version 1.4 or later: http://matplotlib.org/ - `pandas` version 0.16 or later: http://pandas.pydata.org - `scikit-learn` version 0.15 or later: http://scikit-learn.org - `keras` version 2.0 or later: http://keras.io - `tensorflow` version 1.0 or later: https://www.tensorflow.org - `ipython`/`jupyter` version 4.0 or later, with notebook support (Optional but recommended): - `pyyaml` - `hdf5` and `h5py` (required if you use model saving/loading functions in keras) - **NVIDIA cuDNN** if you have NVIDIA GPUs on your machines. [https://developer.nvidia.com/rdp/cudnn-download]() The easiest way to get (most) these is to use an all-in-one installer such as [Anaconda](http://www.continuum.io/downloads) from Continuum. These are available for multiple architectures. --- ### Python Version I'm currently running this tutorial with **Python 3** on **Anaconda** ``` !python --version ``` ### Configure Keras with tensorflow 1) Create the `keras.json` (if it does not exist): ```shell touch $HOME/.keras/keras.json ``` 2) Copy the following content into the file: ``` { "epsilon": 1e-07, "backend": "tensorflow", "floatx": "float32", "image_data_format": "channels_last" } ``` ``` !cat ~/.keras/keras.json ``` --- # Test if everything is up&running ## 1. Check import ``` import numpy as np import scipy as sp import pandas as pd import matplotlib.pyplot as plt import sklearn import keras ``` ## 2. 
Check Installed Versions ``` import numpy print('numpy:', numpy.__version__) import scipy print('scipy:', scipy.__version__) import matplotlib print('matplotlib:', matplotlib.__version__) import IPython print('iPython:', IPython.__version__) import sklearn print('scikit-learn:', sklearn.__version__) import keras print('keras: ', keras.__version__) # optional import theano print('Theano: ', theano.__version__) import tensorflow as tf print('Tensorflow: ', tf.__version__) ``` <br> <h1 style="text-align: center;">If everything worked down to here, you're ready to start!</h1> ---
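Beyond checking imports and versions, a quick way to confirm that Keras is actually wired to the TensorFlow backend is to compile and run a tiny model. This is only a minimal smoke test; the layer sizes and random data are arbitrary and not part of the tutorial:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Tiny fully connected model: 10 inputs -> 32 hidden units -> 1 output
model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    Dense(1)
])
model.compile(optimizer='sgd', loss='mse')
model.summary()

# One training step on random data, just to confirm the backend executes
X = np.random.rand(8, 10)
y = np.random.rand(8, 1)
model.train_on_batch(X, y)
```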
github_jupyter
git clone https://github.com/leriomaggio/deep-learning-keras-tensorflow.git !python --version touch $HOME/.keras/keras.json { "epsilon": 1e-07, "backend": "tensorflow", "floatx": "float32", "image_data_format": "channels_last" } !cat ~/.keras/keras.json import numpy as np import scipy as sp import pandas as pd import matplotlib.pyplot as plt import sklearn import keras import numpy print('numpy:', numpy.__version__) import scipy print('scipy:', scipy.__version__) import matplotlib print('matplotlib:', matplotlib.__version__) import IPython print('iPython:', IPython.__version__) import sklearn print('scikit-learn:', sklearn.__version__) import keras print('keras: ', keras.__version__) # optional import theano print('Theano: ', theano.__version__) import tensorflow as tf print('Tensorflow: ', tf.__version__)
0.54359
0.975969
# 3D Segmentation with UNet ``` import os import sys import tempfile from glob import glob import logging import nibabel as nib import numpy as np import torch from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator from ignite.handlers import ModelCheckpoint from torch.utils.data import DataLoader import monai from monai.data import NiftiDataset, create_test_image_3d from monai.transforms import Compose, AddChannel, ScaleIntensity, Resize, ToTensor, RandSpatialCrop from monai.handlers import \ StatsHandler, TensorBoardStatsHandler, TensorBoardImageHandler, MeanDice, stopping_fn_from_metric from monai.networks.utils import predict_segmentation monai.config.print_config() logging.basicConfig(stream=sys.stdout, level=logging.INFO) ``` ## Setup demo data ``` # Create a temporary directory and 40 random image, mask paris tempdir = tempfile.mkdtemp() for i in range(40): im, seg = create_test_image_3d(128, 128, 128, num_seg_classes=1) n = nib.Nifti1Image(im, np.eye(4)) nib.save(n, os.path.join(tempdir, 'im%i.nii.gz' % i)) n = nib.Nifti1Image(seg, np.eye(4)) nib.save(n, os.path.join(tempdir, 'seg%i.nii.gz' % i)) ``` ## Setup transforms, dataset ``` images = sorted(glob(os.path.join(tempdir, 'im*.nii.gz'))) segs = sorted(glob(os.path.join(tempdir, 'seg*.nii.gz'))) # Define transforms for image and segmentation imtrans = Compose([ ScaleIntensity(), AddChannel(), RandSpatialCrop((96, 96, 96), random_size=False), ToTensor() ]) segtrans = Compose([ AddChannel(), RandSpatialCrop((96, 96, 96), random_size=False), ToTensor() ]) # Define nifti dataset, dataloader. ds = NiftiDataset(images, segs, transform=imtrans, seg_transform=segtrans) loader = DataLoader(ds, batch_size=10, num_workers=2, pin_memory=torch.cuda.is_available()) im, seg = monai.utils.misc.first(loader) print(im.shape, seg.shape) ``` ## Create Model, Loss, Optimizer ``` # Create UNet, DiceLoss and Adam optimizer. 
net = monai.networks.nets.UNet( dimensions=3, in_channels=1, out_channels=1, channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2, ) loss = monai.losses.DiceLoss(do_sigmoid=True) lr = 1e-3 opt = torch.optim.Adam(net.parameters(), lr) ``` ## Create supervised_trainer using ignite ``` # Create trainer device = torch.device('cuda:0') trainer = create_supervised_trainer(net, opt, loss, device, False) ``` ## Setup event handlers for checkpointing and logging ``` ### optional section for checkpoint and tensorboard logging # adding checkpoint handler to save models (network params and optimizer stats) during training checkpoint_handler = ModelCheckpoint('./runs/', 'net', n_saved=10, require_empty=False) trainer.add_event_handler(event_name=Events.EPOCH_COMPLETED, handler=checkpoint_handler, to_save={'net': net, 'opt': opt}) # StatsHandler prints loss at every iteration and print metrics at every epoch, # we don't set metrics for trainer here, so just print loss, user can also customize print functions # and can use output_transform to convert engine.state.output if it's not a loss value train_stats_handler = StatsHandler(name='trainer') train_stats_handler.attach(trainer) # TensorBoardStatsHandler plots loss at every iteration and plots metrics at every epoch, same as StatsHandler train_tensorboard_stats_handler = TensorBoardStatsHandler() train_tensorboard_stats_handler.attach(trainer) ``` ## Add Validation every N epochs ``` ### optional section for model validation during training validation_every_n_epochs = 1 # Set parameters for validation metric_name = 'Mean_Dice' # add evaluation metric to the evaluator engine val_metrics = {metric_name: MeanDice(add_sigmoid=True, to_onehot_y=False)} # ignite evaluator expects batch=(img, seg) and returns output=(y_pred, y) at every iteration, # user can add output_transform to return other values evaluator = create_supervised_evaluator(net, val_metrics, device, True) # create a validation data loader val_imtrans = Compose([ ScaleIntensity(), AddChannel(), Resize((96, 96, 96)), ToTensor() ]) val_segtrans = Compose([ AddChannel(), Resize((96, 96, 96)), ToTensor() ]) val_ds = NiftiDataset(images[-20:], segs[-20:], transform=val_imtrans, seg_transform=val_segtrans) val_loader = DataLoader(val_ds, batch_size=5, num_workers=8, pin_memory=torch.cuda.is_available()) @trainer.on(Events.EPOCH_COMPLETED(every=validation_every_n_epochs)) def run_validation(engine): evaluator.run(val_loader) # Add stats event handler to print validation stats via evaluator val_stats_handler = StatsHandler( name='evaluator', output_transform=lambda x: None, # no need to print loss value, so disable per iteration output global_epoch_transform=lambda x: trainer.state.epoch) # fetch global epoch number from trainer val_stats_handler.attach(evaluator) # add handler to record metrics to TensorBoard at every validation epoch val_tensorboard_stats_handler = TensorBoardStatsHandler( output_transform=lambda x: None, # no need to plot loss value, so disable per iteration output global_epoch_transform=lambda x: trainer.state.epoch) # fetch global epoch number from trainer val_tensorboard_stats_handler.attach(evaluator) # add handler to draw the first image and the corresponding label and model output in the last batch # here we draw the 3D output as GIF format along Depth axis, at every validation epoch val_tensorboard_image_handler = TensorBoardImageHandler( batch_transform=lambda batch: (batch[0], batch[1]), output_transform=lambda output: predict_segmentation(output[0]), 
global_iter_transform=lambda x: trainer.state.epoch ) evaluator.add_event_handler(event_name=Events.EPOCH_COMPLETED, handler=val_tensorboard_image_handler) ``` ## Run training loop ``` # create a training data loader logging.basicConfig(stream=sys.stdout, level=logging.INFO) train_ds = NiftiDataset(images[:20], segs[:20], transform=imtrans, seg_transform=segtrans) train_loader = DataLoader(train_ds, batch_size=5, shuffle=True, num_workers=8, pin_memory=torch.cuda.is_available()) train_epochs = 5 state = trainer.run(train_loader, train_epochs) ``` ## Visualizing Tensorboard logs ``` log_dir = './runs' # by default TensorBoard logs go into './runs' %load_ext tensorboard %tensorboard --logdir $log_dir ! rm -rf {tempdir} ```
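As an optional extra check (run it before the cleanup cell above deletes the temporary data), the trained `net` can be applied to one validation batch directly. This is a minimal sketch using only objects already defined in this notebook:

```python
# Quick qualitative check: segment one validation batch with the trained network
net.eval()
with torch.no_grad():
    val_images, val_segs = monai.utils.misc.first(val_loader)
    logits = net(val_images.to(device))
    # predict_segmentation thresholds the (sigmoid) logits into a discrete 0/1 mask
    pred = predict_segmentation(logits)
print('prediction:', pred.shape, 'ground truth:', val_segs.shape)
```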
github_jupyter
import os import sys import tempfile from glob import glob import logging import nibabel as nib import numpy as np import torch from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator from ignite.handlers import ModelCheckpoint from torch.utils.data import DataLoader import monai from monai.data import NiftiDataset, create_test_image_3d from monai.transforms import Compose, AddChannel, ScaleIntensity, Resize, ToTensor, RandSpatialCrop from monai.handlers import \ StatsHandler, TensorBoardStatsHandler, TensorBoardImageHandler, MeanDice, stopping_fn_from_metric from monai.networks.utils import predict_segmentation monai.config.print_config() logging.basicConfig(stream=sys.stdout, level=logging.INFO) # Create a temporary directory and 40 random image, mask paris tempdir = tempfile.mkdtemp() for i in range(40): im, seg = create_test_image_3d(128, 128, 128, num_seg_classes=1) n = nib.Nifti1Image(im, np.eye(4)) nib.save(n, os.path.join(tempdir, 'im%i.nii.gz' % i)) n = nib.Nifti1Image(seg, np.eye(4)) nib.save(n, os.path.join(tempdir, 'seg%i.nii.gz' % i)) images = sorted(glob(os.path.join(tempdir, 'im*.nii.gz'))) segs = sorted(glob(os.path.join(tempdir, 'seg*.nii.gz'))) # Define transforms for image and segmentation imtrans = Compose([ ScaleIntensity(), AddChannel(), RandSpatialCrop((96, 96, 96), random_size=False), ToTensor() ]) segtrans = Compose([ AddChannel(), RandSpatialCrop((96, 96, 96), random_size=False), ToTensor() ]) # Define nifti dataset, dataloader. ds = NiftiDataset(images, segs, transform=imtrans, seg_transform=segtrans) loader = DataLoader(ds, batch_size=10, num_workers=2, pin_memory=torch.cuda.is_available()) im, seg = monai.utils.misc.first(loader) print(im.shape, seg.shape) # Create UNet, DiceLoss and Adam optimizer. 
net = monai.networks.nets.UNet( dimensions=3, in_channels=1, out_channels=1, channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2, ) loss = monai.losses.DiceLoss(do_sigmoid=True) lr = 1e-3 opt = torch.optim.Adam(net.parameters(), lr) # Create trainer device = torch.device('cuda:0') trainer = create_supervised_trainer(net, opt, loss, device, False) ### optional section for checkpoint and tensorboard logging # adding checkpoint handler to save models (network params and optimizer stats) during training checkpoint_handler = ModelCheckpoint('./runs/', 'net', n_saved=10, require_empty=False) trainer.add_event_handler(event_name=Events.EPOCH_COMPLETED, handler=checkpoint_handler, to_save={'net': net, 'opt': opt}) # StatsHandler prints loss at every iteration and print metrics at every epoch, # we don't set metrics for trainer here, so just print loss, user can also customize print functions # and can use output_transform to convert engine.state.output if it's not a loss value train_stats_handler = StatsHandler(name='trainer') train_stats_handler.attach(trainer) # TensorBoardStatsHandler plots loss at every iteration and plots metrics at every epoch, same as StatsHandler train_tensorboard_stats_handler = TensorBoardStatsHandler() train_tensorboard_stats_handler.attach(trainer) ### optional section for model validation during training validation_every_n_epochs = 1 # Set parameters for validation metric_name = 'Mean_Dice' # add evaluation metric to the evaluator engine val_metrics = {metric_name: MeanDice(add_sigmoid=True, to_onehot_y=False)} # ignite evaluator expects batch=(img, seg) and returns output=(y_pred, y) at every iteration, # user can add output_transform to return other values evaluator = create_supervised_evaluator(net, val_metrics, device, True) # create a validation data loader val_imtrans = Compose([ ScaleIntensity(), AddChannel(), Resize((96, 96, 96)), ToTensor() ]) val_segtrans = Compose([ AddChannel(), Resize((96, 96, 96)), ToTensor() ]) val_ds = NiftiDataset(images[-20:], segs[-20:], transform=val_imtrans, seg_transform=val_segtrans) val_loader = DataLoader(val_ds, batch_size=5, num_workers=8, pin_memory=torch.cuda.is_available()) @trainer.on(Events.EPOCH_COMPLETED(every=validation_every_n_epochs)) def run_validation(engine): evaluator.run(val_loader) # Add stats event handler to print validation stats via evaluator val_stats_handler = StatsHandler( name='evaluator', output_transform=lambda x: None, # no need to print loss value, so disable per iteration output global_epoch_transform=lambda x: trainer.state.epoch) # fetch global epoch number from trainer val_stats_handler.attach(evaluator) # add handler to record metrics to TensorBoard at every validation epoch val_tensorboard_stats_handler = TensorBoardStatsHandler( output_transform=lambda x: None, # no need to plot loss value, so disable per iteration output global_epoch_transform=lambda x: trainer.state.epoch) # fetch global epoch number from trainer val_tensorboard_stats_handler.attach(evaluator) # add handler to draw the first image and the corresponding label and model output in the last batch # here we draw the 3D output as GIF format along Depth axis, at every validation epoch val_tensorboard_image_handler = TensorBoardImageHandler( batch_transform=lambda batch: (batch[0], batch[1]), output_transform=lambda output: predict_segmentation(output[0]), global_iter_transform=lambda x: trainer.state.epoch ) evaluator.add_event_handler(event_name=Events.EPOCH_COMPLETED, handler=val_tensorboard_image_handler) # 
create a training data loader logging.basicConfig(stream=sys.stdout, level=logging.INFO) train_ds = NiftiDataset(images[:20], segs[:20], transform=imtrans, seg_transform=segtrans) train_loader = DataLoader(train_ds, batch_size=5, shuffle=True, num_workers=8, pin_memory=torch.cuda.is_available()) train_epochs = 5 state = trainer.run(train_loader, train_epochs) log_dir = './runs' # by default TensorBoard logs go into './runs' %load_ext tensorboard %tensorboard --logdir $log_dir ! rm -rf {tempdir}
0.524882
0.758332
# Road Follower - Train Model In this notebook we will train a neural network to take an input image, and output a set of x, y values corresponding to a target. We will use the PyTorch deep learning framework to train a ResNet-18 model for the road follower application. ``` import torch import torch.optim as optim import torch.nn.functional as F import torchvision import torchvision.datasets as datasets import torchvision.models as models import torchvision.transforms as transforms import glob import PIL.Image import os import numpy as np ``` ### Download and extract data Before you start, you should upload the ``road_following_<Date&Time>.zip`` file that you created in the ``data_collection.ipynb`` notebook on the robot. > If you're training on the JetBot you collected data on, you can skip this! You should then extract this dataset by calling the command below: ``` !unzip -q road_following.zip ``` You should see a folder named ``dataset_all`` appear in the file browser. ### Create Dataset Instance Here we create a custom ``torch.utils.data.Dataset`` implementation, which implements the ``__len__`` and ``__getitem__`` functions. This class is responsible for loading images and parsing the x, y values from the image filenames. Because we implement the ``torch.utils.data.Dataset`` class, we can use all of the torch data utilities :) We hard-coded some transformations (like color jitter) into our dataset. We made random horizontal flips optional (in case you want to follow a non-symmetric path, like a road where we need to 'stay right'). If it doesn't matter whether your robot follows some convention, you can enable flips to augment the dataset. ``` def get_x(path): """Gets the x value from the image filename""" return (float(int(path[3:6])) - 50.0) / 50.0 def get_y(path): """Gets the y value from the image filename""" return (float(int(path[7:10])) - 50.0) / 50.0 class XYDataset(torch.utils.data.Dataset): def __init__(self, directory, random_hflips=False): self.directory = directory self.random_hflips = random_hflips self.image_paths = glob.glob(os.path.join(self.directory, '*.jpg')) self.color_jitter = transforms.ColorJitter(0.3, 0.3, 0.3, 0.3) def __len__(self): return len(self.image_paths) def __getitem__(self, idx): image_path = self.image_paths[idx] image = PIL.Image.open(image_path) x = float(get_x(os.path.basename(image_path))) y = float(get_y(os.path.basename(image_path))) # flip only when random_hflips is enabled if self.random_hflips and float(np.random.rand(1)) > 0.5: image = transforms.functional.hflip(image) x = -x image = self.color_jitter(image) image = transforms.functional.resize(image, (224, 224)) image = transforms.functional.to_tensor(image) image = image.numpy()[::-1].copy() image = torch.from_numpy(image) image = transforms.functional.normalize(image, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) return image, torch.tensor([x, y]).float() dataset = XYDataset('dataset_xy', random_hflips=False) ``` ### Split dataset into train and test sets Once we have read the dataset, we split it into train and test sets with a 90%-10% split. The test set will be used to verify the accuracy of the model we train. ``` test_percent = 0.1 num_test = int(test_percent * len(dataset)) train_dataset, test_dataset = torch.utils.data.random_split(dataset, [len(dataset) - num_test, num_test]) ``` ### Create data loaders to load data in batches We use the ``DataLoader`` class to load data in batches, shuffle the data and allow the use of multiple subprocesses. In this example we use a batch size of 16. 
The batch size depends on the memory available on your GPU and can affect the accuracy of the model. ``` train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=16, shuffle=True, num_workers=0 ) test_loader = torch.utils.data.DataLoader( test_dataset, batch_size=16, shuffle=True, num_workers=0 ) ``` ### Define Neural Network Model We use the ResNet-18 model available in PyTorch's TorchVision. In a process called transfer learning, we can repurpose a pre-trained model (trained on millions of images) for a new task that has possibly much less data available. More details on ResNet-18: https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py More details on Transfer Learning: https://www.youtube.com/watch?v=yofjFQddwHE ``` model = models.resnet18(pretrained=True) ``` The ResNet model has a fully connected (fc) final layer with 512 ``in_features``; since we are training a regression over the (x, y) target, we replace it with a layer that has 2 ``out_features``. Finally, we transfer our model for execution on the GPU. ``` model.fc = torch.nn.Linear(512, 2) device = torch.device('cuda') model = model.to(device) ``` ### Train Regression We train for 70 epochs and save the best model whenever the test loss improves. ``` NUM_EPOCHS = 70 BEST_MODEL_PATH = 'best_steering_model_xy.pth' best_loss = 1e9 optimizer = optim.Adam(model.parameters()) for epoch in range(NUM_EPOCHS): model.train() train_loss = 0.0 for images, labels in iter(train_loader): images = images.to(device) labels = labels.to(device) optimizer.zero_grad() outputs = model(images) loss = F.mse_loss(outputs, labels) train_loss += float(loss) loss.backward() optimizer.step() train_loss /= len(train_loader) model.eval() test_loss = 0.0 for images, labels in iter(test_loader): images = images.to(device) labels = labels.to(device) outputs = model(images) loss = F.mse_loss(outputs, labels) test_loss += float(loss) test_loss /= len(test_loader) print('%f, %f' % (train_loss, test_loss)) if test_loss < best_loss: torch.save(model.state_dict(), BEST_MODEL_PATH) best_loss = test_loss ``` Once the model is trained, it will generate a ``best_steering_model_xy.pth`` file which you can use for inference in the live demo notebook. If you trained on a machine other than the JetBot, you'll need to upload this file to the JetBot's ``road_following`` example folder.
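For inference on another machine, the saved weights can be loaded back into the same architecture. A minimal sketch (the dummy input only illustrates the expected tensor shape and is not part of the original notebook):

```python
import torch
import torchvision.models as models

# Rebuild the same architecture used for training, then load the trained weights
model = models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
model.load_state_dict(torch.load('best_steering_model_xy.pth', map_location='cpu'))
model.eval()

# Dummy forward pass: one normalized 224x224 RGB image -> predicted (x, y) steering target
with torch.no_grad():
    dummy = torch.zeros(1, 3, 224, 224)
    xy = model(dummy)
    print(xy)
```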
github_jupyter
import torch import torch.optim as optim import torch.nn.functional as F import torchvision import torchvision.datasets as datasets import torchvision.models as models import torchvision.transforms as transforms import glob import PIL.Image import os import numpy as np !unzip -q road_following.zip def get_x(path): """Gets the x value from the image filename""" return (float(int(path[3:6])) - 50.0) / 50.0 def get_y(path): """Gets the y value from the image filename""" return (float(int(path[7:10])) - 50.0) / 50.0 class XYDataset(torch.utils.data.Dataset): def __init__(self, directory, random_hflips=False): self.directory = directory self.random_hflips = random_hflips self.image_paths = glob.glob(os.path.join(self.directory, '*.jpg')) self.color_jitter = transforms.ColorJitter(0.3, 0.3, 0.3, 0.3) def __len__(self): return len(self.image_paths) def __getitem__(self, idx): image_path = self.image_paths[idx] image = PIL.Image.open(image_path) x = float(get_x(os.path.basename(image_path))) y = float(get_y(os.path.basename(image_path))) if float(np.random.rand(1)) > 0.5: image = transforms.functional.hflip(image) x = -x image = self.color_jitter(image) image = transforms.functional.resize(image, (224, 224)) image = transforms.functional.to_tensor(image) image = image.numpy()[::-1].copy() image = torch.from_numpy(image) image = transforms.functional.normalize(image, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) return image, torch.tensor([x, y]).float() dataset = XYDataset('dataset_xy', random_hflips=False) test_percent = 0.1 num_test = int(test_percent * len(dataset)) train_dataset, test_dataset = torch.utils.data.random_split(dataset, [len(dataset) - num_test, num_test]) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=16, shuffle=True, num_workers=0 ) test_loader = torch.utils.data.DataLoader( test_dataset, batch_size=16, shuffle=True, num_workers=0 ) model = models.resnet18(pretrained=True) model.fc = torch.nn.Linear(512, 2) device = torch.device('cuda') model = model.to(device) NUM_EPOCHS = 70 BEST_MODEL_PATH = 'best_steering_model_xy.pth' best_loss = 1e9 optimizer = optim.Adam(model.parameters()) for epoch in range(NUM_EPOCHS): model.train() train_loss = 0.0 for images, labels in iter(train_loader): images = images.to(device) labels = labels.to(device) optimizer.zero_grad() outputs = model(images) loss = F.mse_loss(outputs, labels) train_loss += float(loss) loss.backward() optimizer.step() train_loss /= len(train_loader) model.eval() test_loss = 0.0 for images, labels in iter(test_loader): images = images.to(device) labels = labels.to(device) outputs = model(images) loss = F.mse_loss(outputs, labels) test_loss += float(loss) test_loss /= len(test_loader) print('%f, %f' % (train_loss, test_loss)) if test_loss < best_loss: torch.save(model.state_dict(), BEST_MODEL_PATH) best_loss = test_loss
0.864568
0.984942
# US HOUSEHOLD INCOME ANALYSIS ## Import required libraries and Load the data ``` import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt data = pd.read_csv('/content/transaction_dataset.csv') data.head(200) import warnings warnings.filterwarnings("ignore", category=FutureWarning) data.head() data.shape data.columns data.info() ``` ## Exploratory Data Analysis ``` data.describe() def bar_graph(feature): data[feature].value_counts().plot(kind="bar") bar_graph('FLAG') plt.subplots(figsize = (8, 6)) sns.set(style = 'darkgrid') sns.scatterplot(data = data,x = 'Unique Received From Addresses', y= 'Received Tnx',hue = 'FLAG' ) plt.title('Unique Received From Addresses Vs Received Tnx') plt.show() plt.subplots(figsize = (8, 6)) sns.set(style = 'whitegrid') sns.scatterplot(data = data,x = 'Unique Sent To Addresses', y= 'Sent tnx',hue = 'FLAG' ) plt.title('Unique Sent To Addresses Vs Sent tnx') plt.show() plt.subplots(figsize = (8, 6)) sns.scatterplot(data = data,x = 'Sent tnx', y= 'Unique Sent To Addresses',hue = 'FLAG' ) plt.title('Sent tnx Vs Unique Sent To Addresses') plt.show() plt.subplots(figsize = (8, 6)) sns.scatterplot(data = data,x = 'total transactions (including tnx to create contract', y= 'Received Tnx',hue = 'FLAG' ) plt.title('Total_transactions Vs Received Tnx') plt.show() fig, ax = plt.subplots(figsize=(18,10)) sns.heatmap(data.corr(), annot=False, cmap='Blues', center=0, square=True) # Dropping the unncessary columns drop = ['Unnamed: 0', 'Index', 'Address','total transactions (including tnx to create contract', 'total ether sent contracts', 'max val sent to contract', ' ERC20 avg val rec', ' ERC20 avg val rec',' ERC20 max val rec', ' ERC20 min val rec', ' ERC20 uniq rec contract addr', 'max val sent', ' ERC20 avg val sent', ' ERC20 min val sent', ' ERC20 max val sent', ' Total ERC20 tnxs', 'avg value sent to contract', 'Unique Sent To Addresses', 'Unique Received From Addresses', 'total ether received', ' ERC20 uniq sent token name', 'min value received', 'min val sent', ' ERC20 uniq rec addr','min value sent to contract',' ERC20 uniq sent addr.1',] data.drop(drop, axis=1, inplace=True) data.shape # Visualize missings pattern of the dataframe plt.figure(figsize=(12,6)) sns.heatmap(data.isnull(), cbar=False) plt.show() ``` ## Identify Input and Target columns ``` X = data.drop(columns=['FLAG']) y = data['FLAG'] print(X.shape) print(y.shape) ``` ## Separating Numeric and Categorical columns ``` numeric_cols = X.select_dtypes(include=np.number).columns.tolist() #lists all the numeric columns. categoric_cols = X.select_dtypes(include='object').columns.tolist() #lists all the categorical columns. print(numeric_cols) print(categoric_cols) ``` ## Handling Missing Data ``` X[numeric_cols].isna().sum() from sklearn.impute import SimpleImputer imputer = SimpleImputer(strategy='mean').fit(data[numeric_cols]) X[numeric_cols] = imputer.transform(X[numeric_cols]) X[numeric_cols].isna().sum() ``` ## Scaling Numeric columns(Data Normalization) - Scaling your variables to make them equivalent, and thus your ML model performance would not be impacted by an underlying bias towards the larger variables. 
``` X[numeric_cols].describe().loc[['min', 'max']] from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler().fit(data[numeric_cols]) X[numeric_cols] = scaler.transform(X[numeric_cols]) X[numeric_cols].describe().loc[['min', 'max']] ``` ## Encoding Caategorical Columns ``` from sklearn.preprocessing import OneHotEncoder encoder = OneHotEncoder(sparse=False, handle_unknown='ignore').fit(data[categoric_cols]) encoded_cols = list(encoder.get_feature_names(categoric_cols)) X[encoded_cols] = encoder.transform(X[categoric_cols]) X = X[numeric_cols + encoded_cols] print(X.shape) ``` ## Splitting the data into Train and Test ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0) print(X_train.shape) print(X_test.shape) print(y_train.shape) print(y_test.shape) from sklearn.decomposition import PCA pca = PCA(n_components = 0.99) X_train = pca.fit_transform(X_train) X_test = pca.transform(X_test) explained_variance = pca.explained_variance_ratio_ print(X_train.shape) print(X_test.shape) ``` ## Model Building and Prediction ``` from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from xgboost import XGBClassifier from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix models = [] models.append(('Logistic Regression', LogisticRegression())) models.append(('DecisionTreeClassifier', DecisionTreeClassifier())) models.append(('RandomForestClassifier',RandomForestClassifier())) models.append(('GradientBoostingClassifier',XGBClassifier())) model_names = [] model_score = [] for name,model in models: model.fit(X_train,y_train) predictions = model.predict(X_test) model_names.append(name) model_score.append(accuracy_score(predictions, y_test)) report = pd.DataFrame({'Models':model_names, 'Accuracy_Score':model_score}) report model = XGBClassifier() model.fit(X_train,y_train) preds = model.predict(X_test) from sklearn.metrics import classification_report print(classification_report(y_test,preds)) from sklearn.metrics import plot_confusion_matrix plot_confusion_matrix(model,X_test,y_test) ```
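To compare the models at a glance and keep the trained classifier around, the accuracy table built above can be plotted directly with pandas and the model persisted with joblib. This is an optional addition; `joblib` is an extra dependency (usually installed alongside scikit-learn) and the output filename is our choice, not from the notebook:

```python
# Bar chart of the accuracy scores collected in `report`
report.plot.bar(x='Models', y='Accuracy_Score', legend=False, rot=45, title='Model accuracy comparison')
plt.tight_layout()
plt.show()

# Persist the trained XGBoost classifier for later reuse (hypothetical filename)
import joblib
joblib.dump(model, 'xgb_fraud_model.joblib')
```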
github_jupyter
import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt data = pd.read_csv('/content/transaction_dataset.csv') data.head(200) import warnings warnings.filterwarnings("ignore", category=FutureWarning) data.head() data.shape data.columns data.info() data.describe() def bar_graph(feature): data[feature].value_counts().plot(kind="bar") bar_graph('FLAG') plt.subplots(figsize = (8, 6)) sns.set(style = 'darkgrid') sns.scatterplot(data = data,x = 'Unique Received From Addresses', y= 'Received Tnx',hue = 'FLAG' ) plt.title('Unique Received From Addresses Vs Received Tnx') plt.show() plt.subplots(figsize = (8, 6)) sns.set(style = 'whitegrid') sns.scatterplot(data = data,x = 'Unique Sent To Addresses', y= 'Sent tnx',hue = 'FLAG' ) plt.title('Unique Sent To Addresses Vs Sent tnx') plt.show() plt.subplots(figsize = (8, 6)) sns.scatterplot(data = data,x = 'Sent tnx', y= 'Unique Sent To Addresses',hue = 'FLAG' ) plt.title('Sent tnx Vs Unique Sent To Addresses') plt.show() plt.subplots(figsize = (8, 6)) sns.scatterplot(data = data,x = 'total transactions (including tnx to create contract', y= 'Received Tnx',hue = 'FLAG' ) plt.title('Total_transactions Vs Received Tnx') plt.show() fig, ax = plt.subplots(figsize=(18,10)) sns.heatmap(data.corr(), annot=False, cmap='Blues', center=0, square=True) # Dropping the unncessary columns drop = ['Unnamed: 0', 'Index', 'Address','total transactions (including tnx to create contract', 'total ether sent contracts', 'max val sent to contract', ' ERC20 avg val rec', ' ERC20 avg val rec',' ERC20 max val rec', ' ERC20 min val rec', ' ERC20 uniq rec contract addr', 'max val sent', ' ERC20 avg val sent', ' ERC20 min val sent', ' ERC20 max val sent', ' Total ERC20 tnxs', 'avg value sent to contract', 'Unique Sent To Addresses', 'Unique Received From Addresses', 'total ether received', ' ERC20 uniq sent token name', 'min value received', 'min val sent', ' ERC20 uniq rec addr','min value sent to contract',' ERC20 uniq sent addr.1',] data.drop(drop, axis=1, inplace=True) data.shape # Visualize missings pattern of the dataframe plt.figure(figsize=(12,6)) sns.heatmap(data.isnull(), cbar=False) plt.show() X = data.drop(columns=['FLAG']) y = data['FLAG'] print(X.shape) print(y.shape) numeric_cols = X.select_dtypes(include=np.number).columns.tolist() #lists all the numeric columns. categoric_cols = X.select_dtypes(include='object').columns.tolist() #lists all the categorical columns. 
print(numeric_cols) print(categoric_cols) X[numeric_cols].isna().sum() from sklearn.impute import SimpleImputer imputer = SimpleImputer(strategy='mean').fit(data[numeric_cols]) X[numeric_cols] = imputer.transform(X[numeric_cols]) X[numeric_cols].isna().sum() X[numeric_cols].describe().loc[['min', 'max']] from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler().fit(data[numeric_cols]) X[numeric_cols] = scaler.transform(X[numeric_cols]) X[numeric_cols].describe().loc[['min', 'max']] from sklearn.preprocessing import OneHotEncoder encoder = OneHotEncoder(sparse=False, handle_unknown='ignore').fit(data[categoric_cols]) encoded_cols = list(encoder.get_feature_names(categoric_cols)) X[encoded_cols] = encoder.transform(X[categoric_cols]) X = X[numeric_cols + encoded_cols] print(X.shape) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0) print(X_train.shape) print(X_test.shape) print(y_train.shape) print(y_test.shape) from sklearn.decomposition import PCA pca = PCA(n_components = 0.99) X_train = pca.fit_transform(X_train) X_test = pca.transform(X_test) explained_variance = pca.explained_variance_ratio_ print(X_train.shape) print(X_test.shape) from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from xgboost import XGBClassifier from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix models = [] models.append(('Logistic Regression', LogisticRegression())) models.append(('DecisionTreeClassifier', DecisionTreeClassifier())) models.append(('RandomForestClassifier',RandomForestClassifier())) models.append(('GradientBoostingClassifier',XGBClassifier())) model_names = [] model_score = [] for name,model in models: model.fit(X_train,y_train) predictions = model.predict(X_test) model_names.append(name) model_score.append(accuracy_score(predictions, y_test)) report = pd.DataFrame({'Models':model_names, 'Accuracy_Score':model_score}) report model = XGBClassifier() model.fit(X_train,y_train) preds = model.predict(X_test) from sklearn.metrics import classification_report print(classification_report(y_test,preds)) from sklearn.metrics import plot_confusion_matrix plot_confusion_matrix(model,X_test,y_test)
0.553988
0.797911
<img src='https://radiant-assets.s3-us-west-2.amazonaws.com/PrimaryRadiantMLHubLogo.png' alt='Radiant MLHub Logo' width='300'/> # How to use the Radiant MLHub API to browse and download the LandCoverNet dataset This Jupyter notebook, which you may copy and adapt for any use, shows basic examples of how to use the API to download labels and source imagery for the LandCoverNet dataset. Full documentation for the API is available at [docs.mlhub.earth](http://docs.mlhub.earth). We'll show you how to set up your authorization, list collection properties, and retrieve the items (the data contained within them) from those collections. Each item in our collection is explained in json format compliant with STAC label extension definition. ## Citation Alemohammad S.H., Ballantyne A., Bromberg Gaber Y., Booth K., Nakanuku-Diggs L., & Miglarese A.H. (2020) "LandCoverNet: A Global Land Cover Classification Training Dataset", Version 1.0, Radiant MLHub. \[Date Accessed\] [https://doi.org/10.34911/rdnt.d2ce8i](https://doi.org/10.34911/rdnt.d2ce8i) ## Dependencies This notebook utilizes the [`radiant-mlhub` Python client](https://pypi.org/project/radiant-mlhub/) for interacting with the API. If you are running this notebooks using Binder, then this dependency has already been installed. If you are running this notebook locally, you will need to install this yourself. See the official [`radiant-mlhub` docs](https://radiant-mlhub.readthedocs.io/) for more documentation of the full functionality of that library. ## Authentication ### Create an API Key Access to the Radiant MLHub API requires an API key. To get your API key, go to [dashboard.mlhub.earth](https://dashboard.mlhub.earth). If you have not used Radiant MLHub before, you will need to sign up and create a new account. Otherwise, sign in. In the **API Keys** tab, you'll be able to create API key(s), which you will need. *Do not share* your API key with others: your usage may be limited and sharing your API key is a security risk. ### Configure the Client Once you have your API key, you need to configure the `radiant_mlhub` library to use that key. There are a number of ways to configure this (see the [Authentication docs](https://radiant-mlhub.readthedocs.io/en/latest/authentication.html) for details). For these examples, we will set the `MLHUB_API_KEY` environment variable. Run the cell below to save your API key as an environment variable that the client library will recognize. *If you are running this notebook locally and have configured a profile as described in the [Authentication docs](https://radiant-mlhub.readthedocs.io/en/latest/authentication.html), then you do not need to execute this cell.* ``` import os os.environ['MLHUB_API_KEY'] = 'PASTE_YOUR_API_KEY_HERE' import urllib.parse import re from pathlib import Path import itertools as it from functools import partial from concurrent.futures import ThreadPoolExecutor from tqdm.notebook import tqdm from radiant_mlhub import client, get_session ``` ## Listing Collection Properties The following cell makes a request to the API for the properties for the LandCoverNet labels collection and prints out a few important properties. 
``` collection_id = 'ref_landcovernet_v1_labels' collection = client.get_collection(collection_id) print(f'Description: {collection["description"]}') print(f'License: {collection["license"]}') print(f'DOI: {collection["sci:doi"]}') print(f'Citation: {collection["sci:citation"]}') ``` ## Finding Possible Land Cover Labels Each label item within the collection has a property which lists all of the possible land cover types and which ones are present in each label item. The code below prints out which land cover types are present in the dataset and we will reference these later in the notebook when we filter downloads. ``` items = client.list_collection_items(collection_id, limit=1) first_item = next(items) label_classes = first_item['properties']['label:classes'] for label_class in label_classes: print(f'Classes for {label_class["name"]}') for c in sorted(label_class['classes']): print(f'- {c}') ``` ## Downloading Assets > **NOTE:** If you are running these notebooks using Binder these resources will be downloaded to the remote file system that the notebooks are running on and **not to your local file system.** If you want to download the files to your machine, you will need to clone the repo and run the notebook locally. ### Create Download Helpers The cell below creates 3 helper functions that we will use to select items from a collection and download the associated assets (source imagery or labels). * **`get_items`** This is a [Python generator](https://realpython.com/introduction-to-python-generators/) that yields items from the given collection that match the criteria we give it. For instance, the following code will yield up to 10 items from the BigEarthNet labels collection that contain *either the `'Coniferous forest'` or the `'Rice fields'` labels*: ```python get_items('bigearthnet_v1_labels', classes=['Coniferous forest', 'Rice fields'], max_items=10) ``` * **`download`** This function takes an item dictionary and an asset key and downloads the given asset. By default, the asset is downloaded to the current working directory, but this can be changed using the `output_dir` argument. * **`filter_item`** This is a helper function used by the `get_items` function to filter items returned by `client.list_collection_items`. ``` items_pattern = re.compile(r'^/mlhub/v1/collections/(\w+)/items/(\w+)$') def filter_item(item, classes=None, cloud_and_shadow=None, seasonal_snow=None): """Function to be used as an argument to Python's built-in filter function that filters out any items that do not match the given classes, cloud_and_shadow, and/or seasonal_snow values. If any of these filter arguments are set to None, they will be ignored. For instance, using filter_item(item, cloud_and_shadow=True) will only return items where item['properties']['cloud_and_shadow'] == 'true', and will not filter based on classes/labels, or seasonal_snow. 
""" # Match classes, if provided item_labels = item['properties'].get('labels', []) if classes is not None and not any(label in classes for label in item_labels): return False # Match cloud_and_shadow, if provided item_cloud_and_shadow = item['properties'].get('cloud_and_shadow', 'false') == 'true' if cloud_and_shadow is not None and item_cloud_and_shadow != cloud_and_shadow: return False # Match seasonal_snow, if provided item_seasonal_snow = item['properties'].get('seasonal_snow', 'false') == 'true' if seasonal_snow is not None and item_seasonal_snow != seasonal_snow: return False return True def get_items(collection_id, classes=None, cloud_and_shadow=None, seasonal_snow=None, max_items=1): """Generator that yields up to max_items items that match the given classes, cloud_and_shadow, and seasonal_snow values. Setting one of these filter arguments to None will cause that filter to be ignored (e.g. classes=None means that items will not be filtered by class/label). """ filter_fn = partial( filter_item, classes=classes, cloud_and_shadow=cloud_and_shadow, seasonal_snow=seasonal_snow ) filtered = filter( filter_fn, # Note that we set the limit to None here because we want to limit based on our own filters. It is not # recommended to use limit=None for the client.list_collection_items method without implementing your # own limits because the bigearthnet_v1_labels collection contains hundreds of thousands of items and # looping over these items without limit may take a very long time. client.list_collection_items(collection_id, limit=None) ) yield from it.islice(filtered, max_items) def download(item, asset_key, output_dir='./data'): """Downloads the given item asset by looking up that asset and then following the "href" URL.""" # Try to get the given asset and return None if it does not exist asset = item.get('assets', {}).get(asset_key) if asset is None: print(f'Asset "{asset_key}" does not exist in this item') return None # Try to get the download URL from the asset and return None if it does not exist download_url = asset.get('href') if download_url is None: print(f'Asset {asset_key} does not have an "href" property, cannot download.') return None session = get_session() r = session.get(download_url, allow_redirects=True, stream=True) filename = urllib.parse.urlsplit(r.url).path.split('/')[-1] output_path = Path(output_dir) / filename with output_path.open('wb') as dst: for chunk in r.iter_content(chunk_size=512 * 1024): if chunk: dst.write(chunk) def download_labels_and_source(item, assets=None, output_dir='./data'): """Downloads all label and source imagery assets associated with a label item that match the given asset types. 
""" # Follow all source links and add all assets from those def _get_download_args(link): # Get the item ID (last part of the link path) source_item_path = urllib.parse.urlsplit(link['href']).path source_item_collection, source_item_id = items_pattern.fullmatch(source_item_path).groups() source_item = client.get_collection_item(source_item_collection, source_item_id) source_download_dir = download_dir / 'source' source_download_dir.mkdir(exist_ok=True) matching_source_assets = [ asset for asset in source_item.get('assets', {}) if assets is None or asset in assets ] return [ (source_item, asset, source_download_dir) for asset in matching_source_assets ] download_args = [] download_dir = Path(output_dir) / item['id'] download_dir.mkdir(parents=True, exist_ok=True) labels_download_dir = download_dir / 'labels' labels_download_dir.mkdir(exist_ok=True) # Download the labels assets matching_assets = [ asset for asset in item.get('assets', {}) if assets is None or asset in assets ] for asset in matching_assets: download_args.append((item, asset, labels_download_dir)) source_links = [link for link in item['links'] if link['rel'] == 'source'] with ThreadPoolExecutor(max_workers=16) as executor: for argument_batch in executor.map(_get_download_args, source_links): download_args += argument_batch print(f'Downloading {len(download_args)} assets...') with ThreadPoolExecutor(max_workers=16) as executor: with tqdm(total=len(download_args)) as pbar: for _ in executor.map(lambda triplet: download(*triplet), download_args): pbar.update(1) ``` ### Download Assets for 1 Item The following cell below will navigate and API and collect all the download links for labels and source imagery assets. In this case we specified the `max_items` argument to the `get_items` function, which limits the number of label items fetched to just 1. We also pass a list of `assets` to the `download_labels_and_source` function, which limits the types of assets downloaded to only those included in the list. We limit the results in these two ways because there a nearly 2,000 label items and over 150,000 source items in the LandCoverNet collections, and each source item contains at least 13 items representing the various Sentinel 2 bands. Attempting to download all items or all assets for even a few items can take a very long time. ``` items = get_items( collection_id, max_items=1 ) for item in items: download_labels_and_source(item, assets=['labels', 'B02', 'B03', 'B04']) ``` ### Filtering on Land Cover Type We can specify which land cover types we want to download by adding the "classes" argument. This argument accepts an array of land cover types and only label items which contain one or more of the classes specified will be downloaded. The possible land cover types can be found in the "Finding Possible Land Cover Labels" cell above. ``` items = get_items( collection_id, classes=['Woody Vegetation'], max_items=1, ) for item in items: download_labels_and_source(item, assets=['labels', 'B02', 'B03', 'B04']) ``` ### Download All Assets Looping through all items and downloading the associated assets may be *very* time-consuming for larger datasets like LandCoverNet. Instead, MLHub provides TAR archives of all collections that can be downloaded using the `/archive/{collection_id}` endpoint. The following cell uses the `client.download_archive` function to download the `ref_landcovernet_v1_labels` archive to the current working directory. ``` client.download_archive(collection_id, output_dir='./data') ```
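Once the archive has finished downloading, you will probably want to unpack it before working with the labels. The snippet below is a minimal sketch that uses only Python's standard `tarfile` and `pathlib` modules; the exact archive filename inside `./data` is an assumption, so adjust the glob pattern to whatever `download_archive` actually saved on your machine.

```python
import tarfile
from pathlib import Path

# Assumption: the archive was saved inside ./data with a name that starts
# with the collection ID; adjust the pattern if your filename differs.
archive_path = next(Path('./data').glob('ref_landcovernet_v1_labels*.tar*'))

# tarfile auto-detects gzip/bz2/xz compression in the default read mode
with tarfile.open(archive_path) as tar:
    tar.extractall(path='./data/ref_landcovernet_v1_labels')

# Quick sanity check: count the extracted files
n_files = sum(1 for p in Path('./data/ref_landcovernet_v1_labels').rglob('*') if p.is_file())
print(f'Extracted {n_files} files')
```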
# Awari - Data Science ## Project - Interactive Charts with Bokeh ## 1. Initial considerations Using the [microdata](http://portal.inep.gov.br/microdados) from ENEM, the goal of this notebook is for you to practice using the library on real datasets. By the end of this notebook, we expect you to be able to: - Turn your data into visualizations using Bokeh. - Customize and organize your visualizations. - Add interactivity to your visualizations. ### 1.1. Why Bokeh? Bokeh is a Python library for generating interactive charts that can be displayed in web browsers. Unlike the popular Matplotlib and Seaborn, Bokeh renders its charts using HTML and JavaScript. This characteristic is Bokeh's biggest differentiator. ### 1.2. Prepare your environment Bokeh is also available for R and Scala. Since our focus is Python, install the library on your system using the language's package manager: ``` $ pip install bokeh ``` Or, if you are using Anaconda: ``` $ conda install bokeh ``` ### 1.3. Dataset Our dataset will be the microdata from [ENEM 2018](http://download.inep.gov.br/microdados/microdados_enem2018.zip). In fact, we will use a smaller version of this dataset, since the original is too large for the purpose of this notebook - the original CSV file is over 3 GB. We will therefore work with the file [MICRODADOS_ENEM_2018.csv](MICRODADOS_ENEM_2018.csv), which represents a portion of the data and has already been preprocessed. #### 1.3.1. Data description In the file [MICRODADOS_ENEM_2018.csv](MICRODADOS_ENEM_2018.csv), you will find the following columns: - INSCRICAO: Registration number. - MUNICIPIO: Municipality of residence. - UF: Federative unit (state). - IDADE: Age. - SEXO: Sex. - COR_RACA: Color/race. - TIPO_ESCOLA: Type of school. 1 - did not answer, 2 - public, 3 - private and 4 - abroad. - NOTA_MT: Mathematics exam score - NOTA_CN: Natural Sciences exam score - NOTA_LC: Languages and Codes exam score - NOTA_CH: Human Sciences exam score - NOTA_REDACAO: Essay score ## 2. Procedure To create visualizations in Bokeh, you can follow these 6 steps: 1. **Prepare the data** - This step involves using other libraries such as Pandas or NumPy. 2. **Define the output** - Bokeh lets the user choose the output of the visualization. The options are HTML files, display inside the notebook, or a server. 3. **Set up your visualization** - From here on, you will assemble your chart, as if you were preparing a canvas. In this step you can customize everything, from the titles to the tick marks. 4. **Connect to and draw your data** - Next, you have the flexibility to draw your data from scratch, using the many marker and shape options available, all easily customizable. 5. **Organize the layout** - Bokeh not only offers the standard grid layout options, but also lets you arrange your visualizations in a tabbed layout in just a few lines of code. 6. **Preview and save your visualization** - Finally, it is time to see what you have created, whether in the browser, in the notebook, or on a server. ### 2.1. Prepare your data In the initial step, you start by importing the Pandas library and loading the data into a dataframe. ### __TASK 01__ 1. Import pandas 2. Read the file ['MICRODADOS_ENEM_2018.csv'](MICRODADOS_ENEM_2018.csv). 3. Display the first rows of the dataframe.
**HINT**: Use the *index_col=0* argument so that the *INSCRICAO* column becomes the dataframe index. ``` # Insert your answer here import pandas as pd df = pd.read_csv('MICRODADOS_ENEM_2018.csv', index_col='INSCRICAO') df.head() ``` ### 2.2. Define the output With your data in hand, it is now time to define where your visualization will be rendered. To keep things simple, we will work with an HTML file. ### __TASK 02__ 1. Import the Bokeh function that renders the output as HTML. 2. Save the visualization to the file *grafico_vazio.html*. 3. Set the title of this output to "Gráfico Vazio". **HINT**: Bokeh's output functions live inside *bokeh.io*. ``` # Insert your answer here from bokeh.io import output_file output_file("grafico_vazio.html", title='Gráfico Vazio') ``` ### 2.3. Set up your visualization It is time to assemble your figure. ### __TASK 03__ 1. Import the Bokeh class used to create a figure. 2. Instantiate an object called fig with your visualization. **HINT**: The object used to set up a visualization lives inside *bokeh.plotting*. ``` # Insert your answer here from bokeh.plotting import figure fig = figure() ``` From this point on, you can already inspect your visualization, even though no data has been plotted yet. ### __TASK 04__ 1. Combine the code from tasks 2 and 3. 2. Import the Bokeh function that displays a figure. 3. Plot an empty visualization. ``` # Insert your code here from bokeh.io import show show(fig) ``` A new tab with the title "Gráfico Vazio" will open in your browser to display your empty chart. Notice that the default output comes preloaded with a toolbar. This is an important preview of the interactive elements that Bokeh brings. ### __TASK 05__ 1. Plot the same structure from Task 04 in the notebook. ``` # Insert your code here from bokeh.io import output_notebook output_notebook() fig2 = figure() show(fig2) ``` When you ran the command, did you notice that two (empty) visualizations were rendered? One in a new tab, similar to Task 04, and another in the notebook itself. This happened because you defined a new output for the visualization, but Bokeh had not yet "forgotten" the previous one. This behavior is ideal when you want to render visualizations in several places - HTML, notebook and/or servers. In our case, if we keep defining new rendering targets with every new task, we will probably end up with a very cluttered notebook and browser. Bokeh allows the visualization output to be reset before each new rendering. ### __TASK 06__ 1. Import the function that resets the visualization output. 2. Run the function. **HINT**: The function that resets the visualization output lives inside *bokeh.plotting*. ``` # Insert your code here from bokeh.plotting import reset_output reset_output() ``` Remember, we still have not passed in our data to produce a complete visualization. So far, it is as if you were preparing the canvas you are going to paint on. Now that you know how to create a generic Bokeh figure in an HTML file or notebook, it is time to learn more about configuring the *figure()* object. We will work on this element, which provides many of the parameters that define the aesthetics of our visualizations. ### __TASK 07__ 1. Working with the *figure* element, plot a figure with the title "Gráfico Genérico". 2. Set the x-axis label to "Eixo X". 3. Set the y-axis label to "Eixo Y". 4. Place the title below the chart. 5. Display the result in the notebook.
**HINT**: All of these parameters are set at the moment you instantiate the *figure* object. ``` # Insert your code here output_notebook() fig = figure(title='Gráfico Genérico', x_axis_label='Eixo X', y_axis_label='Eixo Y', title_location='below') show(fig) ``` Besides changing the title and axis labels of the visualization, the figure object also has several other parameters for color, legend, size, value range, etc. ### 2.4. Connect to and draw your data The time to paint has finally arrived! With the *figure* object instantiated and configured, you can connect to it and draw your data. The most basic graphical element in Bokeh is the glyph. A glyph is a vector graphic shape or marker used to represent your data, such as a circle or a square. To keep the visualization simple, let's select only the first 10 registrations of our dataset and plot glyphs (points) representing our candidates' Mathematics scores (x axis) and essay scores (y axis). ``` x = df['NOTA_MT'].head(10).reset_index(drop=True) y = df['NOTA_REDACAO'].head(10).reset_index(drop=True) ``` ### __TASK 08__ 1. Create a figure with the title "Matemática X Redação" 2. Plot the glyphs using the x and y vectors 3. Set the output to the notebook **HINT**: Use the *figure.circle()* function to plot the glyphs. ``` # Insert your code here figura = figure(title='Matemática x Redação') figura.circle(x, y) output_notebook() show(figura) ``` That was easy, wasn't it? With the figure object instantiated, you "connect" to it and hand over your data in a very simple way. Above, we used the *circle()* method, but Bokeh has several other elements. There are also several categories of glyphs; you may want to take a look at the [documentation](https://bokeh.pydata.org/en/latest/docs/user_guide/plotting.html). Let's explore our dataset a little more and check the distribution of registrations by state, counting how many registrations exist per *UF*. So that you can focus on the Bokeh workflow, the data has already been prepared: ``` contagem_x = list(df['UF'].value_counts().sort_index().index) contagem_y = list(df['UF'].value_counts().sort_index().values) ``` ### __TASK 09__ 1. Create a figure with the title "Quantidade de Inscritos por UF" 2. Plot the x and y vectors as a bar chart. 3. Set the x-axis label to "UF". 4. Set the y-axis label to "Inscritos". 5. Set the output to the notebook - **HINT 01**: Since the x variable is categorical, use the *x_range* argument when creating the figure. - **HINT 02**: The bar chart is created with the *figure.vbar()* function. ``` # Insert your code here x_range = [str(c) for c in range(len(contagem_x))] figura2 = figure(title='Quantidade de Inscritos por UF', x_axis_label='UF', y_axis_label='Inscritos', x_range=[str(c) for c in range(len(contagem_x))]) figura2.vbar(x=x_range, top=contagem_y, width=0.9) output_notebook() show(figura2) ``` Take the opportunity to experiment with the buttons on the right-hand side of the visualization. Zooming, saving and resetting are some of the options Bokeh offers by default. ## 3. Conclusion Bokeh still has a wide range of customizations and options that were not explored in this notebook. I hope the concepts and the simple tasks presented here have given you enough motivation and foundation to keep exploring Bokeh. ### Awari - <a href="https://awari.com.br/"> awari.com.br</a>
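As a small appendix to the conclusion above: one of the interactive features not covered in the tasks is the hover tooltip. The sketch below is only an illustration (not part of the original exercises) of how the Task 08 scatter plot could show each candidate's scores on mouse-over; it assumes the `x` and `y` vectors from Task 08 are still in memory.

```python
# Illustrative sketch: hover tooltips for the Matemática x Redação scatter plot
from bokeh.io import output_notebook, show
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import figure

source = ColumnDataSource(data=dict(mt=x, redacao=y))  # x, y from Task 08

fig_hover = figure(title='Matemática x Redação (com tooltip)')
fig_hover.circle(x='mt', y='redacao', source=source, size=8)
fig_hover.add_tools(HoverTool(tooltips=[('Matemática', '@mt'), ('Redação', '@redacao')]))

output_notebook()
show(fig_hover)
```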
# Loading Image Data So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images: <img src='assets/dog_cat.png'> We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ``` The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so: ```python dataset = datasets.ImageFolder('path/to/data', transform=transform) ``` where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so: ``` root/dog/xxx.png root/dog/xxy.png root/dog/xxz.png root/cat/123.png root/cat/nsdf3.png root/cat/asd932_.png ``` where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. ### Transforms When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor: ```python transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) ``` There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). ### Data Loaders With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch. 
```python dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) ``` Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`. ```python # Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batch images, labels = next(iter(dataloader)) ``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ``` data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ``` If you loaded the data correctly, you should see something like this (your image will be different): <img src='assets/cat_cropped.png' width=244> ## Data Augmentation A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale, and crop, then flip your images, you would define your transforms like this: ```python train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) ``` You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and a list of standard deviations, then the color channels are normalized like so ```input[channel] = (input[channel] - mean[channel]) / std[channel]``` Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn. You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop. >**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ``` data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ``` Your transformed images should look something like this.
<center>Training examples:</center> <img src='assets/train_examples.png' width=500px> <center>Testing examples:</center> <img src='assets/test_examples.png' width=500px> At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a much higher resolution (so far you've seen 28x28 images, which are tiny). In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ``` # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ```
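For reference, here is one possible way to fill in the two exercises above. It is only a sketch of a working solution, not the official answer: the augmentation choices and crop sizes are reasonable defaults you can tweak, and normalization is left off as the exercise asks.

```python
import torch
from torchvision import datasets, transforms

data_dir = 'Cat_Dog_data'

# Training data: random augmentation, then convert to tensors
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

# Test data: deterministic resize + center crop only
test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)
```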
``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import difflib df = pd.read_json('../Data2.json') df.sort_index(inplace=True) df.head(2) df['field'].value_counts().head() df['field'].value_counts().count() df['field']=df['field'].str.lower() df['field'].value_counts().count() ``` **1307 types of fields** ``` for i in df.index: f=df.get_value(i,'field').strip() if "civil" in f: df.set_value(i,'field','civil') elif "wireless" in f: df.set_value(i,'field','wireless') elif ("computer sci" in f) | ("compter sci" in f) | ('ai'==f) | ('artificial' in f)| ('machine' in f)| ('robot' in f): df.set_value(i,'field','computer science') elif "account" in f: df.set_value(i,'field','accounting') elif ("electrical" in f )| ('electronic'in f): df.set_value(i,'field','electric') elif "mechanic" in f: df.set_value(i,'field','mechanic') elif "math" in f: df.set_value(i,'field','math') elif "law" in f: df.set_value(i,'field','law') elif ("software" in f) | ('computer' in f): df.set_value(i,'field','computer engineering') elif ("information t" in f) | ("it"==f) | (f=='ict')| ('it ' in f): df.set_value(i,'field','information technology') elif "information sec" in f: df.set_value(i,'field','information security') elif "economic" in f: df.set_value(i,'field','economics') elif "material" in f: df.set_value(i,'field','materials') elif "market" in f: df.set_value(i,'field','marketing') elif "construct" in f: df.set_value(i,'field','construction') elif "management eng" in f: df.set_value(i,'field','engineering management') elif "chemical" in f: df.set_value(i,'field','chemical engineering') elif "geology" in f: df.set_value(i,'field','geology') elif "power" in f: if "elec" in f: df.set_value(i,'field','electric') else: df.set_value(i,'field','power engineering') elif "project" in f: df.set_value(i,'field','project management') elif "physic" in f: df.set_value(i,'field','physics') elif "psyc" in f: df.set_value(i,'field','psychology') elif "media" in f: df.set_value(i,'field','media informatics') elif "business" in f: df.set_value(i,'field','mba') elif "finance" in f: df.set_value(i,'field','finance') elif ("english" in f) | ('tesol' in f): df.set_value(i,'field','english literature') elif "urban" in f: df.set_value(i,'field','urban planning') elif "network" in f: df.set_value(i,'field','networks and distributed systems') elif "biomedic" in f: df.set_value(i,'field','biomedical engineering') elif "water" in f: df.set_value(i,'field','water management') elif "telecommunication" in f: df.set_value(i,'field','telecommunication engineering') elif "industrial" in f: df.set_value(i,'field','industrial engineering') elif "statis" in f: df.set_value(i,'field','statistics') elif "information sys" in f: df.set_value(i,'field','information systems') elif "chemist" in f: df.set_value(i,'field','chemistry') elif "chem." in f: df.set_value(i,'field','chemical engineering') elif "mining" in f: df.set_value(i,'field','mining') print(str(df['field'].value_counts().count())+' types') head=df['field'].value_counts() head[head>5] head=head[head>5].index ``` ### Using difflib Library ### difflib.get_close_matches(word, possibilities[, n][, cutoff]) ### cutoff (default 0.6) ** Testing Library ** ``` difflib.get_close_matches('wireless communications engineering',head) ``` ** ! 
Wow, an error with the default CUTOFF! ** ``` difflib.get_close_matches('wireless communications engineering',head,cutoff=0.7) ``` ** Better than the default ** ``` for i in df.index: f=df.get_value(i,'field') if f in head:continue f=f.replace(u")",'') f=f.replace(u"(",'') res=difflib.get_close_matches(f,head) if res==[]:continue df.set_value(i,'field',res[0]) new=df['field'].value_counts() new.count() print(str(new[new>4].count())+' main types') out=new[new<3] out.head() for i in df.index: f=df.get_value(i,'field') if f in out: df.set_value(i,'field','n/a') df['field'].value_counts().count() x=df.field.value_counts() pd.set_option('display.max_rows', len(x)) print(x) pd.reset_option('display.max_rows') groups=[ ["computer science","computer engineering","information technology","wireless","information systems","media informatics","information security","signal processing","telecommunication engineering","networks and distributed systems"],["mechanic"],["physics"],["math","statistics","operations research"],["chemical engineering","chemistry","biomedical engineering"],["civil","law","economics",""],["accounting","finance"],["geology","environmental engineering","petroleum engineering","water management","mining","aerospace engineering","hydraulic structures","materials"],["architectural engineering","architecture","structural engineering"],["industrial engineering"],["engineering management","mba","management","project management","urban planning"] ,["power engineering","electric","energy engineering"],["philosophy"],["psychology","neuroscience","sociology"], ["linguistics","english literature","applied linguistics"], ["marketing"],["education"],["automation"],["ece"]] groups[6] groups[8] for inx,d in df.iterrows(): for i,g in enumerate(groups): if (d['field']!='n/a') & (d['field'] in g): df.set_value(inx, 'fieldGroup', i) df.head() df.to_json('Data_Field.json',date_format='utf8') ```
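To make the role of the two optional arguments explicit, here is a tiny self-contained illustration of `difflib.get_close_matches` with made-up field names (not taken from the dataset): `n` caps how many matches are returned, and `cutoff` sets the minimum similarity ratio, so raising it returns fewer, closer matches and can drop a borderline match entirely.

```python
import difflib

# Made-up candidate list, for illustration only
candidates = ['computer science', 'computer engineering', 'civil', 'chemistry']

# n limits how many matches come back (default 3);
# cutoff in [0, 1] is the minimum similarity ratio (default 0.6)
print(difflib.get_close_matches('compter sciense', candidates, n=2, cutoff=0.6))
print(difflib.get_close_matches('compter sciense', candidates, n=2, cutoff=0.9))
```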
# Twisted Gaussian (banana) toy LogPDF This distribution (described [here](http://pints.readthedocs.io/en/latest/toy/twisted_gaussian_logpdf.html)) has a curved "banana" shape. The problem can be made more or less difficult by changing the "bananicity" parameter `b`. ``` import pints import pints.toy import numpy as np import matplotlib.pyplot as plt # Create log pdf log_pdf = pints.toy.TwistedGaussianLogPDF(dimension=2) # Contour plot of pdf levels = np.linspace(-50, -1, 20) x = np.linspace(-50, 50, 250) y = np.linspace(-100, 20, 250) X, Y = np.meshgrid(x, y) Z = [[log_pdf([i, j]) for i in x] for j in y] plt.contour(X, Y, Z, levels = levels) plt.show() ``` We can also sample independently from this toy LogPDF, and add that to the visualisation: ``` direct = log_pdf.sample(15000) plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2) plt.scatter(direct[:, 0], direct[:, 1], alpha=0.2) plt.xlim(-50, 50) plt.ylim(-100, 20) plt.show() ``` We now try to sample from the distribution with MCMC: ``` # Create an adaptive covariance MCMC routine x0 = np.random.uniform(-25, 25, size=(3, 2)) mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HaarioBardenetACMC) # Stop after 3000 iterations mcmc.set_max_iterations(3000) # Disable logging mcmc.set_log_to_screen(False) # Run! print('Running...') chains = mcmc.run() print('Done!') # Discard warm-up chains = [chain[1000:] for chain in chains] stacked = np.vstack(chains) plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2) plt.scatter(stacked[:, 0], stacked[:, 1], alpha=0.2) plt.xlim(-50, 50) plt.ylim(-100, 20) plt.show() ``` Now check how close the result is to the expected result, using the [Kullback-Leibler divergence](https://en.wikipedia.org/wiki/Kullback–Leibler_divergence), and compare this to the result from sampling directly. ``` print(log_pdf.kl_divergence(stacked)) print(log_pdf.kl_divergence(direct)) ``` Hamiltonian Monte Carlo fares much better on this curved density. ``` # Create a Hamiltonian Monte Carlo routine x0 = np.random.uniform(-25, 25, size=(3, 2)) sigma0 = [5, 5] mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HamiltonianMCMC, sigma0=sigma0) # Stop after 3000 iterations mcmc.set_max_iterations(3000) # Disable logging mcmc.set_log_to_screen(False) # Run! print('Running...') chains = mcmc.run() print('Done!') chains1 = [chain[1000:] for chain in chains] stacked = np.vstack(chains1) print(log_pdf.kl_divergence(stacked)) print(log_pdf.kl_divergence(direct)) ``` Visualising the path of a single HMC chain, we see that it moves naturally along contours, although it does occasionally suffer from divergent iterations (red dots) in the neck of the banana due to the varying posterior curvature throughout the domain. ``` divergent_transitions = mcmc.samplers()[0].divergent_iterations() plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2) plt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.5) plt.scatter(chains[0][divergent_transitions, 0], chains[0][divergent_transitions, 1], color='red') plt.xlim(-50, 50) plt.ylim(-100, 20) plt.show() ```
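Beyond the KL divergence, a quick way to compare the samplers is an autocorrelation-based effective sample size. The function below is a rough plain-NumPy sketch (it is not part of the pints API) using a simple initial-positive-sequence truncation; independent draws should give an ESS close to the number of samples, while a strongly autocorrelated chain gives a much smaller value.

```python
import numpy as np

def effective_sample_size(x):
    """Rough autocorrelation-based ESS for a 1-D chain (NumPy-only sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Autocorrelation at all lags, normalised by the lag-0 value
    acf = np.correlate(x, x, mode='full')[n - 1:]
    acf = acf / acf[0]
    # Sum autocorrelations until they first drop below zero
    tau = 1.0
    for rho in acf[1:]:
        if rho < 0:
            break
        tau += 2.0 * rho
    return n / tau

# Compare one HMC chain (first parameter) against independent draws
print('HMC chain ESS    :', effective_sample_size(chains[0][:, 0]))
print('Direct sample ESS:', effective_sample_size(direct[:3000, 0]))
```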
``` import pandas as pd import numpy as np pd.set_option('max_columns',300) import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 20}) from scipy.stats import hmean # Data Link: http://ocslab.hksecurity.net/Datasets/driving-dataset # maybe more data: https://www.kaggle.com/data/27093 obd_data = pd.read_csv('ten_drivers.csv') obd_data = obd_data.drop(['Time(s)'], axis=1) print(obd_data.shape) obd_data.head() OFFSET = 648 # it's the same for all drivers # plt.figure(figsize=(20,8)) # plt.scatter(np.arange(obd_data.shape[0]), obd_data.Engine_speed-648) # plt.show() # print(np.min(obd_data.Engine_speed[:50])) obd_data.Engine_speed = obd_data.Engine_speed - OFFSET ``` <h1> Data preparation </h1> ``` drivers = driver0, driver1, driver2, driver3, driver4, driver5, driver6, driver7, driver8, driver9 = [obd_data.loc[np.where(obd_data.Class == class_letter)[0]].drop('Class',axis=1) for class_letter in np.unique(obd_data.Class)] drive_times = [d.shape[0] for d in drivers] x = [np.arange(drive_time) for drive_time in drive_times] print("Driving seconds of each driver:", drive_times) ``` <h1> Driving skills feature extraction </h1> ``` def lin_interpol(x0, x1, y0, y1, x): return y0 + (y1-y0)/(x1-x0) * (x - x0) def calc_scores(driver): # Higher values imply worse driving behaviour! # 1. RPM (Q3) q3_rpm_norm = np.percentile(driver.Engine_speed, 75) / np.max(obd_data.Engine_speed) # 2. Steering wheel speed (Q3) q3_steering_speed_norm = np.percentile(driver.Steering_wheel_speed, 98)/np.max(obd_data.Steering_wheel_speed) # percentage of the sample that falls below this observation # 3. Steering wheel angle (std) std_wheel_angle_norm = np.std(driver.Steering_wheel_angle)/100 # percentage of the sample that falls below this observation # Scale to new range (because it distorted the final score results) x0 = 0.4762777 # min of all std_wheel_angle_norm x1 = 0.7427367 # max of all std_wheel_angle_norm y0 = 0.2222 # new min y1 = 0.3 # new max std_wheel_angle_norm_as_score = lin_interpol(x0, x1, y0, y1, std_wheel_angle_norm) # 4. Vehicle speed: (threshold) LIMIT = 90 threshold_speed_bool = np.sum(driver.Vehicle_speed > LIMIT) > 10 speed_bool_as_score = np.round(threshold_speed_bool * 0.1 + 0.2,2) # 5. Acceleration speed - Longitudinal: (Q3) q3_acc_speed_long_norm_as_score = np.percentile(np.abs(driver['Acceleration_speed_-_Longitudinal']), 80)/np.max(obd_data['Acceleration_speed_-_Longitudinal'])*3 # percentage of the sample that falls below this observation # 6. Throttle position: (Q3) q3_throttle_pos_norm = np.percentile(np.abs(driver['Absolute_throttle_position']), 80)/np.max(obd_data.Absolute_throttle_position) # percentage of the sample that falls below this observation # 7. Fuel consumption: (Q3) q3_fuel_cons_norm = np.percentile(driver.Fuel_consumption/np.max(obd_data.Fuel_consumption), 80) return q3_rpm_norm, q3_steering_speed_norm, std_wheel_angle_norm_as_score, speed_bool_as_score, q3_acc_speed_long_norm_as_score, q3_throttle_pos_norm, q3_fuel_cons_norm final_scores = [] for driver in drivers: score_reversed = calc_scores(driver) # Higher values imply worse driving behaviour! scores = [1-score for score in score_reversed] final_score = hmean(scores) # Higher values imply better driving behaviour!
final_scores.append(final_score) final_score_better_range = lin_interpol(np.min(final_scores), np.max(final_scores), 0.5, 0.95, final_scores) print(final_score_better_range) plt.figure(figsize=(10,5)) plt.bar(np.arange(final_score_better_range.shape[0]), final_score_better_range, tick_label=np.arange(final_score_better_range.shape[0])+1) plt.xlabel("Driver") plt.ylabel("Driving Score") plt.savefig('Final_Driving_Scores.png') plt.show() ```
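A quick note on the choice of `hmean` for the final score: unlike the arithmetic mean, the harmonic mean is pulled down sharply by any single low sub-score, so one bad driving habit cannot be fully compensated by good behaviour elsewhere. The toy numbers below are made up purely to illustrate this property.

```python
import numpy as np
from scipy.stats import hmean

# Two fictional drivers with seven sub-scores each (higher = better)
balanced_driver = np.array([0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8])
one_bad_habit   = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.2])

# Both have the same arithmetic mean (0.8), but the harmonic mean of the
# second driver drops to 0.6 because of the single low sub-score
print('arithmetic means:', balanced_driver.mean(), one_bad_habit.mean())
print('harmonic means  :', hmean(balanced_driver), hmean(one_bad_habit))
```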
# Personalized Medicine Kaggle Competition > "This was my approach to the Personalized Healthcare Redefining Cancer Treatment Kaggle competition. The goal of the competition was to create a machine learning algorithm that can classify genetic variations that are present in cancer cells." - toc:true - branch: master - badges: true - comments: false - author: Dario Arcos-Díaz - categories: [machine_learning, classification, healthcare] - image: images/Kaggle_logo.png This notebook describes my approach to the [Kaggle competition](https://www.kaggle.com/c/msk-redefining-cancer-treatment) named in the title. This was a research competition at Kaggle in cooperation with the Memorial Sloan Kettering Cancer Center (MSKCC). The goal of the competition was to create a machine learning algorithm that can classify genetic variations that are present in cancer cells. Tumors contain cells with many different abnormal mutations in their DNA: some of these mutations are the drivers of tumor growth, whereas others are neutral and considered *passengers*. Normally, mutations are manually classified into different categories after literature review by clinicians. The dataset made available for this competition contains mutations that have been manually annotated into 9 different categories. The goal is to predict the correct category of mutations in the test set. The model and submission described here got me to 140th place (out of 1,386 teams), or the top 11%. ## Data The data comes in two different kinds of files: one of them contains information about the genetic variants (*training_variants* and *stage2_test_variants.csv*) and the other contains the text (clinical evidence) that was used to manually classify the variants (*training_text* and *stage2_test_text.csv*). The training data contains a class target feature corresponding to one of the 9 categories that variants can be classified as. *Note: the "stage2" prefix of the test files is due to the nature of the competition. There was an initial test set that was used at the beginning of the competition and a "stage2" test set that was used in the final week before the deadline to make the submissions.* ``` import os import re import string import pandas as pd import numpy as np train_variant = pd.read_csv("input/training_variants") test_variant = pd.read_csv("input/stage2_test_variants.csv") train_text = pd.read_csv("input/training_text", sep="\|\|", engine='python', header=None, skiprows=1, names=["ID","Text"]) test_text = pd.read_csv("input/stage2_test_text.csv", header=None, skiprows=1, names=["ID", "Text"]) train = pd.merge(train_variant, train_text, how='left', on='ID') train_y = train['Class'].values train_x = train.drop('Class', axis=1) train_size=len(train_x) print('Number of training variants: %d' % (train_size)) # number of train data : 3321 test_x = pd.merge(test_variant, test_text, how='left', on='ID') test_size=len(test_x) print('Number of test variants: %d' % (test_size)) # number of test data : 5668 test_index = test_x['ID'].values all_data = np.concatenate((train_x, test_x), axis=0) all_data = pd.DataFrame(all_data) all_data.columns = ["ID", "Gene", "Variation", "Text"] all_data.head() ``` The data from the different train and test files is now consolidated into one single file. This is necessary for the correct vectorization of the text data and categorical data later on. We can see that the text information resembles scientific article text. We will process this consolidated file in the next step.
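Before moving on to preprocessing, a quick optional check that is not part of the original write-up but is cheap to run: how the nine classes are distributed in the training labels. The classes are unlikely to be evenly represented, which is worth keeping in mind when judging the model later.

```python
import pandas as pd

# Class distribution of the training labels (train_y was created above)
class_counts = pd.Series(train_y).value_counts().sort_index()
print(class_counts)
print('Share of the most common class: {:.1%}'.format(class_counts.max() / class_counts.sum()))
```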
## Preprocessing In order to be able to use this data to train a machine learning model, we need to extract the features from the dataset. This means that we have to transform the text data into vectors that can be understood by an algorithm. As I am not an expert in Natural Language Processing, I applied a modified version of [this script published on Kaggle.](https://www.kaggle.com/alyosama/doc2vec-with-keras-0-77) Afterwards we will have the data in a form that I can use to train a neural network. ``` # Pre-processing script by Aly Osama https://www.kaggle.com/alyosama/doc2vec-with-keras-0-77 from nltk.corpus import stopwords from gensim.models.doc2vec import LabeledSentence from gensim import utils def constructLabeledSentences(data): sentences=[] for index, row in data.iteritems(): sentences.append(LabeledSentence(utils.to_unicode(row).split(), ['Text' + '_%s' % str(index)])) return sentences def textClean(text): text = re.sub(r"[^A-Za-z0-9^,!.\/'+-=]", " ", str(text)) text = text.lower().split() stops = set(stopwords.words("english")) text = [w for w in text if not w in stops] text = " ".join(text) return(text) def cleanup(text): text = textClean(text) text= text.translate(str.maketrans("","", string.punctuation)) return text allText = all_data['Text'].apply(cleanup) sentences = constructLabeledSentences(allText) allText.head() # Pre-processing script by Aly Osama https://www.kaggle.com/alyosama/doc2vec-with-keras-0-77 # PROCESS TEXT DATA from gensim.models import Doc2Vec Text_INPUT_DIM=300 text_model=None filename='docEmbeddings_5_clean.d2v' if os.path.isfile(filename): text_model = Doc2Vec.load(filename) else: text_model = Doc2Vec(min_count=1, window=5, size=Text_INPUT_DIM, sample=1e-4, negative=5, workers=4, iter=5,seed=1) text_model.build_vocab(sentences) text_model.train(sentences, total_examples=text_model.corpus_count, epochs=text_model.iter) text_model.save(filename) text_train_arrays = np.zeros((train_size, Text_INPUT_DIM)) text_test_arrays = np.zeros((test_size, Text_INPUT_DIM)) for i in range(train_size): text_train_arrays[i] = text_model.docvecs['Text_'+str(i)] j=0 for i in range(train_size,train_size+test_size): text_test_arrays[j] = text_model.docvecs['Text_'+str(i)] j=j+1 print(text_train_arrays[0][:10]) # PROCESS GENE DATA from sklearn.decomposition import TruncatedSVD Gene_INPUT_DIM=25 svd = TruncatedSVD(n_components=25, n_iter=Gene_INPUT_DIM, random_state=12) one_hot_gene = pd.get_dummies(all_data['Gene']) truncated_one_hot_gene = svd.fit_transform(one_hot_gene.values) one_hot_variation = pd.get_dummies(all_data['Variation']) truncated_one_hot_variation = svd.fit_transform(one_hot_variation.values) # ENCODE THE LABELS FROM INTEGERS TO VECTORS from keras.utils import np_utils from sklearn.preprocessing import LabelEncoder label_encoder = LabelEncoder() label_encoder.fit(train_y) encoded_y = np_utils.to_categorical((label_encoder.transform(train_y))) print(encoded_y[0]) ``` We have processed the train labels, as printed above (`encoded_y`), into vectors that contain 1 in the index of the category that the sample belongs to, and zeros in all other indexes. 
Moreover, the training and test sets are now stacked together to look like this: ``` train_set=np.hstack((truncated_one_hot_gene[:train_size],truncated_one_hot_variation[:train_size],text_train_arrays)) test_set=np.hstack((truncated_one_hot_gene[train_size:],truncated_one_hot_variation[train_size:],text_test_arrays)) print('Training set shape is: ', train_set.shape) # (3321, 350) print('Test set shape is: ', test_set.shape) # (5668, 350) print('Training set example rows:') print(train_set[0][:10]) # [ -2.46065582e-23 -5.21548048e-19 -1.95048372e-20 -2.44542833e-22 # -1.19176742e-22 1.61985461e-25 2.93618862e-25 -6.23860891e-27 # 1.14583929e-28 -1.79996588e-29] print('Test set example rows:') print(test_set[0][:10]) # [ 9.74220189e-33 -1.31484613e-27 4.37925347e-27 -9.88109317e-29 # 7.66365772e-27 6.58254980e-26 -3.74901712e-26 -8.97613299e-26 # -3.75471102e-23 -1.05563623e-21] ``` Our data is now ready to be fed into a machine learning model, in this case, into a neural network in TensorFlow. ## Training a 4-layer neural network for classification The next step is to create a neural network in TensorFlow. I am using a fully-connected neural network with 4 layers. For details on how the network is built, you can check my [TensorFlow MNIST notebook](https://github.com/dariodata/TensorFlow-MNIST/blob/master/TensorFlow-MNIST.ipynb). Wherever necessary, I will explain what adaptations were specifically needed for this challenge. ``` import math import time import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf from sklearn.model_selection import train_test_split from tensorflow.python.framework import ops %matplotlib inline np.random.seed(1) ``` I found it useful to add the current timestamp to the name of the files that the code will output. This helped me to uniquely identify the results from each run. ``` timestr = time.strftime("%Y%m%d-%H%M%S") dirname = 'output/' # output directory filename = '' ``` I select 20% of the training data to use as a validation set and be able to quantify my variance (watch out for overfitting), as I don't want to have an algorithm that only works well with this specific training data set that was provided, but one that generalizes as well as possible. ``` # split data into training and validation sets X_train, X_val, Y_train, Y_val = train_test_split(train_set, encoded_y, test_size=0.20, random_state=42) X_train, X_val, Y_train, Y_val = X_train.T, X_val.T, Y_train.T, Y_val.T # transpose test set X_test = test_set.T # view data set shapes print('X_train: ', X_train.shape) print('X_val: ', X_val.shape) print('Y_train: ', Y_train.shape) print('Y_val: ', Y_val.shape) print('X_test: ', X_test.shape) ``` Now I define the functions needed to build the neural network. ``` def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, dimensions of the input n_y -- scalar, number of classes (from 0 to 8, so -> 9) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "float" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float" """ X = tf.placeholder(tf.float32, shape=(n_x, None), name='X') Y = tf.placeholder(tf.float32, shape=(n_y, None), name='Y') return X, Y def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow.
Returns: parameters -- a dictionary of tensors containing W and b for every layer """ tf.set_random_seed(1) W1 = tf.get_variable('W1', [350, X_train.shape[0]], initializer=tf.contrib.layers.xavier_initializer(seed=1)) b1 = tf.get_variable('b1', [350, 1], initializer=tf.zeros_initializer()) W2 = tf.get_variable('W2', [350, 350], initializer=tf.contrib.layers.xavier_initializer(seed=1)) b2 = tf.get_variable('b2', [350, 1], initializer=tf.zeros_initializer()) W3 = tf.get_variable('W3', [100, 350], initializer=tf.contrib.layers.xavier_initializer(seed=1)) b3 = tf.get_variable('b3', [100, 1], initializer=tf.zeros_initializer()) W4 = tf.get_variable('W4', [9, 100], initializer=tf.contrib.layers.xavier_initializer(seed=1)) b4 = tf.get_variable('b4', [9, 1], initializer=tf.zeros_initializer()) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3, "W4": W4, "b4": b4} return parameters def forward_propagation(X, parameters, keep_prob1, keep_prob2): """ Implements the forward propagation for the model: (LINEAR -> RELU)^3 -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W" and "b" for every layer the shapes are given in initialize_parameters Returns: Z4 -- the output of the last LINEAR unit (logits) """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] W4 = parameters['W4'] b4 = parameters['b4'] Z1 = tf.matmul(W1, X) + b1 # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) A1 = tf.nn.dropout(A1, keep_prob1) # add dropout Z2 = tf.matmul(W2, A1) + b2 # Z2 = np.dot(W2, a1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) A2 = tf.nn.dropout(A2, keep_prob2) # add dropout Z3 = tf.matmul(W3, A2) + b3 # Z3 = np.dot(W3,Z2) + b3 A3 = tf.nn.relu(Z3) Z4 = tf.matmul(W4, A3) + b4 return Z4 def compute_cost(Z4, Y): """ Computes the cost Arguments: Z4 -- output of forward propagation (output of the last LINEAR unit), of shape (n_classes, number of examples) Y -- "true" labels vector placeholder, same shape as Z4 Returns: cost - Tensor of the cost function """ # transpose to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z4) labels = tf.transpose(Y) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) return cost def random_mini_batches(X, Y, mini_batch_size, seed=0): """ Creates a list of random minibatches from (X, Y) Arguments: X -- input data, of shape (input size, number of examples) Y -- true "label" vector, of shape (1, number of examples) mini_batch_size - size of the mini-batches, integer seed Returns: mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y) """ m = X.shape[1] # number of training examples mini_batches = [] np.random.seed(seed) # Step 1: Shuffle (X, Y) permutation = list(np.random.permutation(m)) shuffled_X = X[:, permutation] shuffled_Y = Y[:, permutation].reshape((Y.shape[0], m)) # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case. 
num_complete_minibatches = math.floor( m / mini_batch_size) # number of mini batches of size mini_batch_size in your partitioning for k in range(0, num_complete_minibatches): mini_batch_X = shuffled_X[:, k * mini_batch_size: k * mini_batch_size + mini_batch_size] mini_batch_Y = shuffled_Y[:, k * mini_batch_size: k * mini_batch_size + mini_batch_size] mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) # Handling the end case (last mini-batch < mini_batch_size) if m % mini_batch_size != 0: mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size: m] mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size: m] mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) return mini_batches def predict(X, parameters): W1 = tf.convert_to_tensor(parameters['W1']) b1 = tf.convert_to_tensor(parameters["b1"]) W2 = tf.convert_to_tensor(parameters["W2"]) b2 = tf.convert_to_tensor(parameters["b2"]) W3 = tf.convert_to_tensor(parameters["W3"]) b3 = tf.convert_to_tensor(parameters["b3"]) W4 = tf.convert_to_tensor(parameters["W4"]) b4 = tf.convert_to_tensor(parameters["b4"]) params = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3, "W4": W4, "b4": b4} x = tf.placeholder("float", [X_train.shape[0], None]) keep_prob1 = tf.placeholder(tf.float32, name='keep_prob1') keep_prob2 = tf.placeholder(tf.float32, name='keep_prob2') z4 = forward_propagation(x, params, keep_prob1, keep_prob2) p = tf.nn.softmax(z4, dim=0) # dim=0 because the classes are on that axis # p = tf.argmax(z4) # this gives only the predicted class as output sess = tf.Session() prediction = sess.run(p, feed_dict={x: X, keep_prob1: 1.0, keep_prob2: 1.0}) return prediction ``` And now I define the model function which is in fact the neural network that we will train afterwards. An important difference with respect to [my previous MNIST example](https://github.com/dariodata/TensorFlow-MNIST/blob/master/TensorFlow-MNIST.ipynb) is that I added an additional regularization term to the cost function. I used L2 regularization to penalize the weights in all four layers. The bias was not penalized as this is not necessary. The strictness of this penalty was given by a `beta` constant defined at 0.01. Why use additional regularization? Because this allowed me to decrease the variance, i.e. decrease the difference in performance of the model with the training set compared to the validation set. This produced my best submission in the competition. ``` def model(X_train, Y_train, X_test, Y_test, learning_rate=0.0001, num_epochs=1000, minibatch_size=64, print_cost=True): """ Implements a four-layer tensorflow neural network: (LINEAR->RELU)^3->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size, number of training examples) Y_train -- test set, of shape (output size, number of training examples) X_test -- training set, of shape (input size, number of training examples) Y_test -- test set, of shape (output size, number of test examples) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict. 
""" ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost t0 = time.time() # to mark the start of the training # Create Placeholders of shape (n_x, n_y) X, Y = create_placeholders(n_x, n_y) keep_prob1 = tf.placeholder(tf.float32, name='keep_prob1') # probability to keep a unit during dropout keep_prob2 = tf.placeholder(tf.float32, name='keep_prob2') # Initialize parameters parameters = initialize_parameters() # Forward propagation Z4 = forward_propagation(X, parameters, keep_prob1, keep_prob2) # Cost function cost = compute_cost(Z4, Y) regularizers = tf.nn.l2_loss(parameters['W1']) + tf.nn.l2_loss(parameters['W2']) + tf.nn.l2_loss(parameters['W3']) \ + tf.nn.l2_loss(parameters['W4']) # add regularization term beta = 0.01 # regularization constant cost = tf.reduce_mean(cost + beta * regularizers) # cost with regularization # Backpropagation: Define the tensorflow AdamOptimizer. optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # Run the session to execute the "optimizer" and the "cost" _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y, keep_prob1: 0.7, keep_prob2: 0.5}) epoch_cost += minibatch_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 100 == 0: print("Cost after epoch {}: {:f}".format(epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # lets save the parameters in a variable parameters = sess.run(parameters) print("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z4), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) train_cost = cost.eval({X: X_train, Y: Y_train, keep_prob1: 1.0, keep_prob2: 1.0}) test_cost = cost.eval({X: X_test, Y: Y_test, keep_prob1: 1.0, keep_prob2: 1.0}) train_accuracy = accuracy.eval({X: X_train, Y: Y_train, keep_prob1: 1.0, keep_prob2: 1.0}) test_accuracy = accuracy.eval({X: X_test, Y: Y_test, keep_prob1: 1.0, keep_prob2: 1.0}) print('Finished training in %s s' % (time.time() - t0)) print("Train Cost:", train_cost) print("Test Cost:", test_cost) print("Train Accuracy:", train_accuracy) print("Test Accuracy:", test_accuracy) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per fives)') plt.title("Learning rate = {}, beta = {},\n" "test cost = {:.6f}, test accuracy = {:.6f}".format(learning_rate, beta, test_cost, test_accuracy)) global filename filename = timestr + '_NN4Lstage2_lr_{}_beta_{}_cost_{:.2f}-{:.2f}_acc_{:.2f}-{:.2f}'.format( learning_rate, beta, train_cost, test_cost, train_accuracy, 
                                                                     test_accuracy)
        plt.savefig(dirname + filename + '.png')

        return parameters
```

Note that the model function returns the learned parameters from the network and additionally plots the cost after each epoch. The plot is also saved to a file whose name includes the timestamp as well as the learning rate, beta, cost and accuracy of this particular run.

Now it's time to train the model using the training and validation data:

```
# train the model and get learned parameters
parameters = model(X_train, Y_train, X_val, Y_val)
```

From my validation results we can observe that the network learned nicely. However, the final cost on the training data was 0.665462, whereas the validation data had a final cost of 1.74987. This is a large difference and an indication that the model is overfitting. Moreover, the accuracy (defined here as the fraction of correct predictions) is very high (97.9%) for the training data and only 64.3% for the validation set. This is another indication that the model is overfitting, even though I used both dropout and L2 regularization to counteract it.

## Make predictions

We use the learned parameters to make predictions on the test data.

```
# use learned parameters to make prediction on test data
prediction = predict(X_test, parameters)
```

Let's look at an example of a prediction. As we can see below, the prediction consists of the probabilities of the entry belonging to each of the nine categories (this is the format required for this competition).

```
prediction[:,0]
prediction.shape
```

All we have to do now is create a submission .csv file to save our prediction results.

```
# create submission file
submission = pd.DataFrame(prediction.T)
submission['id'] = test_index
submission.columns = ['class1', 'class2', 'class3', 'class4', 'class5', 'class6', 'class7', 'class8', 'class9', 'id']
submission.to_csv(dirname + filename + '.csv', index=False)
```

## Results interpretation

Using this neural network model, my submission to Kaggle yielded the following results:

- Public score (based on the portion of the test data that Kaggle uses to give an indication of performance during the competition): Loss = 1.69148
- Private score (based on a different portion of the test data that Kaggle uses to compute the final score at the end of the competition): Loss = 2.74500

The discrepancy between these two scores further shows that overfitting is an issue when working with this data in a neural network model. My model could benefit from more training data and stronger regularization.
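Since the leaderboard scores above are reported as a loss on predicted class probabilities, it helps to compute a comparable log-loss locally on the validation split before submitting. The snippet below is a small sketch of how that could look; it is my own addition rather than part of the original pipeline, and it assumes the `predict` function and the `X_val`/`Y_val` variables defined above are still in scope.

```
# Sketch (not part of the original notebook): local log-loss estimate on the validation split.
# predict() returns class probabilities of shape (9, m); Y_val is one-hot with shape (9, m).
from sklearn.metrics import log_loss

val_probs = predict(X_val, parameters)               # probabilities, shape (9, m)
y_true = np.argmax(Y_val, axis=0)                    # integer labels, shape (m,)
local_loss = log_loss(y_true, val_probs.T, labels=list(range(9)))
print('Local validation log loss: %.5f' % local_loss)
```

A local value that sits far below the public score would again point to overfitting rather than to a submission-format problem.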
# Roots: Bracketing Methods

Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, a root has effectively been found. They generally rely on the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the end points of an interval, then the function has at least one root in that interval. Therefore, they require starting with an interval on which the function takes opposite signs at the end points. In the case of polynomials, however, there are other methods for getting information on the number of roots in an interval. These lead to efficient algorithms for real-root isolation of polynomials, which ensure finding all real roots with a guaranteed accuracy.

## GRAPHICAL METHODS

A simple method for obtaining an estimate of the root of the equation $f(x) = 0$ is to make a plot of the function and observe where it crosses the x axis.

Given this function

$$f(m) = \sqrt{\frac{gm}{c_d}}\tanh\left(\sqrt{\frac{gc_d}{m}}t\right) - v(t)$$

we need to find the value of the mass $m$ given the conditions below (the values of $g$, $c_d$, $v$ and $t$).

```
import numpy as np
import scipy as sc
import matplotlib.pyplot as plt

# initial conditions
cd = 0.25
g = 9.81
v = 30
t = 5

x = np.linspace(20,50,100)
y = np.sqrt(g*x/cd)*np.tanh(np.sqrt(g*cd/x)*t) - v

# Plot
plt.plot(x,y)
plt.grid(color='k', linestyle='--', linewidth=1)
```

The function crosses the m axis between 25 and 30 kg. Visual inspection of the plot provides a rough estimate of the root at about 28 kg.

Assuming that the mass is 28 kg, let's check the resulting velocity:

```
mass = 28
v_est = np.sqrt(g*mass/cd)*np.tanh(np.sqrt(g*cd/mass)*t)
v_est
```

The result, 29.8795, is not exactly 30, but it is close enough for now. Graphical techniques are of limited practical value because they are not very precise. However, they can be used to obtain rough estimates of roots, and these estimates can be employed as starting guesses for numerical methods.

## BRACKETING METHODS AND INITIAL GUESSES

If you had a roots problem in the days before computing, you'd often be told to use "trial and error" to come up with the root. For many problems, though, it is preferable to have methods that come up with the correct answer automatically. Interestingly, as with trial and error, these approaches still require an initial "guess" to get started.

### Incremental Search

By Bolzano's theorem, if $f:[a,b]\to \Re$, $y = f(x)$, is continuous on the interval from $a$ to $b$ and $f(a)$ and $f(b)$ have opposite signs, that is $f(a)\cdot f(b) < 0$, then there is at least one real root in $[a,b]$.

Incremental search methods capitalize on this observation by locating an interval where the function changes sign.

A problem with an incremental search is the choice of the increment length. If the length is too small, the search can be very time consuming. On the other hand, if the length is too great, there is a possibility that closely spaced roots might be missed.
The problem is compounded by the possible existence of multiple roots.

Identify the brackets within the interval $[3,6]$ for the function $f(x) = \sin(10x) + \cos(3x)$.

```
def inc_search(func, x_min, x_max, ns):
    """
    inc_search: incremental search root locator.

    Finds brackets of x that contain sign changes of a function on an interval.

    input:
    func = function to evaluate
    x_min, x_max = endpoints of the interval
    ns = number of subintervals

    output:
    xb = list of [lower bound, upper bound] pairs, one per sign change;
         if no brackets are found, xb = []
    """
    # incremental search
    x = np.linspace(x_min, x_max, ns)
    f = func(x)

    nb = 0
    xb = []

    for i in range(0, len(x) - 1):
        # a sign change between consecutive points brackets at least one root
        if np.sign(f[i]) != np.sign(f[i + 1]):
            nb += 1
            xb.append([x[i], x[i + 1]])

    if not xb:
        print("No brackets found")
        print("Check interval or increase number of intervals")
    else:
        print("The number of brackets is: " + str(nb))
    return xb

inc_search(lambda x: np.sin(10*x) + np.cos(3*x), 3, 6, 50)
```
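Each bracket returned by `inc_search` can then be refined with a simple bracketing root finder such as bisection. The sketch below is my own addition (it is not part of the original notebook); the helper name `bisect_root` and the tolerance value are assumptions.

```
def bisect_root(func, a, b, tol=1e-8, max_iter=100):
    """Refine a single bracket [a, b] with the bisection method.

    Assumes func(a) and func(b) have opposite signs.
    """
    fa = func(a)
    for _ in range(max_iter):
        c = 0.5 * (a + b)
        fc = func(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        # keep the half of the interval where the sign change occurs
        if np.sign(fa) != np.sign(fc):
            b = c
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

f = lambda x: np.sin(10*x) + np.cos(3*x)
roots = [bisect_root(f, a, b) for a, b in inc_search(f, 3, 6, 50)]
print(roots)
```

Alternatively, each bracket could be passed to `scipy.optimize.brentq`, since `scipy` is already imported above.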
``` from discopy import Ty, Id, Box, Diagram, Word # POS TAGS: s, n, np, adj, tv, iv, vp, rpron = Ty('S'), Ty('N'), Ty('NP'), Ty('ADJ'), Ty('TV'), Ty('IV'), Ty('VP'), Ty('RPRON') # The CFG's production rules are boxes. R0 = Box('R0', np @ vp, s) R1 = Box('R1', tv @ np , vp) R2 = Box('R2', adj @ n, np) R3 = Box('R3', iv, vp) R4 = Box('R4', n, np) R5 = Box('R5', n @ rpron @ vp, np) prods = [R0, R1, R3, R4, R5] # WORDS: nouns = ['Bojack', 'Diane', 'Eve', 'Fiona'] tverbs = ['loves', 'kills'] iverbs = ['sleeps', 'dies'] adjs = [] rprons = ['who'] vocab = [Word(x, n) for x in nouns] + [Word(x, tv) for x in tverbs] + [Word(x, adj) for x in adjs] +\ [Word(x, iv) for x in iverbs] + [Word(x, rpron) for x in rprons] from discopy.grammar import CFG # Language generation from a CFG: productions = prods + vocab cfg = CFG(*productions) gen = cfg.generate(s, 30, 12, max_iter=1000, remove_duplicates=True) sentences = [] for sentence in gen: sentences += [sentence] print('Example of a CFG tree:') sentences[12].draw(aspect='auto') from discopy.rigid import Cup, Cap, Functor # From POS tags to Pregroup types: ob = {n : n, s: s, adj: n @ n.l, tv: n.r @ s @ n.l, iv: n.r @ s, vp: n.r @ s, np: n, rpron: n.r @ n @ s.l @ n} # From CFG rules to Pregroup reductions: ar = {R0: Cup(n, n.r) @ Id(s), R1: Id(n.r @ s) @ Cup(n.l, n), R2: Id(n) @ Cup(n.l, n), R3 : Id( n.r @ s), R4: Id(n), R5: Cup(n, n.r) @ Id(n) @ Diagram.cups(s.l @ n, n.r @ s)} # Obtain pregroup dictionnary: new_vocab = [Word(x.name, ob[x.cod]) for x in vocab] arx = {vocab[i]: new_vocab[i] for i in range(len(vocab))} ar.update(arx) T2P = Functor(ob, ar) from discopy.grammar import draw print('Corresponding pregroup reduction:') draw(T2P(sentences[12])) from discopy.circuit import Ket, IQPansatz, Euler, CircuitFunctor from discopy.circuit import sqrt, Circuit, H, Id, CZ, Perm from discopy import Quiver import numpy as np ob = {s: 1, n: 1} depth = 2 GHZ = sqrt(2) @ Ket(0, 0, 0) >> H @ H @ H >> CZ @ Id(1) >> Id(1) @ CZ >> H @ Id(1) @ H def who_ansatz(qubits_n, qubits_s): circ = GHZ for i in range(1, qubits_n): circ = circ @ GHZ circ = circ >> Perm([x + 3*y for x in range(3) for y in range(qubits_n)]) return circ >> Id(2*qubits_n) @ Ket(*[0 for i in range(qubits_s)]) @ Id(qubits_n) def arity(word): return sum(ob[Ty(ty.name)] for ty in word.cod) def ansatz(word): if arity(word) == 1: return Ket(0) >> Euler(np.random.rand(3)) elif word.name == 'who': return who_ansatz(ob[n], ob[s]) else: k = arity(word) return Ket(*tuple([0 for i in range(k)])) >> IQPansatz(k, np.random.rand(depth, k - 1)) F = CircuitFunctor(ob, Quiver(ansatz)) circuits = [F(T2P(s)) for s in sentences] print('Corresponding circuit:') circuits[12].draw(aspect='auto') # prepare tket circuits tket_circuits = [circ.to_tk().measure_all() for circ in circuits] print('Corresponding tket circuit:') print(tket_circuits[12]) # backend = ... evaluate = lambda sentence: F(T2P(sentence)).get_counts(backend, n_shots=10000).array for sentence in sentences: draw(T2P(sentence)) print(evaluate(sentence)) ```
### 1. Problem statement

- We are tasked by a Fintech firm to analyze mobile app behavior data in order to identify potential churn customers.
- The goal is to predict which users are likely to churn, so the firm can focus on re-engaging these users with better products.
- This notebook focuses on the exploratory data analysis (EDA).

### 2. Importing libraries

```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sn
```

Churn definition used here: users who were enrolled for 60 days and churn in the next 30 days.

### 3. Read data

```
dataset = pd.read_csv('app_churn_data.csv')
dataset.head()
dataset.columns
```

### 4. Review basic distribution

```
dataset.describe()
```

### 5. Data quality check

```
col_nan = dataset.columns[dataset.isna().any()].tolist()
col_nan
dataset[col_nan].isna().sum()
dataset[col_nan].isna().mean()
```

#### Remove the 4 rows where age is NaN

```
dataset = dataset[pd.notnull(dataset.age)]
```

#### Drop the credit_score and rewards_earned columns

```
dataset = dataset.drop(columns = ['credit_score', 'rewards_earned'])
dataset.isna().any()
```

### 6. Variable distributions

#### 6.1 Count histograms

```
## Histograms
dataset2 = dataset.drop(columns = ['user', 'churn'])
fig = plt.figure(figsize=(15, 12))
plt.suptitle('Histograms of Numerical Columns', fontsize=20)
for i in range(1, dataset2.shape[1] + 1):
    plt.subplot(6, 5, i)
    f = plt.gca()
    f.axes.get_yaxis().set_visible(False)
    f.set_title(dataset2.columns.values[i - 1])

    vals = np.size(dataset2.iloc[:, i - 1].unique())

    plt.hist(dataset2.iloc[:, i - 1], bins=vals, color='#3F5D7D')
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
```

#### 6.2 Pie charts

```
## Pie Plots
dataset2 = dataset[['housing', 'is_referred', 'app_downloaded',
                    'web_user', 'app_web_user', 'ios_user',
                    'android_user', 'registered_phones', 'payment_type',
                    'waiting_4_loan', 'cancelled_loan',
                    'received_loan', 'rejected_loan', 'zodiac_sign',
                    'left_for_two_month_plus', 'left_for_one_month', 'is_referred']]
fig = plt.figure(figsize=(15, 12))
plt.suptitle('Pie Chart Distributions', fontsize=20)
for i in range(1, dataset2.shape[1] + 1):
    plt.subplot(3, 6, i)
    f = plt.gca()
    f.axes.get_yaxis().set_visible(False)
    f.set_title(dataset2.columns.values[i - 1])

    values = dataset2.iloc[:, i - 1].value_counts(normalize = True).values
    index = dataset2.iloc[:, i - 1].value_counts(normalize = True).index
    plt.pie(values, labels = index, autopct='%1.1f%%')
    # do not show x and y axis
    plt.axis('equal')
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
```

### 7. Data balance analysis

The five most imbalanced variables with respect to the response variable are:

- waiting_4_loan
- cancelled_loan
- received_loan
- rejected_loan
- left_for_one_month

```
## Exploring Uneven Features
dataset[dataset2.waiting_4_loan == 1].churn.value_counts()
dataset[dataset2.cancelled_loan == 1].churn.value_counts()
dataset[dataset2.received_loan == 1].churn.value_counts()
dataset[dataset2.rejected_loan == 1].churn.value_counts()
dataset[dataset2.left_for_one_month == 1].churn.value_counts()
```

### 8. Correlation analysis
#### 8.1 Correlation with the response variable

```
dataset2.columns
dataset.drop(columns = ['user', 'churn', 'housing', 'payment_type',
                        'registered_phones', 'zodiac_sign']).corrwith(dataset.churn).plot.bar(figsize=(20,10),
              title = 'Correlation with Response Variable', fontsize = 12,
              rot = 30, grid = True)
```

#### 8.2 Correlation matrix between independent variables

```
## Correlation Matrix
sn.set(style="white")

# Compute the correlation matrix
corr = dataset.drop(columns = ['user', 'churn']).corr()

# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True

# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(15, 12))

# Generate a custom diverging colormap
cmap = sn.diverging_palette(220, 10, as_cmap=True)

# Draw the heatmap with the mask and correct aspect ratio
sn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
           square=True, linewidths=.5, cbar_kws={"shrink": .5})
```

#### 8.3 Remove correlated variables

```
dataset = dataset.drop(columns = ['app_web_user'])

dataset.to_csv('new_churn_data.csv', index = False)
```
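With the cleaned file saved, a natural next step is to encode the remaining categorical columns and fit a quick baseline classifier. The sketch below is my own addition, not part of the original notebook; it assumes `new_churn_data.csv` still contains the `user`, `churn`, `housing`, `payment_type` and `zodiac_sign` columns used above, and the choice of logistic regression is only illustrative.

```
# Sketch (my own addition): one-hot encode the remaining categorical columns
# and fit a simple baseline churn classifier on the cleaned data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

df = pd.read_csv('new_churn_data.csv')
df = pd.get_dummies(df, columns=['housing', 'payment_type', 'zodiac_sign'])  # categorical -> indicator columns

X = df.drop(columns=['user', 'churn'])
y = df['churn']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train, y_train)
print('Baseline accuracy:', baseline.score(X_test, y_test))
```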
<a href="https://colab.research.google.com/github/Catia2021/Projeto4_Machine_Learning_Iris/blob/main/Projeto_Machine_Learning_Iris3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> #Tema do Projeto: Espécies da Planta Iris #Apresentação dos Dados Neste estudo serão utilizados dados coletados do repositório Kaggle disponibilizados no seguinte link: (https://www.kaggle.com/saurabh00007/iriscsv?select=Iris.csv) Com base neste dataset, será feito um modelo preditivo para identificar as espécies de Iris. #Problema a ser resolvido Identificar as três espécies da planta do Gênero Iris: setosa, virgínica e versicolor #Objetivos do Projeto Instalar e importar bibliotecas apropriadas Processar os dados Estabelecer as Variáveis Preditoras e de Classe Realizar tratamento de atributos categóricos usando o LabelEncoder Escalonar os Atributos Dividir a base de dados em Treinamento e Teste Treinar o algoritimo Árvore de Decisão Testar o algoritmo utilizando a matriz de confusão, a função accuracy_score e a classification_report #Importando Bibliotecas e Dados ``` ! pip install pyod import pandas as pd # biblioteca para manipulação de dados import numpy as np # biblioteca para manipulação de dados numéricos import seaborn as sns # biblioteca para otimizar gráficos import matplotlib.pyplot as plt # biblioteca para geração de gráficos import plotly.express as px # biblioteca para geração de gráficos interativos import sklearn # bilioteca para subsidiar Machine Learning import plotly.graph_objects as go # para concatenar graficos from sklearn.preprocessing import StandardScaler #para escalonar variaveis from sklearn.tree import DecisionTreeClassifier# para usar a Arvore de Decisao from sklearn import tree # para visualizar a arvore from sklearn.preprocessing import LabelEncoder #para transformar variaveis from sklearn.preprocessing import OneHotEncoder #para transformar variaveis from sklearn.compose import ColumnTransformer #para transformar variaveis from sklearn.model_selection import train_test_split #para dividir base de teste e treinamento from sklearn.metrics import accuracy_score #para avaliar a acurácia import pickle #para fazer leitura do arquivo from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from seaborn.categorical import boxplot % matplotlib inline from pyod. models.knn import KNN from yellowbrick.classifier import ConfusionMatrix from numpy.ma.core import filled from IPython.core.pylabtools import figsize !pip install plotly --upgrade from google.colab import files uploaded = files.upload() ``` #Processamento dos Dados ``` Iris = pd.read_csv ('Iris.csv') print( ' Este dataset tem %s linhas e %s colunas' % (Iris.shape[0] , Iris.shape[1] ) ) Iris.head(10) Iris.tail() ``` **Acima você pode conferir as primeiras e últimas linhas do dataset e o total de linhas e colunas. 
As the data can cause some confusion, the language of the column headers will be changed.**

```
Iris.columns = ['ID','ComprimentoCM_Da_Sepala',' LarguraCM_Da_Sepala','ComprimentoCM_Da_Petala','LarguraCM_Da_Petala','Especies']
Iris.head(10)
```

**Since the ID column is not needed for classification, the drop function is used to remove it.**

```
Iris = Iris.drop('ID', axis=1)
Iris.tail()
```

**Let's look at a summary of the dataset.**

```
Iris.describe()
```

**After this initial pre-processing, the variable types and their meanings are characterized.**

```
Iris.dtypes
```

**There are 4 numerical variables and 1 categorical variable, classified as follows:**

ComprimentoCM_Da_Sepala, LarguraCM_Da_Sepala, ComprimentoCM_Da_Petala, LarguraCM_Da_Petala: continuous numerical

Especies: nominal categorical

**Data Dictionary**

* ComprimentoCM_Da_Sepala: sepal length in centimeters
* LarguraCM_Da_Sepala: sepal width in centimeters
* ComprimentoCM_Da_Petala: petal length in centimeters
* LarguraCM_Da_Petala: petal width in centimeters
* Especies: species of the flowering plant genus Iris

**Continuing the data processing, we check whether there are missing or inconsistent values.**

```
Iris.isnull()
Iris.isnull().sum()
Iris.loc[Iris['ComprimentoCM_Da_Sepala']<=0]
Iris.loc[Iris[' LarguraCM_Da_Sepala']<=0]
Iris.loc[Iris['ComprimentoCM_Da_Petala']<=0]
Iris.loc[Iris['LarguraCM_Da_Petala']<=0]
np.unique(Iris['Especies'],return_counts=True)
```

**No inconsistent or missing values were observed in the records, and the target class is balanced. Next come the graphical visualizations, which also help to spot inconsistent values.**

# Visualizing the Data

```
Especies = Iris['Especies'].value_counts()
Especies.plot(kind ='pie',autopct='%1.2f%%')
```

**We can already see that the dataset contains the same number of samples for each species.**

```
plt.hist(x=Iris['ComprimentoCM_Da_Sepala']);
sns.distplot(Iris[' LarguraCM_Da_Sepala'],color='green');
Iris['ComprimentoCM_Da_Petala'].hist()
Especies = Iris['LarguraCM_Da_Petala'].value_counts()
Especies.plot(kind ='barh',color=['red','green'])
```

**The visualizations above reproduce the figures from the general description and show no inconsistent values.**

**Checking for outliers.**

```
sns.boxplot(y='ComprimentoCM_Da_Sepala',data= Iris,color='yellow');
sns.boxplot(y=' LarguraCM_Da_Sepala',data= Iris,color='red');
sns.boxplot( y='ComprimentoCM_Da_Petala',data= Iris,color='blue');
Iris.boxplot( column =['LarguraCM_Da_Petala'], grid = False,color = 'red')
```

**The sepal width boxplot shows some outliers, so a more refined analysis is done using the pyod library.**

```
detector = KNN()
detector.fit(Iris.iloc[:,0:4])
previsores = detector.labels_
previsores
np.unique( previsores,return_counts=True)
```

**15 outliers were detected.
The value 0 indicates that a record is not an outlier and 1 indicates that it is.**

**Checking the confidence of the predictions.**

```
confianca_previsoes = detector.decision_scores_
confianca_previsoes
outliers = []
for i in range(len(previsores)):
  if previsores[i] == 1:
    outliers.append(i)
print(outliers)
```

**These are the indices at which the outliers are found. The outliers will not be treated in this project.**

**Continuing with the preparation of the dataset for the algorithm.**

# Splitting the Predictor and Class Variables

**Two variables will be created: X will hold the predictors and Y the class.**

```
X_Iris = Iris.iloc[:,0:4].values
X_Iris
Y_Iris = Iris.iloc[:,4].values
Y_Iris
```

# Handling Categorical Attributes with LabelEncoder

```
label_encoder_Especies = LabelEncoder()
Y_Iris = label_encoder_Especies.fit_transform(Y_Iris)
Y_Iris
```

**Now the values will be scaled.**

# Scaling the Attributes

```
X_Iris[:,0].min()
X_Iris[:,0].max()
X_Iris[:,1].min()
X_Iris[:,1].max()
X_Iris[:,2].min()
X_Iris[:,2].max()
X_Iris[:,3].min()
X_Iris[:,3].max()
```

**Standardization of the values is necessary, since their ranges differ considerably.**

```
scaler_Iris = StandardScaler()
X_Iris = scaler_Iris.fit_transform(X_Iris)
X_Iris
```

# Splitting into Training and Test Sets

**Four variables will be created: two for training and two for testing.**

```
X_Iris_treinamento, X_Iris_teste, Y_Iris_treinamento, Y_Iris_teste = train_test_split(X_Iris, Y_Iris,
                                                                                      test_size=0.25, random_state=0)
```

**Checking the variables that were created.**

```
X_Iris_treinamento.shape
X_Iris_teste.shape
Y_Iris_treinamento.shape
Y_Iris_teste.shape
```

Now, training and testing is the next step.

# Training and Testing the Algorithm

```
arvore_Iris = DecisionTreeClassifier(criterion='entropy', random_state=0)
arvore_Iris.fit(X_Iris_treinamento, Y_Iris_treinamento)
```

**Analyzing the importance of each attribute.**

```
arvore_Iris.feature_importances_
```

**The last attribute (LarguraCM_Da_Petala) has the highest importance, followed by ComprimentoCM_Da_Petala. The sepal width had little importance and the sepal length was not significant.**

**Running the tests.**

```
previsoes = arvore_Iris.predict(X_Iris_teste)
previsoes
Y_Iris_teste
```

# Evaluation Metrics

**Evaluating the accuracy.**

```
accuracy_score(Y_Iris_teste, previsoes)
```

**Confusion matrix.**

```
confusion_matrix(Y_Iris_teste, previsoes)
cm = ConfusionMatrix(arvore_Iris)
cm.fit(X_Iris_treinamento, Y_Iris_treinamento)
cm.score(X_Iris_teste, Y_Iris_teste)
```

**Evaluating precision and recall.**

```
print(classification_report(Y_Iris_teste, previsoes))
```

# Visualizing the Tree

```
arvore_Iris.classes_
previsores = ['ComprimentoCM_Da_Sepala',' LarguraCM_Da_Sepala','ComprimentoCM_Da_Petala','LarguraCM_Da_Petala']
figura, axis = plt.subplots(nrows=1, ncols=1, figsize=(20,20))
tree.plot_tree(arvore_Iris, feature_names=previsores, class_names=['0','1','2'], filled=True);
```

# Conclusion

**The algorithm shows good accuracy, precision and recall and is well suited to identifying the species of the genus Iris. In other words, the problem posed by this project is solved.**
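As a complementary check (my own addition, not part of the original notebook), the single train/test split above can be backed up with k-fold cross-validation on the scaled data, reusing the objects defined earlier:

```
# Sketch (my own addition): 10-fold cross-validation of the decision tree on the scaled data,
# to confirm that the accuracy reported above does not depend on one particular split.
from sklearn.model_selection import cross_val_score

cv_tree = DecisionTreeClassifier(criterion='entropy', random_state=0)
scores = cross_val_score(cv_tree, X_Iris, Y_Iris, cv=10)
print('Mean 10-fold accuracy: %.3f (+/- %.3f)' % (scores.mean(), scores.std()))
```

Saving the trained model with the already imported `pickle` module would be another natural follow-up.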
# 1. Import libraries ``` #----------------------------Reproducible---------------------------------------------------------------------------------------- import numpy as np import tensorflow as tf import random as rn import os seed=0 os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) rn.seed(seed) #session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) from keras import backend as K #tf.set_random_seed(seed) tf.compat.v1.set_random_seed(seed) #sess = tf.Session(graph=tf.get_default_graph(), config=session_conf) sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf) K.set_session(sess) #----------------------------Reproducible---------------------------------------------------------------------------------------- os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' #-------------------------------------------------------------------------------------------------------------------------------- from keras.datasets import fashion_mnist from keras.models import Model from keras.layers import Dense, Input, Flatten, Activation, Dropout, Layer from keras.layers.normalization import BatchNormalization from keras.utils import to_categorical from keras import optimizers,initializers,constraints,regularizers from keras import backend as K from keras.callbacks import LambdaCallback,ModelCheckpoint from keras.utils import plot_model from sklearn.model_selection import StratifiedKFold from sklearn.ensemble import ExtraTreesClassifier from sklearn import svm from sklearn.model_selection import cross_val_score from sklearn.model_selection import ShuffleSplit from sklearn.model_selection import train_test_split import h5py import math import matplotlib import matplotlib.pyplot as plt import matplotlib.cm as cm %matplotlib inline matplotlib.style.use('ggplot') #-------------------------------------------------------------------------------------------------------------------------------- #Import ourslef defined methods import sys sys.path.append(r"./Defined") import Functions as F # The following code should be added before the keras model #np.random.seed(seed) l1_lambda=1 ``` # 2. Loading data ``` Training_samples=5400 Validating_samples=600 Testing_samples=4000 (x_train_, y_train_), (x_test_, y_test_) = fashion_mnist.load_data() x_train = x_train_.reshape(60000, 28*28).astype('float32')[0:Training_samples] / 255. x_validate = x_train_.reshape(60000, 28*28).astype('float32')[Training_samples:Training_samples+Validating_samples] / 255. x_test__ = x_test_.reshape(10000, 28*28).astype('float32') / 255. np.random.seed(seed) x_test__num,_=x_test__.shape index=np.arange(x_test__num) np.random.shuffle(index) x_test=x_test__[index][0:Testing_samples] y_train=y_train_[0:Training_samples] y_validate=y_train_[Training_samples:Training_samples+Validating_samples] y_test=y_test_[index][0:Testing_samples] y_train_onehot_ = np.array(y_train) y_validate_onehot_ = np.array(y_validate) y_test_onehot_ = np.array(y_test) C_train_x=x_train_.reshape(60000, 28*28).astype('float32')[0:Training_samples+Validating_samples] / 255. 
C_train_y=y_train_[0:Training_samples+Validating_samples] C_test_x=x_test C_test_y=np.array(y_test) y_train_onehot = y_train_onehot_#to_categorical(y_train_onehot_) y_validate_onehot = y_validate_onehot_#to_categorical(y_validate_onehot_) y_test_onehot = y_test_onehot_#to_categorical(y_test_onehot_) print('Shape of x_train: ' + str(x_train.shape)) print('Shape of x_validate: ' + str(x_validate.shape)) print('Shape of x_test: ' + str(x_test.shape)) print('Shape of y_train: ' + str(y_train_onehot.shape)) print('Shape of y_validate: ' + str(y_validate_onehot.shape)) print('Shape of y_test: ' + str(y_test_onehot.shape)) print('Shape of C_train_x: ' + str(C_train_x.shape)) print('Shape of C_train_y: ' + str(C_train_y.shape)) print('Shape of C_test_x: ' + str(C_test_x.shape)) print('Shape of C_test_y: ' + str(C_test_y.shape)) #F.show_data_figures(x_train_[0:120],28,28,40) ``` # 3. Model ``` np.random.seed(seed) #-------------------------------------------------------------------------------------------------------------------------------- class Feature_Select_Layer(Layer): def __init__(self, output_dim, l1_lambda, **kwargs): super(Feature_Select_Layer, self).__init__(**kwargs) self.output_dim = output_dim self.l1_lambda=l1_lambda def build(self, input_shape): self.kernel = self.add_weight(name='kernel', shape=(input_shape[1],), initializer=initializers.RandomUniform(minval=0., maxval=1.), trainable=True, regularizer=regularizers.l1(self.l1_lambda), constraint=constraints.NonNeg()) super(Feature_Select_Layer, self).build(input_shape) def call(self, x, selection=False,k=36): kernel=self.kernel if selection: kernel_=K.transpose(kernel) print(kernel_.shape) kth_largest = tf.math.top_k(kernel_, k=k)[0][-1] kernel = tf.where(condition=K.less(kernel,kth_largest),x=K.zeros_like(kernel),y=kernel) return K.dot(x, tf.linalg.tensor_diag(kernel)) def compute_output_shape(self, input_shape): return (input_shape[0], self.output_dim) #-------------------------------------------------------------------------------------------------------------------------------- def Identity_Autoencoder(p_data_feature=x_train.shape[1],\ p_encoding_dim=50,\ p_learning_rate= 1E-3,\ p_l1_lambda=0.1): input_img = Input(shape=(p_data_feature,), name='autoencoder_input') feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\ l1_lambda=p_l1_lambda,\ input_shape=(p_data_feature,),\ name='feature_selection') feature_selection_score=feature_selection(input_img) encoded = Dense(p_encoding_dim,\ activation='tanh',\ kernel_initializer=initializers.glorot_uniform(seed),\ name='autoencoder_hidden_layer') encoded_score=encoded(feature_selection_score) bottleneck_score=encoded_score decoded = Dense(p_data_feature,\ activation='tanh',\ kernel_initializer=initializers.glorot_uniform(seed),\ name='autoencoder_output') decoded_score =decoded(bottleneck_score) latent_encoder_score = Model(input_img, bottleneck_score) autoencoder = Model(input_img, decoded_score) autoencoder.compile(loss='mean_squared_error',\ optimizer=optimizers.Adam(lr=p_learning_rate)) print('Autoencoder Structure-------------------------------------') autoencoder.summary() return autoencoder,latent_encoder_score ``` # 4. 
Running ``` epochs_number=1000 batch_size_value=256 ``` --- ### 4.1.1 Identity Autoencoder --- ``` Ide_AE,\ latent_encoder_score_Ide_AE=Identity_Autoencoder(p_data_feature=x_train.shape[1],\ p_encoding_dim=50,\ p_learning_rate= 1E-2,\ p_l1_lambda=l1_lambda) file_name="./log/AgnoSS.png" plot_model(Ide_AE, to_file=file_name,show_shapes=True) model_checkpoint=ModelCheckpoint('./log_weights/Ide_AE_weights.{epoch:04d}.hdf5',period=100,save_weights_only=True,verbose=1) #print_weights = LambdaCallback(on_epoch_end=lambda batch, logs: print(Ide_AE.layers[1].get_weights())) Ide_AE_history = Ide_AE.fit(x_train, x_train,\ epochs=epochs_number,\ batch_size=batch_size_value,\ shuffle=True,\ validation_data=(x_validate,x_validate),\ callbacks=[model_checkpoint]) loss = Ide_AE_history.history['loss'] val_loss = Ide_AE_history.history['val_loss'] epochs = range(epochs_number) plt.plot(epochs, loss, 'bo', label='Training Loss') plt.plot(epochs, val_loss, 'r', label='Validation Loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() p_data=Ide_AE.predict(x_test) numbers=x_test.shape[0]*x_test.shape[1] print("MSE for one-to-one map layer",np.sum(np.power(np.array(p_data)-x_test,2))/numbers) ``` --- key_number=50 --- ``` key_number=50 key_features=F.top_k_keepWeights_1(Ide_AE.get_layer(index=1).get_weights()[0],key_number) selected_position_list=np.where(key_features>0)[0] ``` # 5 Classifying ``` train_feature=C_train_x train_label=C_train_y test_feature=C_test_x test_label=C_test_y p_seed=seed F.ETree(train_feature,train_label,test_feature,test_label,p_seed) train_feature_=np.multiply(C_train_x, key_features) train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_list) print(train_feature.shape) train_label=C_train_y test_feature_=np.multiply(C_test_x, key_features) test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_list) print(test_feature.shape) test_label=C_test_y p_seed=seed F.ETree(train_feature,train_label,test_feature,test_label,p_seed) print("\n\n") ``` # 6. Reconstruction loss ``` from sklearn.linear_model import LinearRegression def mse_check(train, test): LR = LinearRegression(n_jobs = -1) LR.fit(train[0], train[1]) MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean() return MSELR train_feature_=np.multiply(C_train_x, key_features) C_train_selected_x=F.compress_zero_withkeystructure(train_feature_,selected_position_list) print(C_train_selected_x.shape) test_feature_=np.multiply(C_test_x, key_features) C_test_selected_x=F.compress_zero_withkeystructure(test_feature_,selected_position_list) print(C_test_selected_x.shape) train_feature_tuple=(C_train_selected_x,C_train_x) test_feature_tuple=(C_test_selected_x,C_test_x) reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple) print(reconstruction_loss) ```
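
The selection and evaluation steps above rely on two helpers from the external `Functions` module (`F.top_k_keepWeights_1` and `F.compress_zero_withkeystructure`) whose source is not included in this notebook. As a rough illustration only — the real implementations may differ — here is a minimal NumPy sketch under the assumption that the first helper zeroes every weight except the `k` largest and the second keeps only the selected columns. The function names below are illustrative stand-ins, not the module's own.

```
import numpy as np

def top_k_keep_weights(weights, k):
    # Zero out everything except the k largest weights (assumed behaviour).
    weights = np.asarray(weights, dtype=float)
    kept = np.zeros_like(weights)
    top_idx = np.argsort(weights)[-k:]
    kept[top_idx] = weights[top_idx]
    return kept

def compress_with_key_structure(data, selected_positions):
    # Keep only the columns that survived the selection (assumed behaviour).
    return np.asarray(data)[:, selected_positions]

# Tiny self-contained check
w = np.array([0.1, 0.9, 0.0, 0.5, 0.3])
key = top_k_keep_weights(w, k=2)
positions = np.where(key > 0)[0]
demo = np.arange(10).reshape(2, 5)
print(key)                                           # [0.  0.9 0.  0.5 0. ]
print(compress_with_key_structure(demo, positions))  # columns 1 and 3 only
```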
# Let's try and create the company nodes within neo4j Loop through the active company list and generate the company nodes ``` from neo4j.v1 import GraphDatabase import pandas as pd driver = GraphDatabase.driver("bolt://10.0.0.1:7687", auth=("myusername", "mypassword")) ``` Let's take a peek at the data ``` company_sample = pd.read_csv('./data/BasicCompanyDataAsOneFile-2017-09-01.csv', nrows=1000) cols = [col.strip() for col in company_sample.columns] cols = [col.replace('.', '_') for col in cols] company_sample.columns = cols company_sample.iloc[-2] ``` ## We shall only worry about current company names. We will need out country references again. ``` country_code_map = pd.read_pickle('./data/clean_country_code_map.pkl') combined_map = pd.read_pickle('./data/combined_country_map.pkl') import numpy as np company_sample = company_sample.replace(np.nan, u'', regex=True) company_sample['CountryCode'] = company_sample.RegAddress_Country.map(lambda x: combined_map.get(str(x).upper(), 'UNKNOWN')) company_sample['CleanCountry'] = company_sample.apply(lambda x: country_code_map.get(x['CountryCode'], 'UNKNOWN') if x['CountryCode'] in country_code_map.keys() else x.RegAddress_Country, axis=1) company_sample['CleanCountry'] = company_sample.CleanCountry.map(lambda x: 'NO_COUNTRY_LISTED' if x == '' else x) company_sample['CountryOfOriginCode'] = company_sample.CountryOfOrigin.map(lambda x: combined_map.get(str(x).upper(), 'UNKNOWN')) company_sample['CleanCountryOfOrigin'] = company_sample.apply(lambda x: country_code_map.get(x['CountryOfOriginCode'], 'UNKNOWN') if x['CountryOfOriginCode'] in country_code_map.keys() else x.CountryOfOrigin, axis=1) company_sample['CleanCountryOfOrigin'] = company_sample.CleanCountryOfOrigin.map(lambda x: 'NO_COUNTRY_LISTED' if x == '' else x) company_sample['CleanPostcode'] = company_sample.RegAddress_PostCode.map(lambda x: 'UNKNOWN' if x == '' else x) company_sample.CleanPostcode.value_counts().sum() company_sample.CompanyNumber.isnull().value_counts() input_data = [v for k,v in company_sample.T.to_dict().items()] input_data[0] ``` Company node properties: - name: - number: - Accounts.LastMadeUpDate: - Returns.LastMadeUpDate: '11/09/2015' - Returns.NextDueDate: '09/10/2016' - Address.Line1: - Address.Line2: - Address.Country - Address.PostCode - Address.PostTown - Address.POBox - Address.County - URI: 'http://business.data.gov.uk/id/company/08209948' status nodes: - type: DORMANT, Active company_category nodes: - type: PLC ... Country: - name: United Kingdom ... ``` UNWIND {list} AS d MATCH (p:Person {user_id: d.id}) MERGE (a:Artist {artist_name: d.name}) MERGE (p)-[:LIKES {times: d.plays}]->(a) 'Accounts.LastMadeUpDate': '30/09/2016', 'Accounts.NextDueDate': '30/06/2018', Returns.LastMadeUpDate: '11/09/2015' Returns.NextDueDate: ``` # Looping over all the data and inserting the data into the neo4j Database Here we will chunk over the input file in batches of 100,000 records and use the functions we've tested above to create the node properties and format to allow us to do a batch CYPHER query that will create and connect the nodes and relationships. 
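
One optional step, not shown in the original run: putting uniqueness constraints on the `MERGE` keys before the batch inserts, so each `MERGE` hits an index instead of scanning existing nodes. The statements below are a sketch using the Neo4j 3.x constraint syntax that matches the `neo4j.v1` driver imported above.

```
with driver.session() as session:
    # Uniqueness constraints also create backing indexes for the MERGE lookups.
    session.run("CREATE CONSTRAINT ON (c:Company) ASSERT c.uid IS UNIQUE;")
    session.run("CREATE CONSTRAINT ON (country:Country) ASSERT country.code IS UNIQUE;")
    session.run("CREATE CONSTRAINT ON (pc:Postcode) ASSERT pc.uid IS UNIQUE;")
```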
``` chunks = pd.read_csv('./data/BasicCompanyDataAsOneFile-2017-09-01.csv', chunksize=100000) for chunk in chunks: cols = [col.strip() for col in chunk.columns] cols = [col.replace('.', '_') for col in cols] chunk.columns = cols chunk = chunk.replace(np.nan, u'', regex=True) chunk['CountryCode'] = chunk.RegAddress_Country.map(lambda x: combined_map.get(str(x).upper(), 'UNKNOWN')) chunk['CleanCountry'] = chunk.apply(lambda x: country_code_map.get(x['CountryCode'], 'UNKNOWN') if x['CountryCode'] in country_code_map.keys() else x.RegAddress_Country, axis=1) chunk['CleanCountry'] = chunk.CleanCountry.map(lambda x: 'NO_COUNTRY_LISTED' if x == '' else x) chunk['CountryOfOriginCode'] = chunk.CountryOfOrigin.map(lambda x: combined_map.get(str(x).upper(), 'UNKNOWN')) chunk['CleanCountryOfOrigin'] = chunk.apply(lambda x: country_code_map.get(x['CountryOfOriginCode'], 'UNKNOWN') if x['CountryOfOriginCode'] in country_code_map.keys() else x.CountryOfOrigin, axis=1) chunk['CleanCountryOfOrigin'] = chunk.CleanCountryOfOrigin.map(lambda x: 'NO_COUNTRY_LISTED' if x == '' else x) chunk['CleanPostcode'] = chunk.RegAddress_PostCode.map(lambda x: 'UNKNOWN' if x == '' else x) test = {'list': [v for k,v in chunk.T.to_dict().items()]} print('Starting Insert ....') with driver.session() as session: session.run(("UNWIND {list} AS d " "MERGE (c:Company {uid: d.CompanyNumber}) " "ON CREATE SET c.name=d.CompanyName, " "c.accounts_LastMadeUpDate=d.Accounts_LastMadeUpDate, " "c.accounts_NextDueDate=d.Accounts_NextDueDate, " "c.returns_LastMadeUpDate=d.Returns_LastMadeUpDate, " "c.returns_NextDueDate=d.Returns_NextDueDate, " "c.address_Line1=d.RegAddress_Line1, " "c.address_Line2=d.RegAddress_Line2, " "c.address_PostTown=d.RegAddress_PostTown, " "c.address_POBox=d.RegAddress_POBox, " "c.address_County=d.RegAddress_County, " "c.address_PostCode=d.RegAddress_Postcode, " "c.address_Country=d.CleanCountry, " "c.uri=d.URI;"), {"list": test.get('list')}) with driver.session() as session: session.run(("UNWIND {list} AS d " "MATCH (c:Company {uid: d.CompanyNumber}) " "MERGE (country:Country {code: d.CountryCode}) " "MERGE (c)-[:REGISTERED_IN]->(country);"), {"list": test.get('list')}) with driver.session() as session: session.run(("UNWIND {list} AS d " "MATCH (c:Company {uid: d.CompanyNumber}) " "MERGE (country:Country {code: d.CountryOfOriginCode}) " "MERGE (c)-[:HAS_ORIGIN]->(country);"), {"list": test.get('list')}) with driver.session() as session: session.run(("UNWIND {list} AS d " "MATCH (c:Company {uid: d.CompanyNumber}) " "MERGE (pc:Postcode {uid: d.CleanPostcode}) " "MERGE (c)-[:REGISTERED_IN]->(pc);"), {"list": test.get('list')}) print("Finished chunk...") print("DONE!") ``` ### Quick check how any companies have been inserted ``` with driver.session() as session: result = session.run("MATCH (c:Company) RETURN COUNT(c);") print(result.data()) ```
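
Because `REGISTERED_IN` is used for both the country and the postcode relationships above, counting relationships per end-node label is a slightly stronger sanity check than the node count alone. A small optional check with the same driver — Cypher labels cannot be parameterised, hence the string formatting:

```
with driver.session() as session:
    for label in ("Country", "Postcode"):
        result = session.run(
            "MATCH (:Company)-[r:REGISTERED_IN]->(:%s) RETURN COUNT(r) AS relationships;" % label)
        print(label, result.data())
```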
<a href="https://colab.research.google.com/github/cfcastillo/DS-6-Notebooks/blob/main/Project_3_Notebook_cfc.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Problem Definition The purpose of this project is to predict **new song popularity** (target) based on data features (variables) collected for songs that have been on the top 200 Weekly Global charts of Spotify in 2020 and 2021. [Project details](https://docs.google.com/document/d/1v73i9PjBgqlaW6YMCd76MYSc96q2fqNv/edit) This is a **supervised regression** problem. **Goals** * To Minimize the cross-validated root mean squared error (RMSE) around 10. * To determine the importance of the features in driving the regression result. * To choose parameters that will avoid over-fitting in the result. **Methodology** The project will be done using tree-based regression techniques. # Data Collection The dataset includes all songs that have been on the Top 200 Weekly (Global) charts of Spotify in 2020 and 2021. Below, is a link to the data description. [Data Description](https://docs.google.com/document/d/14xiF2TXOGvbMbf5sxYQAwtXDlfrdPh_hxmeDHGCMH0E/edit) ## Imports ``` # grab the imports needed for the project import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns import statsmodels.api as sm # all from sklearn import datasets from sklearn import metrics from sklearn import preprocessing from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import MultiLabelBinarizer from sklearn.metrics import classification_report import sklearn.model_selection as model_selection # Regression from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor import xgboost as xgb from sklearn.model_selection import GridSearchCV from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score from sklearn.metrics import mean_squared_error from sklearn.metrics import r2_score from sklearn.metrics import accuracy_score from sklearn import metrics # Visualization import graphviz from IPython.display import display from sklearn import tree # Installs !pip3 install dtreeviz from dtreeviz.trees import dtreeviz ``` ## Load Data ``` # Mount Drive from google.colab import drive drive.mount('/drive') # Load Data data_path = '/drive/My Drive/Cohort_6/Projects/Project 3/Data/Project_3_Spotify.csv' data = pd.read_csv(data_path) ``` # Data Cleaning ``` # View top 5 records - all columns data.head() # Column analysis data.info() # Any Nulls? data.isnull().sum().sum() # Get a copy of the data before massaging the data. data_clean = data.copy() ``` ## Drop columns Since the project requirement is to predict **new** song popularity, we will drop columns that would not be available for new songs. Additionally, we will drop columns that are not predictors, such as the row index and song id. ``` # Drop indexes data_clean.drop(['Index', 'Song ID'], inplace=True, axis=1) # Drop features that would not exist for new songs data_clean.drop(['Highest Charting Position', 'Number of Times Charted', 'Week of Highest Charting', 'Streams', 'Weeks Charted'], inplace=True, axis=1) data_clean.tail() ``` ## Fix Data Types Some columns were imported as object data types but actually contain numeric values. These columns needed to be converted to their numerical equivalent as required by the XG Boost model. ``` # Convert strings into numbers. 
Note: some are blank so using coerce will turn them into # NaN which can be removed later. # Target data data_clean['Popularity'] = pd.to_numeric(data_clean['Popularity'], errors='coerce') # Feature data data_clean['Artist Followers'] = pd.to_numeric(data_clean['Artist Followers'], errors='coerce') data_clean['Danceability'] = pd.to_numeric(data_clean['Danceability'], errors='coerce') data_clean['Energy'] = pd.to_numeric(data_clean['Energy'], errors='coerce') data_clean['Loudness'] = pd.to_numeric(data_clean['Loudness'], errors='coerce') data_clean['Speechiness'] = pd.to_numeric(data_clean['Speechiness'], errors='coerce') data_clean['Acousticness'] = pd.to_numeric(data_clean['Acousticness'], errors='coerce') data_clean['Liveness'] = pd.to_numeric(data_clean['Liveness'], errors='coerce') data_clean['Tempo'] = pd.to_numeric(data_clean['Tempo'], errors='coerce') data_clean['Duration (ms)'] = pd.to_numeric(data_clean['Duration (ms)'], errors='coerce') data_clean['Valence'] = pd.to_numeric(data_clean['Valence'], errors='coerce') # Convert date data_clean['Release Date'] = pd.to_datetime(data_clean['Release Date'], errors='coerce') # Conversion produced some nulls for values that could not be converted. Remove these rows # since all predictors in these rows are null. data_clean = data_clean.dropna() print(data_clean.isna().sum()) # Verify final data types data_clean.dtypes ``` ## Feature Engineering In this step, character data and columns containing multiple values were split and encoded so they could be modeled. Below are links to resources used for feature engineering. [Techniques](https://towardsdatascience.com/feature-engineering-for-machine-learning-3a5e293a5114) [Encoding Notebook](https://colab.research.google.com/drive/10mMFd3bsO7Gy8MbYY56Ukv8lg0-PCBmc) [Imputation Notebook](https://colab.research.google.com/drive/1pq84SRJTOXSdKN9j9g6pORrKtIuQomhH) [Lambda Examples](https://colab.research.google.com/drive/1jz5eSsPq2m1Onq492yKx0-oNO2UofiHR) [SK Learn One Hot Encoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder) [SK Learn Multi Label Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MultiLabelBinarizer.html) ``` # Create label encoders instances le = LabelEncoder() mlb = MultiLabelBinarizer() mlb2 = MultiLabelBinarizer() # second instance to handle more condensed genre features # Genre processing functions # Initial Genre cleaning before groupings # Preprocessing for first genre splitting which is less condensed. def genrePreProcessing1(df): genrePreProcessingAll(df) # Use regex to identify and remove specified characters so friendly column names can be generated when encoded. df['Genre'] = df['Genre'].str.replace(r"[\[\]'\-\&\+\s]", '', regex=True) # Turn Genre column into lists by splitting on comma. data_clean_enc['Genre'] = data_clean_enc['Genre'].str.split(',') # Preprocessing for second genre splitting which will be more condensed. def genrePreProcessing2(df): genrePreProcessingAll(df) # Use regex to identify and remove specified characters so friendly column names can be generated when encoded. df['Genre'] = df['Genre'].str.replace(r"[\[\]'\-\&\+\,]", '', regex=True) # Preprocessing for all genre splitting. def genrePreProcessingAll(df): # Combine some values to create categories. df['Genre'] = df['Genre'].str.replace('hip hop','hiphop') # Fill in blank genre. 
Blank genre will show as [] df['Genre'] = df['Genre'].str.replace('\[\]','unknown') # Do some final cleaning for Genre def genrePostProcessing(df): # HACK! FILL NAN WITH ZERO FOR NOW SINCE DON'T KNOW WHAT IS CAUSING THIS ANOMALY df.fillna(0, inplace=True) # Remove unencoded column df.drop(['Genre'], inplace=True, axis=1) # THIS PRODUCES NULLS IN LAST 11 ROWS AND MAKES ALL ENCODED VALUES INTO FLOATS INSTEAD OF INTEGERS!!!! # Encode Genre - one hot encode data_clean_enc = data_clean.copy() genrePreProcessing1(data_clean_enc) # Use multi label binarizer to transform the Genre column into individual columns # ALERT: THIS LINE IS CREATING NULL VALUES ON ALL GENRE FEATURE COLUMNS FOR LAST 11 ROWS ONLY. NOT SURE WHY. data_clean_enc = data_clean_enc.join(pd.DataFrame(mlb.fit_transform(data_clean_enc['Genre']),columns=mlb.classes_)) genrePostProcessing(data_clean_enc) data_clean.head() # Grab fresh dataset to be used for different encoding process. data_clean_enc2 = data_clean.copy() genrePreProcessing2(data_clean_enc2) # Use multi label binarizer to transform the Genre column into individual columns # Group Genre values more to reduce columns - reduced from 350 to 308. More grouping needed. # Create unique lists for genre. i.e. italian pop, pop will become italian and pop data_clean_enc2['Genre'] = data_clean_enc2['Genre'].apply(lambda x: np.unique(x.split())) data_clean_enc2 = data_clean_enc2.join(pd.DataFrame(mlb2.fit_transform(data_clean_enc2['Genre']),columns=mlb2.classes_)) genrePostProcessing(data_clean_enc2) # If using this grouping, then get copy to override previous methodology. data_clean_enc = data_clean_enc2.copy() # Export feature columns to Excel to examine how to further group them since # Grouping #2 is still resulting in over 300 columns. # file_name = "test.xlsx" # data_clean_enc.sum().to_excel(file_name) # Manually combine genres that are similar data_clean_enc['urban'] = data_clean_enc['urbaine'] + data_clean_enc['urban'] + data_clean_enc['urbano'] data_clean_enc.drop(['urbaine', 'urbano'], axis=1, inplace=True) data_clean_enc['alternative'] = data_clean_enc['alternative'] + data_clean_enc['alt'] data_clean_enc.drop(['alt'], axis=1, inplace=True) data_clean_enc['argentino'] = data_clean_enc['argentino'] + data_clean_enc['argentine'] data_clean_enc.drop(['argentine'], axis=1, inplace=True) data_clean_enc['colombiano'] = data_clean_enc['colombiano'] + data_clean_enc['colombian'] data_clean_enc.drop(['colombian'], axis=1, inplace=True) data_clean_enc['electro'] = data_clean_enc['electro'] + data_clean_enc['electropop'] + data_clean_enc['electronic'] data_clean_enc.drop(['electropop', 'electronic'], axis=1, inplace=True) data_clean_enc['hiphop'] = data_clean_enc['hiphop'] + data_clean_enc['hip'] data_clean_enc.drop(['hip'], axis=1, inplace=True) data_clean_enc['italian'] = data_clean_enc['italian'] + data_clean_enc['italiana'] + data_clean_enc['italiano'] data_clean_enc.drop(['italiana', 'italiano'], axis=1, inplace=True) data_clean_enc['latino'] = data_clean_enc['latino'] + data_clean_enc['latin'] + data_clean_enc['latina'] data_clean_enc.drop(['latin', 'latina'], axis=1, inplace=True) data_clean_enc['puertorican'] = data_clean_enc['puerto'] + data_clean_enc['rican'] data_clean_enc.drop(['puerto', 'rican'], axis=1, inplace=True) # View totals again to determine list to keep and which ones to throw into "other" group # These columns will be retained further down at the beginning of the EDA. 
genres_to_keep = ['pop','rap','hiphop','trap','latino','dance','postteen','reggaeton','melodic','canadian','electro', 'uk','rb','rock','house','colombiano','german','atl','group','kpop','edm','chicago','alternative','drill', 'tropical','boy','contemporary'] # Since some songs have multiple artists, pull out the first artist in the column # since this is likely the lead artist and thus more important for predicting song success. data_clean_enc['Artist'] = data_clean_enc['Artist'].apply(lambda x: x.split(',')[0]) # Now encode. data_clean_enc['Artist Lb'] = le.fit_transform(data_clean_enc['Artist']) # Remove unencoded column data_clean_enc.drop(['Artist'], inplace=True, axis=1) # TODO: Encode song name. WILL DO AT SOME FUTURE DATE. # First remove data about featured artists and remix - anything in parenthesis or square brackets. # Then remove dashes # data_clean_enc['Song Name'] data_clean_enc.drop(['Song Name'], inplace=True, axis=1) # Encode chord data_clean_enc['Chord Lb'] = le.fit_transform(data_clean_enc['Chord']) data_clean_enc[['Chord Lb', 'Chord']] # Remove unencoded column data_clean_enc.drop(['Chord'], inplace=True, axis=1) # Split release date into month, day, day of week to see if release date has impact on popularity. data_clean_enc['Release Month'] = data_clean_enc['Release Date'].dt.month data_clean_enc['Release Day'] = data_clean_enc['Release Date'].dt.day data_clean_enc['Release Weekday'] = data_clean_enc['Release Date'].dt.dayofweek # Assumes week starts on Monday # Verify results data_clean_enc[['Release Date', 'Release Month', 'Release Day', 'Release Weekday']] # Remove Release Date because model cannot use dates. Model will use date parts instead. data_clean_enc.drop(['Release Date'], inplace=True, axis=1) # Final data review to see what else needs to be done data_clean_enc.head() # print(data_clean_enc.shape) ``` # Exploratory Data Analysis (EDA) First, histograms were produced of all continuous columns to see how data was distributed. Second, pie charts were produced for all Release Date columns (Month, Day, Weekday) to see if there were any patterns in the values. Third, correlations were produced both between predictors and between predictors and the target. Observations were noted for all three cases. ``` # Get lists of columns that can be used for further analysis continuous_cols = ['Artist Followers','Danceability','Energy','Loudness','Speechiness','Acousticness','Liveness','Tempo', 'Duration (ms)','Valence'] other_cols = ['Artist Lb','Chord Lb'] release_date_cols = ['Release Month','Release Day','Release Weekday'] non_genre_cols = continuous_cols + other_cols + release_date_cols + ['Popularity'] cols_to_keep = non_genre_cols + genres_to_keep ``` ## Histograms - Continuous Variables ``` # Get continuous variables into a dataframe for analysis data_eda_cnt = data_clean_enc[continuous_cols].copy() # Get histograms of continuous variables to note biases in the data. across = 5 down = 2 fig, axs = plt.subplots(down,across, figsize = (20,8)) plt.subplots_adjust(hspace=.5) n = 0 for i in range(down): #loop rows for j in range(across): #loop cols if n < data_eda_cnt.shape[1]: #safety to avoid array out of bounds error - don't exceed number of cols col = data_eda_cnt.columns[n] axs[i,j].hist(data_eda_cnt[col]) axs[i,j].set_title(col) n+=1 # Get stats on continuous variables data_eda_cnt.describe() ``` fluca### Observations * **Artist Followers** is negatively skewed indicating that many newly released songs do not have very many followers. 
* **Danceability and energy** are positively skewed indicating that many new songs are high energy. * **Loudness** is measured in Loudness Units Relative to Full Scale (LUFS). The value provided in the file is the adjustment that was applied to bring the decibels down to the same -14 LUFS value so that a listener can avoid large fluctuations in loudness from song to song. Therefore a large negative means the song was much louder so that it had to be adjusted down more to get within Spotify's desired range. In looking at the data, it is apparant that most songs required some adjustment. * **Speechiness** indicates how many spoken words are on a track. High speechiness indicates the song is mostly spoken words. Smaller numbers indicate new songs are more musical with fewer words. * **Valence** indicates music positivity. A higher valence indicates the music is more positive or cheerful whereas a lower valence indicates the song is more negative, sad, angry, or depressed. The new song list valence is normally distributed indicating songs are pretty neutral in this area. * **Liveness** indicates the presence of an audience in the music. This value is negatively skewed indicating most songs were performed in studio without a live audience. * **Acousticness** indicates how much electrical amplification is in a song. Smaller numbers show that a large number of new songs are more natural in sound using less electrical amplification. * **Song duration** is normally distributed with most songs running around 3 minutes. * **Tempo** is normally distributed with most songs having a beat between 100 and 150 beats per minute. Here are some [common tempos by genre](https://learningmusic.ableton.com/make-beats/tempo-and-genre.html): * Dub: 60-90 bpm. * Hip-hop: 60-100 bpm. * House: 115-130 bpm. * Techno/trance: 120-140 bpm. * Dubstep: 135-145 bpm. * Drum and bass: 160-180 bpm ## Pie Charts - Categorical Variables ``` # Get continuous variables into a dataframe for analysis data_eda_rlse = data_clean_enc[release_date_cols].copy() # Show pie plots to observe value distribution fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize = (17,14)) plt.subplots_adjust(hspace=.25) data_eda_rlse['Release Month'].value_counts().plot.pie(ax=ax1) data_eda_rlse['Release Day'].value_counts().plot.pie(ax=ax2) data_eda_rlse['Release Weekday'].value_counts().plot.pie(ax=ax3, title='Week Starts with Sunday=0') ``` ### Observations Release Month and Release Day are pretty evenly distributed. However, **weekday for release is significantly higher for Wednesday and Thursday** indicating that a successful release should be on one of those two days. ## Correlations Between Predictors ``` # Correlations between variables - non-genre data_eda_corr = data_clean_enc[non_genre_cols].copy() plt.figure(figsize=(15,10)) correlation_matrix = data_eda_corr.corr().round(2) sns.heatmap(data=correlation_matrix, annot=True, cmap='Greens') ``` ### Observations * Energy and Loudness were highly correlated indicating high loudness resulted in high energy. * Valence showed moderate correlation with Danceability, Energy, and Loudness indicating that positive songs might also have high danceability, energy, and loudness. * Acousticness was negatively correlated with Danceability, Energy, and Loudness indicating more natural sounds did not contribute to these three categories. * All other correlations were modest indicating other variables were more independent. 
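
The heatmap makes the Energy/Loudness relationship easy to spot visually, but the strongest predictor pairs can also be listed programmatically. The helper below is not part of the original analysis; it assumes the `data_eda_corr` frame built above and ranks pairs by absolute correlation, keeping each pair only once.

```
# Rank predictor pairs by absolute correlation (upper triangle drops duplicates and self-correlations).
corr_abs = data_eda_corr.corr().abs()
upper = corr_abs.where(np.triu(np.ones(corr_abs.shape, dtype=bool), k=1))
top_pairs = upper.stack().sort_values(ascending=False).head(10)
print(top_pairs.round(2))
```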
## Correlations with Popularity ``` # Correlations between target and predictors to see which predictors have a linear correlation with Popularity. corr = data_eda_corr.corr()['Popularity'].round(3) correlated_vars = abs(corr).sort_values(ascending=False) correlated_vars ``` ### Observations Looking at linear correlations between Popularity (target) and the predictor columns indicates that there is **very little correlation between predictors and the target** indicating that a more complex relationship exists between the predictors and target. Therefore, further analysis will need to be done to determine how the predictors will affect song Popularity. # Processing [Decision Trees Notebook](https://colab.research.google.com/drive/1VemtU48HSaw8l70PCYNtMSWYjIAkZ05B) [Random Forest Notebook](https://colab.research.google.com/drive/1eovGCLgqtIIOIpqGqTmOVmgpTOaRBlvN) [XG Boost Notebook](https://colab.research.google.com/drive/1N2T8cBBQMQzJMklUjCNicu--IBVcmUFh) ### Prepare Test Data ``` # Create new dataset that contains the columns we want to keep for our analysis. # data_final = data_clean_enc[cols_to_keep].copy() data_final = data_clean_enc[non_genre_cols].copy() # Break up data into training and testing sets X = data_final.drop(['Popularity'], axis = 1).copy() y = data_final['Popularity'] # Get target and feature names for visualizations target_name = 'Popularity' feature_names = X.columns.to_list() # List parameters to be used in all models test_size = 0.2 ``` ## Decision Tree ``` # Number of iterations for cross validation # Loop of 10 produces inconsistent results. # Anything over 50 produces optimal max depth between 5 and 7 - numbers are very close for all three. num_loops = 200 # Try different max depth for each CV test. max_depth = [1, 2, 3, 4, 5, 6, 7, 8] rms_depth = np.zeros(len(max_depth)) for n, depth in enumerate(max_depth): # Storage for each result so we can get mean from all iterations. rmse_results = np.zeros(num_loops) for idx in range(0,num_loops): # Create train and test data sets. X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size) # Configure model model_dt = DecisionTreeRegressor(max_depth=depth, random_state=0) model_dt.fit(X_train,y_train) y_pred_dt = model_dt.predict(X_test) rmse_results[idx] = np.sqrt(mean_squared_error(y_test, y_pred_dt)) # Record RMSE by depth to find optimal depth rms_depth[n] = rmse_results.mean().round(3) # Print result so we can see which depth provided best RMSE print(f"CV RMSE by Depth:\n") for n, depth in enumerate(max_depth): print(f'Depth={max_depth[n]} | RMSE={rms_depth[n]}') if rms_depth[n] == rms_depth.min(): optimal_depth = max_depth[n] optimal_rmse = rms_depth[n] print(f'Optimal Depth={optimal_depth}') # Plot result of max depth showing optimal depth. 
plt.figure(figsize = (8,5)) plt.plot(max_depth, rms_depth) plt.plot(optimal_depth,optimal_rmse,'rx', markersize=10, markeredgewidth=3) plt.xlabel('Max Depth') plt.ylabel('RMSE') plt.title('Decision Tree Optimal Depth') plt.grid() # Train model with optimal max depth X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size) model_dt = DecisionTreeRegressor(max_depth=optimal_depth, random_state=0) model_dt.fit(X_train,y_train) y_pred_dt = model_dt.predict(X_test) rms_error = np.sqrt(mean_squared_error(y_test, y_pred_dt)) print(f"Optimal RMSE: {rms_error.round(2)}") # Show Tree display(graphviz.Source(tree.export_graphviz(model_dt, feature_names=feature_names, filled = True))) # Visualize using dtreeviz # This plot is very small making it hard to read. Commenting out for now. # dtreeviz(model_dt, X_train, y_train, target_name=target_name, feature_names=feature_names) ``` ## Random Forest ``` # Number of iterations for cross validation # Loop of 10 produces inconsistent results. # Anything over 50 produces optimal max depth between 5 and 7 - numbers are very close for all three. num_loops = 200 # Try different number of trees - go from min to max in steps # num_trees = range(min, max, steps) num_trees = range(10, 70, 10) rms_trees = np.zeros(len(num_trees)) for n, trees in enumerate(num_trees): # Storage for each result so we can get mean from all iterations. rmse_results = np.zeros(num_loops) for idx in range(0,num_loops): # Create train and test data sets. X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size) # Configure model. Use optimal depth gained in Decision Tree above. model_rf = RandomForestRegressor(n_estimators=trees, max_depth=optimal_depth, random_state=0) model_rf.fit(X_train, y_train) y_pred_rf = model_rf.predict(X_test) rmse_results[idx] = np.sqrt(mean_squared_error(y_test, y_pred_rf)) # Record RMSE by depth to find optimal depth rms_trees[n] = rmse_results.mean().round(3) # Print result so we can see which tree count provided best RMSE print(f"CV RMSE by Tree Count:\n") for n, trees in enumerate(num_trees): print(f'Trees={num_trees[n]} | RMSE={rms_trees[n]}') if rms_trees[n] == rms_trees.min(): optimal_trees = num_trees[n] optimal_rmse = rms_trees[n] print(f'Optimal Trees={optimal_trees}') # Visualize result of optimal number of trees. plt.plot(num_trees, rms_trees) plt.plot(optimal_trees,optimal_rmse,'rx', markersize=10, markeredgewidth=3) plt.xlabel('Tree No.') plt.ylabel('RMSE') plt.title('Random Forest Optimal Tree Count') plt.grid() # Display one tree from the random forest display(graphviz.Source(tree.export_graphviz(model_rf.estimators_[0], feature_names=feature_names))) ``` ## XG Boost ### CV With No Parameters ``` # XG Boost - returns mse, not rmse. So need to take sqrt of result at end. 
num_loops = 200 rmse_xgb = np.zeros(num_loops) for idx in range(0,num_loops): X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size) model_xgb = xgb.XGBRegressor(objective ='reg:squarederror', verbosity=0, seed = 10) model_xgb.fit(X_train,y_train) y_pred_xgb = model_xgb.predict(X_test) rmse_xgb[idx] = np.sqrt(mean_squared_error(y_test,y_pred_xgb)) print(f'CV RMSE XG Boost: {rmse_xgb.mean().round(2)}') # View feature importance score so we know which features are contributing to the outcome feat_imp = pd.Series(model_xgb.feature_importances_, index=X.columns) # get rid of NaN and zero feature importance feat_imp.dropna(inplace=True) feat_imp.drop(feat_imp[feat_imp.values == 0].index, inplace=True) plt.figure(figsize = (20,6)) ax = sns.barplot(x = feat_imp.index, y = feat_imp.values) ax.set_xticklabels(ax.get_xticklabels(),rotation = 90) plt.xlabel('Feature') plt.ylabel('Feature Importance Score') plt.title('Feature Importance for XG Boost - No Grid') ``` ### CV With Optimized Parameters ``` # Specify the parameters you want to try and their ranges. # small learning rate may require more estimators... # max_depth = tree depth # learning_rate = # n_estimators = number of trees param_test = { 'max_depth':[4,5,6,7,8], 'learning_rate' : [0.1, 0.2, 0.3, 0.4], 'n_estimators': [30,40,50,60,70] } # Perform the grid search # Allows us to specify multiple parameters for CV from parameter dictionaries. gsearch = GridSearchCV(estimator = xgb.XGBRegressor(objective = 'reg:squarederror', seed = 10), param_grid = param_test, scoring='neg_mean_squared_error', cv=5) # Fit to training data X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size) model_xgb_grid = gsearch.fit(X_train,y_train) # Save results to variables for use in next step best_learning_rate = model_xgb_grid.best_params_['learning_rate'] best_max_depth = model_xgb_grid.best_params_['max_depth'] best_n_estimators = model_xgb_grid.best_params_['n_estimators'] # see grid search results print(model_xgb_grid.best_params_) # Try out the optimal parameters numLoops = 200 rmse_xgb_grid = np.zeros(numLoops) for idx in range(0,numLoops): X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size) model_xgb_grid = xgb.XGBRegressor(objective ='reg:squarederror', verbosity=0, learning_rate = best_learning_rate, max_depth = best_max_depth, n_estimators = best_n_estimators, seed = 10) model_xgb_grid.fit(X_train,y_train) y_pred_xgb_grid = model_xgb_grid.predict(X_test) rmse_xgb_grid[idx] = np.sqrt(mean_squared_error(y_test,y_pred_xgb_grid)) print(f'CV RMSE XG Boost Grid: {rmse_xgb_grid.mean()}') # View feature importance score so we know which features are contributing to the outcome feat_imp = pd.Series(model_xgb_grid.feature_importances_, index=X.columns) # get rid of NaN and zero feature importance feat_imp.dropna(inplace=True) feat_imp.drop(feat_imp[feat_imp.values == 0].index, inplace=True) plt.figure(figsize = (20,6)) ax = sns.barplot(x = feat_imp.index, y = feat_imp.values) ax.set_xticklabels(ax.get_xticklabels(),rotation = 90) plt.xlabel('Feature') plt.ylabel('Feature Importance Score') ``` # Conclusions * The XG Boost model provided the best performance with an RMSE consistently below 10. * Genre did not improve model performance so was removed. 
* The two most important features in predicting a new song’s popularity were: * Artist Followers * Release Month * Additional analysis should be done with: * Different combinations of Genre * Encoding and adding Song Name elements to the model * Spotify should utilize fixed Genre categories to ensure more consistent data thus making Genre a more useful predictor.
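
As a possible follow-up to the conclusions above: the cross-validated RMSE in this notebook comes from averaging many random train/test splits, whereas `cross_val_score` (already imported in the Imports section) gives a K-fold estimate directly. A sketch, assuming `X`, `y` and the grid-search best parameters defined earlier, and a scikit-learn version that supports the `neg_root_mean_squared_error` scorer:

```
from sklearn.model_selection import KFold, cross_val_score
import xgboost as xgb

model = xgb.XGBRegressor(objective='reg:squarederror',
                         learning_rate=best_learning_rate,
                         max_depth=best_max_depth,
                         n_estimators=best_n_estimators,
                         seed=10)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring='neg_root_mean_squared_error')
print(f'5-fold CV RMSE: {-scores.mean():.2f} (+/- {scores.std():.2f})')
```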
<center>
<img src="https://habrastorage.org/web/677/8e1/337/6778e1337c3d4b159d7e99df94227cb2.jpg"/>
## Specialization "Machine Learning and Data Analysis"
<center>Author: [Yury Kashnitsky](https://yorko.github.io/), research programmer at Mail.Ru Group and senior lecturer at the Faculty of Computer Science, HSE

# <center> Capstone project No. 1 <br> Identifying users based on the web pages they visit

<img src='http://i.istockimg.com/file_thumbview_approve/21546327/5/stock-illustration-21546327-identification-de-l-utilisateur.jpg'>

# <center>Week 5. The Kaggle competition "Catch Me If You Can"

This week we revisit the concept of stochastic gradient descent and try out Scikit-learn's SGDClassifier, which is much faster on large samples than the algorithms we tested in week 4. We also get to know the data of the Kaggle [competition](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2) on user identification and make our first submissions to it. At the end of this week, extra points go to those who reach the top 30 of the competition's public leaderboard.

**In this part of the project, the recordings of the following lectures of the course "Supervised learning" may be useful:**
- [Stochastic gradient descent](https://www.coursera.org/learn/supervised-learning/lecture/xRY50/stokhastichieskii-ghradiientnyi-spusk)
- [Linear models. Sklearn.linear_model. Classification](https://www.coursera.org/learn/supervised-learning/lecture/EBg9t/linieinyie-modieli-sklearn-linear-model-klassifikatsiia)

**It is also recommended to go back and review the [assignment](https://www.coursera.org/learn/supervised-learning/programming/t2Idc/linieinaia-rieghriessiia-i-stokhastichieskii-ghradiientnyi-spusk) "Linear regression and stochastic gradient descent" from week 1 of course 2 of the specialization.**

### Assignment
1. Fill in the code in this notebook.
2. If you are taking the Yandex & MIPT specialization, submit the notebook in the corresponding Peer Review. <br> If you are taking the ODS course, select your answers in the [web form](https://docs.google.com/forms/d/1pLsegkAICL9PzOLyAeH9DmDOBfktte0l8JW75uWcTng).

```
from __future__ import division, print_function
# suppress all Anaconda warnings
import warnings
warnings.filterwarnings('ignore')
import os
import pickle
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_auc_score
```

**Read the [competition](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2) data into the DataFrames train_df and test_df (training and test sets).**

```
# Change this to your own path to the data
PATH_TO_DATA = 'capstone_user_identification'

train_df = pd.read_csv(os.path.join(PATH_TO_DATA, 'train_sessions.csv'),
                       index_col='session_id')
test_df = pd.read_csv(os.path.join(PATH_TO_DATA, 'test_sessions.csv'),
                      index_col='session_id')

train_df.head()
```

**Concatenate the training and test sets – we will need this later to convert them to a sparse format together.**

```
train_test_df = pd.concat([train_df, test_df])
```

In the training set we see the following features:

- site1 – index of the first site visited in the session
- time1 – time of the visit to the first site in the session
- ...
- site10 – index of the 10th site visited in the session
- time10 – time of the visit to the 10th site in the session
- user_id – the user's ID

User sessions are split in such a way that a session cannot be longer than half an hour or 10 sites. That is, a session is considered finished either when the user has visited 10 sites in a row or when the session has lasted more than 30 minutes.

**Let's look at the feature statistics.**

Missing values appear where sessions are short (fewer than 10 sites). Say, if a person visited *vk.com* at 20:01 on January 1, 2015, then *yandex.ru* at 20:29 and then *google.com* at 20:33, their first session will consist of only two sites (site1 – the ID of *vk.com*, time1 – 2015-01-01 20:01:00, site2 – the ID of *yandex.ru*, time2 – 2015-01-01 20:29:00, the remaining features – NaN), and a new session starts with *google.com*, because more than 30 minutes have passed since the visit to *vk.com*.

```
train_df.info()

test_df.head()

test_df.info()
```

**The training set contains 2297 sessions of one user (Alice) and 251264 sessions of other users, not Alice. The class imbalance is very strong, and looking at the share of correct answers (accuracy) is not informative.**

```
train_df['target'].value_counts()
```

**For now we will use only the indices of the visited sites for prediction. The indices are numbered starting from 1, so we replace missing values with zeros.**

```
train_test_df_sites = train_test_df[['site%d' % i for i in range(1, 11)]].fillna(0).astype('int')
train_test_df_sites.head(10)
```

**Create the sparse matrices *X_train_sparse* and *X_test_sparse* in the same way as we did earlier. Use the combined matrix *train_test_df_sites*, then split it back into the training and test parts.**

Note that sessions with fewer than 10 sites still contain zeros, so the first feature (how many times 0 appeared) is different in meaning from the rest (how many times the site with index $i$ appeared). Therefore the first column of the sparse matrix will have to be removed (one possible approach is sketched at the end of this notebook).

**Extract the answers on the training set into a separate vector *y*.**

```
train_test_sparse = ''' YOUR CODE HERE '''
X_train_sparse = ''' YOUR CODE HERE '''
X_test_sparse = ''' YOUR CODE HERE '''
y = ''' YOUR CODE HERE '''
```

**<font color='red'>Question 1. </font> Print the dimensions of the matrices *X_train_sparse* and *X_test_sparse* – 4 numbers on one line, separated by spaces: the number of rows and columns of *X_train_sparse*, then the number of rows and columns of *X_test_sparse*.**

```
''' YOUR CODE HERE '''
```

**Save the objects *X_train_sparse*, *X_test_sparse* and *y* to pickle files (the last one to the file *kaggle_data/train_target.pkl*).**

```
with open(os.path.join(PATH_TO_DATA, 'X_train_sparse.pkl'), 'wb') as X_train_sparse_pkl:
    pickle.dump(X_train_sparse, X_train_sparse_pkl, protocol=2)
with open(os.path.join(PATH_TO_DATA, 'X_test_sparse.pkl'), 'wb') as X_test_sparse_pkl:
    pickle.dump(X_test_sparse, X_test_sparse_pkl, protocol=2)
with open(os.path.join(PATH_TO_DATA, 'train_target.pkl'), 'wb') as train_target_pkl:
    pickle.dump(y, train_target_pkl, protocol=2)
```

**Split the training set into 2 parts in a 7/3 proportion, without shuffling. The original data is ordered by time, and the test set is clearly separated from the training set in time; we preserve that property here as well.**

```
train_share = int(.7 * X_train_sparse.shape[0])
X_train, y_train = X_train_sparse[:train_share, :], y[:train_share]
X_valid, y_valid = X_train_sparse[train_share:, :], y[train_share:]
```

**Create a `sklearn.linear_model.SGDClassifier` object with the logistic loss function and the parameter *random_state*=17. Leave the other parameters at their defaults, although *n_jobs*=-1 never hurts. Train the model on the set `(X_train, y_train)`.**

```
sgd_logit = ''' YOUR CODE HERE '''
sgd_logit.fit ''' YOUR CODE HERE '''
```

**Make a prediction in the form of predicted probabilities that a session belongs to Alice on the held-out set *(X_valid, y_valid)*.**

```
logit_valid_pred_proba = sgd_logit ''' YOUR CODE HERE '''
```

**<font color='red'>Question 2. </font> Compute the ROC AUC of the logistic regression trained with stochastic gradient descent on the held-out set. Round to 3 decimal places.**

```
''' YOUR CODE HERE '''
```

**Make a prediction in the form of predicted probabilities of belonging to class 1 for the test set, using the same *sgd_logit*, this time trained on the whole training set (not on 70% of it).**

```
%%time
sgd_logit ''' YOUR CODE HERE '''
logit_test_pred_proba = ''' YOUR CODE HERE '''
```

**Write the answers to a file and submit it to Kaggle. Give your (one-person) Kaggle team a descriptive name following the pattern "[YDF & MIPT] Coursera_Username", so that your submission can easily be identified on the [leaderboard](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2/leaderboard/public).**

**The result we have just obtained corresponds to the "SGDCLassifer" baseline on the leaderboard; the goal for this week is at least to beat it.**

```
def write_to_submission_file(predicted_labels, out_file,
                             target='target', index_label="session_id"):
    # turn predictions into data frame and save as csv file
    predicted_df = pd.DataFrame(predicted_labels,
                                index = np.arange(1, predicted_labels.shape[0] + 1),
                                columns=[target])
    predicted_df.to_csv(out_file, index_label=index_label)

write_to_submission_file ''' YOUR CODE HERE '''
```

## Grading criteria (only for the Peer Review in the specialization):
- Are the matrix dimensions in item 1 correct? (max. 2 points)
- Is the ROC AUC value in item 2 correct? (max. 4 points)
- Is the "sgd_logit_benchmark.csv" benchmark beaten on the public part of the Kaggle leaderboard? (max. 2 points)
- Is the "Logit +3 features" benchmark beaten on the public part of the Kaggle leaderboard? (max. 2 points)

## Ways to improve
A lot of time is given to the competition this week. Don't forget to add the good ideas you come up with during the competition to the description of the final project (`html`, `pdf` or `ipynb`). This only applies if you are taking the specialization.

Things to try:
- Use the previously built features to improve the model (they can be checked on a smaller sample of 150 users by separating one of the users from the rest – this is faster)
- Tune the model parameters (for example, the regularization coefficients)
- If resources allow (or you have enough patience), try blending the answers of boosting and a linear model. [Here](http://mlwave.com/kaggle-ensembling-guide/) is one of the best-known tutorials on blending algorithm answers; the [article](https://alexanderdyakonov.wordpress.com/2017/03/10/cтекинг-stacking-и-блендинг-blending) by Alexander Dyakonov is also good
- Note that the competition also provides the raw data on the web pages visited by Alice and by the other 1557 users (*train.zip*). You can build your own training set from this data.

In week 6 we will go through a long tutorial on Vowpal Wabbit and try it out on the competition data.
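Returning to the sparse-matrix step described earlier in this notebook, the following is a minimal sketch of one possible construction, not the assignment's reference solution. It assumes the site indices are available as a 2-D integer array (for example `train_test_df_sites.values`), and the helper name `sessions_to_sparse` is purely illustrative.

```
import numpy as np
from scipy.sparse import csr_matrix

def sessions_to_sparse(sites):
    """Turn an (n_sessions, 10) array of site indices into a sparse count matrix.

    Index 0 marks padding for short sessions, so its column is dropped.
    """
    n_sessions, session_len = sites.shape
    data = np.ones(sites.size, dtype=int)
    indptr = np.arange(0, sites.size + 1, session_len)
    counts = csr_matrix((data, sites.flatten(), indptr),
                        shape=(n_sessions, sites.max() + 1))
    counts.sum_duplicates()   # merge repeated visits to the same site within a session
    return counts[:, 1:]      # drop the column that counts the padding index 0

# Usage sketch with the variable names used above:
# train_test_sparse = sessions_to_sparse(train_test_df_sites.values)
# X_train_sparse = train_test_sparse[:train_df.shape[0], :]
# X_test_sparse = train_test_sparse[train_df.shape[0]:, :]
```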
``` import malaya with open('dumping-cleaned-common-crawl.txt') as fopen: data = fopen.read().split('\n') len(data) import re from unidecode import unidecode alphabets = '([A-Za-z])' prefixes = ( '(Mr|St|Mrs|Ms|Dr|Prof|Capt|Cpt|Lt|Mt|Puan|puan|Tuan|tuan|sir|Sir)[.]' ) suffixes = '(Inc|Ltd|Jr|Sr|Co|Mo)' starters = '(Mr|Mrs|Ms|Dr|He\s|She\s|It\s|They\s|Their\s|Our\s|We\s|But\s|However\s|That\s|This\s|Wherever|Dia|Mereka|Tetapi|Kita|Itu|Ini|Dan|Kami|Beliau|Seri|Datuk|Dato|Datin|Tuan|Puan)' acronyms = '([A-Z][.][A-Z][.](?:[A-Z][.])?)' websites = '[.](com|net|org|io|gov|me|edu|my)' another_websites = '(www|http|https)[.]' digits = '([0-9])' before_digits = '([Nn]o|[Nn]ombor|[Nn]umber|[Kk]e|=|al)' month = '([Jj]an(?:uari)?|[Ff]eb(?:ruari)?|[Mm]a(?:c)?|[Aa]pr(?:il)?|Mei|[Jj]u(?:n)?|[Jj]ula(?:i)?|[Aa]ug(?:ust)?|[Ss]ept?(?:ember)?|[Oo]kt(?:ober)?|[Nn]ov(?:ember)?|[Dd]is(?:ember)?)' def split_into_sentences(text, minimum_length = 5): text = text.replace('\x97', '\n') text = '. '.join([s for s in text.split('\n') if len(s)]) text = text + '.' text = unidecode(text) text = ' ' + text + ' ' text = text.replace('\n', ' ') text = re.sub(prefixes, '\\1<prd>', text) text = re.sub(websites, '<prd>\\1', text) text = re.sub(another_websites, '\\1<prd>', text) text = re.sub('[,][.]+', '<prd>', text) if '...' in text: text = text.replace('...', '<prd><prd><prd>') if 'Ph.D' in text: text = text.replace('Ph.D.', 'Ph<prd>D<prd>') text = re.sub('[.]\s*[,]', '<prd>,', text) text = re.sub(before_digits + '\s*[.]\s*' + digits, '\\1<prd>\\2', text) text = re.sub(month + '[.]\s*' + digits, '\\1<prd>\\2', text) text = re.sub('\s' + alphabets + '[.][ ]+', ' \\1<prd> ', text) text = re.sub(acronyms + ' ' + starters, '\\1<stop> \\2', text) text = re.sub( alphabets + '[.]' + alphabets + '[.]' + alphabets + '[.]', '\\1<prd>\\2<prd>\\3<prd>', text, ) text = re.sub( alphabets + '[.]' + alphabets + '[.]', '\\1<prd>\\2<prd>', text ) text = re.sub(' ' + suffixes + '[.][ ]+' + starters, ' \\1<stop> \\2', text) text = re.sub(' ' + suffixes + '[.]', ' \\1<prd>', text) text = re.sub(' ' + alphabets + '[.]', ' \\1<prd>', text) text = re.sub(digits + '[.]' + digits, '\\1<prd>\\2', text) if '”' in text: text = text.replace('.”', '”.') if '"' in text: text = text.replace('."', '".') if '!' in text: text = text.replace('!"', '"!') if '?' in text: text = text.replace('?"', '"?') text = text.replace('.', '.<stop>') text = text.replace('?', '?<stop>') text = text.replace('!', '!<stop>') text = text.replace('<prd>', '.') sentences = text.split('<stop>') sentences = sentences[:-1] sentences = [s.strip() for s in sentences if len(s) > minimum_length] return sentences split_into_sentences('Pembolehubah yang ketiga adalah niat yang merujuk kepada niat seseorang dalam melakukan pelbagai tingkah laku ( Fishbein et al . 
1975 : 12 ), ') data[11000: 12000] import malaya fast_text = malaya.language_detection.fasttext() VOWELS = 'aeiou' PHONES = ['sh', 'ch', 'ph', 'sz', 'cz', 'sch', 'rz', 'dz'] punctuations = '!@#$%^&*()_+=-' def isword_malay(word): if re.sub('[^0-9!@#$%\^&*()-=_\+{}\[\];\':",./<>?\|~`\\\ ]+', '', word) == word: return True if not any([c in VOWELS for c in word]): return False return True def isword_english(word): if word: consecutiveVowels = 0 consecutiveConsonents = 0 for idx, letter in enumerate(word.lower()): vowel = True if letter in VOWELS else False if idx: prev = word[idx - 1] prevVowel = True if prev in VOWELS else False if not vowel and letter == 'y' and not prevVowel: vowel = True if prevVowel != vowel: consecutiveVowels = 0 consecutiveConsonents = 0 if vowel: consecutiveVowels += 1 else: consecutiveConsonents += 1 if consecutiveVowels >= 3 or consecutiveConsonents > 3: return False if consecutiveConsonents == 3: subStr = word[idx - 2 : idx + 1] if any(phone in subStr for phone in PHONES): consecutiveConsonents -= 1 continue return False return True def filter_string(string, min_len = 15): if len(string) < min_len: return '' string = re.sub( 'http\S+|www.\S+', '', ' '.join( [ word for word in string.split() if word.find('#') < 0 and word.find('@') < 0 ] ), ) string = [w for w in string.split() if isword_malay(w.lower())] string = ' '.join(string) if len(string) > 2: if fast_text.predict([string])[0] == 'other': return '' else: return string else: return string def loop(strings): results = [] for string in tqdm(strings): no = string[0] results.append((no, filter_string(string[1]))) return results import cleaning from tqdm import tqdm temp = [(no, s) for no, s in enumerate(data)] results = cleaning.multiprocessing(temp, loop) %%time results = sorted(results, key=lambda x: x[0]) results = [r[1] for r in results] results[:1000] with open('filtered-dumping-cleaned-common-crawl.txt', 'w') as fopen: fopen.write('\n'.join(results)) ```
``` # autoreload notebook to update changes to imported packages %load_ext autoreload %autoreload 2 %load_ext watermark %watermark -a "Kenneth Brezinski" -v import torch from torchvision.datasets import FashionMNIST from torch.utils.tensorboard import SummaryWriter from torch.utils.data import DataLoader from torchvision import transforms import matplotlib.pyplot as plt %watermark --iversions train_ds = FashionMNIST("data", transform=transforms.ToTensor(), train=True, download=True) valid_ds = FashionMNIST("data", transform=transforms.ToTensor(), train=False, download=True) plt.figure(figsize=(8,4)) ds_iterable = iter(train_ds) # Checking the first 10 elements of the dataset for i in range(10): img, label = next(ds_iterable) plt.subplot(2, 5, i+1) plt.imshow(img[0], 'Greys') plt.title(f"Class label: {label}") batch_size = 256 train_loader = DataLoader(dataset=train_ds, batch_size=batch_size, shuffle=True) valid_loader = DataLoader(dataset=valid_ds, batch_size=batch_size, shuffle=False) # Checking the dataset for images, labels in train_loader: print('Image batch dimensions:', images.shape) print('Image label dimensions:', labels.shape) break class SimpleMLP(torch.nn.Module): def __init__(self, features_per_layer, weight_init='he', bias=True): super().__init__() layers = [torch.nn.Flatten(start_dim=1)] for d_in, d_out in zip(features_per_layer[:-1], features_per_layer[1:]): layers.append(torch.nn.Linear(d_in, d_out, bias=True)) layers.append(torch.nn.ReLU()) layers.append(torch.nn.Linear(d_out, 10, bias=True)) self.model = torch.nn.Sequential(*layers) self.weight_init = torch.nn.init.kaiming_uniform_ self._init_weights() def _init_weights(self): for m in self.modules(): if isinstance(m, torch.nn.Linear): self.weight_init(m.weight, mode='fan_in') if m.bias is not None: torch.nn.init.constant_(m.bias, 0) def forward(self, x): return self.model(x) def fetch_learnables(model, writer, epoch): for i, layer in enumerate(model.model): if isinstance(layer, torch.nn.Linear): writer.add_histogram(f"layer_{i}_weight", layer.weight) writer.add_histogram(f"layer_{i}_bias", layer.bias) params = dict(epochs=5, arch=[28*28, 64, 32], weight_init=None, lr=1e-2, verbose=False, logs_per_epoch=2, writer=True ) model = SimpleMLP(features_per_layer=params['arch']) optimizer = torch.optim.SGD(model.parameters(), lr=params['lr']) loss_fn = torch.nn.CrossEntropyLoss(reduction='mean') criterion = lambda pred, y: (pred == y).sum().item() / len(y) iteration_name = f"lr={params['lr']}_arch={'-'.join(str(c) for c in params['arch'])}" writers = {'train': SummaryWriter(f'runs/train_' + iteration_name), 'valid': SummaryWriter(f'runs/valid_' + iteration_name)} def train_model(train_loader, valid_loader, params, writers): for epoch in range(params['epochs']): model.train() # run training loop for batch_idx, (img, label) in enumerate(train_loader): output = model(img) loss = loss_fn(output, label) class_pred = torch.argmax(output, dim=-1) accuracy = criterion(class_pred, label) optimizer.zero_grad() loss.backward() optimizer.step() if params['writer']: writers['train'].add_scalar('acc', accuracy, epoch+1) writers['train'].add_scalar('loss', loss.item(), epoch+1) fetch_learnables(model, writers['train'], epoch+1) if params['verbose']: if not batch_idx % (len(train_loader) // params['logs_per_epoch']): print(f"[Epoch {epoch+1:03d}][Batch {batch_idx:03d}/{len(train_loader)}]" f"[Loss {loss.item():.4f}][Acc {accuracy:.4f}]") # run validation with torch.no_grad(): model.eval() # run validation loop for batch_idx, (img, label) in 
enumerate(valid_loader): output = model(img) loss = loss_fn(output, label) class_pred = torch.argmax(output, dim=-1) accuracy = criterion(class_pred, label) if params['writer']: writers['valid'].add_scalar('acc', accuracy, epoch+1) writers['valid'].add_scalar('loss', loss.item(), epoch+1) if params['verbose']: if not batch_idx % (len(valid_loader) // params['logs_per_epoch']): print(f"[Epoch {epoch+1:03d}][Batch {batch_idx:03d}/{len(valid_loader)}]" f"[Loss {loss.item():.4f}][Acc {accuracy:.4f}]") print("*" * 60) print(f"Finished Epoch {epoch+1:02d}/{params['epochs']:02d}") writers['train'].add_graph(model, img) writers['train'].close() writers['valid'].close() train_model(train_loader, valid_loader, params, writers) ```
```
Questions :

1. There are far more categorical features than numerical ones - can we use this fact in some way?
2. How should we deal with the categorical features? One-hot encoding would yield many features and increase the dimensionality of the problem.
3. Do the continuous variables need any kind of transformation?
```

```
Ideas :

Forward feature selection based on minimizing MAE with 5-fold cross validation
```

```
import numpy as np
import pandas as pd

import os, sys

from sklearn.cross_validation import StratifiedKFold, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

import xgboost as xgb

import warnings
warnings.filterwarnings('ignore')

basepath = os.path.expanduser('~/Desktop/src/AllState_Claims_Severity/')
sys.path.append(os.path.join(basepath, 'src'))

np.random.seed(2016)

from data import *
from utils import *

# load files
train = pd.read_csv(os.path.join(basepath, 'data/raw/train.csv'))
test = pd.read_csv(os.path.join(basepath, 'data/raw/test.csv'))
sample_sub = pd.read_csv(os.path.join(basepath, 'data/raw/sample_submission.csv'))

# create an indicator for somewhat precarious values for loss. ( only to reduce the number of training examples. )
train['outlier_flag'] = train.loss.map(lambda x: int(x < 4e3))

# encode categorical variables
train, test = encode_categorical_features(train, test)

# get stratified sample
itrain, itest = get_stratified_sample(train.outlier_flag)

# subsample of data to work with
train_sub = train.iloc[itrain]

# target variable
y = np.log(train.loss)

def forward_feature_selection(df):
    columns = df.columns

    # rearrange columns in such a way that target variables ( loss, outlier_flag ) is
    # followed by continuous and categorical variables
    cont_columns = [col for col in columns if 'cont' in col]
    cat_columns = [col for col in columns if 'cat' in col]

    df = df[list(columns[-2:]) + cont_columns + cat_columns]

    y = np.log(df.loss)
    outlier_flag = df.outlier_flag

    selected_features = []
    features_to_test = df.columns[2:]

    n_fold = 5
    cv = StratifiedKFold(outlier_flag, n_folds=n_fold, shuffle=True, random_state=23232)

    mae_cv_old = 5000
    is_improving = True

    while is_improving:
        mae_cvs = []
        for feature in features_to_test:
            print('{}'.format(selected_features + [feature]))
            X = df[selected_features + [feature]]

            mae_cv = 0
            for i, (i_trn, i_val) in enumerate(cv, start=1):
                est = xgb.XGBRegressor(seed=121212)
                est.fit(X.values[i_trn], y.values[i_trn])

                yhat = np.exp(est.predict(X.values[i_val]))
                mae = mean_absolute_error(np.exp(y.values[i_val]), yhat)
                mae_cv += mae / n_fold

            print('MAE CV: {}'.format(mae_cv))
            mae_cvs.append(mae_cv)

        mae_cv_new = min(mae_cvs)

        if mae_cv_new < mae_cv_old:
            mae_cv_old = mae_cv_new
            feature = list(features_to_test).pop(mae_cvs.index(mae_cv_new))
            selected_features.append(feature)

            print('selected features: {}'.format(selected_features))

            with open(os.path.join(basepath, 'data/processed/features_xgboost/selected_features.txt'), 'w') as f:
                f.write('{}\n'.format('\n'.join(selected_features)))
                f.close()
        else:
            is_improving = False
            print('final selected features: {}'.format(selected_features))

    print('saving selected feature names as a file')
    with open(os.path.join(basepath, 'data/processed/features_xgboost/selected_features.txt'), 'w') as f:
        f.write('{}\n'.format('\n'.join(selected_features)))
        f.close()

forward_feature_selection(train)

selected_features = [
    'cat80', 'cat101', 'cat100', 'cat57', 'cat114',
    'cat79', 'cat44', 'cat26', 'cat94', 'cat38',
    'cat32', 'cat35', 'cat67', 'cat59'
]

X = train[selected_features]

itrain, itest = train_test_split(range(len(X)), stratify=train.outlier_flag, test_size=0.2, random_state=11232)

X_train = X.iloc[itrain]
X_test = X.iloc[itest]

y_train = y.iloc[itrain]
y_test = y.iloc[itest]

clf = RandomForestRegressor(n_estimators=100, max_depth=13, n_jobs=-1, random_state=12121)
clf.fit(X_train, y_train)

y_hat = np.exp(clf.predict(X_test))
print('MAE on unseen examples ', mean_absolute_error(np.exp(y_test), y_hat))
```
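On question 2 above, one common alternative to one-hot encoding for high-cardinality categoricals is plain label (ordinal) encoding, which keeps the number of columns unchanged and works reasonably well with the tree-based models used here. The `encode_categorical_features` helper imported from the local modules is not shown in this notebook, so the sketch below is only a guess at what such a step might look like, not its actual implementation.

```
import pandas as pd

def label_encode_categoricals(train: pd.DataFrame, test: pd.DataFrame):
    """Hypothetical sketch: map each categorical level to an integer code.

    Fitting the mapping on train and test together avoids unseen-level errors
    at prediction time; tree models can then split on the integer codes.
    """
    cat_columns = [col for col in train.columns if 'cat' in col]
    for col in cat_columns:
        combined = pd.concat([train[col], test[col]], axis=0).astype('category')
        train[col] = combined.iloc[:len(train)].cat.codes
        test[col] = combined.iloc[len(train):].cat.codes
    return train, test
```

Target or frequency encoding would be another option worth comparing with the same 5-fold MAE setup.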
A socket is one endpoint of a communication channel used by programs to pass data back and forth locally or across the Internet. Sockets have two primary properties controlling the way they send data: the address family controls the OSI network layer protocol used and the socket type controls the transport layer protocol.

Python supports three address families. The most common, AF_INET, is used for IPv4 Internet addressing. IPv4 addresses are four bytes long and are usually represented as a sequence of four numbers, one per octet, separated by dots (e.g., 10.1.1.5 and 127.0.0.1). These values are more commonly referred to as “IP addresses.” Almost all Internet networking is done using IP version 4 at this time.

AF_INET6 is used for IPv6 Internet addressing. IPv6 is the “next generation” version of the Internet protocol, and supports 128-bit addresses, traffic shaping, and routing features not available under IPv4. Adoption of IPv6 continues to grow, especially with the proliferation of cloud computing and the extra devices being added to the network because of Internet-of-things projects.

AF_UNIX is the address family for Unix Domain Sockets (UDS), an inter-process communication protocol available on POSIX-compliant systems. The implementation of UDS typically allows the operating system to pass data directly from process to process, without going through the network stack. This is more efficient than using AF_INET, but because the file system is used as the namespace for addressing, UDS is restricted to processes on the same system. The appeal of using UDS over other IPC mechanisms such as named pipes or shared memory is that the programming interface is the same as for IP networking, so the application can take advantage of efficient communication when running on a single host, but use the same code when sending data across the network.

**Note**: The AF_UNIX constant is only defined on systems where UDS is supported.

The socket type is usually either SOCK_DGRAM for message-oriented datagram transport or SOCK_STREAM for stream-oriented transport. Datagram sockets are most often associated with UDP, the user datagram protocol. They provide unreliable delivery of individual messages. Stream-oriented sockets are associated with TCP, the transmission control protocol. They provide byte streams between the client and server, ensuring message delivery or failure notification through timeout management, retransmission, and other features.

Most application protocols that deliver a large amount of data, such as HTTP, are built on top of TCP because it makes it simpler to create complex applications when message ordering and delivery is handled automatically. UDP is commonly used for protocols where order is less important (since the messages are self-contained and often small, such as name look-ups via DNS), or for multicasting (sending the same data to several hosts). Both UDP and TCP can be used with either IPv4 or IPv6 addressing.

## Looking up Hosts on the Network

socket includes functions to interface with the domain name services on the network so a program can convert the host name of a server into its numerical network address. Applications do not need to convert addresses explicitly before using them to connect to a server, but it can be useful when reporting errors to include the numerical address as well as the name value being used.
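Before moving on to the lookup functions, here is a minimal sketch, not part of the original text, showing how the address family and socket type constants described above combine when creating sockets.

```
import socket

# A TCP/IPv4 socket: stream-oriented transport over IPv4 addressing.
tcp_v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# A UDP/IPv4 socket: message-oriented datagram transport.
udp_v4 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A TCP/IPv6 socket, for hosts reachable over IPv6.
tcp_v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)

# A Unix domain socket, only on platforms that define AF_UNIX.
if hasattr(socket, 'AF_UNIX'):
    uds = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    uds.close()

for s in (tcp_v4, udp_v4, tcp_v6):
    print(s.family, s.type)
    s.close()
```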
To find the official name of the current host, use gethostname() ``` import socket print(socket.gethostname()) ``` Use gethostbyname() to consult the operating system hostname resolution API and convert the name of a server to its numerical address. ``` import socket HOSTS = [ 'apu', 'pymotw.com', 'www.python.org', 'nosuchname', ] for host in HOSTS: try: print('{} : {}'.format(host, socket.gethostbyname(host))) except socket.error as msg: print('{} : {}'.format(host, msg)) ``` For access to more naming information about a server, use gethostbyname_ex(). It returns the canonical hostname of the server, any aliases, and all of the available IP addresses that can be used to reach it. ``` import socket HOSTS = [ 'apu', 'pymotw.com', 'www.python.org', 'nosuchname', ] for host in HOSTS: print(host) try: name, aliases, addresses = socket.gethostbyname_ex(host) print(' Hostname:', name) print(' Aliases :', aliases) print(' Addresses:', addresses) except socket.error as msg: print('ERROR:', msg) print() ``` Use getfqdn() to convert a partial name to a fully qualified domain name. ``` import socket for host in ['scott-t460', 'pymotw.com']: print('{:>10} : {}'.format(host, socket.getfqdn(host))) ``` When the address of a server is available, use gethostbyaddr() to do a “reverse” lookup for the name. ``` import socket hostname, aliases, addresses = socket.gethostbyaddr('10.104.190.53') print('Hostname :', hostname) print('Aliases :', aliases) print('Addresses:', addresses) ``` ## Finding Service Information In addition to an IP address, each socket address includes an integer port number. Many applications can run on the same host, listening on a single IP address, but only one socket at a time can use a port at that address. The combination of IP address, protocol, and port number uniquely identify a communication channel and ensure that messages sent through a socket arrive at the correct destination. Some of the port numbers are pre-allocated for a specific protocol. For example, communication between email servers using SMTP occurs over port number 25 using TCP, and web clients and servers use port 80 for HTTP. The port numbers for network services with standardized names can be looked up with getservbyname(). ``` import socket from urllib.parse import urlparse URLS = [ 'http://www.python.org', 'https://www.mybank.com', 'ftp://prep.ai.mit.edu', 'gopher://gopher.micro.umn.edu', 'smtp://mail.example.com', 'imap://mail.example.com', 'imaps://mail.example.com', 'pop3://pop.example.com', 'pop3s://pop.example.com', ] for url in URLS: parsed_url = urlparse(url) port = socket.getservbyname(parsed_url.scheme) print('{:>6} : {}'.format(parsed_url.scheme, port)) ``` To reverse the service port lookup, use getservbyport(). ``` import socket from urllib.parse import urlunparse for port in [80, 443, 21, 70, 25, 143, 993, 110, 995]: url = '{}://example.com/'.format(socket.getservbyport(port)) print(url) ``` The number assigned to a transport protocol can be retrieved with getprotobyname(). ``` import socket def get_constants(prefix): """Create a dictionary mapping socket module constants to their names. 
""" return { getattr(socket, n): n for n in dir(socket) if n.startswith(prefix) } protocols = get_constants('IPPROTO_') for name in ['icmp', 'udp', 'tcp']: proto_num = socket.getprotobyname(name) const_name = protocols[proto_num] print('{:>4} -> {:2d} (socket.{:<12} = {:2d})'.format( name, proto_num, const_name, getattr(socket, const_name))) ``` ## Looking Up Server Addresses getaddrinfo() converts the basic address of a service into a list of tuples with all of the information necessary to make a connection. The contents of each tuple will vary, containing different network families or protocols. ``` import socket def get_constants(prefix): """Create a dictionary mapping socket module constants to their names. """ return { getattr(socket, n): n for n in dir(socket) if n.startswith(prefix) } families = get_constants('AF_') types = get_constants('SOCK_') protocols = get_constants('IPPROTO_') for response in socket.getaddrinfo('www.python.org', 'http'): # Unpack the response tuple family, socktype, proto, canonname, sockaddr = response print('Family :', families[family]) print('Type :', types[socktype]) print('Protocol :', protocols[proto]) print('Canonical name:', canonname) print('Socket address:', sockaddr) print() ``` ## IP Address Representations Network programs written in C use the data type struct sockaddr to represent IP addresses as binary values (instead of the string addresses usually found in Python programs). To convert IPv4 addresses between the Python representation and the C representation, use inet_aton() and inet_ntoa(). ``` import binascii import socket import struct import sys for string_address in ['192.168.1.1', '127.0.0.1']: packed = socket.inet_aton(string_address) print('Original:', string_address) print('Packed :', binascii.hexlify(packed)) print('Unpacked:', socket.inet_ntoa(packed)) print() ``` The four bytes in the packed format can be passed to C libraries, transmitted safely over the network, or saved to a database compactly. The related functions inet_pton() and inet_ntop() work with both IPv4 and IPv6 addresses, producing the appropriate format based on the address family parameter passed in. ``` import binascii import socket import struct import sys string_address = '2002:ac10:10a:1234:21e:52ff:fe74:40e' packed = socket.inet_pton(socket.AF_INET6, string_address) print('Original:', string_address) print('Packed :', binascii.hexlify(packed)) print('Unpacked:', socket.inet_ntop(socket.AF_INET6, packed)) ```
github_jupyter
import socket print(socket.gethostname()) import socket HOSTS = [ 'apu', 'pymotw.com', 'www.python.org', 'nosuchname', ] for host in HOSTS: try: print('{} : {}'.format(host, socket.gethostbyname(host))) except socket.error as msg: print('{} : {}'.format(host, msg)) import socket HOSTS = [ 'apu', 'pymotw.com', 'www.python.org', 'nosuchname', ] for host in HOSTS: print(host) try: name, aliases, addresses = socket.gethostbyname_ex(host) print(' Hostname:', name) print(' Aliases :', aliases) print(' Addresses:', addresses) except socket.error as msg: print('ERROR:', msg) print() import socket for host in ['scott-t460', 'pymotw.com']: print('{:>10} : {}'.format(host, socket.getfqdn(host))) import socket hostname, aliases, addresses = socket.gethostbyaddr('10.104.190.53') print('Hostname :', hostname) print('Aliases :', aliases) print('Addresses:', addresses) import socket from urllib.parse import urlparse URLS = [ 'http://www.python.org', 'https://www.mybank.com', 'ftp://prep.ai.mit.edu', 'gopher://gopher.micro.umn.edu', 'smtp://mail.example.com', 'imap://mail.example.com', 'imaps://mail.example.com', 'pop3://pop.example.com', 'pop3s://pop.example.com', ] for url in URLS: parsed_url = urlparse(url) port = socket.getservbyname(parsed_url.scheme) print('{:>6} : {}'.format(parsed_url.scheme, port)) import socket from urllib.parse import urlunparse for port in [80, 443, 21, 70, 25, 143, 993, 110, 995]: url = '{}://example.com/'.format(socket.getservbyport(port)) print(url) import socket def get_constants(prefix): """Create a dictionary mapping socket module constants to their names. """ return { getattr(socket, n): n for n in dir(socket) if n.startswith(prefix) } protocols = get_constants('IPPROTO_') for name in ['icmp', 'udp', 'tcp']: proto_num = socket.getprotobyname(name) const_name = protocols[proto_num] print('{:>4} -> {:2d} (socket.{:<12} = {:2d})'.format( name, proto_num, const_name, getattr(socket, const_name))) import socket def get_constants(prefix): """Create a dictionary mapping socket module constants to their names. """ return { getattr(socket, n): n for n in dir(socket) if n.startswith(prefix) } families = get_constants('AF_') types = get_constants('SOCK_') protocols = get_constants('IPPROTO_') for response in socket.getaddrinfo('www.python.org', 'http'): # Unpack the response tuple family, socktype, proto, canonname, sockaddr = response print('Family :', families[family]) print('Type :', types[socktype]) print('Protocol :', protocols[proto]) print('Canonical name:', canonname) print('Socket address:', sockaddr) print() import binascii import socket import struct import sys for string_address in ['192.168.1.1', '127.0.0.1']: packed = socket.inet_aton(string_address) print('Original:', string_address) print('Packed :', binascii.hexlify(packed)) print('Unpacked:', socket.inet_ntoa(packed)) print() import binascii import socket import struct import sys string_address = '2002:ac10:10a:1234:21e:52ff:fe74:40e' packed = socket.inet_pton(socket.AF_INET6, string_address) print('Original:', string_address) print('Packed :', binascii.hexlify(packed)) print('Unpacked:', socket.inet_ntop(socket.AF_INET6, packed))
0.24899
0.947137
``` import os import numpy as np import pandas as pd import yfinance as yf from datetime import datetime # os.environ['NUMBA_DISABLE_JIT'] = '1' # uncomment this if you want to use pypfopt within simulation from numba import njit from pypfopt.efficient_frontier import EfficientFrontier from pypfopt import risk_models from pypfopt import expected_returns from pypfopt import base_optimizer import vectorbt as vbt from vectorbt.generic.nb import nanmean_nb from vectorbt.portfolio.nb import create_order_nb, auto_call_seq_ctx_nb from vectorbt.portfolio.enums import SizeType, Direction # Define params symbols = ['FB', 'AMZN', 'NFLX', 'GOOG', 'AAPL'] start_date = datetime(2017, 1, 1) end_date = datetime.now() num_tests = 2000 vbt.settings.returns['year_freq'] = '252 days' ohlcv_by_symbol = vbt.utils.data.download(symbols, start=start_date, end=end_date) print(ohlcv_by_symbol.keys()) ohlcv = vbt.utils.data.concat_symbols(ohlcv_by_symbol) print(ohlcv.keys()) price = ohlcv['Close'] # Plot normalized price series (price / price.iloc[0]).vbt.plot().show_png() returns = price.pct_change() print(returns.mean()) print(returns.std()) print(returns.corr()) ``` ## vectorbt: Random search ### One-time allocation ``` np.random.seed(42) # Generate random weights, n times weights = [] for i in range(num_tests): w = np.random.random_sample(len(symbols)) w = w / np.sum(w) weights.append(w) print(len(weights)) # Build column hierarchy such that one weight corresponds to one price series _price = price.vbt.tile(num_tests, keys=pd.Index(np.arange(num_tests), name='symbol_group')) _price = _price.vbt.stack_index(pd.Index(np.concatenate(weights), name='weights')) print(_price.columns) # Define order size size = np.full_like(_price, np.nan) size[0, :] = np.concatenate(weights) # allocate at first timestamp, do nothing afterwards print(size.shape) ``` **NOTE:** Do not attempt to run the following simulation with Numba disabled. ``` %%time # Run simulation portfolio = vbt.Portfolio.from_orders( close=_price, size=size, size_type='targetpercent', group_by='symbol_group', cash_sharing=True, freq='D', incl_unrealized=True ) # all weights sum to 1, no shorting, and 100% investment in risky assets print(len(portfolio.orders)) # Plot annualized return against volatility, color by sharpe ratio annualized_return = portfolio.annualized_return() annualized_return.index = portfolio.annualized_volatility() annualized_return.vbt.scatterplot( trace_kwargs=dict( mode='markers', marker=dict( color=portfolio.sharpe_ratio(), colorbar=dict( title='sharpe_ratio' ), size=5, opacity=0.7 ) ), xaxis_title='annualized_volatility', yaxis_title='annualized_return' ).show_png() # Get index of the best group according to the target metric best_symbol_group = portfolio.sharpe_ratio().idxmax() print(best_symbol_group) # Print best weights print(weights[best_symbol_group]) # Compute default stats print(portfolio.iloc[best_symbol_group].stats()) ``` ### Rebalance monthly ``` # Select the first index of each month rb_mask = ~_price.index.to_period('m').duplicated() print(rb_mask.sum()) rb_size = np.full_like(_price, np.nan) rb_size[rb_mask, :] = np.concatenate(weights) # allocate at mask print(rb_size.shape) ``` **NOTE:** Do not attempt to run the following simulation with Numba disabled. 
``` %%time # Run simulation, with rebalancing monthly rb_portfolio = vbt.Portfolio.from_orders( close=_price, size=rb_size, size_type='targetpercent', group_by='symbol_group', cash_sharing=True, call_seq='auto', # important: sell before buy freq='D', incl_unrealized=True ) print(len(rb_portfolio.orders)) rb_best_symbol_group = portfolio.sharpe_ratio().idxmax() print(rb_best_symbol_group) print(weights[rb_best_symbol_group]) print(rb_portfolio.iloc[rb_best_symbol_group].stats()) def plot_allocation(rb_portfolio): # Plot weights development of the portfolio rb_holding_value = rb_portfolio.holding_value(group_by=False) rb_value = rb_portfolio.value() rb_idxs = np.flatnonzero((rb_portfolio.share_flow() != 0).any(axis=1)) (rb_holding_value.vbt / rb_value).vbt.plot( trace_names=symbols, trace_kwargs=dict( stackgroup='one' ), shapes=[dict( xref='x', yref='paper', x0=date, x1=date, y0=0, y1=1, line_color=vbt.settings.layout['template']['layout']['plot_bgcolor'] ) for date in price.index[rb_idxs]] ).show_png() plot_allocation(rb_portfolio.iloc[rb_best_symbol_group]) # best group ``` ### Search and rebalance every 30 days Utilize low-level API to dynamically search for best Sharpe ratio and rebalance accordingly. Compared to previous method, we won't utilize stacking, but do search in a loop instead. We also will use days instead of months, as latter may contain a various number of trading days. ``` srb_sharpe = np.full(price.shape[0], np.nan) @njit def prep_func_nb(simc, every_nth): # Define rebalancing days simc.active_mask[:, :] = False simc.active_mask[every_nth::every_nth, :] = True return () @njit def find_weights_nb(sc, price, num_tests): # Find optimal weights based on best Sharpe ratio returns = (price[1:] - price[:-1]) / price[:-1] returns = returns[1:, :] # cannot compute np.cov with NaN mean = nanmean_nb(returns) cov = np.cov(returns, rowvar=False) # masked arrays not supported by Numba (yet) best_sharpe_ratio = -np.inf for i in range(num_tests): # Generate weights w = np.random.random_sample(sc.group_len) w = w / np.sum(w) # Compute annualized mean, covariance, and Sharpe ratio p_return = np.sum(mean * w) * ann_factor p_std = np.sqrt(np.dot(w.T, np.dot(cov, w))) * np.sqrt(ann_factor) sharpe_ratio = p_return / p_std if sharpe_ratio > best_sharpe_ratio: best_sharpe_ratio = sharpe_ratio weights = w return best_sharpe_ratio, weights @njit def segment_prep_func_nb(sc, find_weights_nb, history_len, ann_factor, num_tests, srb_sharpe): if history_len == -1: # Look back at the entire time period close = sc.close[:sc.i, sc.from_col:sc.to_col] else: # Look back at a fixed time period if sc.i - history_len <= 0: return (np.full(sc.group_len, np.nan),) # insufficient data close = sc.close[sc.i - history_len:sc.i, sc.from_col:sc.to_col] # Find optimal weights best_sharpe_ratio, weights = find_weights_nb(sc, close, num_tests) srb_sharpe[sc.i] = best_sharpe_ratio # Update valuation price and reorder orders size_type = np.full(sc.group_len, SizeType.TargetPercent) direction = np.full(sc.group_len, Direction.LongOnly) temp_float_arr = np.empty(sc.group_len, dtype=np.float_) for k in range(sc.group_len): col = sc.from_col + k sc.last_val_price[col] = sc.close[sc.i, col] auto_call_seq_ctx_nb(sc, weights, size_type, direction, temp_float_arr) return (weights,) @njit def order_func_nb(oc, weights): col_i = oc.call_seq_now[oc.call_idx] return create_order_nb( size=weights[col_i], size_type=SizeType.TargetPercent, price=oc.close[oc.i, oc.col] ) ann_factor = returns.vbt.returns(freq='D').ann_factor %%time # Run 
simulation using a custom order function srb_portfolio = vbt.Portfolio.from_order_func( price, order_func_nb, prep_func_nb=prep_func_nb, prep_args=(30,), segment_prep_func_nb=segment_prep_func_nb, segment_prep_args=(find_weights_nb, -1, ann_factor, num_tests, srb_sharpe), cash_sharing=True, group_by=True, freq='D', incl_unrealized=True, seed=42 ) # Plot best Sharpe ratio at each rebalancing day pd.Series(srb_sharpe, index=price.index).vbt.scatterplot(trace_kwargs=dict(mode='markers')).show_png() print(srb_portfolio.stats()) plot_allocation(srb_portfolio) ``` You can see how weights stabilize themselves with growing data. ``` %%time # Run simulation, but now consider only the last 252 days of data srb252_sharpe = np.full(price.shape[0], np.nan) srb252_portfolio = vbt.Portfolio.from_order_func( price, order_func_nb, prep_func_nb=prep_func_nb, prep_args=(30,), segment_prep_func_nb=segment_prep_func_nb, segment_prep_args=(find_weights_nb, 252, ann_factor, num_tests, srb252_sharpe), cash_sharing=True, group_by=True, freq='D', incl_unrealized=True, seed=42 ) pd.Series(srb252_sharpe, index=price.index).vbt.scatterplot(trace_kwargs=dict(mode='markers')).show_png() print(srb252_portfolio.stats()) plot_allocation(srb252_portfolio) ``` A much more volatile weight distribution. ## PyPortfolioOpt + vectorbt ### One-time allocation ``` # Calculate expected returns and sample covariance amtrix avg_returns = expected_returns.mean_historical_return(price) cov_mat = risk_models.sample_cov(price) # Get weights maximizing the Sharpe ratio ef = EfficientFrontier(avg_returns, cov_mat) weights = ef.max_sharpe() clean_weights = ef.clean_weights() pyopt_weights = np.array([clean_weights[symbol] for symbol in symbols]) print(pyopt_weights) pyopt_size = np.full_like(price, np.nan) pyopt_size[0, :] = pyopt_weights # allocate at first timestamp, do nothing afterwards print(pyopt_size.shape) %%time # Run simulation with weights from PyPortfolioOpt pyopt_portfolio = vbt.Portfolio.from_orders( close=price, size=pyopt_size, size_type='targetpercent', group_by=True, cash_sharing=True, freq='D', incl_unrealized=True ) print(len(pyopt_portfolio.orders)) ``` Faster than stacking solution, but doesn't let you compare weights. ``` print(pyopt_portfolio.stats()) ``` ### Search and rebalance monthly **NOTE:** PyPortfolioOpt cannot run within Numba, so restart the notebook and disable Numba in the first cell. 
``` def pyopt_find_weights(sc, price, num_tests): # Calculate expected returns and sample covariance matrix price = pd.DataFrame(price, columns=symbols) avg_returns = expected_returns.mean_historical_return(price) cov_mat = risk_models.sample_cov(price) # Get weights maximizing the Sharpe ratio ef = EfficientFrontier(avg_returns, cov_mat) weights = ef.max_sharpe() clean_weights = ef.clean_weights() weights = np.array([clean_weights[symbol] for symbol in symbols]) best_sharpe_ratio = base_optimizer.portfolio_performance(weights, avg_returns, cov_mat)[2] return best_sharpe_ratio, weights %%time pyopt_srb_sharpe = np.full(price.shape[0], np.nan) # Run simulation with a custom order function (Numba should be disabled) pyopt_srb_portfolio = vbt.Portfolio.from_order_func( price, order_func_nb, prep_func_nb=prep_func_nb, prep_args=(30,), segment_prep_func_nb=segment_prep_func_nb, segment_prep_args=(pyopt_find_weights, -1, ann_factor, num_tests, pyopt_srb_sharpe), cash_sharing=True, group_by=True, freq='D', incl_unrealized=True, seed=42 ) pd.Series(pyopt_srb_sharpe, index=price.index).vbt.scatterplot(trace_kwargs=dict(mode='markers')).show_png() print(pyopt_srb_portfolio.stats()) plot_allocation(pyopt_srb_portfolio) ```
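To make the random-search step inside find_weights_nb easier to follow in isolation, here is a NumPy-only sketch of the same idea (added for illustration, not part of the original notebook), using made-up daily means and a made-up covariance matrix rather than the downloaded data:

```
import numpy as np

rng = np.random.default_rng(42)
mean = np.array([0.0008, 0.0010, 0.0006])   # hypothetical daily mean returns
cov = np.diag([0.0004, 0.0006, 0.0003])     # hypothetical daily covariance
ann_factor = 252

best_sharpe, best_w = -np.inf, None
for _ in range(1000):
    w = rng.random(3)
    w /= w.sum()                            # long-only weights summing to 1
    p_ret = ann_factor * (mean @ w)
    p_vol = np.sqrt(ann_factor * (w @ cov @ w))
    sharpe = p_ret / p_vol
    if sharpe > best_sharpe:
        best_sharpe, best_w = sharpe, w

print(best_sharpe, best_w)
```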
github_jupyter
import os import numpy as np import pandas as pd import yfinance as yf from datetime import datetime # os.environ['NUMBA_DISABLE_JIT'] = '1' # uncomment this if you want to use pypfopt within simulation from numba import njit from pypfopt.efficient_frontier import EfficientFrontier from pypfopt import risk_models from pypfopt import expected_returns from pypfopt import base_optimizer import vectorbt as vbt from vectorbt.generic.nb import nanmean_nb from vectorbt.portfolio.nb import create_order_nb, auto_call_seq_ctx_nb from vectorbt.portfolio.enums import SizeType, Direction # Define params symbols = ['FB', 'AMZN', 'NFLX', 'GOOG', 'AAPL'] start_date = datetime(2017, 1, 1) end_date = datetime.now() num_tests = 2000 vbt.settings.returns['year_freq'] = '252 days' ohlcv_by_symbol = vbt.utils.data.download(symbols, start=start_date, end=end_date) print(ohlcv_by_symbol.keys()) ohlcv = vbt.utils.data.concat_symbols(ohlcv_by_symbol) print(ohlcv.keys()) price = ohlcv['Close'] # Plot normalized price series (price / price.iloc[0]).vbt.plot().show_png() returns = price.pct_change() print(returns.mean()) print(returns.std()) print(returns.corr()) np.random.seed(42) # Generate random weights, n times weights = [] for i in range(num_tests): w = np.random.random_sample(len(symbols)) w = w / np.sum(w) weights.append(w) print(len(weights)) # Build column hierarchy such that one weight corresponds to one price series _price = price.vbt.tile(num_tests, keys=pd.Index(np.arange(num_tests), name='symbol_group')) _price = _price.vbt.stack_index(pd.Index(np.concatenate(weights), name='weights')) print(_price.columns) # Define order size size = np.full_like(_price, np.nan) size[0, :] = np.concatenate(weights) # allocate at first timestamp, do nothing afterwards print(size.shape) %%time # Run simulation portfolio = vbt.Portfolio.from_orders( close=_price, size=size, size_type='targetpercent', group_by='symbol_group', cash_sharing=True, freq='D', incl_unrealized=True ) # all weights sum to 1, no shorting, and 100% investment in risky assets print(len(portfolio.orders)) # Plot annualized return against volatility, color by sharpe ratio annualized_return = portfolio.annualized_return() annualized_return.index = portfolio.annualized_volatility() annualized_return.vbt.scatterplot( trace_kwargs=dict( mode='markers', marker=dict( color=portfolio.sharpe_ratio(), colorbar=dict( title='sharpe_ratio' ), size=5, opacity=0.7 ) ), xaxis_title='annualized_volatility', yaxis_title='annualized_return' ).show_png() # Get index of the best group according to the target metric best_symbol_group = portfolio.sharpe_ratio().idxmax() print(best_symbol_group) # Print best weights print(weights[best_symbol_group]) # Compute default stats print(portfolio.iloc[best_symbol_group].stats()) # Select the first index of each month rb_mask = ~_price.index.to_period('m').duplicated() print(rb_mask.sum()) rb_size = np.full_like(_price, np.nan) rb_size[rb_mask, :] = np.concatenate(weights) # allocate at mask print(rb_size.shape) %%time # Run simulation, with rebalancing monthly rb_portfolio = vbt.Portfolio.from_orders( close=_price, size=rb_size, size_type='targetpercent', group_by='symbol_group', cash_sharing=True, call_seq='auto', # important: sell before buy freq='D', incl_unrealized=True ) print(len(rb_portfolio.orders)) rb_best_symbol_group = portfolio.sharpe_ratio().idxmax() print(rb_best_symbol_group) print(weights[rb_best_symbol_group]) print(rb_portfolio.iloc[rb_best_symbol_group].stats()) def plot_allocation(rb_portfolio): # Plot weights 
development of the portfolio rb_holding_value = rb_portfolio.holding_value(group_by=False) rb_value = rb_portfolio.value() rb_idxs = np.flatnonzero((rb_portfolio.share_flow() != 0).any(axis=1)) (rb_holding_value.vbt / rb_value).vbt.plot( trace_names=symbols, trace_kwargs=dict( stackgroup='one' ), shapes=[dict( xref='x', yref='paper', x0=date, x1=date, y0=0, y1=1, line_color=vbt.settings.layout['template']['layout']['plot_bgcolor'] ) for date in price.index[rb_idxs]] ).show_png() plot_allocation(rb_portfolio.iloc[rb_best_symbol_group]) # best group srb_sharpe = np.full(price.shape[0], np.nan) @njit def prep_func_nb(simc, every_nth): # Define rebalancing days simc.active_mask[:, :] = False simc.active_mask[every_nth::every_nth, :] = True return () @njit def find_weights_nb(sc, price, num_tests): # Find optimal weights based on best Sharpe ratio returns = (price[1:] - price[:-1]) / price[:-1] returns = returns[1:, :] # cannot compute np.cov with NaN mean = nanmean_nb(returns) cov = np.cov(returns, rowvar=False) # masked arrays not supported by Numba (yet) best_sharpe_ratio = -np.inf for i in range(num_tests): # Generate weights w = np.random.random_sample(sc.group_len) w = w / np.sum(w) # Compute annualized mean, covariance, and Sharpe ratio p_return = np.sum(mean * w) * ann_factor p_std = np.sqrt(np.dot(w.T, np.dot(cov, w))) * np.sqrt(ann_factor) sharpe_ratio = p_return / p_std if sharpe_ratio > best_sharpe_ratio: best_sharpe_ratio = sharpe_ratio weights = w return best_sharpe_ratio, weights @njit def segment_prep_func_nb(sc, find_weights_nb, history_len, ann_factor, num_tests, srb_sharpe): if history_len == -1: # Look back at the entire time period close = sc.close[:sc.i, sc.from_col:sc.to_col] else: # Look back at a fixed time period if sc.i - history_len <= 0: return (np.full(sc.group_len, np.nan),) # insufficient data close = sc.close[sc.i - history_len:sc.i, sc.from_col:sc.to_col] # Find optimal weights best_sharpe_ratio, weights = find_weights_nb(sc, close, num_tests) srb_sharpe[sc.i] = best_sharpe_ratio # Update valuation price and reorder orders size_type = np.full(sc.group_len, SizeType.TargetPercent) direction = np.full(sc.group_len, Direction.LongOnly) temp_float_arr = np.empty(sc.group_len, dtype=np.float_) for k in range(sc.group_len): col = sc.from_col + k sc.last_val_price[col] = sc.close[sc.i, col] auto_call_seq_ctx_nb(sc, weights, size_type, direction, temp_float_arr) return (weights,) @njit def order_func_nb(oc, weights): col_i = oc.call_seq_now[oc.call_idx] return create_order_nb( size=weights[col_i], size_type=SizeType.TargetPercent, price=oc.close[oc.i, oc.col] ) ann_factor = returns.vbt.returns(freq='D').ann_factor %%time # Run simulation using a custom order function srb_portfolio = vbt.Portfolio.from_order_func( price, order_func_nb, prep_func_nb=prep_func_nb, prep_args=(30,), segment_prep_func_nb=segment_prep_func_nb, segment_prep_args=(find_weights_nb, -1, ann_factor, num_tests, srb_sharpe), cash_sharing=True, group_by=True, freq='D', incl_unrealized=True, seed=42 ) # Plot best Sharpe ratio at each rebalancing day pd.Series(srb_sharpe, index=price.index).vbt.scatterplot(trace_kwargs=dict(mode='markers')).show_png() print(srb_portfolio.stats()) plot_allocation(srb_portfolio) %%time # Run simulation, but now consider only the last 252 days of data srb252_sharpe = np.full(price.shape[0], np.nan) srb252_portfolio = vbt.Portfolio.from_order_func( price, order_func_nb, prep_func_nb=prep_func_nb, prep_args=(30,), segment_prep_func_nb=segment_prep_func_nb, 
segment_prep_args=(find_weights_nb, 252, ann_factor, num_tests, srb252_sharpe), cash_sharing=True, group_by=True, freq='D', incl_unrealized=True, seed=42 ) pd.Series(srb252_sharpe, index=price.index).vbt.scatterplot(trace_kwargs=dict(mode='markers')).show_png() print(srb252_portfolio.stats()) plot_allocation(srb252_portfolio) # Calculate expected returns and sample covariance amtrix avg_returns = expected_returns.mean_historical_return(price) cov_mat = risk_models.sample_cov(price) # Get weights maximizing the Sharpe ratio ef = EfficientFrontier(avg_returns, cov_mat) weights = ef.max_sharpe() clean_weights = ef.clean_weights() pyopt_weights = np.array([clean_weights[symbol] for symbol in symbols]) print(pyopt_weights) pyopt_size = np.full_like(price, np.nan) pyopt_size[0, :] = pyopt_weights # allocate at first timestamp, do nothing afterwards print(pyopt_size.shape) %%time # Run simulation with weights from PyPortfolioOpt pyopt_portfolio = vbt.Portfolio.from_orders( close=price, size=pyopt_size, size_type='targetpercent', group_by=True, cash_sharing=True, freq='D', incl_unrealized=True ) print(len(pyopt_portfolio.orders)) print(pyopt_portfolio.stats()) def pyopt_find_weights(sc, price, num_tests): # Calculate expected returns and sample covariance matrix price = pd.DataFrame(price, columns=symbols) avg_returns = expected_returns.mean_historical_return(price) cov_mat = risk_models.sample_cov(price) # Get weights maximizing the Sharpe ratio ef = EfficientFrontier(avg_returns, cov_mat) weights = ef.max_sharpe() clean_weights = ef.clean_weights() weights = np.array([clean_weights[symbol] for symbol in symbols]) best_sharpe_ratio = base_optimizer.portfolio_performance(weights, avg_returns, cov_mat)[2] return best_sharpe_ratio, weights %%time pyopt_srb_sharpe = np.full(price.shape[0], np.nan) # Run simulation with a custom order function (Numba should be disabled) pyopt_srb_portfolio = vbt.Portfolio.from_order_func( price, order_func_nb, prep_func_nb=prep_func_nb, prep_args=(30,), segment_prep_func_nb=segment_prep_func_nb, segment_prep_args=(pyopt_find_weights, -1, ann_factor, num_tests, pyopt_srb_sharpe), cash_sharing=True, group_by=True, freq='D', incl_unrealized=True, seed=42 ) pd.Series(pyopt_srb_sharpe, index=price.index).vbt.scatterplot(trace_kwargs=dict(mode='markers')).show_png() print(pyopt_srb_portfolio.stats()) plot_allocation(pyopt_srb_portfolio)
0.657428
0.728893
https://scikit-learn.org/stable/modules/outlier_detection.html#isolation-forest

https://scikit-learn.org/stable/auto_examples/plot_anomaly_comparison.html#sphx-glr-auto-examples-plot-anomaly-comparison-py

This example shows characteristics of different anomaly detection algorithms on 2D datasets. Datasets contain one or two modes (regions of high density) to illustrate the ability of algorithms to cope with multimodal data.

For each dataset, 15% of samples are generated as random uniform noise. This proportion is the value given to the nu parameter of the OneClassSVM and the contamination parameter of the other outlier detection algorithms. Decision boundaries between inliers and outliers are displayed in black except for Local Outlier Factor (LOF) as it has no predict method to be applied on new data when it is used for outlier detection.

The sklearn.svm.OneClassSVM is known to be sensitive to outliers and thus does not perform very well for outlier detection. This estimator is best suited for novelty detection when the training set is not contaminated by outliers. That said, outlier detection in high dimensions, or without any assumptions on the distribution of the inlying data, is very challenging, and a One-Class SVM might give useful results in these situations depending on the value of its hyperparameters.

sklearn.covariance.EllipticEnvelope assumes the data is Gaussian and learns an ellipse. It thus degrades when the data is not unimodal. Notice however that this estimator is robust to outliers.

sklearn.ensemble.IsolationForest and sklearn.neighbors.LocalOutlierFactor seem to perform reasonably well for multimodal data sets. The advantage of sklearn.neighbors.LocalOutlierFactor over the other estimators is shown for the third data set, where the two modes have different densities. This advantage is explained by the local aspect of LOF, meaning that it only compares the score of abnormality of one sample with the scores of its neighbors.

Finally, for the last data set, it is hard to say that one sample is more abnormal than another sample as they are uniformly distributed in a hypercube. Except for the sklearn.svm.OneClassSVM, which overfits a little, all estimators present decent solutions for this situation. In such a case, it would be wise to look more closely at the scores of abnormality of the samples, as a good estimator should assign similar scores to all the samples.

While these examples give some intuition about the algorithms, this intuition might not apply to very high dimensional data.

Finally, note that the parameters of the models have been hand-picked here, but in practice they need to be adjusted. In the absence of labelled data, the problem is completely unsupervised, so model selection can be a challenge.

```
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#         Albert Thomas <albert.thomas@telecom-paristech.fr>
# License: BSD 3 clause

import time

import numpy as np
import matplotlib
import matplotlib.pyplot as plt

from sklearn import svm
from sklearn.datasets import make_moons, make_blobs
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

print(__doc__)
matplotlib.rcParams['contour.negative_linestyle'] = 'solid'
```

# Sample set parameter settings

```
# Example settings
n_samples = 300
outliers_fraction = 0.15
n_outliers = int(outliers_fraction * n_samples)
n_inliers = n_samples - n_outliers
```

# define outlier/anomaly detection methods to be compared

```
anomaly_algorithms = [
    ("Robust covariance", EllipticEnvelope(contamination=outliers_fraction)),
    ("One-Class SVM", svm.OneClassSVM(nu=outliers_fraction, kernel="rbf",
                                      gamma=0.1)),
    ("Isolation Forest", IsolationForest(behaviour='new',
                                         contamination=outliers_fraction,
                                         random_state=42)),
    ("Local Outlier Factor", LocalOutlierFactor(
        n_neighbors=35, contamination=outliers_fraction))]
```

# Dataset generation

```
blobs_params = dict(random_state=0, n_samples=n_inliers, n_features=2)
blobs_params
datasets = [
    make_blobs(centers=[[0, 0], [0, 0]], cluster_std=0.5,
               **blobs_params)[0],
    make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[0.5, 0.5],
               **blobs_params)[0],
    make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[1.5, .3],
               **blobs_params)[0],
    4. * (make_moons(n_samples=n_samples, noise=.05, random_state=0)[0] -
          np.array([0.5, 0.25])),
    14. * (np.random.RandomState(42).rand(n_samples, 2) - 0.5)]
# Compare given classifiers under given settings
xx, yy = np.meshgrid(np.linspace(-7, 7, 150),
                     np.linspace(-7, 7, 150))
plt.figure(figsize=(len(anomaly_algorithms) * 2 + 3, 12.5))
plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.96, wspace=.05,
                    hspace=.01)

plot_num = 1
rng = np.random.RandomState(42)

for i_dataset, X in enumerate(datasets):
    # Add outliers
    X = np.concatenate([X, rng.uniform(low=-6, high=6,
                                       size=(n_outliers, 2))], axis=0)

    for name, algorithm in anomaly_algorithms:
        t0 = time.time()
        algorithm.fit(X)
        t1 = time.time()
        plt.subplot(len(datasets), len(anomaly_algorithms), plot_num)
        if i_dataset == 0:
            plt.title(name, size=18)

        # fit the data and tag outliers
        if name == "Local Outlier Factor":
            y_pred = algorithm.fit_predict(X)
        else:
            y_pred = algorithm.fit(X).predict(X)

        # plot the levels lines and the points
        if name != "Local Outlier Factor":  # LOF does not implement predict
            Z = algorithm.predict(np.c_[xx.ravel(), yy.ravel()])
            Z = Z.reshape(xx.shape)
            plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='black')

        colors = np.array(['#377eb8', '#ff7f00'])
        plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[(y_pred + 1) // 2])

        plt.xlim(-7, 7)
        plt.ylim(-7, 7)
        plt.xticks(())
        plt.yticks(())
        plt.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'),
                 transform=plt.gca().transAxes, size=15,
                 horizontalalignment='right')
        plot_num += 1

plt.show()
```
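The API difference mentioned above (LOF exposes no separate predict() when used for outlier detection, while the other estimators do) can also be seen in a much smaller sketch, added here for illustration on a tiny synthetic set:

```
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
X = np.r_[rng.normal(0, 0.5, size=(100, 2)),    # inliers around the origin
          rng.uniform(-6, 6, size=(15, 2))]     # uniform noise as outliers

iso = IsolationForest(contamination=0.13, random_state=0)
iso_labels = iso.fit(X).predict(X)              # +1 = inlier, -1 = outlier

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.13)
lof_labels = lof.fit_predict(X)                 # no separate predict() here

print('IsolationForest flagged:', int((iso_labels == -1).sum()))
print('LOF flagged:', int((lof_labels == -1).sum()))
```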
github_jupyter
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Albert Thomas <albert.thomas@telecom-paristech.fr> # License: BSD 3 clause import time import numpy as np import matplotlib import matplotlib.pyplot as plt from sklearn import svm from sklearn.datasets import make_moons, make_blobs from sklearn.covariance import EllipticEnvelope from sklearn.ensemble import IsolationForest from sklearn.neighbors import LocalOutlierFactor print(__doc__) matplotlib.rcParams['contour.negative_linestyle'] = 'solid' # Example settings n_samples = 300 outliers_fraction = 0.15 n_outliers = int(outliers_fraction * n_samples) n_inliers = n_samples - n_outliers anomaly_algorithms = [ ("Robust covariance", EllipticEnvelope(contamination=outliers_fraction)), ("One-Class SVM", svm.OneClassSVM(nu=outliers_fraction, kernel="rbf", gamma=0.1)), ("Isolation Forest", IsolationForest(behaviour='new', contamination=outliers_fraction, random_state=42)), ("Local Outlier Factor", LocalOutlierFactor( n_neighbors=35, contamination=outliers_fraction))] blobs_params = dict(random_state=0, n_samples=n_inliers, n_features=2) blobs_params datasets = [ make_blobs(centers=[[0, 0], [0, 0]], cluster_std=0.5, **blobs_params)[0], make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[0.5, 0.5], **blobs_params)[0], make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[1.5, .3], **blobs_params)[0], 4. * (make_moons(n_samples=n_samples, noise=.05, random_state=0)[0] - np.array([0.5, 0.25])), 14. * (np.random.RandomState(42).rand(n_samples, 2) - 0.5)] # Compare given classifiers under given settings xx, yy = np.meshgrid(np.linspace(-7, 7, 150), np.linspace(-7, 7, 150)) plt.figure(figsize=(len(anomaly_algorithms) * 2 + 3, 12.5)) plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.96, wspace=.05, hspace=.01) plot_num = 1 rng = np.random.RandomState(42) for i_dataset, X in enumerate(datasets): # Add outliers X = np.concatenate([X, rng.uniform(low=-6, high=6, size=(n_outliers, 2))], axis=0) for name, algorithm in anomaly_algorithms: t0 = time.time() algorithm.fit(X) t1 = time.time() plt.subplot(len(datasets), len(anomaly_algorithms), plot_num) if i_dataset == 0: plt.title(name, size=18) # fit the data and tag outliers if name == "Local Outlier Factor": y_pred = algorithm.fit_predict(X) else: y_pred = algorithm.fit(X).predict(X) # plot the levels lines and the points if name != "Local Outlier Factor": # LOF does not implement predict Z = algorithm.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='black') colors = np.array(['#377eb8', '#ff7f00']) plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[(y_pred + 1) // 2]) plt.xlim(-7, 7) plt.ylim(-7, 7) plt.xticks(()) plt.yticks(()) plt.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'), transform=plt.gca().transAxes, size=15, horizontalalignment='right') plot_num += 1 plt.show()
0.797439
0.979354
# Data format and SED data ``` import jetset print('tested on jetset',jetset.__version__) import warnings warnings.filterwarnings('ignore') import matplotlib import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` ## Data format for Data object The SED data are internally stored as astropy tables, but it is very easy to import from 1. ascii files 2. numpy arrays in general, once the data format is clear. The easiest way to understand the data format is to build an empty table and have a look at its structure: ``` from jetset.data_loader import Data data=Data(n_rows=10) ``` we can easily access the astropy table ``` data.table ``` - ``x`` column is reserved to frequencies (mandatory) - ``y`` column is reserved to fluxes (mandatory) - ``dx`` column is reserved to the error on the frequency, or bin width - ``dy`` column is reserved to the error on the fluxes - ``UL`` column is reserved to the flag for Upper Limit - ``T_start`` and ``T_stop`` are used to identify the time range to select data using the class `ObsData` - ``data_set`` ``` data.table['x'] ``` columns with units are implemented using the `Units` module of astropy (https://docs.astropy.org/en/stable/units/). and we can easily access the metadata ``` data.metadata ``` - ``z``: the redshift of the object - ``UL_CL``: the CL for the UL - ``restframe``: possible values ``obs`` or ``src``, indicating if the data are observed fluxes or luminosities, respectively - ``data_scale``: possible values ``lin-lin`` or ``log-log``, indicating if the data are in linear or logarithmic scale, respectively - ``obj_name``: the name of the object ### Loading from astropy table you can use the default SEDs distributed with the package to get familiar with data handling ``` from jetset.test_data_helper import test_SEDs test_SEDs ``` ``` from jetset.data_loader import Data data=Data.from_file(data_table=test_SEDs[1]) data.table data.metadata ``` ``` # %ECSV 0.9 # --- # datatype: # - {name: x, unit: Hz, datatype: float64} # - {name: dx, unit: Hz, datatype: float64} # - {name: y, unit: erg / (cm2 s), datatype: float64} # - {name: dy, unit: erg / (cm2 s), datatype: float64} # - {name: T_start, unit: MJD, datatype: float64} # - {name: T_stop, unit: MJD, datatype: float64} # - {name: UL, datatype: bool} # - {name: data_set, datatype: string} # meta: !!omap # - {z: 0.0308} # - {restframe: obs} # - {data_scale: lin-lin} # - {obj_name: 'J1104+3812,Mrk421'} # schema: astropy-2.0 x dx y dy T_start T_stop UL data_set 2299540000.0 0.0 1.3409e-14 3.91e-16 0.0 0.0 False campaing-2009 2639697000.0 0.0 1.793088e-14 3.231099e-26 0.0 0.0 False campaing-2009 4799040000.0 0.0 2.3136e-14 2.4e-16 0.0 0.0 False campaing-2009 ``` ### Saving Data object to a file ``` data.save_file('test.ecsv') ``` the data can be loaded from the saved table ``` data=Data.from_file('test.ecsv') data.table ``` ### Importing from an arbitrary ascii file or numpy array to Data object Assume that your data are stored in an ASCII file named 'test-ascii.txt', with - ``x`` in the first column with frequency in ``Hz``, - ``y`` in the second column with fluxes in erg ``cm-2 s-1``, - ``dy`` in the third column with the same units as ``y`` - the data are in ``log-log`` scale **of course the column number depends on the file that you are using, this is only an example** ``` from jetset.data_loader import Data import numpy as np d=np.genfromtxt('test-ascii.txt') data=Data(n_rows=d.shape[0]) data.set_field('x',d[:,0]) data.set_field('y',d[:,1]) data.set_field('dy',value=d[:,2]) 
``` then you can set the meatdata as follows ``` data.set_meta_data('z',1.02) data.set_meta_data('restframe','obs') data.set_meta_data('data_scale','log-log') ``` of course this method applies if you have a generic 2-dim numpy array. ``` data.table ``` ### Importing to Data object from a generic astropy table mapping columns If you want to use a ``TABLE`` with arbitrary column names, you can use an import dictionary, mapping the input name to the target. E.g. assume that you column in the input table column named ``freq`` that should target the ``x`` column, and another named ``freq err`` associated to ``dx`` you can simply pass the dictionary to the ``from_file`` method: ```python data=Data.from_file(data_table='your-file',import_dictionary={'freq':'x','freq err':'dx'}) ``` ### Importing from the ASI ssdc sedtool to Data object To import data from a data file downloaded from the asi ssdc sedtool: https://tools.ssdc.asi.it/SED/ ``` from jetset.data_loader import Data data=Data.from_asdc(asdc_sed_file='MRK421_asdc.txt',obj_name='Mrk421',restframe='obs',data_scale='lin-lin',z=0.038) ``` ``` data.table ``` ## Building the SED the ObsData object ``` from jetset.data_loader import Data from jetset.data_loader import ObsData from jetset.test_data_helper import test_SEDs data_table=Data.from_file(test_SEDs[1]) sed_data=ObsData(data_table=data_table) ``` As you can see all the meta-data have been properly sourced from the SED file header. You also get information on the length of the data, before and after elimination of duplicated entries, and upper limits ``` sed_data.table sed_data.metadata ``` ### Plotting ObsData ``` from jetset.plot_sedfit import PlotSED myPlot=PlotSED(sed_data) ``` or you can create the object to plot on the fly in this way ``` myPlot=sed_data.plot_sed() ``` you can rescale your plot ``` myPlot=sed_data.plot_sed() myPlot.rescale(x_min=7,x_max=28,y_min=-15,y_max=-9) ``` plotting in the ``src`` restframe ``` myPlot=sed_data.plot_sed(frame='src') myPlot.rescale(x_min=7,x_max=28,y_min=40,y_max=46) ``` **to have interactive plot in jupyter** if you want to to have interacitve plot in a jupyter notebook use: .. code-block:: no %matplotlib notebook to have interactive plot in jupyter lab use: .. code-block:: no %matplotlib widget ### Grouping data As you can see, due to the overlapping of different instruments and to different time snapshots, some points have multiple values. Although this is not a problem for the fit process, you might want to rebin (group) your data. This can be obtained with the following command: ``` %matplotlib inline myPlot=sed_data.plot_sed() myPlot.rescale(y_min=-15) sed_data.group_data(bin_width=0.2) myPlot.add_data_plot(sed_data,label='rebinned') ``` ### Handling errors and systematics Another important issue when dealing with fitting of data, is the proper handling of errors. Typically one might need to add systematics for different reasons: - data are not really simultaneous, and you want to add systematics to take this into account - data (typically IR up to UV), might have very small errors compared to those at higher energies. This might bias the minimizer to accommodate the parameters in order to fit 'better' the low frequencies branch. 
For these reasons the package offers the possibility to add systematics ``` sed_data.add_systematics(0.2,[10.**6,10.**29]) myPlot=sed_data.plot_sed() myPlot.rescale(y_min=-15) ``` ### Filtering data sets we use the `show_data_sets()` method to know which data sets are defined in our table ``` sed_data.show_data_sets() ``` we use `show_dataset=True` to show the legend of all the datasets ``` data=Data.from_file(test_SEDs[0]) sed_data=ObsData(data_table=data) %matplotlib inline p=sed_data.plot_sed(show_dataset=True) sed_data.show_data_sets() ``` we filter out the data set `-1` using the `filter_data_set()` method. Please note that with `exclude=True` we exclude the datasets listed in `filters` ``` sed_data.filter_data_set(filters='-1',exclude=True) sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) ``` we can pass more datasets, comma separated ``` sed_data.filter_data_set(filters='-1,0',exclude=True) sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) ``` we can also use `filter_data_set` with `exclude=False` to keep *only* the datasets listed in `filters` ``` sed_data.filter_data_set(filters='-1',exclude=True) sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) ``` we can revert `sed_data` to the original state with the `reset_data()` method ``` sed_data.reset_data() sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) ``` ### Saving and loading sed_data you can save and reload your sed_data ``` sed_data.save('3C454_data.pkl') sed_data=ObsData.load('3C454_data.pkl') p=sed_data.plot_sed() ```
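To make the effect of adding systematics concrete, here is a NumPy-only sketch of the common convention of combining a fractional systematic term with the statistical error in quadrature; it illustrates the general idea and is not necessarily the exact formula that `add_systematics` applies internally:

```
import numpy as np

# Hypothetical fluxes and statistical errors (erg cm-2 s-1)
y = np.array([1.3e-14, 1.8e-14, 2.3e-14])
dy_stat = np.array([3.9e-16, 3.2e-16, 2.4e-16])

syst_frac = 0.2   # a 20% systematic term, as in add_systematics(0.2, ...)

# Common convention: add the fractional systematic in quadrature
dy_total = np.sqrt(dy_stat**2 + (syst_frac * y)**2)
print(dy_total)
```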
github_jupyter
import jetset print('tested on jetset',jetset.__version__) import warnings warnings.filterwarnings('ignore') import matplotlib import numpy as np import matplotlib.pyplot as plt %matplotlib inline from jetset.data_loader import Data data=Data(n_rows=10) data.table data.table['x'] data.metadata from jetset.test_data_helper import test_SEDs test_SEDs from jetset.data_loader import Data data=Data.from_file(data_table=test_SEDs[1]) data.table data.metadata # %ECSV 0.9 # --- # datatype: # - {name: x, unit: Hz, datatype: float64} # - {name: dx, unit: Hz, datatype: float64} # - {name: y, unit: erg / (cm2 s), datatype: float64} # - {name: dy, unit: erg / (cm2 s), datatype: float64} # - {name: T_start, unit: MJD, datatype: float64} # - {name: T_stop, unit: MJD, datatype: float64} # - {name: UL, datatype: bool} # - {name: data_set, datatype: string} # meta: !!omap # - {z: 0.0308} # - {restframe: obs} # - {data_scale: lin-lin} # - {obj_name: 'J1104+3812,Mrk421'} # schema: astropy-2.0 x dx y dy T_start T_stop UL data_set 2299540000.0 0.0 1.3409e-14 3.91e-16 0.0 0.0 False campaing-2009 2639697000.0 0.0 1.793088e-14 3.231099e-26 0.0 0.0 False campaing-2009 4799040000.0 0.0 2.3136e-14 2.4e-16 0.0 0.0 False campaing-2009 data.save_file('test.ecsv') data=Data.from_file('test.ecsv') data.table from jetset.data_loader import Data import numpy as np d=np.genfromtxt('test-ascii.txt') data=Data(n_rows=d.shape[0]) data.set_field('x',d[:,0]) data.set_field('y',d[:,1]) data.set_field('dy',value=d[:,2]) data.set_meta_data('z',1.02) data.set_meta_data('restframe','obs') data.set_meta_data('data_scale','log-log') data.table data=Data.from_file(data_table='your-file',import_dictionary={'freq':'x','freq err':'dx'}) from jetset.data_loader import Data data=Data.from_asdc(asdc_sed_file='MRK421_asdc.txt',obj_name='Mrk421',restframe='obs',data_scale='lin-lin',z=0.038) data.table from jetset.data_loader import Data from jetset.data_loader import ObsData from jetset.test_data_helper import test_SEDs data_table=Data.from_file(test_SEDs[1]) sed_data=ObsData(data_table=data_table) sed_data.table sed_data.metadata from jetset.plot_sedfit import PlotSED myPlot=PlotSED(sed_data) myPlot=sed_data.plot_sed() myPlot=sed_data.plot_sed() myPlot.rescale(x_min=7,x_max=28,y_min=-15,y_max=-9) myPlot=sed_data.plot_sed(frame='src') myPlot.rescale(x_min=7,x_max=28,y_min=40,y_max=46) %matplotlib inline myPlot=sed_data.plot_sed() myPlot.rescale(y_min=-15) sed_data.group_data(bin_width=0.2) myPlot.add_data_plot(sed_data,label='rebinned') sed_data.add_systematics(0.2,[10.**6,10.**29]) myPlot=sed_data.plot_sed() myPlot.rescale(y_min=-15) sed_data.show_data_sets() data=Data.from_file(test_SEDs[0]) sed_data=ObsData(data_table=data) %matplotlib inline p=sed_data.plot_sed(show_dataset=True) sed_data.show_data_sets() sed_data.filter_data_set(filters='-1',exclude=True) sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) sed_data.filter_data_set(filters='-1,0',exclude=True) sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) sed_data.filter_data_set(filters='-1',exclude=True) sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) sed_data.reset_data() sed_data.show_data_sets() p=sed_data.plot_sed(show_dataset=True) sed_data.save('3C454_data.pkl') sed_data=ObsData.load('3C454_data.pkl') p=sed_data.plot_sed()
0.406037
0.977905
# Constellation Wizard The Notebook format allows for more control over various analyses and automation. ``` # Import libraries from DeckAccessReader import * import numpy as np import pandas as pd pd.set_option('display.max_rows', 10) import os cwd = os.getcwd() cwdFiles = cwd+'\\Files' import seaborn as sns from comtypes.client import CreateObject from comtypes.client import GetActiveObject from comtypes.gen import STKObjects import matplotlib.pyplot as plt ``` # Define the constellation and Inputs ``` # Create or Load a constellation saved as TLEs constellationName = 'Oneweb' # Either Created or loaded create = False # 'Create' or 'Load' accessObjPath = '*/Facility/AGI/Sensor/FOV' # Used for deck access satTemplateName = 'OneWeb' # Used for deck access constraints and child ojects when loading satellites. This can be an empty string '' startTimeDA = 0 # Deck Access start time. Relative to the scenario start time [EpSec] stopTimeDA = 3600 # Deck Access stop time. Relative to the scenario start time [EpSec] ``` ## Connect To STK ``` scenarioPath = cwd+'\\ConstellationWizardExampleScenario' scenarioName = 'ConstellationWizardExample' root = ConnectToSTK(version=12,scenarioPath = scenarioPath,scenarioName=scenarioName) # Tries to connect to open scenario,then Load scenario,then create new scenario sc = root.CurrentScenario sc2 = root.CurrentScenario.QueryInterface(STKObjects.IAgScenario) # Turn on Antialiasing for better visualization. Options are: Off,FXAA,2,3,4 cmd = 'SoftVtr3d * AntiAlias FXAA' root.ExecuteCommand(cmd); ``` ## Create the constellation if needed ``` TLEFileName = cwdFiles+'\\Constellations\\'+constellationName+'.tce' if create == True: CreateConstellation(root,TLEFileName,ssc=00000) print(TLEFileName+' Created\n') ``` ## Look at TLEs ``` tleList = getTLEs(TLEFileName) dfTLE = tleListToDF(tleList) dfTLE ``` ## Load TLE file as a MTO ``` LoadMTO(root,TLEFileName,timestep=60,color='green',orbitsOnOrOff='off',orbitFrame='Inertial') ``` ## Run Deck Access ``` # Run deck Access, using a constraint object if it exists startTime = sc2.StartTime+startTimeDA stopTime = sc2.StartTime+stopTimeDA NumOfSC,deckAccessFileName,deckAccessTLEFileName = runDeckAccess(root,startTime,stopTime,TLEFileName,accessObjPath,constraintSatName=satTemplateName) NumOfSC # Read deck access report file dfAccess = deckAccessReportToDF(deckAccessFileName) dfAccess LoadMTO(root,deckAccessTLEFileName,timestep=60,color='cyan',orbitsOnOrOff='off',orbitFrame='Inertial') ``` ## Look at TLEs with access ``` # Look at the TLEs with Access tleList = getTLEs(deckAccessTLEFileName) dfTLEwAccess = tleListToDF(tleList) # NewTLEFileName = TLEFileName + 'DeckAccess' # dfToTLE(dfTLEwAccess,NewTLEFileName) # Example of how to save DeckAccess satellites to a TCE file dfTLEwAccess ``` ## Load subset of Satellites ``` # dfLoad = dfTLE # Load all satellites in TLE files # dfLoad = dfTLE[dfTLE['i']>45] # Use subset based on filtering # dfLoad = dfTLEwAccess[(dfTLEwAccess['i']>45) & (dfTLEwAccess['RAAN'].astype(float)>180)] # Use subset based on deck access and filtering dfLoad = dfTLEwAccess # Use subset based on deck access # dfLoad = dfLoad.head() dfLoad # Load satellites using a satellite template LoadSatsUsingTemplate(root,dfLoad,startTime,stopTime,TLEFileName,satTemplateName,color='green') ``` ## Perform Analysis ``` def chainAnalysis(root,chainPath,objsToAdd,startTime,stopTime,exportFileName): chain = root.GetObjectFromPath(chainPath) chain2 = chain.QueryInterface(STKObjects.IAgChain) chain2.Objects.RemoveAll() for obj in 
objsToAdd: chain2.Objects.Add(obj) chain2.ClearAccess() chain2.ComputeAccess() cmd = 'ReportCreate '+chainPath+' Type Export Style "Bent Pipe Comm Link" File "'+exportFileName+'" TimePeriod "'+str(startTime)+'" "'+str(stopTime)+'" TimeStep 60' root.ExecuteCommand(cmd) df = pd.read_csv(exportFileName) df = df[df.columns[:-1]] return df def covAnalysis(root,covDefPath,objsToAdd,exportFileName): cov= root.GetObjectFromPath(covDefPath) cov2 = cov.QueryInterface(STKObjects.IAgCoverageDefinition) cov2.AssetList.RemoveAll() for obj in objsToAdd: cov2.AssetList.Add(obj) cov2.ClearAccesses() cov2.ComputeAccesses() cmd = 'ReportCreate '+covDefPath+'/FigureOfMerit/NAsset Type Export Style "Value By Grid Point" File "'+exportFileName+'"' root.ExecuteCommand(cmd) f = open(exportFileName,'r') txt = f.readlines() f.close() k = 0 for line in txt: if 'Latitude' in line: start = k break k += 1 f = open(exportFileName+'Temp','w') for line in txt[start:-1]: f.write(line) f.close() df = pd.read_csv(exportFileName+'Temp') os.remove(exportFileName+'Temp') return df def commSysAnalysis(root,commSysPath,accessReceiver,objsToAdd,exportFileName): commSys= root.GetObjectFromPath(commSysPath) commSys2 = commSys.QueryInterface(STKObjects.IAgCommSystem) commSys2.InterferenceSources.RemoveAll() for obj in objsToAdd: commSys2.InterferenceSources.Add(obj) cmd = 'ReportCreate '+commSysPath+' Type Export Style "Link Information" File "'+exportFileName+'" AdditionalData "'+accessReceiver+'"' root.ExecuteCommand(cmd) df = pd.read_csv(exportFileName,header=4) return df chainPath = '*/Chain/AGIToConstellation' accessReceiver = '*/Facility/AGI/Sensor/FOV/Receiver/GroundReceiver' objsToAdd = ['*/Constellation/OneWebTransmitters',accessReceiver] exportFileName = cwdFiles+'\\AnalysisResults\\'+constellationName+'ChainComm.csv' exportFileDir = cwdFiles+'\\AnalysisResults\\' if os.path.isdir(exportFileDir): pass else: os.mkdir(exportFileDir) dfChainData = chainAnalysis(root,chainPath,objsToAdd,startTime,stopTime,exportFileName) dfChainData covDefPath = '*/CoverageDefinition/CovUS' objsToAdd = ['Constellation/'+constellationName] exportFileName = cwdFiles+'\\AnalysisResults\\'+constellationName+'NAsset.csv' dfCovAnalysis = covAnalysis(root,covDefPath,objsToAdd,exportFileName) dfCovAnalysis commSysPath = '*/CommSystem/InterferanceAnalysis' accessReceiver = 'Facility/AGI/Sensor/FOV/Receiver/GroundReceiver' objsToAdd = ['*/Constellation/OneWebTransmitters'] exportFileName = cwdFiles+'\\AnalysisResults\\'+constellationName+'InterferanceLinkBudget.csv' dfCommSys = commSysAnalysis(root,commSysPath,accessReceiver,objsToAdd,exportFileName) dfCommSys # See Interferance Effects on link plt.plot(dfCommSys['C/(N+I) (dB)']) plt.xlabel('Time (EpMin)') plt.ylabel('C/(N+I) (dB)'); ``` This is the section where you will need to modify the code to perform the type of analysis you need, it could involve sensors, chains, coverage etc. As well as save results. A perfectly reasonable alternative to writing the code to do this is to just do all of the analysis manually inside STK's GUI and use the rest of the script to set up your scenario. If that is the case use, the ConstellationWizardUI may be easier to use. ## Unload Satellites, Constellations, and MTOs ``` UnloadObjs(root,'Satellite',pattern='tle-*') UnloadObjs(root,'Constellation',pattern='One*') UnloadObjs(root,'MTO',pattern='*') ``` Rinse and Repeat! 
Rerunning the notebook with different parameters gives the ability to quickly run analysis and trade studies on subsets of large constellations over different time periods, facility locations, constellation patterns, etc. Converting this notebook to a .py file and putting the contents into a for loop would be recommended for large analyses.
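A hedged sketch of the suggested for-loop pattern, assuming the `root`, `sc2`, and DeckAccessReader helpers already set up above; the swept offsets are made up for illustration:

```
import pandas as pd

# Sweep several deck-access windows and collect how many satellites have access
start_offsets = [0, 3600, 7200]   # assumed offsets from scenario start [EpSec]
results = []
for offset in start_offsets:
    startTime = sc2.StartTime + offset
    stopTime = startTime + 3600
    NumOfSC, daFile, daTLEFile = runDeckAccess(
        root, startTime, stopTime, TLEFileName, accessObjPath,
        constraintSatName=satTemplateName)
    results.append({'offset_s': offset, 'sats_with_access': NumOfSC})
    # (Unloading objects between iterations, as shown above, may be needed
    #  for larger sweeps.)

print(pd.DataFrame(results))
```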
github_jupyter
# Import libraries
from DeckAccessReader import *
import numpy as np
import pandas as pd
pd.set_option('display.max_rows', 10)
import os
cwd = os.getcwd()
cwdFiles = cwd+'\\Files'
import seaborn as sns
from comtypes.client import CreateObject
from comtypes.client import GetActiveObject
from comtypes.gen import STKObjects
import matplotlib.pyplot as plt

# Create or Load a constellation saved as TLEs
constellationName = 'Oneweb'  # Either Created or loaded
create = False  # 'Create' or 'Load'
accessObjPath = '*/Facility/AGI/Sensor/FOV'  # Used for deck access
satTemplateName = 'OneWeb'  # Used for deck access constraints and child objects when loading satellites. This can be an empty string ''
startTimeDA = 0  # Deck Access start time. Relative to the scenario start time [EpSec]
stopTimeDA = 3600  # Deck Access stop time. Relative to the scenario start time [EpSec]
scenarioPath = cwd+'\\ConstellationWizardExampleScenario'
scenarioName = 'ConstellationWizardExample'

root = ConnectToSTK(version=12,scenarioPath = scenarioPath,scenarioName=scenarioName)  # Tries to connect to open scenario, then load scenario, then create new scenario
sc = root.CurrentScenario
sc2 = root.CurrentScenario.QueryInterface(STKObjects.IAgScenario)

# Turn on Antialiasing for better visualization. Options are: Off,FXAA,2,3,4
cmd = 'SoftVtr3d * AntiAlias FXAA'
root.ExecuteCommand(cmd);

TLEFileName = cwdFiles+'\\Constellations\\'+constellationName+'.tce'
if create == True:
    CreateConstellation(root,TLEFileName,ssc=00000)
    print(TLEFileName+' Created\n')

tleList = getTLEs(TLEFileName)
dfTLE = tleListToDF(tleList)
dfTLE

LoadMTO(root,TLEFileName,timestep=60,color='green',orbitsOnOrOff='off',orbitFrame='Inertial')

# Run deck access, using a constraint object if it exists
startTime = sc2.StartTime+startTimeDA
stopTime = sc2.StartTime+stopTimeDA
NumOfSC,deckAccessFileName,deckAccessTLEFileName = runDeckAccess(root,startTime,stopTime,TLEFileName,accessObjPath,constraintSatName=satTemplateName)
NumOfSC

# Read deck access report file
dfAccess = deckAccessReportToDF(deckAccessFileName)
dfAccess

LoadMTO(root,deckAccessTLEFileName,timestep=60,color='cyan',orbitsOnOrOff='off',orbitFrame='Inertial')

# Look at the TLEs with access
tleList = getTLEs(deckAccessTLEFileName)
dfTLEwAccess = tleListToDF(tleList)
# NewTLEFileName = TLEFileName + 'DeckAccess'
# dfToTLE(dfTLEwAccess,NewTLEFileName)  # Example of how to save DeckAccess satellites to a TCE file
dfTLEwAccess

# dfLoad = dfTLE  # Load all satellites in TLE files
# dfLoad = dfTLE[dfTLE['i']>45]  # Use subset based on filtering
# dfLoad = dfTLEwAccess[(dfTLEwAccess['i']>45) & (dfTLEwAccess['RAAN'].astype(float)>180)]  # Use subset based on deck access and filtering
dfLoad = dfTLEwAccess  # Use subset based on deck access
# dfLoad = dfLoad.head()
dfLoad

# Load satellites using a satellite template
LoadSatsUsingTemplate(root,dfLoad,startTime,stopTime,TLEFileName,satTemplateName,color='green')

def chainAnalysis(root,chainPath,objsToAdd,startTime,stopTime,exportFileName):
    chain = root.GetObjectFromPath(chainPath)
    chain2 = chain.QueryInterface(STKObjects.IAgChain)
    chain2.Objects.RemoveAll()
    for obj in objsToAdd:
        chain2.Objects.Add(obj)
    chain2.ClearAccess()
    chain2.ComputeAccess()
    cmd = 'ReportCreate '+chainPath+' Type Export Style "Bent Pipe Comm Link" File "'+exportFileName+'" TimePeriod "'+str(startTime)+'" "'+str(stopTime)+'" TimeStep 60'
    root.ExecuteCommand(cmd)
    df = pd.read_csv(exportFileName)
    df = df[df.columns[:-1]]
    return df

def covAnalysis(root,covDefPath,objsToAdd,exportFileName):
    cov = root.GetObjectFromPath(covDefPath)
    cov2 = cov.QueryInterface(STKObjects.IAgCoverageDefinition)
    cov2.AssetList.RemoveAll()
    for obj in objsToAdd:
        cov2.AssetList.Add(obj)
    cov2.ClearAccesses()
    cov2.ComputeAccesses()
    cmd = 'ReportCreate '+covDefPath+'/FigureOfMerit/NAsset Type Export Style "Value By Grid Point" File "'+exportFileName+'"'
    root.ExecuteCommand(cmd)
    f = open(exportFileName,'r')
    txt = f.readlines()
    f.close()
    k = 0
    for line in txt:
        if 'Latitude' in line:
            start = k
            break
        k += 1
    f = open(exportFileName+'Temp','w')
    for line in txt[start:-1]:
        f.write(line)
    f.close()
    df = pd.read_csv(exportFileName+'Temp')
    os.remove(exportFileName+'Temp')
    return df

def commSysAnalysis(root,commSysPath,accessReceiver,objsToAdd,exportFileName):
    commSys = root.GetObjectFromPath(commSysPath)
    commSys2 = commSys.QueryInterface(STKObjects.IAgCommSystem)
    commSys2.InterferenceSources.RemoveAll()
    for obj in objsToAdd:
        commSys2.InterferenceSources.Add(obj)
    cmd = 'ReportCreate '+commSysPath+' Type Export Style "Link Information" File "'+exportFileName+'" AdditionalData "'+accessReceiver+'"'
    root.ExecuteCommand(cmd)
    df = pd.read_csv(exportFileName,header=4)
    return df

chainPath = '*/Chain/AGIToConstellation'
accessReceiver = '*/Facility/AGI/Sensor/FOV/Receiver/GroundReceiver'
objsToAdd = ['*/Constellation/OneWebTransmitters',accessReceiver]
exportFileName = cwdFiles+'\\AnalysisResults\\'+constellationName+'ChainComm.csv'
exportFileDir = cwdFiles+'\\AnalysisResults\\'
if os.path.isdir(exportFileDir):
    pass
else:
    os.mkdir(exportFileDir)
dfChainData = chainAnalysis(root,chainPath,objsToAdd,startTime,stopTime,exportFileName)
dfChainData

covDefPath = '*/CoverageDefinition/CovUS'
objsToAdd = ['Constellation/'+constellationName]
exportFileName = cwdFiles+'\\AnalysisResults\\'+constellationName+'NAsset.csv'
dfCovAnalysis = covAnalysis(root,covDefPath,objsToAdd,exportFileName)
dfCovAnalysis

commSysPath = '*/CommSystem/InterferanceAnalysis'
accessReceiver = 'Facility/AGI/Sensor/FOV/Receiver/GroundReceiver'
objsToAdd = ['*/Constellation/OneWebTransmitters']
exportFileName = cwdFiles+'\\AnalysisResults\\'+constellationName+'InterferanceLinkBudget.csv'
dfCommSys = commSysAnalysis(root,commSysPath,accessReceiver,objsToAdd,exportFileName)
dfCommSys

# See interference effects on the link
plt.plot(dfCommSys['C/(N+I) (dB)'])
plt.xlabel('Time (EpMin)')
plt.ylabel('C/(N+I) (dB)');

UnloadObjs(root,'Satellite',pattern='tle-*')
UnloadObjs(root,'Constellation',pattern='One*')
UnloadObjs(root,'MTO',pattern='*')
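As a quick sanity check on the deck-access results, the short sketch below plots how the accessible subset is spread in inclination and RAAN. It only touches the pandas DataFrame, so it can run even after the unload calls above; the column names `i` and `RAAN` are assumed from the commented filtering examples earlier in this notebook and may need adjusting to match your `tleListToDF` output.

```
import matplotlib.pyplot as plt
import seaborn as sns

# Cast to float in case the TLE fields were parsed as strings
incl = dfTLEwAccess['i'].astype(float)
raan = dfTLEwAccess['RAAN'].astype(float)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
sns.histplot(incl, bins=20, ax=ax1)
ax1.set_xlabel('Inclination [deg]')
sns.histplot(raan, bins=20, ax=ax2)
ax2.set_xlabel('RAAN [deg]')
fig.suptitle(f'{len(dfTLEwAccess)} satellites with access')
plt.tight_layout()
plt.show()
```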
![人工智慧 - 自由團隊](https://raw.githubusercontent.com/chenkenanalytic/img/master/af/aifreeteam.png) <center>Welcome to the image recognition practice for Traditional Chinese Handwriting Characters by AI . FREE Team.</center> <br> <center>歡迎大家來到 AI . FREE Team 所開發的繁體中文手寫圖像辨識實作。 </center> <br> <center>(Author: Yen-Lin 博士, Chen Ken;Date of published: 2020/4/29;AI . FREE Team Website: https://aifreeblog.herokuapp.com/)</center> 說明:此圖像辨識實作使用 <a href="https://github.com/AI-FREE-Team/Traditional-Chinese-Handwriting-Dataset">Traditional Chinese Handwriting Dataset</a> 專案之資料集。 # Step 0: 匯入套件 ``` from platform import python_version import os import shutil import numpy as np import pandas as pd import PIL.Image from matplotlib import pyplot as plt from matplotlib.font_manager import findfont, FontProperties ''' 繁體中文顯示設定 ''' from matplotlib.font_manager import FontProperties default_type = findfont( FontProperties( family=FontProperties().get_family() ) ) ttf_path = '/'.join( default_type.split('/')[:-1] ) # 預設字型的資料夾路徑 os.chdir( '/content' ) if not os.path.exists( '/content/matplotlib_Display_Chinese_in_Colab' ): !git clone https://github.com/YenLinWu/matplotlib_Display_Chinese_in_Colab os.chdir( '/content/matplotlib_Display_Chinese_in_Colab' ) for item in os.listdir(): if item.endswith( '.ttf' ): msj_ttf_path = os.path.abspath( item ) msj_name = msj_ttf_path.split('/')[-1] try: shutil.move( msj_ttf_path, ttf_path ) except: pass finally: os.chdir( '/content' ) shutil.rmtree( '/content/matplotlib_Display_Chinese_in_Colab' ) font = FontProperties( fname=ttf_path+'/'+msj_name ) import tensorflow as tf from tensorflow.keras.preprocessing.image import load_img, ImageDataGenerator from tensorflow.keras.models import Sequential, load_model from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout from tensorflow.keras.optimizers import * print( 'Python Version: ', python_version() ) print( 'TensorFlow Version: ', tf.__version__ ) print( 'Keras Version: ', tf.keras.__version__ ) ``` # Step 1: 使用 Data Deployment 教學,下載繁體中文手寫資料集 資料部署教學:<a href="https://colab.research.google.com/github/AI-FREE-Team/Traditional-Chinese-Handwriting-Dataset/blob/master/Data_Deployment_colab.ipynb#scrollTo=BtJidZSSed2C">範例連結</a> ``` !git clone https://github.com/AI-FREE-Team/Traditional-Chinese-Handwriting-Dataset.git import os import zipfile import shutil OutputFolder = '/content/Handwritten_Data' if not os.path.exists(OutputFolder): os.mkdir(OutputFolder) print( f'Create the new "{OutputFolder}" folder' ) os.chdir(OutputFolder) ### 檢查路徑 !pwd CompressedFiles = [] os.chdir('/content/Traditional-Chinese-Handwriting-Dataset/data') for item in os.listdir(): if item.endswith('.zip'): # Check for ".zip" extension. file_path = os.path.abspath(item) # Get full path of the compressed file. CompressedFiles.append(file_path) for file in CompressedFiles: # Construct a ZipFile object with the filename, and then extract it. zip_ref = zipfile.ZipFile(file).extractall(OutputFolder) source_path = OutputFolder + '/cleaned_data(50_50)' img_list = os.listdir(source_path) for img in img_list: shutil.move(source_path + '/' + img, OutputFolder) # Move a file to another location. shutil.rmtree(OutputFolder + '/cleaned_data(50_50)') print(f'Decompress successfully {file} ......') print( 'Moving images according to traditional Chinese characters......' 
) ImageList = os.listdir(OutputFolder) ImageList = [img for img in ImageList if len(img)>1] WordList = list(set([w.split('_')[0] for w in ImageList])) for w in WordList: try: os.chdir(OutputFolder) # Change the current working directory to OutputPath. os.mkdir(w) # Create the new word folder in OutputPath. MoveList = [img for img in ImageList if w in img] except: os.chdir(OutputFolder) MoveList = [ img for img in ImageList if w in img ] finally: for img in MoveList: old_path = OutputFolder + '/' + img new_path = OutputFolder + '/' + w + '/' + img shutil.move( old_path, new_path ) print( 'Data Deployment completed.' ) a=0 b=0 for item in os.listdir(OutputFolder): a += 1 for i in os.listdir(OutputFolder + '/' + item): b +=1 print('總共: ' + str(a) + ' 個字(資料夾) / 總共: ' + str(b) + '個樣本') print('平均每個字有: ' + str(b/a) + ' 個樣本') ``` # Step 2: 訓練集與自製測試集路徑 ``` os.chdir('/content') os.mkdir('Traditional_Chinese_Testing_Data') os.chdir('/content/Traditional_Chinese_Testing_Data') !git clone https://github.com/AI-FREE-Team/Handwriting-Chinese-Characters-Recognition ``` ### 自製繁中手寫測試集 : 利用小畫家自製繁體中文字 * 底圖大小: 50x50 像素 * 白底黑字 * 像素筆線條粗細: 1 像素 將用小畫家自製的繁中手寫文字,以 png 檔儲存,且放於以該字為名的子資料夾中,如下圖所示: ![自製繁中手寫資料集](https://raw.githubusercontent.com/AI-FREE-Team/Traditional-Chinese-Handwriting-Dataset/master/img/HomeMade_Traditional_Chinese_Dataset.png) ``` ''' RawDataPath: 繁中手寫資料集路徑 TraningDataPath: 訓練集路徑 TestingDataPath: 自製繁中手寫資料集路徑 ''' RawDataPath = '/content/Handwritten_Data' TraningDataPath = '/content/Traditional_Chinese_Testing_Data/Handwriting-Chinese-Characters-Recognition/train data' TestingDataPath = '/content/Traditional_Chinese_Testing_Data/Handwriting-Chinese-Characters-Recognition/test data' os.chdir( RawDataPath ) print( 'Current working directory:', os.getcwd() ) ``` # Step 3: 訓練集 從繁體中文手寫資料集中,選擇欲辨識的繁體中文字集,作為訓練集。 ``` SelectedWords = [ '人', '工', '智', '慧' ] os.chdir( RawDataPath ) try: os.mkdir( TraningDataPath ) except: shutil.rmtree( TraningDataPath ) os.mkdir( TraningDataPath ) finally: nonexistence = [] for c in SelectedWords: try: shutil.copytree( RawDataPath+'/'+c, TraningDataPath+'/'+c ) except: nonexistence.append( c ) if len(nonexistence)>1: print( f'There are {len(nonexistence)} characters that are not in dataset. \n{nonexistence}' ) elif len(nonexistence)==1: print( f'There is {len(nonexistence)} character that is not in dataset. 
\n{nonexistence}' ) else: print('') def Loading_Image( image_path ): img = load_img( image_path ) img = tf.constant( np.array(img) ) return img def Show( image, title=None ) : if len( image.shape )>3 : image = tf.squeeze( image, axis=0 ) plt.imshow( image ) if title: plt.title( title, fontproperties=font ) img_list = [] for c in SelectedWords : folder_path = TraningDataPath+'/'+c file_names = os.listdir( folder_path ) for i in range(5) : img_list.append( folder_path+'/'+file_names[i] ) plt.gcf().set_size_inches( (12,12) ) for i in range(20): plt.subplot(4,5,i+1) title = img_list[i].split('/')[-1].split('_')[-2] img = Loading_Image( img_list[i] ) Show( img, title ) ``` # Step 4: (超)參數 ``` Num_Classes = len(SelectedWords) Image_Size = ( 50, 50 ) Epochs = 50 Batch_Size = 8 ``` # Step 5: 資料擴增( Data Augmentation ) ## (5.1) 訓練集 ``` Train_Data_Genetor = ImageDataGenerator( rescale = 1./255, validation_split = 0.2, width_shift_range = 0.05, height_shift_range = 0.05, zoom_range = 0.1, horizontal_flip = False ) Train_Generator = Train_Data_Genetor.flow_from_directory( TraningDataPath , target_size = Image_Size, batch_size = Batch_Size, class_mode = 'categorical', shuffle = True, subset = 'training' ) def Plot_Genetor( imgs, labels=[], grid=(1,10), size=(20,2) ): n = len( imgs ) plt.gcf().set_size_inches(size) for i in range(n): ax = plt.subplot( grid[0], grid[1], i+1 ) ax.imshow( imgs[i] ) if len(labels): ax.set_title( f'Label={labels[i]}' ) ax.set_xticks([]); ax.set_yticks([]) plt.show() batch = 1 for data, label in Train_Generator: print( f'batch {batch}: \n shape of images: {data.shape} \n shape of labels: {label.shape}' ) Plot_Genetor( data, label ) batch += 1 if batch > len(Train_Generator): break print( f'There are {len(Train_Generator)} batches.' ) ``` ## (5.2) 驗證集 ``` Val_Data_Genetor = ImageDataGenerator( rescale=1./255, validation_split = 0.2 ) Val_Generator = Train_Data_Genetor.flow_from_directory( TraningDataPath , target_size = Image_Size, batch_size = Batch_Size, class_mode = 'categorical', shuffle = True, subset = 'validation' ) ``` # Step 6: 建立及編譯模型 ``` CNN = Sequential( name = 'CNN_Model' ) CNN.add( Conv2D( 5, kernel_size = (2,2), padding = 'same', input_shape = (Image_Size[0],Image_Size[1],3), name = 'Convolution' ) ) CNN.add( MaxPooling2D( pool_size = (2,2), name = 'Pooling' ) ) CNN.add( Flatten( name = 'Flatten' ) ) CNN.add( Dropout( 0.5, name = 'Dropout_1' ) ) CNN.add( Dense( 512, activation = 'relu', name = 'Dense' ) ) CNN.add( Dropout( 0.5, name = 'Dropout_2' ) ) CNN.add( Dense( Num_Classes, activation = 'softmax', name = 'Softmax' ) ) CNN.summary() CNN.compile( optimizer = Adam(), loss = 'categorical_crossentropy', metrics = ['accuracy'] ) ``` # Step 7: 訓練及儲存模型 ``` History = CNN.fit( Train_Generator, steps_per_epoch = Train_Generator.samples//Batch_Size, validation_data = Val_Generator, validation_steps = Val_Generator.samples//Batch_Size, epochs = Epochs ) Train_Accuracy = History.history['accuracy'] Val_Accuracy = History.history['val_accuracy'] Train_Loss = History.history['loss'] Val_Loss = History.history['val_loss'] epochs_range = range(Epochs) plt.figure( figsize=(14,4) ) plt.subplot( 1,2,1 ) plt.plot( range( len(Train_Accuracy) ), Train_Accuracy, label='Train' ) plt.plot( range( len(Val_Accuracy) ), Val_Accuracy, label='Validation' ) plt.legend( loc='lower right' ) plt.title( 'Accuracy' ) plt.subplot( 1,2,2 ) plt.plot( range( len(Train_Loss) ), Train_Loss, label='Train' ) plt.plot( range( len(Val_Loss) ), Val_Loss, label='Validation' ) plt.legend( loc='upper right' ) 
plt.title( 'Loss') plt.show() ``` ### 儲存模型 ``` os.chdir( '/content' ) CNN.save( 'CNN_Model.h5' ) ``` # Step 8: 自製繁中手寫測試集預測 ## (8.1) 建立自製測試集的生成器( Generator )及走訪器( Iterator ) ``` Test_Data_Genetor = ImageDataGenerator( rescale=1./255 ) Test_Generator = Test_Data_Genetor.flow_from_directory( TestingDataPath, target_size = Image_Size, shuffle = False, class_mode = 'categorical' ) batch = 1 for data, label in Test_Generator: print( f'batch {batch}: \n shape of images: {data.shape} \n shape of labels: {label.shape}' ) Plot_Genetor( data, label ) batch += 1 if batch > 1: break ``` ## (8.2) 載入模型且預測 ``` Test_Generator.reset() Predicts=CNN.predict(Test_Generator,verbose=1, steps =8) ``` ## (8.3) 檢視預測結果 ``` test_data, test_label = Test_Generator.next() def Plot_Predict( img, labels=[], predicts=[], size=(20,2) ): plt.gcf().set_size_inches(size) ax = plt.subplot( ) ax.imshow( img ) ax.set_title( f'Predict={predicts.round(1)} \nLabel={labels}' ) ax.set_xticks([]); ax.set_yticks([]) plt.show() for data, label, predict_label in zip(test_data, test_label, Predicts): Plot_Predict( data, label, predict_label ) ``` ## (8.4) 其他做法 - 無分類資料夾預測方法 ``` os.mkdir('test') for i in os.listdir(TestingDataPath): folder = TestingDataPath + '/' + i for f in os.listdir(folder): img_file = folder + '/' + f shutil.copyfile(img_file,'/content/test/' + f) from tensorflow.python.keras.preprocessing import image directory = os.fsencode('/content/test') # load trained model model = load_model('CNN_Model.h5') # predict all photos (loop though the folder) for f in os.listdir(directory): f = os.fsdecode(f) img = image.load_img('/content/test/'+ str(f), target_size=(50, 50)) x = image.img_to_array(img) x = np.expand_dims(x, axis = 0) pred = model.predict(x) ax = plt.subplot( ) ax.imshow(img) ax.set_title( f'Predict={pred.round(1)}' ) ax.set_xticks([]); ax.set_yticks([]) plt.gcf().set_size_inches((20,2)) plt.show() print(f) ```
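The prediction cells above print raw softmax vectors rather than characters. The optional sketch below, written against the generators and the `Predicts` array defined earlier in this notebook, inverts the generators' `class_indices` mappings so each prediction is displayed as a character:

```
import numpy as np

# class_indices maps {folder name (character) -> class index}; invert both
# mappings. Model outputs follow the training generator's ordering, while
# Test_Generator.classes follow the test generator's own ordering.
train_idx_to_char = {v: k for k, v in Train_Generator.class_indices.items()}
test_idx_to_char = {v: k for k, v in Test_Generator.class_indices.items()}

n = min(len(Predicts), len(Test_Generator.classes))
pred_chars = [train_idx_to_char[i] for i in np.argmax(Predicts[:n], axis=1)]
true_chars = [test_idx_to_char[i] for i in Test_Generator.classes[:n]]

for true_c, pred_c in zip(true_chars, pred_chars):
    print(f'label: {true_c}  predicted: {pred_c}')
```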
``` '''Python for Data Science - Perform Data Science on Titanic Dataset a)Load the Titanic dataset into one of the data structures (NumPy or Pandas). b)Display header rows and description of the loaded dataset. c) Remove unnecessary features (E.g. drop unwanted columns) from the dataset. d) Manipulate data by replacing empty column values with a default value. e) Perform the following visualizations on the loaded dataset: i) Passenger status (Survived/Died) against Passenger Class ii) Survival rate of male vs female iii) No of passengers in each age group ''' import numpy as np import seaborn as sns import matplotlib.pyplot as plt import pandas as pd titanic_df = pd.read_csv('titanictrain.csv') # Convert the survived column to strings for easier reading titanic_df ['Survived'] = titanic_df ['Survived'].map({ 0: 'Died', 1: 'Survived' }) print("======Data Headers Before Dropping Columns=======") print(titanic_df.head(5)) print("**** \n\nDATA TRANSFORMATION *****\n") print("======Data Headers After Dropping Columns - First Way=======") titanic_df.drop(['Parch','PassengerId','Name','Ticket'], axis=1, inplace=True) print(titanic_df.head(5)) print("======Data Headers After Dropping Columns - Second Way =======") titanic_df = titanic_df.drop(['SibSp','Fare'], axis=1) print(titanic_df.head(5)) # Convert the Class column to strings for easier reading titanic_df ['Pclass'] = titanic_df ['Pclass'].map({ 1: 'Luxury Class', 2: 'Economy Class', 3: 'Lower Class' }) print("======Data Headers After Transforming Class Column =======") print(titanic_df.head(5)) titanic_df["Embarked"] = titanic_df["Embarked"].fillna("S") print("======Data Headers After Filling with default value for Embarked Column =======") print(titanic_df.head(5)) # Convert the Embarked column to strings for easier reading titanic_df ['Embarked'] = titanic_df ['Embarked'].map({ 'C':'Cherbourg', 'Q':'Queenstown', 'S':'Southampton' }) print("======Data Headers After Transforming Embarked Column =======") print(titanic_df.head(5)) print("\n\n\n**** DATA VISUALIZATIONS****\n\n") print("Visualization #1 : Survival Rate Based on Passenger Sitting Class") ax = sns.countplot(x = 'Pclass', hue = 'Survived', palette = 'Set1',data = titanic_df) ax.set(title = 'Passenger status (Survived/Died) against Passenger Class', xlabel = 'Passenger Class', ylabel = 'Total') plt.show() print("Visualization #2 : Survival Rate Based on Gender") print(pd.crosstab(titanic_df["Sex"],titanic_df.Survived)) ax = sns.countplot(x = 'Sex', hue = 'Survived', palette = 'Set2', data = titanic_df) ax.set(title = 'Total Survivors According to Sex', xlabel = 'Sex', ylabel='Total') plt.show() print("Visualization #3 : Survival Rate Based on Passenger Age Group") # We look at Age column and set Intevals on the ages and the map them to their categories #as (Children, Teen, Adult, Old) interval = (0,18,35,60,120) categories = ['Children','Teens','Adult', 'Old'] titanic_df['Age_cats'] = pd.cut(titanic_df.Age, interval, labels = categories) ax = sns.countplot(x = 'Age_cats', data = titanic_df, hue = 'Survived', palette = 'Set3') ax.set(xlabel='Age Categorical', ylabel='Total', title="Age Categorical Survival Distribution") plt.show() print("Visualization #4 : Survival Rate Based on Passenger Embarked Port") print(pd.crosstab(titanic_df['Embarked'], titanic_df.Survived)) ax = sns.countplot(x = 'Embarked', hue = 'Survived', palette = 'Set1', data = titanic_df) ax.set(title = 'Survival distribution according to Embarking place') plt.show() '''Python for Data Science - Perform Data 
Science on Titanic Dataset a)Load the Titanic dataset into one of the data structures (NumPy or Pandas). *****DATA TRANSFORMATIONS***** b)Convert the survived column to strings for easier reading # Convert the Class column to strings for easier reading # Convert the Embarked column to strings for easier reading c)Fill the empty coulmn of Embarked with 'S' #Fill the empty coulmn of Cabin with 'XXX' d) Drop the columns 'Parch','PassengerId','Name','Ticket','Embarked' #Understand meaning of axis = 1 and inplace = True or False e)Display header rows and description of the loaded dataset. ''' import numpy as np import pandas as pd # Doing a) titanic_df = pd.read_csv('titanictrain.csv') #Doing c) line #8 titanic_df ['Survived'] = titanic_df ['Survived'].map({ 0: 'Died', 1: 'Survived' }) #Doing d) titanic_df.drop(['Parch','PassengerId','Name','Ticket'], axis = 1, inplace=True) titanic_df["Embarked"] = titanic_df["Embarked"].fillna("S") #line #32 print(titanic_df.head(5)) ''' f) Perform the following visualizations on the loaded dataset: i) Passenger status (Survived/Died) against Passenger Class #Understand the funda of 'Categorical Attribute' and plot based on it # Try this : Survival rate of male vs female Survival Rate Based on Passenger Embarked Port #Understand meaning of 'hue', 'palette' #Change value of 'set'and see ''' import seaborn as sns import matplotlib.pyplot as plt print("Visualization #1 : Survival Rate Based on Passenger Sitting Class") ax = sns.countplot(x = 'Pclass', hue = 'Survived', palette = 'Set1',data = titanic_df) ax.set(title = 'Passenger status (Survived/Died) against Passenger Class', xlabel = 'Passenger Class', ylabel = 'Total') plt.show() ''' f) Perform the following visualizations on the loaded dataset: ii) Survival rate of male vs female ''' print("Visualization #2 : Survival Rate Based on Gender") print(pd.crosstab(titanic_df["Sex"],titanic_df.Survived)) ax = sns.countplot(x = 'Sex', hue = 'Survived', palette = 'Set2', data = titanic_df) ax.set(title = 'Total Survivors According to Sex', xlabel = 'Sex', ylabel='Total') plt.show() ''' f) Perform the following visualizations on the loaded dataset: iii) No of passengers in each age group # We look at Age column and set Intevals on the ages and the map them to their categories #as (Children, Teen, Adult, Old) #Understand why 5 values in 'interval' and 4 values in 'categories' ''' print("Visualization #3 : Survival Rate Based on Passenger Age Group") interval = (0,18,35,60,120) categories = ['Children','Teens','Adult', 'Old'] titanic_df['Age_cats'] = pd.cut(titanic_df.Age, interval, labels = categories) ax = sns.countplot(x = 'Age_cats', data = titanic_df, hue = 'Survived', palette = 'Set3') ax.set(xlabel='Age Categorical', ylabel='Total', title="Age Categorical Survival Distribution") plt.show() ```
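The count plots above show absolute numbers; a short numeric companion makes the survival rates themselves explicit. The sketch below reuses the `titanic_df` built above and `pd.crosstab` with `normalize='index'` to report row-wise proportions per group:

```
import pandas as pd

# Survival proportions within each Sex, Pclass and age category
print(pd.crosstab(titanic_df['Sex'], titanic_df['Survived'], normalize='index').round(2))
print(pd.crosstab(titanic_df['Pclass'], titanic_df['Survived'], normalize='index').round(2))
print(pd.crosstab(titanic_df['Age_cats'], titanic_df['Survived'], normalize='index').round(2))
```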
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Generate a 1-D random walk of `steps` steps
steps = 250
distance = 0
x = 0
distance_list = []
steps_list = []
while x < steps:
    distance += np.random.randint(-1, 2)
    distance_list.append(distance)
    x += 1
    steps_list.append(x)

plt.plot(steps_list, distance_list, color='green', label="Random Walk Data")

steps_list = np.asarray(steps_list)
distance_list = np.asarray(distance_list)
X = steps_list[:, np.newaxis]

# Polynomial fits
# Degree 2
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
lin_reg = LinearRegression()
poly_fit = lin_reg.fit(X_poly, distance_list)
b = lin_reg.coef_
c = lin_reg.intercept_
print("2nd degree coefficients:")
print("zero power: ", c)
print("first power: ", b[0])
print("second power: ", b[1])

z = np.arange(0, steps, .01)
z_mod = b[1]*z**2 + b[0]*z + c
fit_mod = b[1]*X**2 + b[0]*X + c
plt.plot(z, z_mod, color='r', label="2nd Degree Fit")
plt.title("Polynomial Regression")
plt.xlabel("Steps")
plt.ylabel("Distance")

# Degree 10
poly_features10 = PolynomialFeatures(degree=10, include_bias=False)
X_poly10 = poly_features10.fit_transform(X)
poly_fit10 = lin_reg.fit(X_poly10, distance_list)
y_plot = poly_fit10.predict(X_poly10)
plt.plot(X, y_plot, color='black', label="10th Degree Fit")
plt.legend()
plt.show()

# Decision Tree Regression
from sklearn.tree import DecisionTreeRegressor

regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_3 = DecisionTreeRegressor(max_depth=7)
regr_1.fit(X, distance_list)
regr_2.fit(X, distance_list)
regr_3.fit(X, distance_list)

X_test = np.arange(0.0, steps, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
y_3 = regr_3.predict(X_test)

# Plot the results
plt.figure()
plt.scatter(X, distance_list, s=2.5, c="black", label="data")
plt.plot(X_test, y_1, color="red", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, color="green", label="max_depth=5", linewidth=2)
plt.plot(X_test, y_3, color="m", label="max_depth=7", linewidth=2)
plt.xlabel("Data")
plt.ylabel("Target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()

"""new_dist=distance_list[-1]
step_max=2500
new_x=steps
new_dist_list=[]
new_steps_list=np.arange(steps,step_max)
while new_x>=steps and new_x<step_max:
    dist_prediction=clf.predict(new_x)  # NOTE: `clf` is never defined above, so this block is left disabled
    new_dist_list.append(dist_prediction)
    new_x+=1
plt.plot(new_steps_list,new_dist_list, color='red')
plt.show()"""
```
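The commented-out block at the end hints at extrapolating the walk with a fitted model. The sketch below is an addition rather than part of the original notebook; it relies on the objects fitted in the cell above (`lin_reg`, which currently holds the degree-10 fit, `poly_features10`, and `regr_2`) and contrasts how the two regressors behave beyond the observed 250 steps: the polynomial diverges quickly, while the tree simply repeats its last leaf value.

```
import numpy as np
import matplotlib.pyplot as plt

# Steps beyond the training range
X_future = np.arange(steps, 2 * steps)[:, np.newaxis]

poly_future = lin_reg.predict(poly_features10.transform(X_future))  # degree-10 extrapolation
tree_future = regr_2.predict(X_future)                              # depth-5 tree extrapolation

plt.plot(steps_list, distance_list, color='green', label='observed walk')
plt.plot(X_future, poly_future, color='black', label='10th degree extrapolation')
plt.plot(X_future, tree_future, color='red', label='tree (max_depth=5) extrapolation')
plt.xlabel('Steps')
plt.ylabel('Distance')
plt.legend()
plt.show()
```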
# pystac-client CQL Filtering This notebook demonstrates the use of pystac-client to use [CQL Filtering](https://github.com/radiantearth/stac-api-spec/tree/master/fragments/filter). The server needs to support this and will advertise conformance as the `https://api.stacspec.org/v1.0.0-beta.3/item-search#filter:filter` class in the `conformsTo` attribute of the root API. **This should be considered an experimental feature. This notebook uses the Microsoft Planetary Computer staging environment as it is currently the only public CQL implementation. The Planetary Computer also does not advertise the correct conformance class, thus the `ignore_conformance` keyword is specified in the `Client.open` function below.** ``` from pystac_client import Client # set pystac_client logger to DEBUG to see API calls import logging logging.basicConfig() logger = logging.getLogger('pystac_client') logger.setLevel(logging.INFO) # function for creating GeoDataFrame from Items from copy import deepcopy import geopandas as gpd import pandas as pd from shapely.geometry import shape # convert a list of STAC Items into a GeoDataFrame def items_to_geodataframe(items): _items = [] for i in items: _i = deepcopy(i) _i['geometry'] = shape(_i['geometry']) _items.append(_i) gdf = gpd.GeoDataFrame(pd.json_normalize(_items)) for field in ['properties.datetime', 'properties.created', 'properties.updated']: if field in gdf: gdf[field] = pd.to_datetime(gdf[field]) gdf.set_index('properties.datetime', inplace=True) return gdf # STAC API root URL URL = 'https://planetarycomputer-staging.microsoft.com/api/stac/v1' # custom headers headers = [] cat = Client.open(URL, headers=headers, ignore_conformance=True) cat ``` ## Initial Search Parameters Here we perform a search with the `Client.search` function, providing a geometry (`intersects`) a datetime range (`datetime`), and filtering by Item properties (`filter`) using CQL-JSON. ``` # AOI around Delfzijl, in the north of The Netherlands geom = { "type": "Polygon", "coordinates": [ [ [ 6.42425537109375, 53.174765470134616 ], [ 7.344360351562499, 53.174765470134616 ], [ 7.344360351562499, 53.67393435835391 ], [ 6.42425537109375, 53.67393435835391 ], [ 6.42425537109375, 53.174765470134616 ] ] ] } params = { "collections": "landsat-8-c2-l2", "intersects": geom, "datetime": "2018-01-01/2020-12-31", "max_items": 100, } import hvplot.pandas import json # reusable search function def search_fetch_plot(params, filt): # limit sets the # of items per page so we can see multiple pages getting fetched params['filter'] = filt search = cat.search(**params) items_json = search.get_all_items_as_dict() # DataFrame items_df = pd.DataFrame(items_to_geodataframe(items_json['features'])) print(f"{len(items_df.index)} items found") field = 'properties.eo:cloud_cover' return items_df.hvplot(y=field, label=json.dumps(filt), frame_height=500, frame_width=800) ``` ## CQL Filters Below are examples of several different CQL filters on the `eo:cloud_cover` property. Up to 100 Items are fetched and the eo:cloud_cover values plotted. ``` filt = { "lte": [{"property": "eo:cloud_cover"}, 10] } search_fetch_plot(params, filt) filt = { "gte": [{"property": "eo:cloud_cover"}, 80] } search_fetch_plot(params, filt) filt = { "lte": [{"property": "eo:cloud_cover"}, 60], "gte": [{"property": "eo:cloud_cover"}, 40] } search_fetch_plot(params, filt) ```
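Beyond plotting, the fetched items can be summarised directly with pandas. The sketch below re-runs one of the searches above and aggregates cloud cover by month using the DatetimeIndex that `items_to_geodataframe` sets; it assumes the same open catalogue connection (`cat`) and `params` dictionary defined earlier.

```
# Fetch items for one of the filters and summarise monthly mean cloud cover
params['filter'] = {"lte": [{"property": "eo:cloud_cover"}, 60]}
search = cat.search(**params)
items = search.get_all_items_as_dict()['features']

gdf = items_to_geodataframe(items)
monthly = gdf['properties.eo:cloud_cover'].resample('MS').mean()
print(monthly)
```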
<a href="https://colab.research.google.com/github/gurjot-kaur/CSYE7245-Tutorials/blob/master/Diagrams_Tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # CSYE 7245 Big Data Systems and Intelligence Analytics - Diagrams Tutorial ## Diagrams as Code ***Diagrams lets you draw the cloud system architecture in Python code*** * It is used for prototyping a new system architecture design without any design tools. You can also describe or visualize the existing system architecture as well. * Diagrams currently supports main major providers including: AWS, Azure, GCP, Kubernetes, Alibaba Cloud, Oracle Cloud etc. * It also supports On-Premise nodes, SaaS and major Programming frameworks and languages. * It also allows you to track the architecture diagram changes in any version control system. NOTE: It does not control any actual cloud resources nor does it generate cloud formation or terraform code. It is just for drawing the cloud system architecture diagrams. Diagrams can be a good replacement for designing your workflows and pipeline architectures rather than using Draw.io or LucidCharts ### Install below packages ***1. for installing graphviz*** - - pip install graphviz ***2. for installing diagrams***- - pip install diagrams ``` !pip install graphviz !pip install diagrams ``` ## Diagrams * Diagram is a primary object representing a diagram. * Diagram represents a global diagram context. * You can create a diagram context with Diagram class. The first parameter of Diagram constructor will be used for output filename. ``` from diagrams import Diagram from diagrams.aws.compute import EC2 with Diagram("Simple Diagram"): EC2("web") from diagrams import Diagram from diagrams.aws.compute import EC2 with Diagram("Simple Diagram", outformat="jpg"): EC2("web") from diagrams import Diagram from diagrams.aws.compute import EC2 with Diagram("Simple Diagram", filename="my_diagram"): EC2("web") from diagrams import Diagram from diagrams.aws.compute import EC2 with Diagram("Simple Diagram", show=False): EC2("web") from diagrams import Diagram from diagrams.aws.compute import EC2 graph_attr = { "fontsize": "45", "bgcolor": "transparent" } with Diagram("Simple Diagram", show=False, graph_attr=graph_attr): EC2("web") ``` ## Nodes * Node is a second object representing a node or system component. * Node is an abstract concept that represents a single system component object. 
``` from diagrams import Diagram from diagrams.aws.compute import EC2 from diagrams.aws.database import RDS from diagrams.aws.network import ELB from diagrams.aws.storage import S3 with Diagram("Web Services", show=False): ELB("lb") >> EC2("web") >> RDS("userdb") >> S3("store") ELB("lb") >> EC2("web") >> RDS("userdb") << EC2("stat") (ELB("lb") >> EC2("web")) - EC2("web") >> RDS("userdb") from diagrams import Diagram from diagrams.aws.compute import EC2 from diagrams.aws.database import RDS from diagrams.aws.network import ELB with Diagram("Workers", show=False, direction="TB"): lb = ELB("lb") db = RDS("events") lb >> EC2("worker1") >> db lb >> EC2("worker2") >> db lb >> EC2("worker3") >> db lb >> EC2("worker4") >> db lb >> EC2("worker5") >> db from diagrams import Diagram from diagrams.aws.compute import EC2 from diagrams.aws.database import RDS from diagrams.aws.network import ELB with Diagram("Grouped Workers", show=False, direction="TB"): ELB("lb") >> [EC2("worker1"), EC2("worker2"), EC2("worker3"), EC2("worker4"), EC2("worker5")] >> RDS("events") ``` ## Clusters Cluster allows you group (or clustering) the nodes in an isolated group. ``` from diagrams import Cluster, Diagram from diagrams.aws.compute import ECS from diagrams.aws.database import RDS from diagrams.aws.network import Route53 with Diagram("Simple Web Service with DB Cluster", show=False): dns = Route53("dns") web = ECS("service") with Cluster("DB Cluster"): db_master = RDS("master") db_master - [RDS("slave1"), RDS("slave2")] dns >> web >> db_master from diagrams import Cluster, Diagram from diagrams.aws.compute import ECS, EKS, Lambda from diagrams.aws.database import Redshift from diagrams.aws.integration import SQS from diagrams.aws.storage import S3 with Diagram("Event Processing", show=False): source = EKS("k8s source") with Cluster("Event Flows"): with Cluster("Event Workers"): workers = [ECS("worker1"), ECS("worker2"), ECS("worker3")] queue = SQS("event queue") with Cluster("Processing"): handlers = [Lambda("proc1"), Lambda("proc2"), Lambda("proc3")] store = S3("events store") dw = Redshift("analytics") source >> workers >> queue >> handlers handlers >> store handlers >> dw ``` ## Edges Edge is representing an edge between Nodes. 
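Before the fuller on-premise example below, here is a minimal sketch of the idea: an `Edge` is placed between nodes with the same `>>`, `<<` and `-` operators, and can carry a `label`, `color` and `style` (the same keyword arguments used in the larger diagram that follows).

```
from diagrams import Diagram, Edge
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS

with Diagram("Edge Attributes", show=False):
    # A labelled, styled edge between two nodes
    EC2("web") >> Edge(label="writes", color="firebrick", style="dashed") >> RDS("userdb")
```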
``` from diagrams import Cluster, Diagram, Edge from diagrams.onprem.analytics import Spark from diagrams.onprem.compute import Server from diagrams.onprem.database import PostgreSQL from diagrams.onprem.inmemory import Redis from diagrams.onprem.logging import Fluentd from diagrams.onprem.monitoring import Grafana, Prometheus from diagrams.onprem.network import Nginx from diagrams.onprem.queue import Kafka with Diagram(name="Advanced Web Service with On-Premise (colored)", show=False): ingress = Nginx("ingress") metrics = Prometheus("metric") metrics << Edge(color="firebrick", style="dashed") << Grafana("monitoring") with Cluster("Service Cluster"): grpcsvc = [ Server("grpc1"), Server("grpc2"), Server("grpc3")] with Cluster("Sessions HA"): master = Redis("session") master - Edge(color="brown", style="dashed") - Redis("replica") << Edge(label="collect") << metrics grpcsvc >> Edge(color="brown") >> master with Cluster("Database HA"): master = PostgreSQL("users") master - Edge(color="brown", style="dotted") - PostgreSQL("slave") << Edge(label="collect") << metrics grpcsvc >> Edge(color="black") >> master aggregator = Fluentd("logging") aggregator >> Edge(label="parse") >> Kafka("stream") >> Edge(color="black", style="bold") >> Spark("analytics") ingress >> Edge(color="darkgreen") << grpcsvc >> Edge(color="darkorange") >> aggregator from diagrams import Cluster, Diagram from diagrams.gcp.analytics import BigQuery, Dataflow, PubSub from diagrams.gcp.compute import AppEngine, Functions from diagrams.gcp.database import BigTable from diagrams.gcp.iot import IotCore from diagrams.gcp.storage import GCS with Diagram("Message Collecting", show=False): pubsub = PubSub("pubsub") with Cluster("Source of Data"): [IotCore("core1"), IotCore("core2"), IotCore("core3")] >> pubsub with Cluster("Targets"): with Cluster("Data Flow"): flow = Dataflow("data flow") with Cluster("Data Lake"): flow >> [BigQuery("bq"), GCS("storage")] with Cluster("Event Driven"): with Cluster("Processing"): flow >> AppEngine("engine") >> BigTable("bigtable") with Cluster("Serverless"): flow >> Functions("func") >> AppEngine("appengine") pubsub >> flow Image(filename='advanced_web_service_with_onpremise_colored.png',width=600, height=400) ``` ## Message Collecting System and Data Lake on GCP ``` from diagrams import Cluster, Diagram from diagrams.gcp.analytics import BigQuery, Dataflow, PubSub from diagrams.gcp.compute import AppEngine, Functions from diagrams.gcp.database import BigTable from diagrams.gcp.iot import IotCore from diagrams.gcp.storage import GCS with Diagram("Message Collecting", show=False): pubsub = PubSub("pubsub") with Cluster("Source of Data"): [IotCore("core1"), IotCore("core2"), IotCore("core3")] >> pubsub with Cluster("Targets"): with Cluster("Data Flow"): flow = Dataflow("data flow") with Cluster("Data Lake"): flow >> [BigQuery("bq"), GCS("storage")] with Cluster("Event Driven"): with Cluster("Processing"): flow >> AppEngine("engine") >> BigTable("bigtable") with Cluster("Serverless"): flow >> Functions("func") >> AppEngine("appengine") pubsub >> flow ``` ## Airflow Diagrams Airflow Diagrams is an Airflow plugin that aims to easily visualise your Airflow DAGs on service level from providers like AWS, GCP, Azure, etc. via diagrams. 
Install Airflow and Airflow Diagrams - * pip install apache-airflow * pip install airflow-diagrams ``` !pip install airflow-diagrams from airflow.models.dag import DAG from airflow.operators.dummy_operator import DummyOperator from airflow.utils.dates import days_ago from airflow_diagrams import generate_diagram_from_dag with DAG('example_dag', schedule_interval=None, default_args=dict(start_date=days_ago(2))) as dag: DummyOperator(task_id='run_this_1') >> [ DummyOperator(task_id='run_this_2a'), DummyOperator(task_id='run_this_2b') ] >> DummyOperator(task_id='run_this_3') generate_diagram_from_dag(dag=dag, diagram_file="example_dag.py") from diagrams import Diagram from diagrams.generic.blank import Blank with Diagram("example_dag", show=False): run_this_1 = Blank("run_this_1") run_this_2a = Blank("run_this_2a") run_this_2b = Blank("run_this_2b") run_this_3 = Blank("run_this_3") run_this_1 >> run_this_2b run_this_2b >> run_this_3 run_this_1 >> run_this_2a run_this_2a >> run_this_3 ``` ## References https://github.com/mingrammer/diagrams https://diagrams.mingrammer.com/
github_jupyter
# Synthetic recruiting data This notebook constructs a synthetic recruiting data set that we will use for exploring fairness interventions. ``` from pathlib import Path import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler ``` We suppose that a large company has historical records of people that have applied to join the company, and whether or not that candidate was subsequently employed. We will use this data to train a model to predict whether individuals should be employed or not. A discussion of whether this is appropriate and how to mitigate potential biases is contained in the app. We aim to generate data in such a way that each of the features reflects certain unfair biases, as do the actual labels themselves. Biases in the features such as the level of education attained reflect systemic biases, whereas bias in the labels reflects historical biases in the hiring practices of the company. We have settled on the following features as ones that might be relevant in an automated recruitment setting. - Was the candidate referred for this position? - Number of career years relevant for the job - Whether candidate went to a Russell Group univserity - Did the candidate graduate with an honours degree - GCSE results - A-levels - Current income - Sex - Race - Quality of written cv - Years of volunteering experience - Years of gaps in cv - Level of IT skills - Whether currently employed or not We start by defining some high-level parameters that will control the data generation. ``` N = 10000 # number of data points to generate P_SEX_MALE = 0.5 P_RACE_WHITE = 0.5 P_EMPLOYED_WHITE_MALE = 0.7 P_EMPLOYED_BLACK_MALE = 0.45 P_EMPLOYED_WHITE_FEMALE = 0.5 P_EMPLOYED_BLACK_FEMALE = 0.25 ``` ## Sampling the data We build the data up starting with demographic features. Remaining features are sampled conditional on the demographic features. ``` df = pd.DataFrame() df["sex_male"] = np.random.binomial(1, P_SEX_MALE, N) df["race_white"] = np.random.binomial(1, P_RACE_WHITE, N) # we won't use age in the final data, we just use it # to ensure other features like years of experience # are generated consistently df["age"] = np.floor(np.random.poisson(70, N) / 2) ``` We assume that on average individuals have spent half of the time they've been of working age accumulating relevant experience. We sample from the Poisson distribution with this mean. ``` df["years_experience"] = np.random.poisson( 0.4 * np.where(df.age >= 22, df.age - 22, 0) + df.race_white * 0.2 + df.sex_male * 0.1 ) ``` Binary variable stating whether the applicant has been referred or not. We assume men are more likely to be referred than women, and white people are more likely to be referred than black people. ``` df["referred"] = np.random.binomial( 1, 0.2 + 0.4 * df.sex_male + 0.3 * df.race_white ) ``` We model the number of GCSEs better than C grade as a binomial distribution with 10 trials. The increased probability of good grades for white students is intended to reflect systemic biases in access to education. ``` df["gcse"] = np.random.binomial(10, 0.6 + df.race_white * 0.15) ``` A level results are mostly determined by GCSE results. ``` a_level_prob = ( 0.4 # baseline probability + df.gcse / 20 # adjusted for gcse results + df.race_white * 0.05 # adjustest for race - df.sex_male * 0.05 # adjusted for sex ) df["a_level"] = np.random.binomial(4, a_level_prob) ``` Sample binary variable indicating whether individual went to a Russell Group Univeristy. 
Influenced mainly by A-levels and GCSEs ``` def russell_group_prob(row): if row.a_level == 4: return 0.8 elif row.a_level == 3 and row.gcse >= 7: return 0.4 return 0.1 df["russell_group"] = np.random.binomial( 1, df.apply(russell_group_prob, axis=1) ) ``` Honours degree depends both on a-levels and Russell Group attendance. ``` def honours_prob(row): if row.russell_group == 1: return 0.9 return 0.2 + 0.15 * row.a_level df["honours"] = np.random.binomial(1, df.apply(honours_prob, axis=1)) ``` Years of voluntary experience. ``` df["years_volunteer"] = np.random.poisson(0.5, N) ``` Current income ``` def salary_mean(row): return ( 15000 + row.russell_group * 3000 + row.race_white * 2000 + np.sqrt(row.years_experience) * 5000 ) def salary_std(row): return 1000 + np.sqrt(row.years_experience) * 2000 # integer divide and multiply by 250 to round to nearest 250 df["income"] = ( np.random.normal( df.apply(salary_mean, axis=1), df.apply(salary_std, axis=1), ) // 250 * 250 ) ``` IT skills is a simple ordered categorical variable that depends on sex. ``` df["it_skills"] = np.random.binomial(3, 0.4 + 0.3 * df.sex_male) ``` Years of holes in cv ``` df["years_gaps"] = np.random.poisson( 0.2 * (1.0 - 0.5 * df.sex_male - 0.25 * df.race_white) * df.years_experience ) ``` Quality of written cv ``` df["quality_cv"] = np.random.binomial(3, 0.6, N) ``` Finally we use a logistic regression to create a probability that the individual was employed, then sample a label with that probability. ``` def sigmoid(x): return 1 / (1 + np.exp(-x)) def employed_prob(row): return sigmoid( # implicit discrimination 2 * row.referred + 1 * row.years_experience + 0.5 * row.gcse + 0.8 * row.a_level + 0.1 * row.russell_group + 0.1 * row.honours - 0.5 * row.years_gaps + 0.4 * row.quality_cv + 0.4 * row.it_skills # explicit discrimination + 0.8 * row.race_white + 0.5 * row.sex_male # offset - 15 ) df["employed_yes"] = np.random.binomial(1, df.apply(employed_prob, axis=1)) ``` Drop age as it's no longer needed. ``` df = df.drop(columns="age") ``` The final data looks like this. ``` df.head() ``` ## Train, val and test splits We split the data into train, validation and test sets. ``` train_df, test_df = train_test_split(df, test_size=0.2, random_state=42) train_df, val_df = train_test_split(train_df, test_size=0.25, random_state=42) ``` ## Preprocessing ``` ss = StandardScaler() # Numerical attributes cts_features = [ "a_level", "gcse", "years_experience", "years_volunteer", "income", "it_skills", "years_gaps", "quality_cv", ] train_df_scaled = train_df.copy() val_df_scaled = val_df.copy() test_df_scaled = test_df.copy() train_df_scaled[cts_features] = ss.fit_transform(train_df[cts_features]) val_df_scaled[cts_features] = ss.transform(val_df[cts_features]) test_df_scaled[cts_features] = ss.transform(test_df[cts_features]) ``` ## Save data ``` artifacts_dir = Path("../../artifacts") # temporary platform specific directory data_dir = artifacts_dir / "data" / "recruiting" ``` Data generated by us is committed to the repository for reproducibility. However feel free to regenerate your own version of the data and compare results. ``` # train_df.to_csv(data_dir / "raw" / "train.csv", index=False) # test_df.to_csv(data_dir / "raw" / "test.csv", index=False) # val_df.to_csv(data_dir / "raw" / "val.csv", index=False) # train_df_scaled.to_csv(data_dir / "processed" / "train.csv", index=False) # val_df_scaled.to_csv(data_dir / "processed" / "val.csv", index=False) # test_df_scaled.to_csv(data_dir / "processed" / "test.csv", index=False) ```
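As a quick, optional sanity check (a sketch, not part of the original pipeline), we can look at the empirical employment rate of each sex/race group in the generated labels. The rates should qualitatively reflect the biases built into the features and labels above; the snippet assumes `df` is still the full generated frame.

```
# Optional sanity check (assumption: `df` is the generated frame defined above).
# Empirical employment rate per race/sex group; these should qualitatively
# reflect the biases we built into the features and labels.
group_rates = (
    df.groupby(["race_white", "sex_male"])["employed_yes"]
      .mean()
      .rename("employment_rate")
      .reset_index()
)
print(group_rates)
```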
<a href="https://colab.research.google.com/github/Satvik256/Graph-Convolutional-Networks/blob/main/GCN_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` %matplotlib inline ``` Tutorial problem description ---------------------------- The tutorial is based on the "Zachary's karate club" problem. The karate club is a social network that includes 34 members and documents pairwise links between members who interact outside the club. The club later divides into two communities led by the instructor (node 0) and the club president (node 33). The network is visualized as follows with the color indicating the community: ![](https://data.dgl.ai/tutorial/img/karate-club.png) :align: center The task is to predict which side (0 or 33) each member tends to join given the social network itself. **Step 1: Creating a graph for the following** ------------------------------- Creating the graph for Zachary's karate club as follows: ``` pip install dgl-cu101 import dgl import numpy as np def build_karate_club_graph(): src = np.array([1, 2, 2, 3, 3, 3, 4, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 9, 10, 10, 10, 11, 12, 12, 13, 13, 13, 13, 16, 16, 17, 17, 19, 19, 21, 21, 25, 25, 27, 27, 27, 28, 29, 29, 30, 30, 31, 31, 31, 31, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33]) dst = np.array([0, 0, 1, 0, 1, 2, 0, 0, 0, 4, 5, 0, 1, 2, 3, 0, 2, 2, 0, 4, 5, 0, 0, 3, 0, 1, 2, 3, 5, 6, 0, 1, 0, 1, 0, 1, 23, 24, 2, 23, 24, 2, 23, 26, 1, 8, 0, 24, 25, 28, 2, 8, 14, 15, 18, 20, 22, 23, 29, 30, 31, 8, 9, 13, 14, 15, 18, 19, 20, 22, 23, 26, 27, 28, 29, 30, 31, 32]) u = np.concatenate([src, dst]) v = np.concatenate([dst, src]) return dgl.DGLGraph((u, v)) #GRAPH OF THE FOLLOWING CREATED ``` Print out the number of nodes and edges in our newly constructed graph: ``` G = build_karate_club_graph() print('We have %d nodes.' % G.number_of_nodes()) print('We have %d edges.' % G.number_of_edges()) ``` Visualizing the graph by converting it to a `networkx <https://networkx.github.io/documentation/stable/>`_ graph: ``` import networkx as nx # Since the actual graph is undirected, we convert it for visualization # purpose. nx_G = G.to_networkx().to_undirected() # Kamada-Kawaii layout usually looks pretty for arbitrary graphs pos = nx.kamada_kawai_layout(nx_G) nx.draw(nx_G, pos, with_labels=True, node_color=[[.7, .7, .7]]) ``` ***Step 2: Assign features to nodes or edges*** -------------------------------------------- Graph neural networks associate features with nodes and edges for training. For our classification example, since there is no input feature, we assign each node with a learnable embedding vector. ``` import torch import torch.nn as nn import torch.nn.functional as F embed = nn.Embedding(34, 5) # 34 nodes with embedding dim equal to 5 G.ndata['feat'] = embed.weight ``` Print out the node features to verify: ``` # printing out node 2's input feature print(G.ndata['feat'][2]) # printing out node 10 and 11's input features print(G.ndata['feat'][[10, 11]]) ``` **`*`Step 3: Define a Graph Convolutional Network (GCN)`*`** -------------------------------------------------- To perform node classification, use the Graph Convolutional Network (GCN) developed by `Kipf and Welling <https://arxiv.org/abs/1609.02907>`_. Here is the simplest definition of a GCN framework. We recommend that you read the original paper for more details. - At layer $l$, each node $v_i^l$ carries a feature vector $h_i^l$. 
- Each layer of the GCN tries to aggregate the features from $u_i^{l}$ where $u_i$'s are neighborhood nodes to $v$ into the next layer representation at $v_i^{l+1}$. This is followed by an affine transformation with some non-linearity. The above definition of GCN fits into a **message-passing** paradigm: Each node will update its own feature with information sent from neighboring nodes. A graphical demonstration is displayed below. ![mailbox](https://data.dgl.ai/tutorial/1_first/mailbox.png) **The implementations of Graph Neural Network Layers under the `dgl.<backend>.nn` subpackage. The :class:`~dgl.nn.pytorch.GraphConv` module implements one Graph Convolutional layer.** ``` from dgl.nn.pytorch import GraphConv ``` ###***Define a deeper GCN model that contains two GCN layers:*** ``` class GCN(nn.Module): def __init__(self, in_feats, hidden_size, num_classes): super(GCN, self).__init__() self.conv1 = GraphConv(in_feats, hidden_size) self.conv2 = GraphConv(hidden_size, num_classes) def forward(self, g, inputs): h = self.conv1(g, inputs) h = torch.relu(h) h = self.conv2(g, h) return h # The first layer transforms input features of size of 5 to a hidden size of 5. # The second layer transforms the hidden layer and produces output features of # size 2, corresponding to the two groups of the karate club. net = GCN(5, 5, 2) ``` ***Step 4: Data preparation and initialization*** ------------------------------------------- We use learnable embeddings to initialize the node features. Since this is a semi-supervised setting, only the instructor (node 0) and the club president (node 33) are assigned labels. The implementation is available as follow. ``` inputs = embed.weight labeled_nodes = torch.tensor([0, 33]) # only the instructor and the president nodes are labeled labels = torch.tensor([0, 1]) # their labels are different ``` ##***Step 5: Train then visualize*** ---------------------------- The training loop is exactly the same as other PyTorch models. We (1) create an optimizer, (2) feed the inputs to the model, (3) calculate the loss and (4) use autograd to optimize the model. ``` import itertools optimizer = torch.optim.Adam(itertools.chain(net.parameters(), embed.parameters()), lr=0.01) all_logits = [] for epoch in range(50): logits = net(G, inputs) # we save the logits for visualization later all_logits.append(logits.detach()) logp = F.log_softmax(logits, 1) # we only compute loss for labeled nodes loss = F.nll_loss(logp[labeled_nodes], labels) optimizer.zero_grad() loss.backward() optimizer.step() print('Epoch %d | Loss: %.4f' % (epoch, loss.item())) ``` ###**Since the model produces an output feature of size 2 for each node, we can visualize by plotting the output feature in a 2D space. 
The following code animates the training process from initial guess (where the nodes are not classified correctly at all) to the end (where the nodes are linearly separable).** ``` import matplotlib.animation as animation import matplotlib.pyplot as plt def draw(i): cls1color = '#00FFFF' cls2color = '#FF00FF' pos = {} colors = [] for v in range(34): pos[v] = all_logits[i][v].numpy() cls = pos[v].argmax() colors.append(cls1color if cls else cls2color) ax.cla() ax.axis('off') ax.set_title('Epoch: %d' % i) nx.draw_networkx(nx_G.to_undirected(), pos, node_color=colors, with_labels=True, node_size=300, ax=ax) fig = plt.figure(dpi=150) fig.clf() ax = fig.subplots() draw(1) # draw the prediction of the first epoch plt.close() ``` ![](https://data.dgl.ai/tutorial/1_first/karate0.png) ##**The following animation GIF shows how the model correctly predicts the community after a series of training epochs.** ``` ani = animation.FuncAnimation(fig, draw, frames=len(all_logits), interval=200) ani ``` ![](https://data.dgl.ai/tutorial/1_first/karate.gif)
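For intuition, the update performed by a single graph-convolution layer can be written out by hand. The sketch below illustrates the rule described above (symmetrically normalised neighbourhood aggregation followed by a linear map and a non-linearity, as in Kipf and Welling); it is not DGL's internal implementation, and the toy graph and feature sizes are made up for the example.

```
import torch

def gcn_layer(adj, h, weight):
    # Symmetrically normalised aggregation: D^-1/2 (A + I) D^-1/2 H W, then ReLU
    a_hat = adj + torch.eye(adj.shape[0])          # add self-loops
    deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
    return torch.relu(norm @ h @ weight)

# Toy example: 3 nodes on a path, 5-dimensional features, 2-dimensional output
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
h = torch.randn(3, 5)
w = torch.randn(5, 2)
print(gcn_layer(adj, h, w).shape)  # torch.Size([3, 2])
```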
## 1. Preparing our dataset <p><em>These recommendations are so on point! How does this playlist know me so well?</em></p> <p><img src="https://assets.datacamp.com/production/project_449/img/iphone_music.jpg" alt="Project Image Record" width="600px"></p> <p>Over the past few years, streaming services with huge catalogs have become the primary means through which most people listen to their favorite music. But at the same time, the sheer amount of music on offer can mean users might be a bit overwhelmed when trying to look for newer music that suits their tastes.</p> <p>For this reason, streaming services have looked into means of categorizing music to allow for personalized recommendations. One method involves direct analysis of the raw audio information in a given song, scoring the raw data on a variety of metrics. Today, we'll be examining data compiled by a research group known as The Echo Nest. Our goal is to look through this dataset and classify songs as being either 'Hip-Hop' or 'Rock' - all without listening to a single one ourselves. In doing so, we will learn how to clean our data, do some exploratory data visualization, and use feature reduction towards the goal of feeding our data through some simple machine learning algorithms, such as decision trees and logistic regression.</p> <p>To begin with, let's load the metadata about our tracks alongside the track metrics compiled by The Echo Nest. A song is about more than its title, artist, and number of listens. We have another dataset that has musical features of each track such as <code>danceability</code> and <code>acousticness</code> on a scale from -1 to 1. These exist in two different files, which are in different formats - CSV and JSON. While CSV is a popular file format for denoting tabular data, JSON is another common file format in which databases often return the results of a given query.</p> <p>Let's start by creating two pandas <code>DataFrames</code> out of these files that we can merge so we have features and labels (often also referred to as <code>X</code> and <code>y</code>) for the classification later on.</p> ``` import pandas as pd # Read in track metadata with genre labels tracks = pd.read_csv('datasets/fma-rock-vs-hiphop.csv') # Read in track metrics with the features echonest_metrics = pd.read_json('datasets/echonest-metrics.json') # Merge the relevant columns of tracks and echonest_metrics echo_tracks = pd.merge(echonest_metrics, tracks[['track_id', 'genre_top']], on='track_id') # Inspect the resultant dataframe echo_tracks.info() ``` ## 2. Pairwise relationships between continuous variables <p>We typically want to avoid using variables that have strong correlations with each other -- hence avoiding feature redundancy -- for a few reasons:</p> <ul> <li>To keep the model simple and improve interpretability (with many features, we run the risk of overfitting).</li> <li>When our datasets are very large, using fewer features can drastically speed up our computation time.</li> </ul> <p>To get a sense of whether there are any strongly correlated features in our data, we will use built-in functions in the <code>pandas</code> package.</p> ``` # Create a correlation matrix corr_metrics = echo_tracks.corr() corr_metrics.style.background_gradient() ``` ## 3. Normalizing the feature data <p>As mentioned earlier, it can be particularly useful to simplify our models and use as few features as necessary to achieve the best result. 
Since we didn't find any particular strong correlations between our features, we can instead use a common approach to reduce the number of features called <strong>principal component analysis (PCA)</strong>. </p> <p>It is possible that the variance between genres can be explained by just a few features in the dataset. PCA rotates the data along the axis of highest variance, thus allowing us to determine the relative contribution of each feature of our data towards the variance between classes. </p> <p>However, since PCA uses the absolute variance of a feature to rotate the data, a feature with a broader range of values will overpower and bias the algorithm relative to the other features. To avoid this, we must first normalize our data. There are a few methods to do this, but a common way is through <em>standardization</em>, such that all features have a mean = 0 and standard deviation = 1 (the resultant is a z-score).</p> ``` # Define our features features = echo_tracks.drop(columns=['genre_top', 'track_id']) # Define our labels labels = echo_tracks['genre_top'] # Import the StandardScaler from sklearn.preprocessing import StandardScaler # Scale the features and set the values to a new variable scaler = StandardScaler() scaled_train_features = scaler.fit_transform(features) ``` ## 4. Principal Component Analysis on our scaled data <p>Now that we have preprocessed our data, we are ready to use PCA to determine by how much we can reduce the dimensionality of our data. We can use <strong>scree-plots</strong> and <strong>cumulative explained ratio plots</strong> to find the number of components to use in further analyses.</p> <p>Scree-plots display the number of components against the variance explained by each component, sorted in descending order of variance. Scree-plots help us get a better sense of which components explain a sufficient amount of variance in our data. When using scree plots, an 'elbow' (a steep drop from one data point to the next) in the plot is typically used to decide on an appropriate cutoff.</p> ``` # This is just to make plots appear in the notebook %matplotlib inline # Import our plotting module, and PCA class import matplotlib.pyplot as plt from sklearn.decomposition import PCA # Get our explained variance ratios from PCA using all features pca = PCA() pca.fit(scaled_train_features) exp_variance = pca.explained_variance_ratio_ # plot the explained variance using a barplot fig, ax = plt.subplots() ax.bar(range(pca.n_components_), pca.explained_variance_ratio_) ax.set_xlabel('Principal Component #') ``` ## 5. Further visualization of PCA <p>Unfortunately, there does not appear to be a clear elbow in this scree plot, which means it is not straightforward to find the number of intrinsic dimensions using this method. </p> <p>But all is not lost! Instead, we can also look at the <strong>cumulative explained variance plot</strong> to determine how many features are required to explain, say, about 85% of the variance (cutoffs are somewhat arbitrary here, and usually decided upon by 'rules of thumb'). Once we determine the appropriate number of components, we can perform PCA with that many components, ideally reducing the dimensionality of our data.</p> ``` # Import numpy import numpy as np # Calculate the cumulative explained variance cum_exp_variance = np.cumsum(exp_variance) # Plot the cumulative explained variance and draw a dashed line at 0.85. 
fig, ax = plt.subplots() ax.plot(cum_exp_variance) ax.axhline(y=0.85, linestyle='--') # choose the n_components where about 85% of our variance can be explained display(cum_exp_variance) n_components = 6 # Perform PCA with the chosen number of components and project data onto components pca = PCA(n_components, random_state=10) pca.fit(scaled_train_features) pca_projection = pca.transform(scaled_train_features) ``` ## 6. Train a decision tree to classify genre <p>Now we can use the lower dimensional PCA projection of the data to classify songs into genres. To do that, we first need to split our dataset into 'train' and 'test' subsets, where the 'train' subset will be used to train our model while the 'test' dataset allows for model performance validation.</p> <p>Here, we will be using a simple algorithm known as a decision tree. Decision trees are rule-based classifiers that take in features and follow a 'tree structure' of binary decisions to ultimately classify a data point into one of two or more categories. In addition to being easy to both use and interpret, decision trees allow us to visualize the 'logic flowchart' that the model generates from the training data.</p> <p>Here is an example of a decision tree that demonstrates the process by which an input image (in this case, of a shape) might be classified based on the number of sides it has and whether it is rotated.</p> <p><img src="https://assets.datacamp.com/production/project_449/img/simple_decision_tree.png" alt="Decision Tree Flow Chart Example" width="350px"></p> ``` # Import train_test_split function and Decision tree classifier from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier # Split our data train_features, test_features, train_labels, test_labels = train_test_split( pca_projection, labels, random_state=10, ) # Train our decision tree tree = DecisionTreeClassifier(random_state=10) tree.fit(train_features, train_labels) # Predict the labels for the test data pred_labels_tree = tree.predict(test_features) ``` ## 7. Compare our decision tree to a logistic regression <p>Although our tree's performance is decent, it's a bad idea to immediately assume that it's therefore the perfect tool for this job -- there's always the possibility of other models that will perform even better! It's always a worthwhile idea to at least test a few other algorithms and find the one that's best for our data.</p> <p>Sometimes simplest is best, and so we will start by applying <strong>logistic regression</strong>. Logistic regression makes use of what's called the logistic function to calculate the odds that a given data point belongs to a given class. Once we have both models, we can compare them on a few performance metrics, such as false positive and false negative rate (or how many points are inaccurately classified). </p> ``` # Import LogisticRegression from sklearn.linear_model import LogisticRegression # Train our logistic regression and predict labels for the test set logreg = LogisticRegression(random_state=10) logreg.fit(train_features, train_labels) pred_labels_logit = logreg.predict(test_features) # Create the classification report for both models from sklearn.metrics import classification_report class_rep_tree = classification_report(test_labels, pred_labels_tree) class_rep_log = classification_report(test_labels, pred_labels_logit) print("Decision Tree: \n", class_rep_tree) print("Logistic Regression: \n", class_rep_log) ``` ## 8. 
Balance our data for greater performance <p>Both our models do similarly well, boasting an average precision of 87% each. However, looking at our classification report, we can see that rock songs are fairly well classified, but hip-hop songs are disproportionately misclassified as rock songs. </p> <p>Why might this be the case? Well, just by looking at the number of data points we have for each class, we see that we have far more data points for the rock classification than for hip-hop, potentially skewing our model's ability to distinguish between classes. This also tells us that most of our model's accuracy is driven by its ability to classify just rock songs, which is less than ideal.</p> <p>To account for this, we can weight the value of a correct classification in each class inversely to the occurrence of data points for each class. Since a correct classification for "Rock" is not more important than a correct classification for "Hip-Hop" (and vice versa), we only need to account for differences in <em>sample size</em> of our data points when weighting our classes here, and not relative importance of each class. </p> ``` # Subset only the hip-hop tracks, and then only the rock tracks hop_only = echo_tracks.loc[echo_tracks['genre_top'] == 'Hip-Hop'] rock_only = echo_tracks.loc[echo_tracks['genre_top'] == 'Rock'] # sample the rocks songs to be the same number as there are hip-hop songs rock_only = rock_only.sample(hop_only.shape[0], random_state=10) # concatenate the dataframes rock_only and hop_only rock_hop_bal = pd.concat([rock_only, hop_only]) # The features, labels, and pca projection are created for the balanced dataframe features = rock_hop_bal.drop(['genre_top', 'track_id'], axis=1) labels = rock_hop_bal['genre_top'] pca_projection = pca.fit_transform(scaler.fit_transform(features)) # Redefine the train and test set with the pca_projection from the balanced data train_features, test_features, train_labels, test_labels = train_test_split(pca_projection, labels, random_state=10) ``` ## 9. Does balancing our dataset improve model bias? <p>We've now balanced our dataset, but in doing so, we've removed a lot of data points that might have been crucial to training our models. Let's test to see if balancing our data improves model bias towards the "Rock" classification while retaining overall classification performance. </p> <p>Note that we have already reduced the size of our dataset and will go forward without applying any dimensionality reduction. In practice, we would consider dimensionality reduction more rigorously when dealing with vastly large datasets and when computation times become prohibitively large.</p> ``` # Train our decision tree on the balanced data tree = DecisionTreeClassifier(random_state=10) tree.fit(train_features, train_labels) pred_labels_tree = tree.predict(test_features) # Train our logistic regression on the balanced data logreg = LogisticRegression(random_state=10) logreg.fit(train_features, train_labels) pred_labels_logit = logreg.predict(test_features) # Compare the models print("Decision Tree: \n", classification_report(test_labels, pred_labels_tree)) print("Logistic Regression: \n", classification_report(test_labels, pred_labels_logit)) ``` ## 10. Using cross-validation to evaluate our models <p>Success! Balancing our data has removed bias towards the more prevalent class. To get a good sense of how well our models are actually performing, we can apply what's called <strong>cross-validation</strong> (CV). 
This step allows us to compare models in a more rigorous fashion.</p> <p>Since the way our data is split into train and test sets can impact model performance, CV attempts to split the data multiple ways and test the model on each of the splits. Although there are many different CV methods, all with their own advantages and disadvantages, we will use what's known as <strong>K-fold</strong> CV here. K-fold first splits the data into K different, equally sized subsets. Then, it iteratively uses each subset as a test set while using the remainder of the data as train sets. Finally, we can then aggregate the results from each fold for a final model performance score.</p> ``` from sklearn.model_selection import KFold, cross_val_score # Set up our K-fold cross-validation kf = KFold(10) tree = DecisionTreeClassifier(random_state=10) logreg = LogisticRegression(random_state=10) # Train our models using KFold cv tree_score = cross_val_score(tree, pca_projection, labels, cv=kf) logit_score = cross_val_score(logreg, pca_projection, labels, cv=kf) # Print the mean of each array of scores print("Decision Tree:", np.mean(tree_score), "Logistic Regression:", np.mean(logit_score)) ```
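To make the aggregation step concrete, here is a hand-rolled equivalent of what `cross_val_score` does with the `KFold` splitter above: train on K-1 folds, score on the held-out fold, then average. This is a sketch that assumes the variables from the previous cell are still in scope.

```
# Manual equivalent of cross_val_score for the decision tree.
manual_scores = []
for train_idx, test_idx in kf.split(pca_projection):
    fold_tree = DecisionTreeClassifier(random_state=10)
    fold_tree.fit(pca_projection[train_idx], labels.iloc[train_idx])
    manual_scores.append(fold_tree.score(pca_projection[test_idx], labels.iloc[test_idx]))

print("Manual K-fold mean accuracy:", np.mean(manual_scores))
```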
# Processing the data source

```
import os
from datetime import date, timedelta

import pandas as pd

# Root URL of the daily files
BASE_URL = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/{}.csv'

# Date range for which the files are available
START_DATE = date(2020, 1, 22)
END_DATE = date(2020, 3, 13)

# Directory where the raw files are saved
RAWFILES_DIR = '../data/raw/'
PROCESSED_DIR = '../data/processed/'

# Main file
ALL_DATA_FILE = 'all_data.csv'

#TODO: replace with reading the env.yaml file
```

## File retrieval loop

```
delta = END_DATE - START_DATE  # as timedelta

for i in range(delta.days + 1):
    day = START_DATE + timedelta(days=i)
    day_label = day.strftime("%m-%d-%Y")
    #print(day_label)
    virus_df = pd.read_csv(BASE_URL.format(day_label), sep=',', parse_dates=['Last Update'])
    virus_df.to_csv(os.path.join(RAWFILES_DIR, day_label + '.csv'), index=False)
```

## Building the lat/long reference table

```
import glob

df_list = []

# Read the downloaded files and keep those that have a lat/long
for file in glob.glob(os.path.join(RAWFILES_DIR, '*.csv')):
    virus_df = pd.read_csv(file, sep=',')
    if 'Latitude' in virus_df.columns and 'Longitude' in virus_df.columns:
        df_list.append(virus_df)

all_df = pd.concat(df_list)

# Create a reference table for the lat/long values
(all_df[['Province/State', 'Country/Region', 'Latitude', 'Longitude']]
 .drop_duplicates(subset=['Province/State', 'Country/Region'])
 .sort_values(by=['Country/Region', 'Province/State'])
 .to_csv(os.path.join(PROCESSED_DIR, 'lat_long_table.csv'), index=False)
)
```

## Building a single table

```
data_catalog = {
    'Last Update': ['<M8[ns]'],
    'Confirmed': ['float64', 'int64'],
    'Deaths': ['float64', 'int64'],
    'Recovered': ['float64', 'int64'],
    'Latitude': ['float64'],
    'Longitude': ['float64'],
}

df_list = []
latlong_df = pd.read_csv(os.path.join(PROCESSED_DIR, 'lat_long_table.csv'))

# Read the downloaded files and add the lat/long from the reference table where missing
for file in glob.glob(os.path.join(RAWFILES_DIR, '*.csv')):
    virus_df = pd.read_csv(file, sep=',', parse_dates=['Last Update'])
    if not('Latitude' in virus_df.columns and 'Longitude' in virus_df.columns):
        virus_df = virus_df.merge(latlong_df, on=['Province/State', 'Country/Region'], how='left')
    for field, types in data_catalog.items():
        assert virus_df[field].dtypes in types, f"Bad type for {field} in {file}"
    df_list.append(virus_df.assign(source=os.path.basename(file)))

all_df = pd.concat(df_list)

# Save the full table
all_df.to_csv(os.path.join(PROCESSED_DIR, 'all_data.csv'), index=False)

all_df.head()

worldpop = pd.read_csv(os.path.join(RAWFILES_DIR, 'worldpop/worldpop.csv'), delimiter=',')
worldpop

worldpop[['Country Name', 'Country Code', '2020']]
```
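A possible follow-up check (not part of the original notebook): because the merge uses `how='left'`, some province/country pairs may end up without coordinates. Counting them per source file is a quick way to spot days whose naming conventions changed.

```
# Count rows with no latitude after the merge, grouped by source file.
missing_coords = (
    all_df[all_df['Latitude'].isna()]
    .groupby('source')
    .size()
    .sort_values(ascending=False)
)
print(missing_coords.head(10))
```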
# Simple tests for QSO templates A simple notebook that plays around with the templates.QSO Class. ``` import numpy as np import warnings import matplotlib.pyplot as plt from matplotlib.patches import Polygon from desisim.templates import QSO %pylab inline seed = 123 nmodel = 10 qso = QSO(minwave=3000, maxwave=5e4) ``` ### Make templates with and without the fast Lyman-alpha forest. ``` flux, wave, meta = qso.make_templates(nmodel=nmodel, zrange=(2.0, 4.0), seed=seed, nocolorcuts=True, lyaforest=False) flux_forest, _, meta_forest = qso.make_templates(nmodel=nmodel, zrange=(2.0, 4.0), seed=seed, nocolorcuts=True, lyaforest=True) meta meta_forest ``` ### Show the forest ``` for ii in range(nmodel): plt.plot(wave, flux_forest[ii, :]) plt.plot(wave, flux[ii, :]) plt.xlim(3000, 6300) plt.show() ``` ### Show the effect of extrapolation ``` for ii in range(nmodel): plt.plot(wave, flux[ii, :]) #plt.xlim(3000, 200) plt.xscale('log') plt.show() ``` ### Look at the color-cuts. ``` flux1, _, meta1 = qso.make_templates(nmodel=100, seed=1, lyaforest=True, nocolorcuts=True) flux2, _, meta2 = qso.make_templates(nmodel=100, seed=1, lyaforest=True, nocolorcuts=False) fail = np.where(np.sum(flux2, axis=1) == 0)[0] fail def qso_colorbox(ax, plottype='grz'): """Draw the QSO selection boxes.""" rmaglim = 22.7 xlim = ax.get_xlim() ylim = ax.get_ylim() if plottype == 'grz-r': verts = [(xlim[0]-0.05, 17.0), (22.7, 17.0), (22.7, ylim[1]+0.05), (xlim[0]-0.05, ylim[1]+0.05) ] if plottype == 'rW1-rz': verts = None ax.axvline(x=-0.3, ls='--', color='k') ax.axvline(x=1.3, ls='--', color='k') if plottype == 'gr-rz': verts = [(-0.3, 1.3), (1.1, 1.3), (1.1, ylim[0]-0.05), (-0.3, ylim[0]-0.05) ] if verts: ax.add_patch(Polygon(verts, fill=False, ls='--', color='k')) def flux2colors(cat): """Convert DECam/WISE fluxes to magnitudes and colors.""" colors = dict() with warnings.catch_warnings(): # ignore missing fluxes (e.g., for QSOs) warnings.simplefilter('ignore') for ii, band in zip((1, 2, 4), ('g', 'r', 'z')): colors[band] = 22.5 - 2.5 * np.log10(cat['DECAM_FLUX'][..., ii].data) colors['grz'] = 22.5-2.5*np.log10((cat['DECAM_FLUX'][..., 1] + 0.8 * cat['DECAM_FLUX'][..., 2] + 0.5 * cat['DECAM_FLUX'][..., 4]).data / 2.3) colors['gr'] = colors['g'] - colors['r'] colors['rz'] = colors['r'] - colors['z'] return colors nocuts = flux2colors(meta1) cuts = flux2colors(meta2) fig, ax = plt.subplots() ax.scatter(nocuts['rz'], nocuts['gr'], s=14, label='No Color-cuts') ax.scatter(cuts['rz'], cuts['gr'], s=14, marker='s', alpha=0.7, label='With Color-cuts') ax.set_xlabel('$r - z$') ax.set_ylabel('$g - r$') ax.set_xlim(-1, 2.2) ax.set_ylim(-1, 2.0) ax.legend(loc='upper right') qso_colorbox(ax, 'gr-rz') ```
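As a small summary (relying on the assumption used above that templates rejected by the color cuts come back as all-zero spectra, which is what the `fail` selection exploits), we can report what fraction of the templates survive the cuts.

```
# Fraction of templates that pass the color cuts, based on the `fail` indices
# selected above (rows of flux2 that are identically zero).
n_total = flux2.shape[0]
n_pass = n_total - len(fail)
print('{}/{} templates pass the QSO color cuts ({:.0f}%)'.format(
    n_pass, n_total, 100 * n_pass / n_total))
```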
# Hyperopt-Sklearn on Iris `Iris` is a small data set of 150 examples of flower attributes and types of Iris. The small size of Iris means that hyperparameter optimization takes just a few seconds. On the other hand, Iris is so *easy* that we'll typically see numerous near-perfect models within the first few random guesses; hyperparameter optimization algorithms are hardly necessary at all. Nevertheless, here is how to use hyperopt-sklearn (`hpsklearn`) to find a good model of the Iris data set. The code walk-through is given in 5 steps: 1. module imports 2. data preparation into training and testing sets for a single fold of cross-validation. 3. creation of a hpsklearn `HyperoptEstimator` 4. a somewhat spelled-out version of `HyperoptEstimator.fit` 5. inspecting and testing the best model ``` # IMPORTS from sklearn.model_selection import train_test_split from sklearn.datasets import load_iris import hpsklearn import hpsklearn.demo_support import hyperopt.tpe import pandas as pd import numpy as np # PREPARE TRAINING AND TEST DATA iris = load_iris() df_iris = pd.DataFrame(iris.data, columns=iris.feature_names) df_iris['species_name'] = pd.Categorical.from_codes(iris.target, iris.target_names) y = df_iris['species_name'] X = df_iris.drop(['species_name'], axis=1) # TRAIN AND TEST DATA X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) estimator = hpsklearn.HyperoptEstimator( preprocessing=hpsklearn.components.any_preprocessing('pp'), classifier=hpsklearn.components.any_classifier('clf'), algo=hyperopt.tpe.suggest, trial_timeout=15.0, # seconds max_evals=15, ) # Demo version of estimator.fit() fit_iterator = estimator.fit_iter(X_train,y_train) fit_iterator.__next__() plot_helper = hpsklearn.demo_support.PlotHelper(estimator, mintodate_ylim=(-.01, .10)) while len(estimator.trials.trials) < estimator.max_evals: fit_iterator.send(1) # -- try one more model plot_helper.post_iter() plot_helper.post_loop() # -- Model selection was done on a subset of the training data. # -- Now that we've picked a model, train on all training data. estimator.retrain_best_model_on_full_data(X_train, y_train) print('Best preprocessing pipeline:') for pp in estimator._best_preprocs: print(pp) print('\n') print('Best classifier:\n', estimator._best_learner) test_predictions = estimator.predict(X_test) acc_in_percent = 100 * np.mean(test_predictions == y_test) print('\n') print('Prediction accuracy in generalization is %.1f%%' % acc_in_percent) ```
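Outside of a demo, the spelled-out `fit_iter` loop above is usually unnecessary. A sketch of the more typical one-call usage follows; the arguments mirror the estimator created earlier, and it assumes the installed `hpsklearn` version exposes `fit`, `score` and `best_model` as in its documentation.

```
# Typical (non-demo) usage: let fit() run the whole hyperparameter search.
est = hpsklearn.HyperoptEstimator(
    preprocessing=hpsklearn.components.any_preprocessing('pp'),
    classifier=hpsklearn.components.any_classifier('clf'),
    algo=hyperopt.tpe.suggest,
    trial_timeout=15.0,
    max_evals=15,
)
est.fit(X_train, y_train)
print('Test accuracy: %.3f' % est.score(X_test, y_test))
print(est.best_model())
```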
github_jupyter
# IMPORTS from sklearn.model_selection import train_test_split from sklearn.datasets import load_iris import hpsklearn import hpsklearn.demo_support import hyperopt.tpe import pandas as pd import numpy as np # PREPARE TRAINING AND TEST DATA iris = load_iris() df_iris = pd.DataFrame(iris.data, columns=iris.feature_names) df_iris['species_name'] = pd.Categorical.from_codes(iris.target, iris.target_names) y = df_iris['species_name'] X = df_iris.drop(['species_name'], axis=1) # TRAIN AND TEST DATA X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) estimator = hpsklearn.HyperoptEstimator( preprocessing=hpsklearn.components.any_preprocessing('pp'), classifier=hpsklearn.components.any_classifier('clf'), algo=hyperopt.tpe.suggest, trial_timeout=15.0, # seconds max_evals=15, ) # Demo version of estimator.fit() fit_iterator = estimator.fit_iter(X_train,y_train) fit_iterator.__next__() plot_helper = hpsklearn.demo_support.PlotHelper(estimator, mintodate_ylim=(-.01, .10)) while len(estimator.trials.trials) < estimator.max_evals: fit_iterator.send(1) # -- try one more model plot_helper.post_iter() plot_helper.post_loop() # -- Model selection was done on a subset of the training data. # -- Now that we've picked a model, train on all training data. estimator.retrain_best_model_on_full_data(X_train, y_train) print('Best preprocessing pipeline:') for pp in estimator._best_preprocs: print(pp) print('\n') print('Best classifier:\n', estimator._best_learner) test_predictions = estimator.predict(X_test) acc_in_percent = 100 * np.mean(test_predictions == y_test) print('\n') print('Prediction accuracy in generalization is %.1f%%' % acc_in_percent)
0.637595
0.901444
# Implementation Several Python libraries allow for easy and efficient implementation of neural networks. Here, we'll show examples with the very popular `tf.keras` submodule. This submodule integrates Keras, a user-friendly high-level API, into Tensorflow, a lower-level backend. Let's start by loading Tensorflow, our visualization packages, and the {doc}`Boston </content/appendix/data>` housing dataset from `scikit-learn`. ``` import tensorflow as tf from sklearn import datasets import matplotlib.pyplot as plt import seaborn as sns boston = datasets.load_boston() X_boston = boston['data'] y_boston = boston['target'] ``` Neural networks in Keras can be fit through one of two APIs: the *sequential* or the *functional* API. For the type of models discussed in this chapter, either approach works. ## 1. The Sequential API Fitting a network with the Keras sequential API can be broken down into four steps: 1. Instantiate model 2. Add layers 3. Compile model (and summarize) 4. Fit model An example of the code for these four steps is shown below. We first instantiate the network using `tf.keras.models.Sequential()`. Next, we add layers to the network. Specifically, we have to add any hidden layers we like followed by a single output layer. The type of networks covered in this chapter use only `Dense` layers. A "dense" layer is one in which each neuron is a function of all the other neurons in the previous layer. We identify the number of neurons in the layer with the `units` argument and the activation function applied to the layer with the `activation` argument. For the first layer only, we must also identify the `input_shape`, or the number of neurons in the input layer. If our predictors are of length `D`, the input shape will be `(D, )` (which is the shape of a single observation, as we can see with `X[0].shape`). The next step is to compile the model. Compiling determines the configuration of the model; we specify the optimizer and loss function to be used as well as any metrics we would like to monitor. After compiling, we can also preview our model with `model.summary()`. Finally, we fit the model. Here is where we actually provide our training data. Two other important arguments are `epochs` and `batch_size`. Models in Keras are fit with *mini-batch gradient descent*, in which samples of the training data are looped through and individually used to calculate and update gradients. `batch_size` determines the size of these samples, and `epochs` determines how many times the gradient is calculated for each sample. ``` ## 1. Instantiate model = tf.keras.models.Sequential(name = 'Sequential_Model') ## 2. Add Layers model.add(tf.keras.layers.Dense(units = 8, activation = 'relu', input_shape = (X_boston.shape[1], ), name = 'hidden')) model.add(tf.keras.layers.Dense(units = 1, activation = 'linear', name = 'output')) ## 3. Compile (and summarize) model.compile(optimizer = 'adam', loss = 'mse') print(model.summary()) ## 4. Fit model.fit(X_boston, y_boston, epochs = 100, batch_size = 1, validation_split=0.2, verbose = 0); ``` Predictions with the model built above are shown below. ``` # Create Predictions yhat_boston = model.predict(X_boston)[:,0] # Plot fig, ax = plt.subplots() sns.scatterplot(y_boston, yhat_boston) ax.set(xlabel = r"$y$", ylabel = r"$\hat{y}$", title = r"$y$ vs. $\hat{y}$") sns.despine() ``` ## 2. The Functional API Fitting models with the Functional API can again be broken into four steps, listed below. 1. Define layers 2. Define model 3. Compile model (and summarize) 4. 
Fit model While the sequential approach first defines the model and then adds layers, the functional approach does the opposite. We start by adding an input layer using `tf.keras.Input()`. Next, we add one or more hidden layers using `tf.keras.layers.Dense()`. Note that in this approach, we link layers directly. For instance, we indicate that the `hidden` layer below follows the `inputs` layer by adding `(inputs)` to the end of its definition. After creating the layers, we can define our model. We do this by using `tf.keras.Model()` and identifying the input and output layers. Finally, we compile and fit our model as in the sequential API. ``` ## 1. Define layers inputs = tf.keras.Input(shape = (X_boston.shape[1],), name = "input") hidden = tf.keras.layers.Dense(8, activation = "relu", name = "first_hidden")(inputs) outputs = tf.keras.layers.Dense(1, activation = "linear", name = "output")(hidden) ## 2. Model model = tf.keras.Model(inputs = inputs, outputs = outputs, name = "Functional_Model") ## 3. Compile (and summarize) model.compile(optimizer = "adam", loss = "mse") print(model.summary()) ## 4. Fit model.fit(X_boston, y_boston, epochs = 100, batch_size = 1, validation_split=0.2, verbose = 0); ``` Predictions formed with this model are shown below. ``` # Create Predictions yhat_boston = model.predict(X_boston)[:,0] # Plot fig, ax = plt.subplots() sns.scatterplot(y_boston, yhat_boston) ax.set(xlabel = r"$y$", ylabel = r"$\hat{y}$", title = r"$y$ vs. $\hat{y}$") sns.despine() ```
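Both APIs produce an ordinary `tf.keras.Model`, so the usual training utilities apply to either. As a minimal sketch (assuming the `X_boston`/`y_boston` arrays loaded above and a recent `tf.keras`), the `fit` call can also return a history object and stop early on the validation loss:

```
# Sketch: capture per-epoch metrics and stop early on validation loss.
# Assumes X_boston / y_boston from above.
import tensorflow as tf
import matplotlib.pyplot as plt

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(X_boston.shape[1],)),
    tf.keras.layers.Dense(1, activation='linear'),
])
model.compile(optimizer='adam', loss='mse')

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=10, restore_best_weights=True)

history = model.fit(X_boston, y_boston, epochs=100, batch_size=32,
                    validation_split=0.2, callbacks=[early_stop], verbose=0)

# history.history is a dict of per-epoch metrics
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend();
```

Plotting the training and validation losses this way makes it easier to judge over- or under-fitting before trusting the predictions plotted above.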
github_jupyter
import tensorflow as tf from sklearn import datasets import matplotlib.pyplot as plt import seaborn as sns boston = datasets.load_boston() X_boston = boston['data'] y_boston = boston['target'] ## 1. Instantiate model = tf.keras.models.Sequential(name = 'Sequential_Model') ## 2. Add Layers model.add(tf.keras.layers.Dense(units = 8, activation = 'relu', input_shape = (X_boston.shape[1], ), name = 'hidden')) model.add(tf.keras.layers.Dense(units = 1, activation = 'linear', name = 'output')) ## 3. Compile (and summarize) model.compile(optimizer = 'adam', loss = 'mse') print(model.summary()) ## 4. Fit model.fit(X_boston, y_boston, epochs = 100, batch_size = 1, validation_split=0.2, verbose = 0); # Create Predictions yhat_boston = model.predict(X_boston)[:,0] # Plot fig, ax = plt.subplots() sns.scatterplot(y_boston, yhat_boston) ax.set(xlabel = r"$y$", ylabel = r"$\hat{y}$", title = r"$y$ vs. $\hat{y}$") sns.despine() ## 1. Define layers inputs = tf.keras.Input(shape = (X_boston.shape[1],), name = "input") hidden = tf.keras.layers.Dense(8, activation = "relu", name = "first_hidden")(inputs) outputs = tf.keras.layers.Dense(1, activation = "linear", name = "output")(hidden) ## 2. Model model = tf.keras.Model(inputs = inputs, outputs = outputs, name = "Functional_Model") ## 3. Compile (and summarize) model.compile(optimizer = "adam", loss = "mse") print(model.summary()) ## 4. Fit model.fit(X_boston, y_boston, epochs = 100, batch_size = 1, validation_split=0.2, verbose = 0); # Create Predictions yhat_boston = model.predict(X_boston)[:,0] # Plot fig, ax = plt.subplots() sns.scatterplot(y_boston, yhat_boston) ax.set(xlabel = r"$y$", ylabel = r"$\hat{y}$", title = r"$y$ vs. $\hat{y}$") sns.despine()
0.808067
0.994353
<a href="https://colab.research.google.com/github/sreneee/OOP-58002/blob/main/OOP_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Classes with Multiple Objects ``` class Birds: def __init__(self, bird_name): self.bird_name = bird_name def flying_birds(self): print(f"{self.bird_name} flies above clouds") def non_flying_birds(self): print(f"{self.bird_name} is the national bird of the Philippines") vulture = Birds("Griffon Vulture") crane = Birds("Common Crane") emu = Birds("Emu") vulture.flying_birds() crane.flying_birds() emu.non_flying_birds() ``` Encapsulation ``` class foo: def __init__(self, a , b): self.__a = a self.__b = b def add(self): return self.__a + self.__b def sub(self): return self.__a - self.__b foo_object = foo(3,4) print(foo_object.add()) #adding a and b print(foo_object.sub()) #subtracting a and b foo_object.__b = 5 foo_object.__a = 7 print(foo_object.add()) print(foo_object.sub()) ``` Inheritance ``` class Person: def __init__(self,firstname, surname): self.firstname = firstname self.surname = surname def printname(self): print(self.firstname, self.surname) person = Person("Ashley", "Goce") person.printname() class Student(Person): pass person = Student("Denise", "Goce") person.printname() ``` Polymorphism ``` class RegularPolygon: def __init__(self,side): self.side = side class Square(RegularPolygon): def area(self): return self.side*self.side class EquillateralTriangle(RegularPolygon): def area(self): return self.side*self.side*0.433 x = Square(4) y = EquillateralTriangle(3) print(x.area()) print(y.area()) ``` Application 1 ``` class Person: def __init__(self,std1,pre,mid,fin): self.std1 = std1 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std1) print(self.pre) print(self.mid) print(self.fin) s1 =Person("Student 1",'Prelim: 90','Midterm: 87','Finals: 94\n') s1.Grade() avg =(90+87+94)/3 print('Average grade is: ' + str(avg)+'\n') def __init__(self,std2,pre,mid,fin): self.std2 = std2 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std2) print(self.pre) print(self.mid) print(self.fin) s2 =Person("Student 2",'Prelim: 77','Midterm: 83','Finals: 89\n') s2.Grade() avg =(77+83+89)/3 print('Average grade is: ' + str(avg)+'\n') def __init__(self,std3,pre,mid,fin): self.std3 = std3 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std3) print(self.pre) print(self.mid) print(self.fin) s3 =Person("Student 3",'Prelim: 93','Midterm: 97','Finals: 84\n') s3.Grade() avg =(93+97+84)/3 print('Average grade is: ' + str(avg)+'\n') ```
github_jupyter
class Birds: def __init__(self, bird_name): self.bird_name = bird_name def flying_birds(self): print(f"{self.bird_name} flies above clouds") def non_flying_birds(self): print(f"{self.bird_name} is the national bird of the Philippines") vulture = Birds("Griffon Vulture") crane = Birds("Common Crane") emu = Birds("Emu") vulture.flying_birds() crane.flying_birds() emu.non_flying_birds() class foo: def __init__(self, a , b): self.__a = a self.__b = b def add(self): return self.__a + self.__b def sub(self): return self.__a - self.__b foo_object = foo(3,4) print(foo_object.add()) #adding a and b print(foo_object.sub()) #subtracting a and b foo_object.__b = 5 foo_object.__a = 7 print(foo_object.add()) print(foo_object.sub()) class Person: def __init__(self,firstname, surname): self.firstname = firstname self.surname = surname def printname(self): print(self.firstname, self.surname) person = Person("Ashley", "Goce") person.printname() class Student(Person): pass person = Student("Denise", "Goce") person.printname() class RegularPolygon: def __init__(self,side): self.side = side class Square(RegularPolygon): def area(self): return self.side*self.side class EquillateralTriangle(RegularPolygon): def area(self): return self.side*self.side*0.433 x = Square(4) y = EquillateralTriangle(3) print(x.area()) print(y.area()) class Person: def __init__(self,std1,pre,mid,fin): self.std1 = std1 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std1) print(self.pre) print(self.mid) print(self.fin) s1 =Person("Student 1",'Prelim: 90','Midterm: 87','Finals: 94\n') s1.Grade() avg =(90+87+94)/3 print('Average grade is: ' + str(avg)+'\n') def __init__(self,std2,pre,mid,fin): self.std2 = std2 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std2) print(self.pre) print(self.mid) print(self.fin) s2 =Person("Student 2",'Prelim: 77','Midterm: 83','Finals: 89\n') s2.Grade() avg =(77+83+89)/3 print('Average grade is: ' + str(avg)+'\n') def __init__(self,std3,pre,mid,fin): self.std3 = std3 self.pre = pre self.mid = mid self.fin = fin def Grade(self): print(self.std3) print(self.pre) print(self.mid) print(self.fin) s3 =Person("Student 3",'Prelim: 93','Midterm: 97','Finals: 84\n') s3.Grade() avg =(93+97+84)/3 print('Average grade is: ' + str(avg)+'\n')
0.612657
0.896251
``` from google.colab import drive drive.mount('/content/drive') cd /content/drive/MyDrive/IDL/Project/ #!git clone https://github.com/alivaramesh/MIXEM.git !git clone https://github.com/manasip8993/11785_Project.git !cd 11785_Project !which python !python --version !echo $PYTHONPATH %env PYTHONPATH= %%bash MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh MINICONDA_PREFIX=/usr/local #wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT chmod +x $MINICONDA_INSTALLER_SCRIPT ./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX !which conda !which python !python --version %%bash conda install --channel defaults conda python=3.6 --yes conda update --channel defaults --all --yes !conda --version !python --version import sys sys.path !ls /usr/local/lib/python3.6/dist-packages _ = (sys.path .append("/usr/local/lib/python3.6/site-packages")) cd /content/drive/MyDrive/IDL/Project/MIXEM !conda env create -f environment.yml ``` # **Train the Model** ``` #%%bash !source activate myEnv && conda env list && python /content/drive/MyDrive/IDL/Project/MIXEM/run.py --config_path /content/drive/MyDrive/IDL/Project/MIXEM/config.yaml --log_dir /content/drive/MyDrive/IDL/Project/MIXEM/log --dataroot /content/drive/MyDrive/IDL/Project/MIXEM/dataset --lineardataroot /content/drive/MyDrive/IDL/Project/MIXEM/lineareval --gpu 0 --dsname STL10 ``` # **Evaluate the Model** ``` !source activate myEnv && conda env list && python cluster_eval.py /content/drive/MyDrive/IDL/Project/MIXEM/dataset STL10 /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/STL10_890.pth /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/config.yaml 10 ``` # Resume Training ``` !source activate myEnv && conda env list && python /content/drive/MyDrive/IDL/Project/MIXEM/run.py --config_path /content/drive/MyDrive/IDL/Project/MIXEM/config.yaml --log_dir /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446 --dataroot /content/drive/MyDrive/IDL/Project/MIXEM/dataset --lineardataroot /content/drive/MyDrive/IDL/Project/MIXEM/lineareval --gpu 0 --dsname STL10 --init_from /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/checkpoint --resume --epochs 1000 !conda install -c conda-forge scikit-learn==1.0 ``` **TSNE** --- ``` !source activate myEnv && conda env list && pip install -U scikit-learn && python tsne.py /content/drive/MyDrive/IDL/Project/MIXEM/dataset STL10 /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/STL10_10.pth /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/config.yaml 10 ```
github_jupyter
from google.colab import drive drive.mount('/content/drive') cd /content/drive/MyDrive/IDL/Project/ #!git clone https://github.com/alivaramesh/MIXEM.git !git clone https://github.com/manasip8993/11785_Project.git !cd 11785_Project !which python !python --version !echo $PYTHONPATH %env PYTHONPATH= %%bash MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh MINICONDA_PREFIX=/usr/local #wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT chmod +x $MINICONDA_INSTALLER_SCRIPT ./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX !which conda !which python !python --version %%bash conda install --channel defaults conda python=3.6 --yes conda update --channel defaults --all --yes !conda --version !python --version import sys sys.path !ls /usr/local/lib/python3.6/dist-packages _ = (sys.path .append("/usr/local/lib/python3.6/site-packages")) cd /content/drive/MyDrive/IDL/Project/MIXEM !conda env create -f environment.yml #%%bash !source activate myEnv && conda env list && python /content/drive/MyDrive/IDL/Project/MIXEM/run.py --config_path /content/drive/MyDrive/IDL/Project/MIXEM/config.yaml --log_dir /content/drive/MyDrive/IDL/Project/MIXEM/log --dataroot /content/drive/MyDrive/IDL/Project/MIXEM/dataset --lineardataroot /content/drive/MyDrive/IDL/Project/MIXEM/lineareval --gpu 0 --dsname STL10 !source activate myEnv && conda env list && python cluster_eval.py /content/drive/MyDrive/IDL/Project/MIXEM/dataset STL10 /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/STL10_890.pth /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/config.yaml 10 !source activate myEnv && conda env list && python /content/drive/MyDrive/IDL/Project/MIXEM/run.py --config_path /content/drive/MyDrive/IDL/Project/MIXEM/config.yaml --log_dir /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446 --dataroot /content/drive/MyDrive/IDL/Project/MIXEM/dataset --lineardataroot /content/drive/MyDrive/IDL/Project/MIXEM/lineareval --gpu 0 --dsname STL10 --init_from /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/checkpoint --resume --epochs 1000 !conda install -c conda-forge scikit-learn==1.0 !source activate myEnv && conda env list && pip install -U scikit-learn && python tsne.py /content/drive/MyDrive/IDL/Project/MIXEM/dataset STL10 /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/STL10_10.pth /content/drive/MyDrive/IDL/Project/MIXEM/log/STL10_MIXTURE_10_PP_0.05_0.1_ENT_2.0_PIMAXL_0.1_resnet18_32_512_temp_0.5_LR_1e-4_WDECAY_1e-5_gpus_0_1635552446/checkpoints/config.yaml 10
0.146484
0.077378
# End-to-End Topic Modeling Workflow ``` %load_ext lab_black %load_ext autoreload %autoreload 2 import os from glob import glob from io import StringIO import numpy as np import pandas as pd from azure.storage.blob import BlobServiceClient from gensim.corpora.dictionary import Dictionary from sklearn.decomposition import NMF from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.pipeline import Pipeline %aimport src.data_helpers from src.data_helpers import load_data %aimport src.extraction_helpers from src.extraction_helpers import get_top_n_most_freq_words %aimport src.hybrid_helpers from src.hybrid_helpers import get_nmf_coherence_scores %aimport src.processing_helpers from src.processing_helpers import process_text ``` <a id="toc"></a> ## [Table of Contents](#table-of-contents) 0. [About](#about) 1. [User Inputs](#user-inputs) 2. [Load Data, Concatenate and Filter](#load-data-concatenate-and-filter) 3. [Topic modeling using Gensim NMF with Topic coherence to find best number of topics](#topic-modeling-using-gensim-nmf-with-topic-coherence-to-find-best-number-of-topics) - 3.1. [Pre-Processing for Gensim NMF, Tokenization, Stemming, etc.](#pre-processing-for-gensim-nmf,-tokenization,-stemming-etc.) - 3.2. [Use Gensim to perform Bag-of-Words transformation](#use-gensim-to-perform-bag-of-words-transformation) - 3.3. [Use Gensim NMF and Topic coherence to find number of topics](#use-gensim-nmf-and-topic-coherence-to-find-number-of-topics) - 3.4. [Training](#training) - 3.4.1. [ML Model Training using Best Number of Topics](#ml-model-training-using-best-number-of-topics) - 3.4.2. [Get Training Outputs](#get-training-outputs) - 3.4.3. [Merge Training Outputs with Data Features](#merge-training-outputs-with-data-features) - 3.4.4. [Sanity Checks of Intermediate and Final Merged Data](#sanity-checks-of-intermediate-and-final-merged-data) - 3.5. [Overview of Deployment Considerations](#overview-of-deployment-considerations) - 3.6. [Evaluate](#evaluate) - 3.6.1. [Get Descriptive Statistics for Topic Residuals](#get-descriptive-statistics-for-topic-residuals) - 3.6.2. [Assess Re-Trained ML Model Performance](#assess-re-trained-ml-model-performance) - 3.7. [Register](#register) - 3.8. [Deployment](#deployment) <a id="about"></a> ## 0. [About](#about) In this notebook, we will go through a second experiment with [topic coherence](https://en.wikipedia.org/wiki/Coherence_(linguistics)) approaches using Gensim to find an optimal number of topics from the Guardian's Space news listings data in `data/processed/*_processed.csv`. Following previous work ([1](https://github.com/robsalgado/personal_data_science_projects/blob/master/topic_modeling_nmf/topic_modeling_cnn.ipynb)), this will be done using [`sklearn`](https://en.wikipedia.org/wiki/Scikit-learn)'s [`NMF` model](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization#Text_mining) with [TFIDF vectorization](https://en.wikipedia.org/wiki/Tf%E2%80%93idf), after determining the optimal number of topics using [Gensim](https://en.wikipedia.org/wiki/Gensim)'s topic coherence pipeline to evaluate the topics determined by Gensim's `NMF`. <a id="user-inputs"></a> ## 1. [User Inputs](#user-inputs) We'll define below the variables that are to be used throughout the code. 
``` PROJ_ROOT_DIR = os.getcwd() data_dir = os.path.join(PROJ_ROOT_DIR, "data", "raw") processed_data_dir = os.path.join(PROJ_ROOT_DIR, "data", "processed") # General inputs cloud_data = True # Topic naming topic_nums = list(range(10, 45 + 5, 5)) n_top_words = 10 unwanted_guardian_cols = [ "webTitle", "id", "sectionId", "sectionName", "type", "isHosted", "pillarId", "pillarName", "page", "document_type", "apiUrl", "publication", "year", "month", "day", "dayofweek", "dayofyear", "weekofyear", "quarter", ] ci_run = False def get_top_words_per_topic(row, n_top_words=5): return row.nlargest(n_top_words).index.tolist() def get_topic_residual_stats(data_doc_topic_words_weights): df_residuals_stats = data_doc_topic_words_weights.groupby("topic_num").agg( { "topic_num_resid": ["min", "mean", "median", "max"], "url": "nunique", "publication_date": ["min", "max"], "article_chars": ["min", "max"], } ) df_residuals_stats.columns = df_residuals_stats.columns.map("_".join) df_residuals_stats = df_residuals_stats.reset_index().sort_values( by=["topic_num"], ignore_index=True ) return df_residuals_stats def assess_topic_residual_stats( df_residuals_stats_training, df_residuals_stats, df_topic_word_weights_top_ten ): for stat_name in ["min", "median", "max"]: curr_bound = df_residuals_stats[f"topic_num_resid_{stat_name}"] train_bound = df_residuals_stats_training[f"topic_num_resid_{stat_name}"] abs_diff_bound = curr_bound - train_bound df_residuals_stats[f"{stat_name}_resid_diff"] = ( 100 * abs_diff_bound / train_bound ) values = ["improved", "acceptable"] for stat_name in ["min", "median"]: res_stats = df_residuals_stats[f"{stat_name}_resid_diff"] # > 5 is poor since this indicates articles have become more # loosely related to eachother and we don't understand why # < -5 is poor since this indicates articles have become more # tightly related to eachother and we don't understand why criteria = [ all(res_stats < 0.00001) and all(res_stats >= -5), res_stats.between(0.00001, 5), ] # < -5 or > 5 will be assigned a (default) value of 'poor' df_residuals_stats[f"{stat_name}_resid_quality"] = np.select( criteria, values, "poor" ) cols = ["topic_num", "min_resid_quality", "median_resid_quality"] df_topic_word_weights_top_ten = df_topic_word_weights_top_ten.merge( df_residuals_stats[cols], on="topic_num", ).sort_values(by="topic_num", ignore_index=True) return [df_residuals_stats, df_topic_word_weights_top_ten] def calc_diff(f, s): return ((s - f) / f) * 100 def compare_best_num_topics_manual(df_coherences, num_topics_manually): best_num_topics = df_coherences.set_index("num_topics").squeeze().idxmax() bt_mask = df_coherences["num_topics"] == best_num_topics ntm_mask = df_coherences["num_topics"] == num_topics_manually score_w_man_num_topics = df_coherences.loc[ntm_mask, "coherence"].iloc[0] best_score = df_coherences.loc[bt_mask, "coherence"].iloc[0] pdiff = calc_diff(best_score, score_w_man_num_topics) df_coherences["ratio_to_best"] = df_coherences["coherence"] / best_score return [pdiff, df_coherences, best_num_topics] # General inputs az_storage_container_name = "myconedesx7" # Guardian Filenames # # Cloud-based files guardian_inputs = { "blobedesz21": "urls", "blobedesz19": "text1", "blobedesz20": "text2", } conn_str = ( "DefaultEndpointsProtocol=https;" f"AccountName={os.getenv('AZURE_STORAGE_ACCOUNT')};" f"AccountKey={os.getenv('AZURE_STORAGE_KEY')};" f"EndpointSuffix={os.getenv('ENDPOINT_SUFFIX')}" ) blob_service_client = BlobServiceClient.from_connection_string(conn_str=conn_str) ``` <a 
id="load-data-concatenate-and-filter"></a> ## 2. [Load Data, Concatenate and Filter](#load-data-concatenate-and-filter) We'll start by loading the data and drop the news articles that are 500 characters or shorter in length ``` %%time df_guardian = load_data( cloud_data, data_dir, "", "", guardian_inputs, blob_service_client, az_storage_container_name, unwanted_guardian_cols, ) df_guardian["year"] = pd.to_datetime(df_guardian["publication_date"]).dt.year df_guardian["article_chars"] = df_guardian["text"].str.split().str.len() ``` <a id="topic-modeling-using-gensim-nmf-with-topic-coherence-to-find-best-number-of-topics"></a> ## 3. [Topic modeling using Gensim NMF with Topic coherence to find best number of topics](#topic-modeling-using-gensim-nmf-with-topic-coherence-to-find-best-number-of-topics) <a id="pre-processing-for-gensim-nmf,-tokenization,-stemming-etc."></a> ### 3.1. [Pre-Processing for Gensim NMF, Tokenization, Stemming, etc.](#pre-processing-for-gensim-nmf,-tokenization,-stemming-etc.) Now, we'll perform the following processing actions on each news article's text - [tokenize](https://en.wikipedia.org/wiki/Lexical_analysis#Tokenization) the text of the article, using the [NLTK package's `TweetTokenizer`](https://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual) - clean the text of the articles - convert to lowercase - remove numbers - [expand contractions](https://www.kdnuggets.com/2018/08/practitioners-guide-processing-understanding-text-2.html) ([code](https://grammar.yourdictionary.com/style-and-usage/using-contractions.html)) - ([snowball](https://en.wikipedia.org/wiki/Snowball_(programming_language))) [stemming](https://en.wikipedia.org/wiki/Stemming) - remove [punctuation](https://docs.python.org/3/library/string.html#string.punctuation) - remove [stopwords](https://en.wikipedia.org/wiki/Stop_word) - remove any standalone single character - remove [whitespaces](https://en.wikipedia.org/wiki/Whitespace_character) ``` %%time texts = df_guardian["text"].apply(process_text) ``` <a id="use-gensim-to-perform-bag-of-words-transformation"></a> ### 3.2. [Use Gensim to perform Bag-of-Words transformation](#use-gensim-to-perform-bag-of-words-transformation) Now, we'll create a corpus comprising an assigned ID and corresponding count frequency of words from the tokens created above. This is Gensim's document conversion into a [bag-of-words](https://en.wikipedia.org/wiki/Bag-of-words_model) format. It returns a list of tuples comprising token identifier and the corresponding count (frequency). ``` %%time if not ci_run: # Create Dictionary dictionary = Dictionary(texts) # Remove extreme values dictionary.filter_extremes( no_below=3, # default = 5 no_above=0.85, # default is 0.5 keep_n=5_000, # default is 100_000 ) # Term Document Frequency for corpus corpus = [dictionary.doc2bow(text) for text in texts] ``` <a id="use-gensim-nmf-and-topic-coherence-to-find-number-of-topics"></a> ### 3.3. [Use Gensim NMF and Topic coherence to find number of topics](#use-gensim-nmf-and-topic-coherence-to-find-number-of-topics) Next, we'll compute the coherence score for our specified list of number of topics to be compared. A [topic coherence `Class`](https://radimrehurek.com/gensim/models/coherencemodel.html), from the [Gensim library](https://pypi.org/project/gensim/), is used to evaluate topics found using an [NMF](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization) model. 
A helper function is used for this and is shown below ``` !pygmentize src/hybrid_helpers.py ``` The coherence model's reported coherence score will be computed as the number of topics used to train an NMF model is varied. The higher the coherence for the selected number of topics the better. This way, we can select the number of topics in an NMF model that returns the highest coherence score. ``` %%time if not ci_run: # For each specified number of topics, run NMF and calculate topic coherence topic_coherence_scores = [ get_nmf_coherence_scores(corpus, texts, n, dictionary) for n in topic_nums ] # Extract coherence score for each number of topics tried df_coherence_scores = ( pd.DataFrame.from_dict(dict(zip(topic_nums, topic_coherence_scores)), orient="index") .reset_index() .rename(columns={"index": "num_topics", 0: "coherence"}) .set_index("num_topics") ) display(df_coherence_scores) ``` <a id="training"></a> ### 3.4. [Training](#training) <a id="ml-model-training-using-best-number-of-topics"></a> #### 3.4.1. [ML Model Training using Best Number of Topics](#ml-model-training-using-best-number-of-topics) ``` best_num_topics = 35 %%time vectorizer = TfidfVectorizer( tokenizer=None, # default is None stop_words=None, # default is None lowercase=True, # default is True ngram_range=(1, 2), # default is (1, 2) max_df=0.85, # default is 1.0 min_df=3, # default is 1 max_features=5000, # default is None preprocessor=" ".join, # default is None binary=False, # default is False strip_accents=None, # default is None # token_pattern='(?u)\\b\\w\\w+\\b', # default is '(?u)\\b\\w\\w+\\b' ) sk_nmf = NMF( n_components=best_num_topics, solver="cd", # default is "cd" init="nndsvd", # default is None, "nnsvd" = Nonnegative Double Singular Value Decomposition max_iter=500, # default is 200 l1_ratio=0.0, # default is 0.0 alpha=0.0, # default is 0.0 tol=0.0001, # default is 0.0001 random_state=42, ) pipe = Pipeline([("vectorizer", vectorizer), ("nmf", sk_nmf)]) doc_topic = pipe.fit_transform(texts) A = pipe.named_steps["vectorizer"].transform(texts) W = pipe.named_steps["nmf"].components_ H = pipe.named_steps["nmf"].transform(A) ``` <a id="get-training-outputs"></a> #### 3.4.2. [Get Training Outputs](#get-training-outputs) ``` # Get row-wise (topic-wise) weights topic_words_weights = pd.DataFrame( W, index=[str(k) for k in range(best_num_topics)], columns=pipe.named_steps["vectorizer"].get_feature_names(), ) display( topic_words_weights.head(2) .append(topic_words_weights.tail(2)) .sample(15, axis=1) .style.set_caption( "Selection of 25 terms and their weights, or TFIDF scores, for " f"each of the {best_num_topics} topics" ) ) # Get top 10 weights by topic df_topic_words = ( pd.DataFrame( topic_words_weights.apply( lambda x: get_top_words_per_topic(x, n_top_words), axis=1 ).tolist(), index=topic_words_weights.index, ) .reset_index() .rename(columns={"index": "topic"}) .assign(topic_num=range(best_num_topics)) ).astype({"topic": int}) display( df_topic_words.style.set_caption( f"Top 10 terms, by TFIDF score, for each of the {best_num_topics} topics" ) ) ``` <a id="merge-training-outputs-with-data-features"></a> #### 3.4.3. 
[Merge Training Outputs with Data Features](#merge-training-outputs-with-data-features) ``` # Convert doc-topic matrix into DataFrame and append url column # - this gives the most popular topic for each article df_doc_topic = ( pd.DataFrame(doc_topic) .idxmax(axis=1) .rename("topic_num") .to_frame() .assign(url=df_guardian["url"].tolist()) ).astype({"topic_num": int}) display(df_doc_topic.head()) # Merge doc-topic DataFrame with top 10 terms by topic df_doc_topic_words = df_doc_topic.merge( df_topic_words, on="topic_num", how="left" ).astype({"topic_num": int}) display( df_doc_topic_words.head() .append(df_doc_topic_words.tail()) .style.set_caption( "Document URL, its topic and its top 10 TFIDF terms, shown for " f"each of the {best_num_topics} topics" ) ) data_doc_topic_words = ( df_guardian.merge(df_doc_topic_words, on=["url"], how="left") .astype({"topic_num": int}) .rename(columns={c: f"term_{c}" for c in range(0, 10 + 1)}) ) display( data_doc_topic_words.drop(columns=["text"]) .head(3) .append(data_doc_topic_words.drop(columns=["text"]).tail(3)) .style.set_caption("Document with its assigned topic and its top 10 TFIDF terms") ) # Calculate residual for each topic r = np.zeros(A.shape[0]) for row in range(A.shape[0]): r[row] = np.linalg.norm(A[row, :] - H[row, :].dot(W), "fro") data_doc_topic_words["topic_num_resid"] = r # For each topic, get top 10 terms and their weights df_topic_word_weights_top_ten = ( ( topic_words_weights.groupby(topic_words_weights.index) .apply(lambda x: x.iloc[0].nlargest(n_top_words)) .reset_index() .rename(columns={"level_0": "topic_num", "level_1": "term", 0: "weight"}) ) .astype({"topic_num": int}) .sort_values(by=["topic_num"]) ) display( df_topic_word_weights_top_ten.head(5) .append(df_topic_word_weights_top_ten.tail(5)) .style.set_caption( f"Term and TFIDF term scores, for each of the {best_num_topics} topics" ) ) # Add weights to merged doc-topic--top-10-terms DataFrame df_word_weights = ( pd.DataFrame( df_topic_word_weights_top_ten.groupby("topic_num")["weight"] .apply(list) .tolist(), columns=[f"term_{w}_weight" for w in range(0, 9 + 1)], ) .reset_index() .rename(columns={"index": "topic_num"}) .astype({"topic_num": int}) ) display( df_word_weights.head(5) .append(df_word_weights.tail(5)) .style.set_caption( f"Term and TFIDF term scores, for each of the {best_num_topics} topics" ) ) data_doc_topic_words_weights = data_doc_topic_words.merge( df_word_weights, on="topic_num", how="left" ) display( data_doc_topic_words_weights.drop(columns=["text"]) .head(5) .append(data_doc_topic_words_weights.drop(columns=["text"]).tail(5)) .style.set_caption( "Document with its assigned topic and its top 10 TFIDF term and weights" ) ) ``` <a id="sanity-checks-of-intermediate-and-final-merged-data"></a> #### 3.4.4. 
[Sanity Checks of Intermediate and Final Merged Data](#sanity-checks-of-intermediate-and-final-merged-data) Verify that the `DataFrame` of all term weights contains the correct number of topics ``` display(topic_words_weights.head(2).append(topic_words_weights.tail(2))) assert len(topic_words_weights) == best_num_topics ``` Check that the first row of the `DataFrame` with the top `n` words per topic gives the specified number of words ``` display(df_topic_words.iloc[[0]]) expectd = df_topic_words.iloc[0, 1:-1].tolist() top_ten_words = topic_words_weights.iloc[0].nlargest(n_top_words).index.tolist() assert top_ten_words == expectd ``` Verify that the URL column in the `DataFrame` with processed news article URL and associated topic number matches the URL column from the processed data ``` display(df_doc_topic.head(2).append(df_doc_topic.tail(2))) assert df_doc_topic["url"].tolist() == df_guardian["url"].tolist() ``` From the `DataFrame` with the terms, term weights and processed data columns (URL, text, publication date, etc.), verify that merging has been performed correctly by checking the following - all URLs in the URL column match the URLs column from the processed data - (OPTIONAL) repeat this for a random sample of the URLs - the dates are monotonically increasing (as is the case with the processed data) Checking the full URL column ``` display( data_doc_topic_words_weights.head(2).append(data_doc_topic_words_weights.tail(2)) ) urls_from_data = data_doc_topic_words_weights["url"].tolist() assert urls_from_data == df_guardian["url"].tolist() ``` Checking the date column ``` data_pubs_dates = data_doc_topic_words_weights["publication_date"] assert pd.to_datetime(data_pubs_dates).is_monotonic ``` Checking a sample of the URL column ``` sampled_article_urls = data_doc_topic_words_weights.sample(155)["url"] data_web_urls = df_guardian.iloc[sampled_article_urls.index]["url"] assert sampled_article_urls.tolist() == data_web_urls.tolist() ``` <a id="overview-of-deployment-considerations"></a> ### 3.5. [Overview of Deployment Considerations](#overview-of-deployment-considerations) Up to this point, we have only discussed ML model training. We'll refer to this as the initial model training as this is the first training run to learn topics from the retrieved news articles. News articles have been published on the Guardian website since the last such article used in this initial training run. So, the training data in the next ML model training run (or re-training) will be expanded to include those additional news articles. We will deploy the initially trained model to production. For dealing with new news articles (retrieved after the initial training), two options might be considered - when a re-trained model is available, having been trained on expanded data (initial training data and new data), we could replace the deployed model with the newly trained one - ignore the new news articles and just make predictions based on the initially deployed model The first approach has the benefit of alerting us to ML model drift relative to the new training dataset. In production, we might get to a point where the initially learned number of topics no longer adequately represents the data used in re-training. In such a situation, if we have evidence of model drift from the re-trained model's predictions, then we can take an appropriate action to inspect the new model performance and maybe even manually re-train the model. 
Here, we'll use this approach and re-run ML model training when every new news article is added to our data. Based on the performance of this re-trained model, we'll either - deploy it to production and overwrite the existing model, or - manually inspect the re-trained model's performance and decide if the model performance has indeed drifted relative to the initial training run <a id="evaluate"></a> ### 3.6. [Evaluate](#evaluate) <a id="get-descriptive-statistics-for-topic-residuals"></a> #### 3.6.1. [Get Descriptive Statistics for Topic Residuals](#get-descriptive-statistics-for-topic-residuals) For the current project's workflow, when we re-train the ML model using more training data (new published news articles), the following could change - the best coherence score - the number of topics that produce the best coherence score - a selection of the descriptive statistics (min, median, max, etc.) of the residuals for each topic and we will need to handle these changes. Just because the best coherence score has changed, should we consider the re-trained ML model, its predicted number of topics and the descriptive statistics for the new topic residuals to be better? The approach that will be used to handle this will be to specify an acceptable threshold for - the best coherence score - a selection of the topic-wise descriptive statistics across all the topics during re-training, compared to the initial run. If the re-trained versions of these two statistics are within the acceptable threshold, and if the best number of topics matches the number learned from initial model training, then we'll replace the deployed model by the version which was re-trained on a larger number of news articles. ``` # Get residual statistics for each topic df_residuals_stats = get_topic_residual_stats(data_doc_topic_words_weights).drop( columns=[ "url_nunique", "publication_date_min", "publication_date_max", "article_chars_min", "article_chars_max", ] ) display(df_residuals_stats) ``` **Observations** 1. The number of articles within each topic is not of use to the trained model when making predictions on unseen news articles. The absolute number of articles within each topic (shown above as the `news_articles` column) or the relative number can justifiably differ between the training and unseen data: a topic may have been written about heavily due to news-making events, but interest in it might reasonably drop over time. 2. Similar reasoning as that for the `news_articles` column can be applied to the start and end dates of news article publication - these aren't of much use when predicting the topic of unseen news articles since there might reasonably be a long gap (in dates) between successive articles within a topic. For unseen news articles, the publication date on the Guardian's website will always correspond to a date that occurs later than the `end_date` from the training data. However, we can't do much with this information. 3. The training data required news article length greater than 500 characters, including whitespaces. This can be enforced for unseen data as well. Shorter unseen news articles can't be used and will have to return 10 dummy terms and term weights (eg. -999, etc.). However, the min and max of the unseen news article, excluding spaces, is more challenging to enforce. 
This could be approximated for unseen news articles and could be checked to fall within the range of the minimum and maximum of the corresponding values from the training data (shown above in the `min_length` and `max_length` columns respectively), including a buffer. We could assume this buffer to be, for example, 10 percent to allow for slightly longer or shorter news articles, or consider a different buffer for each topic. The choice of the length of the buffer is subjective and we could be filtering out too many valid news articles. So, for this iteration of using a trained model, we won't filter unseen news articles based on the length of the article excluding whitespaces. For the initial run, export the residual stats file. This will be considered as the reference file against which every subsequent ML model re-training run's residual statistics will be compared ``` train_res_stats_fname = "training_res_stats.csv" res_stats_output_dir_run = "./residual_stats/" local_res_stats_path = os.path.join(PROJ_ROOT_DIR, train_res_stats_fname) df_residuals_stats.to_csv(local_res_stats_path, index=False) ``` Now, export a copy of the residual stats file from the current ML model re-training run. This file will have (a) the current run ID and (b) the number of news articles used to train the model included in its filename - since this is also the initial training run, we will take the initial training run stats as the re-training file ``` fname, _ = os.path.splitext(train_res_stats_fname) os.makedirs(res_stats_output_dir_run, exist_ok=True) # run_id_0, as this is the initial run fsuffix = f"__run_id_0__num_articles_train_{len(df_guardian)}" saved_res_stats_path = os.path.join(res_stats_output_dir_run, fname + fsuffix + ".csv") print(saved_res_stats_path) df_residuals_stats.to_csv(saved_res_stats_path, index=False) ``` <a id="assess-re-trained-ml-model-performance"></a> #### 3.6.2. [Assess Re-Trained ML Model Performance](#assess-re-trained-ml-model-performance) Now, in a situation where we are considering the model trained by the current run, we will start by loading the reference residual statistics file against which the current run's stats will be compared ``` df_residuals_stats_training = pd.read_csv( os.path.join(PROJ_ROOT_DIR, "training_res_stats.csv") ) ``` Assess the residual stats from the current run by comparing them against the reference. 
This will create the `min_resid_quality` and `median_resid_quality` columns, which will contain the following, on a per-topic basis - `improved`, if the re-trained model residuals are between -5% and 0.00001% of the reference stats - `acceptable`, if the re-trained model residuals are between 0.00001% and 5% of the reference stats - `poor`, if the re-trained model residuals are less than or equal to -5% or greater than or equal to 5% of the reference stats - greater than 5% is poor since this indicates articles have become more loosely related to each other and we don't understand why (possibly the new training dataset covers more topics than we determined) - less than -5% is poor since this indicates articles have become more tightly related to each other and we don't understand why (possibly the new training dataset covers fewer topics than we determined) ``` df_residuals_stats, df_topic_word_weights_top_ten = assess_topic_residual_stats( df_residuals_stats_training, df_residuals_stats, df_topic_word_weights_top_ten ) display(df_residuals_stats) display( df_topic_word_weights_top_ten.head().append(df_topic_word_weights_top_ten.tail()) ) ``` **Notes** 1. The `_quality` columns are also added to the top 10 term weights (TFIDF scores) for each topic. <a id="register"></a> ### 3.7. [Register](#register) Now, we'll determine if the re-trained ML model, from the current run, should be deployed to replace the existing deployed model. First, we'll calculate the number of poor topic residuals from the current run based on how residuals are assigned `poor` or `acceptable` ``` # Get number of poor or acceptable topic residual stats num_poor_topic_resids = sum( { f"{stat_name}_{quality}": df_residuals_stats[ df_residuals_stats[f"{stat_name}_resid_quality"] == quality ]["topic_num"].nunique() for stat_name in ["min", "median"] for quality in ["poor", "acceptable"] }.values() ) print(num_poor_topic_resids) ``` Next, we'll calculate the following - best number of topics from the current re-training run - percent difference between the coherence scores using - our manually determined best number of topics - best number of topics from the current re-training run ``` num_topics_manually = 35 if not ci_run: # Summarize coherence scores pdiff, df_coherences, best_n_topics = compare_best_num_topics_manual( df_coherence_scores.reset_index(), num_topics_manually ) bt_mask = df_coherences["num_topics"] == best_num_topics best_score = df_coherences.loc[bt_mask, "coherence"].iloc[0] print(best_score) ``` Summarize the performance of the model, including checking for each of the above three conditions, in a dictionary below ``` threshold_poor_topic_resids = 10 max_pct_diff_coh = 5 if not ci_run: # Create model properties n_topics_drift = num_topics_manually != best_n_topics coh_drift = abs(pdiff) > max_pct_diff_coh res_drift = num_poor_topic_resids > threshold_poor_topic_resids properties = { "best_n_topics": best_n_topics, "n_topics_drift": n_topics_drift, "coherence_for_best_n_topics": best_score, "pdiff": pdiff, "max_pct_diff_coherence": max_pct_diff_coh, "coherence_drift": coh_drift, "num_poor_topic_resids": num_poor_topic_resids, "threshold_poor_topic_resids": threshold_poor_topic_resids, "residual_drift": res_drift, "model_drift": n_topics_drift or coh_drift or res_drift, "num_articles_train": len(df_guardian), } display(pd.DataFrame.from_dict(properties, orient="index").T) ``` **Note** 1. 
If each subsequent run's residual statistics iteratively drift relative to the previously deployed model's statistics, then the updated model's residual statistics (now to be used in production) might be significantly different from those of the initial training run. For this reason, the initial reference residual statistics file will NEVER be over-written. The residual statistics of all subsequent training runs will simply be compared against the (reference) residual statistics of this initial run in order to determine if the current run's model will be deployed to replace the previously deployed model. In this way, every subsequent run will have a dictionary associated with it and the dictionary contents will reflect the new model residual statistics' performance relative to the reference statistics. These statistics will be used next to determine if the current training run's model should be deployed or not. <a id="deployment"></a> ### 3.8. [Deployment](#deployment) If any of the following conditions are met - coherence drift: the percent difference between coherence scores from the current and reference training run is greater than a pre-defined threshold - topics drift: the manually determined best number of topics is not equal to the best number of topics from the current run - residual drift: the percent difference between topic residuals from the current run's trained model and the reference training run is greater than a pre-defined threshold then this indicates that our re-trained ML model (from the current run) has drifted relative to our initial model training and we - won't deploy and use the re-trained model in place of the model deployed from the initial training run - will send an email indicating that a manual analysis of ML model re-training on the new dataset (including the newly retrieved news articles since the initial model training) is required ``` props_of_interest = ["n_topics_drift", "coherence_drift", "residual_drift"] if not ci_run: drift_props = [properties[prop] for prop in props_of_interest] if sum(drift_props) == 0: print("Re-Trained performance improved. Updating model.") else: print("Re-Trained performance became worse. Not updating model.") ``` **Notes** 1. None of the three drift conditions will be met for the initial training run. This will allow the initial ML model deployment to proceed. --- <span style="float:left"> <a href="./8_gensim_coherence_nlp_trials.ipynb"><<< 8 - Gensim Topic Modeling</a> </span> <span style="float:right"> &#169; 2021 | <a href="https://github.com/edesz/streetcar-delays">@edesz</a> (MIT) </span>
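The deployment gate described above reduces to three boolean drift flags. Below is a minimal sketch of that decision as a standalone function; `should_deploy` is a hypothetical helper, not part of the notebook, and the thresholds mentioned in the comments simply mirror the values used in the evaluation step.

```
# Sketch (hypothetical helper, not part of the notebook): the deployment
# gate described above reduced to a single function over the `properties`
# dict built during evaluation.
def should_deploy(properties,
                  drift_keys=("n_topics_drift", "coherence_drift", "residual_drift")):
    """Return True only if no drift flag is set for the current run."""
    return not any(bool(properties[key]) for key in drift_keys)

# Example with the flags an initial training run would produce
example_properties = {
    "n_topics_drift": False,   # best number of topics matches the manual choice
    "coherence_drift": False,  # |pdiff| <= max_pct_diff_coherence (5%)
    "residual_drift": False,   # poor/acceptable topic residuals <= threshold (10)
}
if should_deploy(example_properties):
    print("Re-trained performance acceptable. Updating deployed model.")
else:
    print("Model drift detected. Flag run for manual review.")
```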
github_jupyter
%load_ext lab_black %load_ext autoreload %autoreload 2 import os from glob import glob from io import StringIO import numpy as np import pandas as pd from azure.storage.blob import BlobServiceClient from gensim.corpora.dictionary import Dictionary from sklearn.decomposition import NMF from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.pipeline import Pipeline %aimport src.data_helpers from src.data_helpers import load_data %aimport src.extraction_helpers from src.extraction_helpers import get_top_n_most_freq_words %aimport src.hybrid_helpers from src.hybrid_helpers import get_nmf_coherence_scores %aimport src.processing_helpers from src.processing_helpers import process_text PROJ_ROOT_DIR = os.getcwd() data_dir = os.path.join(PROJ_ROOT_DIR, "data", "raw") processed_data_dir = os.path.join(PROJ_ROOT_DIR, "data", "processed") # General inputs cloud_data = True # Topic naming topic_nums = list(range(10, 45 + 5, 5)) n_top_words = 10 unwanted_guardian_cols = [ "webTitle", "id", "sectionId", "sectionName", "type", "isHosted", "pillarId", "pillarName", "page", "document_type", "apiUrl", "publication", "year", "month", "day", "dayofweek", "dayofyear", "weekofyear", "quarter", ] ci_run = False def get_top_words_per_topic(row, n_top_words=5): return row.nlargest(n_top_words).index.tolist() def get_topic_residual_stats(data_doc_topic_words_weights): df_residuals_stats = data_doc_topic_words_weights.groupby("topic_num").agg( { "topic_num_resid": ["min", "mean", "median", "max"], "url": "nunique", "publication_date": ["min", "max"], "article_chars": ["min", "max"], } ) df_residuals_stats.columns = df_residuals_stats.columns.map("_".join) df_residuals_stats = df_residuals_stats.reset_index().sort_values( by=["topic_num"], ignore_index=True ) return df_residuals_stats def assess_topic_residual_stats( df_residuals_stats_training, df_residuals_stats, df_topic_word_weights_top_ten ): for stat_name in ["min", "median", "max"]: curr_bound = df_residuals_stats[f"topic_num_resid_{stat_name}"] train_bound = df_residuals_stats_training[f"topic_num_resid_{stat_name}"] abs_diff_bound = curr_bound - train_bound df_residuals_stats[f"{stat_name}_resid_diff"] = ( 100 * abs_diff_bound / train_bound ) values = ["improved", "acceptable"] for stat_name in ["min", "median"]: res_stats = df_residuals_stats[f"{stat_name}_resid_diff"] # > 5 is poor since this indicates articles have become more # loosely related to eachother and we don't understand why # < -5 is poor since this indicates articles have become more # tightly related to eachother and we don't understand why criteria = [ all(res_stats < 0.00001) and all(res_stats >= -5), res_stats.between(0.00001, 5), ] # < -5 or > 5 will be assigned a (default) value of 'poor' df_residuals_stats[f"{stat_name}_resid_quality"] = np.select( criteria, values, "poor" ) cols = ["topic_num", "min_resid_quality", "median_resid_quality"] df_topic_word_weights_top_ten = df_topic_word_weights_top_ten.merge( df_residuals_stats[cols], on="topic_num", ).sort_values(by="topic_num", ignore_index=True) return [df_residuals_stats, df_topic_word_weights_top_ten] def calc_diff(f, s): return ((s - f) / f) * 100 def compare_best_num_topics_manual(df_coherences, num_topics_manually): best_num_topics = df_coherences.set_index("num_topics").squeeze().idxmax() bt_mask = df_coherences["num_topics"] == best_num_topics ntm_mask = df_coherences["num_topics"] == num_topics_manually score_w_man_num_topics = df_coherences.loc[ntm_mask, "coherence"].iloc[0] best_score = 
df_coherences.loc[bt_mask, "coherence"].iloc[0] pdiff = calc_diff(best_score, score_w_man_num_topics) df_coherences["ratio_to_best"] = df_coherences["coherence"] / best_score return [pdiff, df_coherences, best_num_topics] # General inputs az_storage_container_name = "myconedesx7" # Guardian Filenames # # Cloud-based files guardian_inputs = { "blobedesz21": "urls", "blobedesz19": "text1", "blobedesz20": "text2", } conn_str = ( "DefaultEndpointsProtocol=https;" f"AccountName={os.getenv('AZURE_STORAGE_ACCOUNT')};" f"AccountKey={os.getenv('AZURE_STORAGE_KEY')};" f"EndpointSuffix={os.getenv('ENDPOINT_SUFFIX')}" ) blob_service_client = BlobServiceClient.from_connection_string(conn_str=conn_str) %%time df_guardian = load_data( cloud_data, data_dir, "", "", guardian_inputs, blob_service_client, az_storage_container_name, unwanted_guardian_cols, ) df_guardian["year"] = pd.to_datetime(df_guardian["publication_date"]).dt.year df_guardian["article_chars"] = df_guardian["text"].str.split().str.len() %%time texts = df_guardian["text"].apply(process_text) %%time if not ci_run: # Create Dictionary dictionary = Dictionary(texts) # Remove extreme values dictionary.filter_extremes( no_below=3, # default = 5 no_above=0.85, # default is 0.5 keep_n=5_000, # default is 100_000 ) # Term Document Frequency for corpus corpus = [dictionary.doc2bow(text) for text in texts] !pygmentize src/hybrid_helpers.py %%time if not ci_run: # For each specified number of topics, run NMF and calculate topic coherence topic_coherence_scores = [ get_nmf_coherence_scores(corpus, texts, n, dictionary) for n in topic_nums ] # Extract coherence score for each number of topics tried df_coherence_scores = ( pd.DataFrame.from_dict(dict(zip(topic_nums, topic_coherence_scores)), orient="index") .reset_index() .rename(columns={"index": "num_topics", 0: "coherence"}) .set_index("num_topics") ) display(df_coherence_scores) best_num_topics = 35 %%time vectorizer = TfidfVectorizer( tokenizer=None, # default is None stop_words=None, # default is None lowercase=True, # default is True ngram_range=(1, 2), # default is (1, 2) max_df=0.85, # default is 1.0 min_df=3, # default is 1 max_features=5000, # default is None preprocessor=" ".join, # default is None binary=False, # default is False strip_accents=None, # default is None # token_pattern='(?u)\\b\\w\\w+\\b', # default is '(?u)\\b\\w\\w+\\b' ) sk_nmf = NMF( n_components=best_num_topics, solver="cd", # default is "cd" init="nndsvd", # default is None, "nnsvd" = Nonnegative Double Singular Value Decomposition max_iter=500, # default is 200 l1_ratio=0.0, # default is 0.0 alpha=0.0, # default is 0.0 tol=0.0001, # default is 0.0001 random_state=42, ) pipe = Pipeline([("vectorizer", vectorizer), ("nmf", sk_nmf)]) doc_topic = pipe.fit_transform(texts) A = pipe.named_steps["vectorizer"].transform(texts) W = pipe.named_steps["nmf"].components_ H = pipe.named_steps["nmf"].transform(A) # Get row-wise (topic-wise) weights topic_words_weights = pd.DataFrame( W, index=[str(k) for k in range(best_num_topics)], columns=pipe.named_steps["vectorizer"].get_feature_names(), ) display( topic_words_weights.head(2) .append(topic_words_weights.tail(2)) .sample(15, axis=1) .style.set_caption( "Selection of 25 terms and their weights, or TFIDF scores, for " f"each of the {best_num_topics} topics" ) ) # Get top 10 weights by topic df_topic_words = ( pd.DataFrame( topic_words_weights.apply( lambda x: get_top_words_per_topic(x, n_top_words), axis=1 ).tolist(), index=topic_words_weights.index, ) .reset_index() 
.rename(columns={"index": "topic"}) .assign(topic_num=range(best_num_topics)) ).astype({"topic": int}) display( df_topic_words.style.set_caption( f"Top 10 terms, by TFIDF score, for each of the {best_num_topics} topics" ) ) # Convert doc-topic matrix into DataFrame and append url column # - this gives the most popular topic for each article df_doc_topic = ( pd.DataFrame(doc_topic) .idxmax(axis=1) .rename("topic_num") .to_frame() .assign(url=df_guardian["url"].tolist()) ).astype({"topic_num": int}) display(df_doc_topic.head()) # Merge doc-topic DataFrame with top 10 terms by topic df_doc_topic_words = df_doc_topic.merge( df_topic_words, on="topic_num", how="left" ).astype({"topic_num": int}) display( df_doc_topic_words.head() .append(df_doc_topic_words.tail()) .style.set_caption( "Document URL, its topic and its top 10 TFIDF terms, shown for " f"each of the {best_num_topics} topics" ) ) data_doc_topic_words = ( df_guardian.merge(df_doc_topic_words, on=["url"], how="left") .astype({"topic_num": int}) .rename(columns={c: f"term_{c}" for c in range(0, 10 + 1)}) ) display( data_doc_topic_words.drop(columns=["text"]) .head(3) .append(data_doc_topic_words.drop(columns=["text"]).tail(3)) .style.set_caption("Document with its assigned topic and its top 10 TFIDF terms") ) # Calculate residual for each topic r = np.zeros(A.shape[0]) for row in range(A.shape[0]): r[row] = np.linalg.norm(A[row, :] - H[row, :].dot(W), "fro") data_doc_topic_words["topic_num_resid"] = r # For each topic, get top 10 terms and their weights df_topic_word_weights_top_ten = ( ( topic_words_weights.groupby(topic_words_weights.index) .apply(lambda x: x.iloc[0].nlargest(n_top_words)) .reset_index() .rename(columns={"level_0": "topic_num", "level_1": "term", 0: "weight"}) ) .astype({"topic_num": int}) .sort_values(by=["topic_num"]) ) display( df_topic_word_weights_top_ten.head(5) .append(df_topic_word_weights_top_ten.tail(5)) .style.set_caption( f"Term and TFIDF term scores, for each of the {best_num_topics} topics" ) ) # Add weights to merged doc-topic--top-10-terms DataFrame df_word_weights = ( pd.DataFrame( df_topic_word_weights_top_ten.groupby("topic_num")["weight"] .apply(list) .tolist(), columns=[f"term_{w}_weight" for w in range(0, 9 + 1)], ) .reset_index() .rename(columns={"index": "topic_num"}) .astype({"topic_num": int}) ) display( df_word_weights.head(5) .append(df_word_weights.tail(5)) .style.set_caption( f"Term and TFIDF term scores, for each of the {best_num_topics} topics" ) ) data_doc_topic_words_weights = data_doc_topic_words.merge( df_word_weights, on="topic_num", how="left" ) display( data_doc_topic_words_weights.drop(columns=["text"]) .head(5) .append(data_doc_topic_words_weights.drop(columns=["text"]).tail(5)) .style.set_caption( "Document with its assigned topic and its top 10 TFIDF term and weights" ) ) display(topic_words_weights.head(2).append(topic_words_weights.tail(2))) assert len(topic_words_weights) == best_num_topics display(df_topic_words.iloc[[0]]) expectd = df_topic_words.iloc[0, 1:-1].tolist() top_ten_words = topic_words_weights.iloc[0].nlargest(n_top_words).index.tolist() assert top_ten_words == expectd display(df_doc_topic.head(2).append(df_doc_topic.tail(2))) assert df_doc_topic["url"].tolist() == df_guardian["url"].tolist() display( data_doc_topic_words_weights.head(2).append(data_doc_topic_words_weights.tail(2)) ) urls_from_data = data_doc_topic_words_weights["url"].tolist() assert urls_from_data == df_guardian["url"].tolist() data_pubs_dates = 
data_doc_topic_words_weights["publication_date"] assert pd.to_datetime(data_pubs_dates).is_monotonic sampled_article_urls = data_doc_topic_words_weights.sample(155)["url"] data_web_urls = df_guardian.iloc[sampled_article_urls.index]["url"] assert sampled_article_urls.tolist() == data_web_urls.tolist() # Get residual statistics for each topic df_residuals_stats = get_topic_residual_stats(data_doc_topic_words_weights).drop( columns=[ "url_nunique", "publication_date_min", "publication_date_max", "article_chars_min", "article_chars_max", ] ) display(df_residuals_stats) train_res_stats_fname = "training_res_stats.csv" res_stats_output_dir_run = "./residual_stats/" local_res_stats_path = os.path.join(PROJ_ROOT_DIR, train_res_stats_fname) df_residuals_stats.to_csv(local_res_stats_path, index=False) fname, _ = os.path.splitext(train_res_stats_fname) os.makedirs(res_stats_output_dir_run, exist_ok=True) # run_id_0, as this is the initial run fsuffix = f"__run_id_0__num_articles_train_{len(df_guardian)}" saved_res_stats_path = os.path.join(res_stats_output_dir_run, fname + fsuffix + ".csv") print(saved_res_stats_path) df_residuals_stats.to_csv(saved_res_stats_path, index=False) df_residuals_stats_training = pd.read_csv( os.path.join(PROJ_ROOT_DIR, "training_res_stats.csv") ) df_residuals_stats, df_topic_word_weights_top_ten = assess_topic_residual_stats( df_residuals_stats_training, df_residuals_stats, df_topic_word_weights_top_ten ) display(df_residuals_stats) display( df_topic_word_weights_top_ten.head().append(df_topic_word_weights_top_ten.tail()) ) # Get number of poor or acceptable topic residual stats num_poor_topic_resids = sum( { f"{stat_name}_{quality}": df_residuals_stats[ df_residuals_stats[f"{stat_name}_resid_quality"] == quality ]["topic_num"].nunique() for stat_name in ["min", "median"] for quality in ["poor", "acceptable"] }.values() ) print(num_poor_topic_resids) num_topics_manually = 35 if not ci_run: # Summarize coherence scores pdiff, df_coherences, best_n_topics = compare_best_num_topics_manual( df_coherence_scores.reset_index(), num_topics_manually ) bt_mask = df_coherences["num_topics"] == best_num_topics best_score = df_coherences.loc[bt_mask, "coherence"].iloc[0] print(best_score) threshold_poor_topic_resids = 10 max_pct_diff_coh = 5 if not ci_run: # Create model properties n_topics_drift = num_topics_manually != best_n_topics coh_drift = abs(pdiff) > max_pct_diff_coh res_drift = num_poor_topic_resids > threshold_poor_topic_resids properties = { "best_n_topics": best_n_topics, "n_topics_drift": n_topics_drift, "coherence_for_best_n_topics": best_score, "pdiff": pdiff, "max_pct_diff_coherence": max_pct_diff_coh, "coherence_drift": coh_drift, "num_poor_topic_resids": num_poor_topic_resids, "threshold_poor_topic_resids": threshold_poor_topic_resids, "residual_drift": res_drift, "model_drift": n_topics_drift or coh_drift or res_drift, "num_articles_train": len(df_guardian), } display(pd.DataFrame.from_dict(properties, orient="index").T) props_of_interest = ["n_topics_drift", "coherence_drift", "residual_drift"] if not ci_run: drift_props = [properties[prop] for prop in props_of_interest] if sum(drift_props) == 0: print("Re-Trained performance improved. Updating model.") else: print("Re-Trained performance became worse. Not updating model.")
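As a quick illustration of how the coherence-drift check above behaves, here is a toy call to `compare_best_num_topics_manual` with made-up coherence values (the numbers are hypothetical, not outputs of this run):

```
# Toy coherence table, for illustration only
df_toy = pd.DataFrame({"num_topics": [10, 15, 20], "coherence": [0.40, 0.52, 0.47]})
pdiff_toy, df_toy, best_n_toy = compare_best_num_topics_manual(df_toy, num_topics_manually=20)
print(best_n_toy)  # 15, the row with the highest coherence
print(pdiff_toy)   # about -9.6; abs(pdiff) > max_pct_diff_coh (5) would flag coherence drift
```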
# FK Processing (CLI) A series of steps illustrating how to begin processing infrasonic data utilizing the command line methods of Infrapy ## Building a database for processing The CLI methods in Infrapy depend on a relational SQL database for pointing to data for processing and storing processing results. In this tutorial, we will build a SQLite database out of SAC files provided in the /test folder, and begin FK processing. ``` !pisces sac2db sqlite:///test.sqlite ../../test/data/cli/*.SAC ``` As infrapy is an array processing tool, after your sqlite database is created, you will need to update the REFSTA for each array using update_refsta.py ``` %run ../../scripts/update_refsta.py sqlite:///test.sqlite FSU %run ../../scripts/update_refsta.py sqlite:///test.sqlite HWU %run ../../scripts/update_refsta.py sqlite:///test.sqlite LCM %run ../../scripts/update_refsta.py sqlite:///test.sqlite PSU %run ../../scripts/update_refsta.py sqlite:///test.sqlite WMU mv sqlite:///test.sqlite ``` ## Updating Configuration File for Processing An example configuration file is provided in the /test folder. A full description of each section of the configuration file is provided in the documentation. The configuration file outlines the database you wish to process in, the specific parameters for processing and the station [or network for assocation processing] to use. For simplicity within this tutorial, since we have loaded data from five separate arrays into the database, we will call processing in the same database with five different files. ### Viewing the configuration file We can use the %load command to view a configuration file within this notebook. ``` # %load ../../test/db_processing_test/config_example.txt [database] # required url = sqlite:///test_detect.sqlite site = pisces.tables.css3:Site wfdisc = pisces.tables.css3:Wfdisc affiliation = pisces.tables.css3:Affiliation [GeneralParams] year=2012 dayofyearini=206 dayofyearend=208 station=BRP channel=EDF name=test cpucnt=30 [FKParams] name=mid band fk test freqmin=1 freqmax=3.0 beamwinlen=60 beamwinstep=30 backazmin=-180.0 backazmax=180.0 backazstep=1.5 trvelmin=300.0 trvelmax=600.0 trvelstep=2.5 beammethod=bartlett fkresults=fk_res_lcm numsources = 1 func_fk = None [FDetectParams] detwinlen=1800.0 detthresh=0.99 dsegmin=5 back_az_lim=15 detmethod=fstat tb_prod=4000 pfkid=0 fkresults=fk_res_brp fdresults=fd_res_brp [AssocLocParams] network=YJ pfdetectid=0 pfkid=0 beamwidth=10.0 rangemax=1000.0 distmax = 10.0 clusterthresh=4.0 trimthresh=3.0 eventdetmin=3 eventarrmin=2 duration = 60 fdtable_1=fd_res_brp fdtable_2=fd_res_fsu fdtable_3=fd_res_hwu resultstable = test_assoc6 # %load ../../test/db_processing_test/config_example.txt %%writefile ../../test/db_processing_test/config_example.txt # %load ../../test/db_processing_test/config_example.txt %%writefile ../../test/db_processing_test/config_example.txt [database] url = sqlite:///test_detect.sqlite site = pisces.tables.css3:Site wfdisc = pisces.tables.css3:Wfdisc affiliation = pisces.tables.css3:Affiliation [GeneralParams] year=2012 dayofyearini=206 dayofyearend=208 station=FSU channel=EDF name=test cpucnt=30 [FKParams] name=mid band fk test freqmin=.5 freqmax=5.0 beamwinlen=60 beamwinstep=30 backazmin=-180.0 backazmax=180.0 backazstep=1.5 trvelmin=300.0 trvelmax=600.0 trvelstep=2.5 beammethod=bartlett fkresults=fk_res_fsu numsources = 1 func_fk = None [FDetectParams] detwinlen=1800.0 detthresh=0.99 dsegmin=5 back_az_lim=15 detmethod=fstat tb_prod=4000 pfkid=0 fkresults=fk_res_brp fdresults=fd_res_brp 
[AssocLocParams]
network=YJ
pfdetectid=0
pfkid=0
beamwidth=10.0
rangemax=1000.0
distmax = 10.0
clusterthresh=4.0
trimthresh=3.0
eventdetmin=3
eventarrmin=2
duration = 60
fdtable_1=fd_res_brp
fdtable_2=fd_res_fsu
fdtable_3=fd_res_hwu
resultstable = test_assoc6
```

## Running CLI FK Processing

```
!infrapy run_fk --config_file ../../test/db_processing_test/config_example.txt

## Print FK Results

## Plot FK Results

## Change FK parameters, re-run processing

## Change station and results table in config file, re-run processing
```
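For the "Print FK Results" step above, `run_fk` writes its output to the table named by `fkresults` in the configuration (here `fk_res_fsu`) inside the SQLite database given by `url`. The exact schema is not shown in this tutorial, so a minimal, Infrapy-agnostic way to peek at it is plain SQLite plus pandas (the database and table names below are taken from the config file and are assumptions about this particular run):

```
import sqlite3
import pandas as pd

con = sqlite3.connect("test_detect.sqlite")
# List all tables the processing created, then preview the FK results table
print(pd.read_sql("SELECT name FROM sqlite_master WHERE type='table'", con))
print(pd.read_sql("SELECT * FROM fk_res_fsu LIMIT 5", con))
con.close()
```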
``` #Task 1 Milestone 3 import pandas as pd import matplotlib.pyplot as plt output = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", sep=",") cleaned_output = output.copy() cleaned_output = cleaned_output.copy().drop([' 77516',' 13',' 2174',' 0', ' Not-in-family', ' 40'],axis=1) cleaned_output = cleaned_output.rename(columns={'39': 'Age', ' State-gov': 'Industry', ' Bachelors': 'Education'}) cleaned_output = cleaned_output.rename(columns={' Never-married': 'Marital Stat', ' Adm-clerical': 'Position'}) cleaned_output = cleaned_output.rename(columns={' White': 'Race', ' Male': 'Gender', ' United-States': 'Country', ' <=50K':'Income'}) cleaned_output = cleaned_output.drop_duplicates() out = cleaned_output[cleaned_output['Gender'] == ' Male'] out.Income.value_counts().plot(kind='bar', figsize=(15,5)) plt.title("Male Income Spread") plt.xlabel("Income") plt.ylabel("Count") #Interesting - More people earn less than 50k than people who earn greater than 50k out = cleaned_output[cleaned_output['Gender'] == ' Female'] out.Income.value_counts().plot(kind='bar', figsize=(15,5)) plt.title("Female Income Spread") plt.xlabel("Income") plt.ylabel("Count") #Interesting - More people earn less than 50k than people who earn greater than 50k out2 = cleaned_output[cleaned_output['Gender'] == ' Male'] out2 = out2.sort_values('Age', ascending = True) out2 = out2.reset_index() out2 = out2.drop(columns=['index']) out2.Education.value_counts().plot(kind='bar', figsize=(15,5)) plt.title("Male Education Spread") plt.xlabel("Education") plt.ylabel("Count") #Interesting - This dataset appears to get most of it's information from people in their 30s. out3 = cleaned_output[cleaned_output['Gender'] == ' Female'] out3 = out3.sort_values('Age', ascending = True) out3 = out3.reset_index() out3 = out3.drop(columns=['index']) out3.Education.value_counts().plot(kind='bar', figsize=(15,5)) plt.title("Female Education Spread") plt.xlabel("Education") plt.ylabel("Count") #Interesting - This dataset appears to get most of it's information from people in their 30s. ``` # Analysis Pipeline ### Steps to recreate what I got 1. Import pandas, and matplotlib 2. Load dataset using complete file location 3. Use commas as separator. 4. Remove unecessary columns. 5. Rename columns appropriately 6. Drop duplicates. 7. Bar graph male and female income count 8. Bar graph male and female education count. 9. load and process may show an error with the "to_csv" when file is first opened, please ignore this. ``` #Method chaining import pandas as pd def load_and_process(location): # Method chaining begins df = ( pd.read_csv(location, sep = ",") .drop([' 77516',' 13',' 2174',' 0', ' Not-in-family', ' 40'],axis=1) .rename(columns={'39': 'Age', ' State-gov': 'Industry', ' Bachelors': 'Education'}) .rename(columns={' Never-married': 'Marital Stat', ' Adm-clerical': 'Position'}) .rename(columns={' White': 'Race', ' Male': 'Gender', ' United-States': 'Country', ' <=50K':'Income'}) .drop_duplicates() .sort_values("Age", ascending=True) ) return df ``` # EDA When I first downloaded this dataset, I immediately knew that I was going to do something income related. However, what exactly, I didn't know. So, I tinkered around with the dataset and decided to explore the gender pay gap situation using the dataset. Now, while many factors play into the gender pay gap, I wanted to see if education plays a small role in this large problem. First I graphed the male income comparison, and then the female income comparison. 
As we see above, far fewer women than men make over 50k, which shows that the gender pay gap is reflected in this dataset. I then compared the education levels, and it is clear that fewer women attain the same level of education as men, which highlights faults in our education system. I also analyzed the genders separately, since otherwise the raw counts themselves could skew the comparison. From this, I hypothesize that education may play a role in the gender pay gap.
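As a quick check that the method-chained version reproduces the earlier findings, `load_and_process` can be pointed at the same UCI URL and the income split summarized per gender as proportions, so the different group sizes do not skew the comparison:

```
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
df = load_and_process(url)

# Share of each income bracket within each gender
income_by_gender = (
    df.groupby("Gender")["Income"]
    .value_counts(normalize=True)
    .rename("proportion")
    .reset_index()
)
print(income_by_gender)
```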
# DeepCross on Criteo Ad Dataset in TF 2.x ``` !pip install tensorflow==2.5.0 !pip install -q -U kaggle !pip install --upgrade --force-reinstall --no-deps kaggle !mkdir ~/.kaggle !cp /content/drive/MyDrive/kaggle.json ~/.kaggle/ !chmod 600 ~/.kaggle/kaggle.json !kaggle datasets download -d mrkmakr/criteo-dataset !unzip criteo-dataset.zip import os import pandas as pd import numpy as np from sklearn.preprocessing import LabelEncoder, KBinsDiscretizer from sklearn.model_selection import train_test_split import tensorflow as tf from tensorflow.keras import Model from tensorflow.keras.layers import Layer, Input, ReLU from tensorflow.keras.layers import Dense, Embedding, Dropout from tensorflow.keras.regularizers import l2 from tensorflow.keras.losses import binary_crossentropy from tensorflow.keras.callbacks import EarlyStopping from tensorflow.keras.optimizers import Adam from tensorflow.keras.metrics import AUC os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' os.environ['CUDA_VISIBLE_DEVICES'] = '0' file = 'dac/train.txt' read_part = True sample_num = 10000 test_size = 0.2 embed_dim = 8 dnn_dropout = 0.5 hidden_units = [256, 128, 64] learning_rate = 0.001 batch_size = 4096 epochs = 10 def sparseFeature(feat, feat_num, embed_dim=4): """ create dictionary for sparse feature :param feat: feature name :param feat_num: the total number of sparse features that do not repeat :param embed_dim: embedding dimension :return: """ return {'feat_name': feat, 'feat_num': feat_num, 'embed_dim': embed_dim} def denseFeature(feat): """ create dictionary for dense feature :param feat: dense feature name :return: """ return {'feat_name': feat} def create_criteo_dataset(file, embed_dim=8, read_part=True, sample_num=100000, test_size=0.2): """ a example about creating criteo dataset :param file: dataset's path :param embed_dim: the embedding dimension of sparse features :param read_part: whether to read part of it :param sample_num: the number of instances if read_part is True :param test_size: ratio of test dataset :return: feature columns, train, test """ names = ['label', 'I1', 'I2', 'I3', 'I4', 'I5', 'I6', 'I7', 'I8', 'I9', 'I10', 'I11', 'I12', 'I13', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11', 'C12', 'C13', 'C14', 'C15', 'C16', 'C17', 'C18', 'C19', 'C20', 'C21', 'C22', 'C23', 'C24', 'C25', 'C26'] if read_part: data_df = pd.read_csv(file, sep='\t', iterator=True, header=None, names=names) data_df = data_df.get_chunk(sample_num) else: data_df = pd.read_csv(file, sep='\t', header=None, names=names) sparse_features = ['C' + str(i) for i in range(1, 27)] dense_features = ['I' + str(i) for i in range(1, 14)] features = sparse_features + dense_features data_df[sparse_features] = data_df[sparse_features].fillna('-1') data_df[dense_features] = data_df[dense_features].fillna(0) # Bin continuous data into intervals. 
est = KBinsDiscretizer(n_bins=100, encode='ordinal', strategy='uniform') data_df[dense_features] = est.fit_transform(data_df[dense_features]) for feat in sparse_features: le = LabelEncoder() data_df[feat] = le.fit_transform(data_df[feat]) # ==============Feature Engineering=================== # ==================================================== feature_columns = [sparseFeature(feat, int(data_df[feat].max()) + 1, embed_dim=embed_dim) for feat in features] train, test = train_test_split(data_df, test_size=test_size) train_X = train[features].values.astype('int32') train_y = train['label'].values.astype('int32') test_X = test[features].values.astype('int32') test_y = test['label'].values.astype('int32') return feature_columns, (train_X, train_y), (test_X, test_y) class Residual_Units(Layer): """ Residual Units """ def __init__(self, hidden_unit, dim_stack): """ :param hidden_unit: A list. Neural network hidden units. :param dim_stack: A scalar. The dimension of inputs unit. """ super(Residual_Units, self).__init__() self.layer1 = Dense(units=hidden_unit, activation='relu') self.layer2 = Dense(units=dim_stack, activation=None) self.relu = ReLU() def call(self, inputs, **kwargs): x = inputs x = self.layer1(x) x = self.layer2(x) outputs = self.relu(x + inputs) return outputs class Deep_Crossing(Model): def __init__(self, feature_columns, hidden_units, res_dropout=0., embed_reg=1e-6): """ Deep&Crossing :param feature_columns: A list. sparse column feature information. :param hidden_units: A list. Neural network hidden units. :param res_dropout: A scalar. Dropout of resnet. :param embed_reg: A scalar. The regularizer of embedding. """ super(Deep_Crossing, self).__init__() self.sparse_feature_columns = feature_columns self.embed_layers = { 'embed_' + str(i): Embedding(input_dim=feat['feat_num'], input_length=1, output_dim=feat['embed_dim'], embeddings_initializer='random_uniform', embeddings_regularizer=l2(embed_reg)) for i, feat in enumerate(self.sparse_feature_columns) } # the total length of embedding layers embed_layers_len = sum([feat['embed_dim'] for feat in self.sparse_feature_columns]) self.res_network = [Residual_Units(unit, embed_layers_len) for unit in hidden_units] self.res_dropout = Dropout(res_dropout) self.dense = Dense(1, activation=None) def call(self, inputs): sparse_inputs = inputs sparse_embed = tf.concat([self.embed_layers['embed_{}'.format(i)](sparse_inputs[:, i]) for i in range(sparse_inputs.shape[1])], axis=-1) r = sparse_embed for res in self.res_network: r = res(r) r = self.res_dropout(r) outputs = tf.nn.sigmoid(self.dense(r)) return outputs def summary(self): sparse_inputs = Input(shape=(len(self.sparse_feature_columns),), dtype=tf.int32) Model(inputs=sparse_inputs, outputs=self.call(sparse_inputs)).summary() # ========================== Create dataset ======================= feature_columns, train, test = create_criteo_dataset(file=file, embed_dim=embed_dim, read_part=read_part, sample_num=sample_num, test_size=test_size) train_X, train_y = train test_X, test_y = test # ============================Build Model========================== mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): model = Deep_Crossing(feature_columns, hidden_units) model.summary() # =========================Compile============================ model.compile(loss=binary_crossentropy, optimizer=Adam(learning_rate=learning_rate), metrics=[AUC()]) # ============================model checkpoint====================== # check_path = 
'save/deep_crossing_weights.epoch_{epoch:04d}.val_loss_{val_loss:.4f}.ckpt' # checkpoint = tf.keras.callbacks.ModelCheckpoint(check_path, save_weights_only=True, # verbose=1, period=5) # ===========================Fit============================== model.fit( train_X, train_y, epochs=epochs, callbacks=[EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)], # checkpoint batch_size=batch_size, validation_split=0.1 ) # ===========================Test============================== print('test AUC: %f' % model.evaluate(test_X, test_y, batch_size=batch_size)[1]) ```
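Every `Residual_Units` block maps its input back to `dim_stack` before the skip connection, so the width of the stacked embeddings is preserved through the whole residual stack. A small sanity check (the batch size and width below are arbitrary; 312 matches this notebook's 39 features times `embed_dim` of 8):

```
# Shape-preservation check for a single residual block
x = tf.random.normal((4, 312))
block = Residual_Units(hidden_unit=256, dim_stack=312)
y = block(x)
print(x.shape, y.shape)  # both (4, 312): the residual add requires matching shapes
```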
# Sequential MNIST & Permuted Sequential MNIST ## Overview MNIST is a handwritten digit classification dataset (Lecun et al., 1998) that is frequently used to test deep learning models. In particular, sequential MNIST is frequently used to test a recurrent network's ability to retain information from the distant past (see paper for references). In this task, each MNIST image ($28 \times 28$) is presented to the model as a $748 \times 1$ sequence for digit classification. In the more chanllenging permuted MNIST (P-MNIST) setting, the order of the sequence is permuted at a (fixed) random order. **NOET**: - Because a TCN's receptive field depends on depth of the network and the filter size, we need to make sure that the model we used can cover the sequence length 784. - While this is a sequence model task, we only use the last output (i.e. at time T=784) for the eventual classification. ## Settings ``` import torch as th import torch.nn as nn import numpy as np import torch.nn.functional as F from tqdm.notebook import tqdm BATCH_SIZE = 128 DEVICE = "cuda:0" DROPOUT = 0.05 CLIP = -1 EPOCHS = 5 KSIZE = 7 LEVELS = 8 LR = 2e-3 OPTIM = "Adam" NHID = 30 SEED = 1111 PERMUTE = False DATA_ROOT = "/home/densechen/dataset" N_CLASSES = 10 INPUT_CHANNELS = 1 SEQ_LEN = int(784 / INPUT_CHANNELS) CHANNEL_SIZES = [NHID] * LEVELS th.manual_seed(SEED) ``` ## Data Generation ``` from torchvision import datasets, transforms def data_generator(): transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]) train_set = datasets.MNIST(DATA_ROOT, train=True, download=False, transform=transform) test_set = datasets.MNIST(DATA_ROOT, train=False, download=False, transform=transform) train_loader = th.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE) test_loader = th.utils.data.DataLoader(test_set, batch_size=BATCH_SIZE) return train_loader, test_loader print("Producing data...") train_loader, test_loader = data_generator() print("Finished.") ``` ## Build Model ``` from core.tcn import TemporalConvNet class TCN(nn.Module): def __init__(self, input_size, output_size, num_channels, kernel_size, dropout): super().__init__() self.tcn = TemporalConvNet(input_size, num_channels, kernel_size=kernel_size, dropout=dropout) self.linear = nn.Linear(num_channels[-1], output_size) def forward(self, inputs): # inputs shape: [N, C_in, L_in] y1 = self.tcn(inputs) o = self.linear(y1[..., -1]) return F.log_softmax(o, dim=1) print("Building model...") permute_idx = th.Tensor(np.random.permutation(784).astype(np.float64)).long() model = TCN(INPUT_CHANNELS, N_CLASSES, CHANNEL_SIZES, kernel_size=KSIZE, dropout=DROPOUT) model = model.to(DEVICE) optimizer = getattr(th.optim, OPTIM)(model.parameters(), lr=LR) model_size = sum(p.numel() for p in model.parameters()) print(f"Model Size: {model_size/1000} K") print("Finished.") ``` ## Run ``` def train(ep): model.train() process = tqdm(train_loader) for data, target in process: data, target = data.to(DEVICE), target.to(DEVICE) data = data.view(-1, INPUT_CHANNELS, SEQ_LEN) if PERMUTE: data = data[:, :, permute_idx] optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() if CLIP > 0: th.nn.utils.clip_grad_norm_(model.parameters(), CLIP) optimizer.step() process.set_description( f"Train Epoch: {ep}, Loss: {loss.item():.6f}") def test(): model.eval() correct = 0 with th.no_grad(): for data, target in test_loader: data, target = data.to(DEVICE), target.to(DEVICE) data = data.view(-1, INPUT_CHANNELS, SEQ_LEN) if PERMUTE: 
data = data[:, :, permute_idx] output = model(data) pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).cpu().sum() print(f'Accuracy: {correct/len(test_loader.dataset) * 100:.2f}%') for epoch in range(1, EPOCHS+1): train(epoch) test() ```
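To make the receptive-field note above concrete: in the usual TemporalConvNet design (two dilated convolutions per level, with the dilation doubling at each level), the receptive field is $1 + 2(k-1)(2^{L}-1)$. Assuming `core.tcn.TemporalConvNet` follows that design (not verified here), the settings `KSIZE = 7` and `LEVELS = 8` comfortably cover the 784-step sequence:

```
def tcn_receptive_field(kernel_size, levels, convs_per_level=2):
    # dilations are 1, 2, 4, ..., 2**(levels - 1); each conv adds (kernel_size - 1) * dilation steps
    return 1 + convs_per_level * (kernel_size - 1) * sum(2 ** i for i in range(levels))

print(tcn_receptive_field(KSIZE, LEVELS))  # 1 + 2 * 6 * 255 = 3061 >= 784
```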
### Imports ``` import pickle import numpy as np import tensorflow as tf from matplotlib import pyplot as plt %matplotlib inline import gym gym.logger.set_level(40) import os # print(os.environ['LD_LIBRARY_PATH']) import load_policy import tf_util import mujoco_py ``` Load expert policy. ``` policy_fn = load_policy.load_policy('./experts/Hopper-v2.pkl') with open('./expert_policy_Hopper-v2.pkl', 'rb') as pkl: data_train = pickle.load(pkl) ``` ### My Code #### Behavioral Cloning (Direct) ``` observ = tf.placeholder(shape=(None, 11), dtype=tf.float64, name='observ') with tf.name_scope('fc1'): hidden1 = tf.contrib.layers.fully_connected(observ, num_outputs=128, activation_fn=tf.nn.relu) with tf.name_scope('fc2'): hidden2 = tf.contrib.layers.fully_connected(hidden1, num_outputs=256, activation_fn=tf.nn.relu) with tf.name_scope('fc3'): hidden3 = tf.contrib.layers.fully_connected(hidden2, num_outputs=64, activation_fn=tf.nn.relu) with tf.name_scope('fc_out'): action_hat = tf.contrib.layers.fully_connected(hidden3, num_outputs=3, activation_fn=None) action_expert = tf.placeholder(shape=[None, 1, 3], dtype=tf.float64, name='action_exp') mseloss = tf.reduce_mean(tf.squared_difference(action_hat, action_expert)) train_op = tf.train.GradientDescentOptimizer(0.01).minimize(mseloss) sess = tf.InteractiveSession() tf.global_variables_initializer().run() env = gym.make('Hopper-v2') def get_random_batch(dataset=data_train, batch_size=200): ttl_size = len(dataset['observations']) choices = np.random.choice(ttl_size, batch_size, replace=False) return ( dataset['observations'][choices], dataset['actions'][choices] ) def run_our_model_once( model_input=observ, env=env, render=False): last_observ = env.reset() done = False reward = 0 while not done: action = sess.run(action_hat, feed_dict={ model_input: np.array([last_observ]) }) last_observ, step_reward, done, _ = env.step(action) reward += step_reward return reward train_loss_rec = [] reward_rec = [] train_size = data_train['observations'].shape[0] for step_cnt in range(500): perm = np.random.permutation(train_size) train_loss_ttl = 0 for cut in range(0, train_size, 20): train_loss, _ = sess.run([mseloss, train_op], feed_dict={ observ: data_train['observations'][cut:cut + 20], action_expert: data_train['actions'][cut:cut + 20] }) train_loss_ttl += train_loss train_loss_rec.append(train_loss_ttl / train_size) reward_rec.append(run_our_model_once()) print('step {}, training loss {:.4f}, with reward {:.4f}'.format(step_cnt, train_loss, reward_rec[-1]) \ + ' ' * 20, end='\r') if step_cnt % 100 == 0: print() # new line plt.plot(train_loss_rec) plt.show() plt.plot(reward_rec) plt.show() ``` #### DAgger ``` def DAgger( model_input=observ, model_output=action_hat, env=env, dataset=data_train): observ_rec = [] action_prd_rec = [] action_exp_rec = [] last_observ = env.reset() done = False cnt = 0 # run in real env with current policy while not done: cnt += 1 observ_rec.append(last_observ) step_action_hat = sess.run(model_output, feed_dict={ model_input: np.array([last_observ]) }) step_action_exp = policy_fn(last_observ[None, :]) action_prd_rec.append(step_action_hat) action_exp_rec.append(step_action_exp) last_observ, _, done, _ = env.step(step_action_hat) action_prd_rec, action_exp_rec, observ_rec = \ np.array(action_prd_rec), np.array(action_exp_rec), np.array(observ_rec) # aggregate data dataset['observations'] = \ np.append(data_train['observations'], observ_rec, axis=0) dataset['actions'] = \ np.append(data_train['actions'], action_exp_rec, axis=0) return 
observ_rec, action_prd_rec, action_exp_rec def reset_dataset(dataset=data_train, orig_size=1000): dataset['observations'] = dataset['observations'][:orig_size] dataset['actions'] = dataset['actions'][:orig_size] # reset_dataset() tf.global_variables_initializer().run() train_loss_rec = [] reward_rec = [] for step_cnt in range(500): for _ in range(5): DAgger() train_size = data_train['observations'].shape[0] perm = np.random.permutation(train_size) train_loss_ttl = 0 for cut in range(0, train_size, 20): train_loss, _ = sess.run([mseloss, train_op], feed_dict={ observ: data_train['observations'][cut:cut + 20], action_expert: data_train['actions'][cut:cut + 20] }) train_loss_ttl += train_loss train_loss_rec.append(train_loss_ttl / train_size) reward_rec.append(run_our_model_once()) print('step {}, training loss {:.4f}, with reward {:.4f}'.format(step_cnt, train_loss, reward_rec[-1]) \ + ' ' * 20, end='\r') if step_cnt % 100 == 0: print() # new line plt.plot(train_loss_rec) plt.show() plt.plot(reward_rec) plt.show() ```
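A single rollout is a noisy estimate of policy quality, so when comparing behavioral cloning with DAgger it helps to average `run_our_model_once` over several evaluation episodes (the episode count below is an arbitrary choice):

```
# Average return over a handful of evaluation episodes
eval_rewards = [run_our_model_once() for _ in range(10)]
print('mean reward: {:.2f} +/- {:.2f}'.format(np.mean(eval_rewards), np.std(eval_rewards)))
```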
# Find values that meet a condition or threshold This notebook explores how to find values in a numpy array that meet a condition, for example, meeting a threshold condition. ## A small example Start with a numpy array that contains a set of measurements. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from phonlab.array import nonzero_groups %matplotlib inline a = np.array([0,1,2,3,2,1,0], dtype=int) print(a) ``` Next plot the values with filled circles. A horizontal line separates values above and below the threshold. ``` thresh = 1.5 fig, ax = plt.subplots() ax.plot(a, 'o'); ax.axhline(y=thresh); ``` The values at indexes [0, 1, 5, 6] lie below the threshold, and [2, 3, 4] are above. We can find each set of values with integer indexing and boolean indexing. ### Find threshold values with boolean indexing A simple comparison creates a boolean index for values that are greater than the threshold. Negating that index returns True for elements that are less than or equal to the threshold. ``` gtbool = a > thresh ltbool = ~gtbool # OR: `ltbool = a <= thresh` print('Above threshold: ', gtbool) print('Below threshold: ', ltbool) ``` You can use the boolean index to select elements from the array, to make a plot of values above the threshold, for instance. ``` plt.plot(a[gtbool], 'o') plt.ylim([0, 3.2]) plt.axhline(y=thresh); ``` In Pandas you can also assign the boolean index to a dataframe column for later querying. ``` df = pd.DataFrame({'meas': a}) df['is_gt'] = df['meas'] > thresh # Add the boolean index column 'is_gt' df ``` ### Find the indexes of the threshold values Sometimes you need to know the indexes where the condition is met. This can be useful for including the X axis when plotting, for instance. The `np.nonzero()` function returns the indexes of an array that have nonzero values, which are `True` values for a boolean array. We use it with our condition to find the indexes of values above and below the threshold. `np.nonzero()` returns a tuple of arrays corresponding to the dimensions of the input array, and since our input is 1D we only need the first element of that tuple. ``` ltidx = (a <= thresh).nonzero()[0] # [0] for first element of the tuple gtidx = (a > thresh).nonzero()[0] print(ltidx, gtidx) ``` We plot the values on each side of the threshold separately, and the indexes returned by `np.nonzero()` help us plot correctly along the x-axis. ``` fig, ax = plt.subplots() ax.plot(ltidx, a[ltidx], 'o'); ax.plot(gtidx, a[gtidx], 'D'); ax.axhline(y=thresh); ``` ## Finding groups (runs) of values that meet the threshold Let's look at a slightly longer array with a duplicate of `a` at a larger magnitude and with two extra values added to the ends. Our goal is to find the groups of consecutive values that meet the threshold condition. ``` aa = np.hstack([2, a, a * 2, 2]) fig, ax = plt.subplots() ax.plot(aa, 'o') ax.axhline(thresh); ``` We select the indexes of values greater than the threshold and plot them separately. Notice that there are four separate regions along the X axis where values are greater than the threshold. These regions are indexes `[0]`, `[3, 4, 5]`, `[9, 10, 11, 12, 13]`, `[15]`. ``` ltidx = (aa <= thresh).nonzero()[0] gtidx = (aa > thresh).nonzero()[0] print('Below threshold: ', ltidx) print('Above threshold: ', gtidx) fig, ax = plt.subplots() ax.plot(ltidx, aa[ltidx], 'o'); ax.plot(gtidx, aa[gtidx], 'D'); ax.axhline(y=thresh); ``` The indexes of the values above the threshold are contained in `gtidx`, but they are not grouped. 
The `nonzero_groups` function (from `phonlab.array`, not numpy!) returns the same indexes as `nonzero` does and additionally groups the indexes into consecutive sets.

```
nonzero_groups(aa > thresh)
```

Compare with the flattened result in `gtidx`.

```
gtidx
```

`nonzero_groups` also includes parameters for filtering the groups in the result. The `minlen` parameter filters out groups with fewer than a minimum number of elements. For example, if `minlen` is `3` then only groups with at least three elements are returned.

```
nonzero_groups(aa > thresh, minlen=3)
```

Sometimes you might want to retrieve one or more groups of elements that include one or more specific elements. For example, you might be interested in the highest value in an array and want to retrieve the consecutive group of values above a threshold that includes that value. The `include_any` parameter is used to provide one or more indexes that a group must include in order to be included in the result. Groups that include any index provided by `include_any` are part of the result set, and those that don't have at least one of the indexes provided by `include_any` are excluded.

```
maxidx = aa.argmax()
print(f'Index of max value: {maxidx}')
nonzero_groups(aa > thresh, include_any=maxidx)
```

`include_any` can also provide a list of indexes.

```
nonzero_groups(aa > thresh, include_any=[0, 5, 12])
```
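If you only have numpy available, the same idea of grouping consecutive indexes can be sketched with `np.diff` and `np.split`; this is just an illustration of the grouping step, not how `phonlab.array.nonzero_groups` is actually implemented:

```
def group_consecutive(idx):
    """Split a sorted 1-D array of indexes into runs of consecutive values."""
    if idx.size == 0:
        return []
    # A gap larger than 1 between neighbours marks the start of a new group
    return np.split(idx, np.nonzero(np.diff(idx) > 1)[0] + 1)

group_consecutive((aa > thresh).nonzero()[0])
# [array([0]), array([3, 4, 5]), array([ 9, 10, 11, 12, 13]), array([15])]
```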
# 卷积神经网络 --- 从0开始 之前的教程里,在输入神经网络前我们将输入图片直接转成了向量。这样做有两个不好的地方: - 在图片里相近的像素在向量表示里可能很远,从而模型很难捕获他们的空间关系。 - 对于大图片输入,模型可能会很大。例如输入是$256\times 256\times3$的照片(仍然远比手机拍的小),输入层是1000,那么这一层的模型大小是将近1GB. 这一节我们介绍卷积神经网络,其有效了解决了上述两个问题。 ## 卷积神经网络 卷积神经网络是指主要由卷积层构成的神经网络。 ### 卷积层 卷积层跟前面的全连接层类似,但输入和权重不是做简单的矩阵乘法,而是使用每次作用在一个窗口上的卷积。下图演示了输入是一个$4\times 4$矩阵,使用一个$3\times 3$的权重,计算得到$2\times 2$结果的过程。每次我们采样一个跟权重一样大小的窗口,让它跟权重做按元素的乘法然后相加。通常我们也是用卷积的术语把这个权重叫kernel或者filter。 ![](https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/no_padding_no_strides.gif) (图片版权属于vdumoulin@github) 我们使用`nd.Convolution`来演示这个。 ``` from mxnet import nd # 输入输出数据格式是 batch x channel x height x width,这里batch和channel都是1 # 权重格式是 output_channels x in_channels x height x width,这里input_filter和output_filter都是1。 w = nd.arange(4).reshape((1,1,2,2)) b = nd.array([1]) data = nd.arange(9).reshape((1,1,3,3)) out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[1]) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) ``` 我们可以控制如何移动窗口,和在边缘的时候如何填充窗口。下图演示了`stride=2`和`pad=1`。 ![](https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/padding_strides.gif) ``` out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[1], stride=(2,2), pad=(1,1)) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) ``` 当输入数据有多个通道的时候,每个通道会有对应的权重,然后会对每个通道做卷积之后在通道之间求和 $$conv(data, w, b) = \sum_i conv(data[:,i,:,:], w[:,i,:,:], b)$$ ``` w = nd.arange(8).reshape((1,2,2,2)) data = nd.arange(18).reshape((1,2,3,3)) out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[0]) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) ``` 当输入需要多通道时,每个输出通道有对应权重,然后每个通道上做卷积。 $$conv(data, w, b)[:,i,:,:] = conv(data, w[i,:,:,:], b[i])$$ ``` w = nd.arange(16).reshape((2,2,2,2)) data = nd.arange(18).reshape((1,2,3,3)) b = nd.array([1,2]) out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[0]) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) ``` ### 池化层(pooling) 因为卷积层每次作用在一个窗口,它对位置很敏感。池化层能够很好的缓解这个问题。它跟卷积类似每次看一个小窗口,然后选出窗口里面最大的元素,或者平均元素作为输出。 ``` data = nd.arange(18).reshape((1,2,3,3)) max_pool = nd.Pooling(data=data, pool_type="max", kernel=(2,2)) avg_pool = nd.Pooling(data=data, pool_type="avg", kernel=(2,2)) print('data:', data, '\n\nmax pooling:', max_pool, '\n\navg pooling:', avg_pool) ``` 下面我们可以开始使用这些层构建模型了。 ## 获取数据 我们继续使用FashionMNIST(希望你还没有彻底厌烦这个数据) ``` import sys sys.path.append('..') from utils import load_data_fashion_mnist batch_size = 256 train_data, test_data = load_data_fashion_mnist(batch_size) ``` ## 定义模型 因为卷积网络计算比全连接要复杂,这里我们默认使用GPU来计算。如果GPU不能用,默认使用CPU。(下面这段代码会保存在`utils.py`里可以下次重复使用)。 ``` import mxnet as mx try: ctx = mx.gpu() _ = nd.zeros((1,), ctx=ctx) except: ctx = mx.cpu() ctx ``` 我们使用MNIST常用的LeNet,它有两个卷积层,之后是两个全连接层。注意到我们将权重全部创建在`ctx`上: ``` weight_scale = .01 # output channels = 20, kernel = (5,5) W1 = nd.random_normal(shape=(20,1,5,5), scale=weight_scale, ctx=ctx) b1 = nd.zeros(W1.shape[0], ctx=ctx) # output channels = 50, kernel = (3,3) W2 = nd.random_normal(shape=(50,20,3,3), scale=weight_scale, ctx=ctx) b2 = nd.zeros(W2.shape[0], ctx=ctx) # output dim = 128 W3 = nd.random_normal(shape=(1250, 128), scale=weight_scale, ctx=ctx) b3 = nd.zeros(W3.shape[1], ctx=ctx) # output dim = 10 W4 = nd.random_normal(shape=(W3.shape[1], 10), scale=weight_scale, ctx=ctx) b4 = nd.zeros(W4.shape[1], ctx=ctx) params = [W1, b1, W2, b2, W3, b3, W4, b4] for param in params: param.attach_grad() ``` 
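这里 `W3` 的输入维度 1250 来自两层卷积和池化之后的输出形状。下面用一个简单的验算来说明(假设输入是 $1\times 1\times 28\times 28$ 的灰度图,与 FashionMNIST 一致):

```
# 粗略验算全连接层输入维度 1250 的来历
h = 28
h = h - 5 + 1   # 第一层 5x5 卷积:28 -> 24
h = h // 2      # 2x2 最大池化,步幅 2:24 -> 12
h = h - 3 + 1   # 第二层 3x3 卷积:12 -> 10
h = h // 2      # 2x2 最大池化,步幅 2:10 -> 5
print(50 * h * h)  # 50 个输出通道 x 5 x 5 = 1250,即 W3 的输入维度
```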
卷积模块通常是“卷积层-激活层-池化层”。然后转成2D矩阵输出给后面的全连接层。 ``` def net(X, verbose=False): X = X.as_in_context(W1.context) # 第一层卷积 h1_conv = nd.Convolution( data=X, weight=W1, bias=b1, kernel=W1.shape[2:], num_filter=W1.shape[0]) h1_activation = nd.relu(h1_conv) h1 = nd.Pooling( data=h1_activation, pool_type="max", kernel=(2,2), stride=(2,2)) # 第二层卷积 h2_conv = nd.Convolution( data=h1, weight=W2, bias=b2, kernel=W2.shape[2:], num_filter=W2.shape[0]) h2_activation = nd.relu(h2_conv) h2 = nd.Pooling(data=h2_activation, pool_type="max", kernel=(2,2), stride=(2,2)) h2 = nd.flatten(h2) # 第一层全连接 h3_linear = nd.dot(h2, W3) + b3 h3 = nd.relu(h3_linear) # 第二层全连接 h4_linear = nd.dot(h3, W4) + b4 if verbose: print('X:',X.shape) print('W1:',W1.shape) print('h1_conv:',h1_conv.shape) print('h1_activation:',h1_activation.shape) print('1st conv block:', h1.shape) print() print('W2:',W2.shape) print('h2_conv:',h2_conv.shape) print('h2_activation:',h2_activation.shape) print('2nd conv block:', h2.shape) print() print('1st dense:', h3.shape) print('2nd dense:', h4_linear.shape) print('output:', h4_linear) return h4_linear ``` 测试一下,输出中间结果形状(当然可以直接打印结果)和最终结果。 ``` for data, _ in train_data: net(data, verbose=True) break ``` ## 训练 跟前面没有什么不同的,除了这里我们使用`as_in_context`将`data`和`label`都放置在需要的设备上。(下面这段代码也将保存在`utils.py`里方便之后使用)。 ``` from mxnet import autograd as autograd from utils import SGD, accuracy, evaluate_accuracy from mxnet import gluon softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss() learning_rate = .2 for epoch in range(5): train_loss = 0. train_acc = 0. for data, label in train_data: label = label.as_in_context(ctx) with autograd.record(): output = net(data) loss = softmax_cross_entropy(output, label) loss.backward() SGD(params, learning_rate/batch_size) train_loss += nd.mean(loss).asscalar() train_acc += accuracy(output, label) test_acc = evaluate_accuracy(test_data, net, ctx) print("Epoch %d. Loss: %f, Train acc %f, Test acc %f" % ( epoch, train_loss/len(train_data), train_acc/len(train_data), test_acc)) ``` ## 结论 可以看到卷积神经网络比前面的多层感知的分类精度更好。事实上,如果你看懂了这一章,那你基本知道了计算视觉里最重要的几个想法。LeNet早在90年代就提出来了。不管你相信不相信,如果你5年前懂了这个而且开了家公司,那么你很可能现在已经把公司作价几千万卖个某大公司了。幸运的是,或者不幸的是,现在的算法已经更加高级些了,接下来我们会看到一些更加新的想法。 ## 练习 - 试试改改卷积层设定,例如filter数量,kernel大小 - 试试把池化层从`max`改到`avg` - 如果你有GPU,那么尝试用CPU来跑一下看看 - 你可能注意到比前面的多层感知机慢了很多,那么尝试计算下这两个模型分别需要多少浮点计算。例如$n\times m$和$m \times k$的矩阵乘法需要浮点运算 $2nmk$。 **吐槽和讨论欢迎点**[这里](https://discuss.gluon.ai/t/topic/736)
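An added sketch toward the last exercise above: a common rule of thumb is that a convolution layer costs roughly 2·H_out·W_out·C_out·C_in·k_h·k_w floating-point operations per example (one multiply and one add per weight per output position), while a dense layer multiplying an n×m input by an m×k weight costs about 2nmk, as stated in the exercise. Applying that to the LeNet shapes used in this notebook (per example, ignoring biases, activations and pooling):

```
def conv_flops(h_out, w_out, c_out, c_in, kh, kw):
    # one multiply + one add per weight per output position
    return 2 * h_out * w_out * c_out * c_in * kh * kw

def dense_flops(n_in, n_out):
    return 2 * n_in * n_out

flops = {
    'conv1 (1->20, 5x5, out 24x24)':  conv_flops(24, 24, 20, 1, 5, 5),
    'conv2 (20->50, 3x3, out 10x10)': conv_flops(10, 10, 50, 20, 3, 3),
    'dense1 (1250->128)':             dense_flops(1250, 128),
    'dense2 (128->10)':               dense_flops(128, 10),
}
for name, f in flops.items():
    print(f'{name}: {f:,} FLOPs')
print('total:', f'{sum(flops.values()):,}')
```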
github_jupyter
from mxnet import nd # 输入输出数据格式是 batch x channel x height x width,这里batch和channel都是1 # 权重格式是 output_channels x in_channels x height x width,这里input_filter和output_filter都是1。 w = nd.arange(4).reshape((1,1,2,2)) b = nd.array([1]) data = nd.arange(9).reshape((1,1,3,3)) out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[1]) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[1], stride=(2,2), pad=(1,1)) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) w = nd.arange(8).reshape((1,2,2,2)) data = nd.arange(18).reshape((1,2,3,3)) out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[0]) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) w = nd.arange(16).reshape((2,2,2,2)) data = nd.arange(18).reshape((1,2,3,3)) b = nd.array([1,2]) out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[0]) print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out) data = nd.arange(18).reshape((1,2,3,3)) max_pool = nd.Pooling(data=data, pool_type="max", kernel=(2,2)) avg_pool = nd.Pooling(data=data, pool_type="avg", kernel=(2,2)) print('data:', data, '\n\nmax pooling:', max_pool, '\n\navg pooling:', avg_pool) import sys sys.path.append('..') from utils import load_data_fashion_mnist batch_size = 256 train_data, test_data = load_data_fashion_mnist(batch_size) import mxnet as mx try: ctx = mx.gpu() _ = nd.zeros((1,), ctx=ctx) except: ctx = mx.cpu() ctx weight_scale = .01 # output channels = 20, kernel = (5,5) W1 = nd.random_normal(shape=(20,1,5,5), scale=weight_scale, ctx=ctx) b1 = nd.zeros(W1.shape[0], ctx=ctx) # output channels = 50, kernel = (3,3) W2 = nd.random_normal(shape=(50,20,3,3), scale=weight_scale, ctx=ctx) b2 = nd.zeros(W2.shape[0], ctx=ctx) # output dim = 128 W3 = nd.random_normal(shape=(1250, 128), scale=weight_scale, ctx=ctx) b3 = nd.zeros(W3.shape[1], ctx=ctx) # output dim = 10 W4 = nd.random_normal(shape=(W3.shape[1], 10), scale=weight_scale, ctx=ctx) b4 = nd.zeros(W4.shape[1], ctx=ctx) params = [W1, b1, W2, b2, W3, b3, W4, b4] for param in params: param.attach_grad() def net(X, verbose=False): X = X.as_in_context(W1.context) # 第一层卷积 h1_conv = nd.Convolution( data=X, weight=W1, bias=b1, kernel=W1.shape[2:], num_filter=W1.shape[0]) h1_activation = nd.relu(h1_conv) h1 = nd.Pooling( data=h1_activation, pool_type="max", kernel=(2,2), stride=(2,2)) # 第二层卷积 h2_conv = nd.Convolution( data=h1, weight=W2, bias=b2, kernel=W2.shape[2:], num_filter=W2.shape[0]) h2_activation = nd.relu(h2_conv) h2 = nd.Pooling(data=h2_activation, pool_type="max", kernel=(2,2), stride=(2,2)) h2 = nd.flatten(h2) # 第一层全连接 h3_linear = nd.dot(h2, W3) + b3 h3 = nd.relu(h3_linear) # 第二层全连接 h4_linear = nd.dot(h3, W4) + b4 if verbose: print('X:',X.shape) print('W1:',W1.shape) print('h1_conv:',h1_conv.shape) print('h1_activation:',h1_activation.shape) print('1st conv block:', h1.shape) print() print('W2:',W2.shape) print('h2_conv:',h2_conv.shape) print('h2_activation:',h2_activation.shape) print('2nd conv block:', h2.shape) print() print('1st dense:', h3.shape) print('2nd dense:', h4_linear.shape) print('output:', h4_linear) return h4_linear for data, _ in train_data: net(data, verbose=True) break from mxnet import autograd as autograd from utils import SGD, accuracy, evaluate_accuracy from mxnet import gluon softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss() learning_rate = .2 for epoch in range(5): train_loss = 
0. train_acc = 0. for data, label in train_data: label = label.as_in_context(ctx) with autograd.record(): output = net(data) loss = softmax_cross_entropy(output, label) loss.backward() SGD(params, learning_rate/batch_size) train_loss += nd.mean(loss).asscalar() train_acc += accuracy(output, label) test_acc = evaluate_accuracy(test_data, net, ctx) print("Epoch %d. Loss: %f, Train acc %f, Test acc %f" % ( epoch, train_loss/len(train_data), train_acc/len(train_data), test_acc))
0.530723
0.988268
At the end of this step, you will understand the concepts of underfitting and overfitting, and you will be able to apply these ideas to make your models more accurate.

# Experimenting With Different Models

Now that you have a reliable way to measure model accuracy, you can experiment with alternative models and see which gives the best predictions. But what alternatives do you have for models?

You can see in scikit-learn's [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) that the decision tree model has many options (more than you'll want or need for a long time). The most important options determine the tree's depth. Recall from [the first lesson in this course](https://www.kaggle.com/dansbecker/how-models-work) that a tree's depth is a measure of how many splits it makes before coming to a prediction. Here is a relatively shallow tree:

![Depth 2 Tree](http://i.imgur.com/R3ywQsR.png)

In practice, it's not uncommon for a tree to have 10 splits between the top level (all houses) and a leaf. As the tree gets deeper, the dataset gets sliced up into leaves with fewer houses. If a tree has only 1 split, it divides the data into 2 groups. If each group is split again, we get 4 groups of houses. Splitting each of those again would create 8 groups. If we keep doubling the number of groups by adding more splits at each level, we'll have \\(2^{10}\\) groups of houses by the time we get to the 10th level. That's 1024 leaves.

When we divide the houses amongst many leaves, we also have fewer houses in each leaf. Leaves with very few houses will make predictions that are quite close to those homes' actual values, but they may make very unreliable predictions for new data (because each prediction is based on only a few houses). This is a phenomenon called **overfitting**, where a model matches the training data almost perfectly, but does poorly on validation and other new data.

On the flip side, if we make our tree very shallow, it doesn't divide the houses into very distinct groups. At an extreme, if a tree divides houses into only 2 or 4 groups, each group still contains a wide variety of houses. The resulting predictions may be far off for most houses, even in the training data (and they will be bad in validation too, for the same reason). When a model fails to capture important distinctions and patterns in the data, and therefore performs poorly even on the training data, that is called **underfitting**.

Since we care about accuracy on new data, which we estimate from our validation data, we want to find the sweet spot between underfitting and overfitting. Visually, we want the low point of the (red) validation curve in the figure below.

![underfitting_overfitting](http://i.imgur.com/AXSEOfI.png)

# Example

There are a few alternatives for controlling tree depth, and many allow some routes through the tree to be deeper than others. But the *max_leaf_nodes* argument provides a very sensible way to control overfitting vs. underfitting: the more leaves we allow the model to make, the more we move from the underfitting area in the above graph toward the overfitting area.
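As a quick sanity check of the doubling arithmetic from earlier in this lesson (1 split gives 2 groups; 10 levels of splits give \\(2^{10}\\) = 1024 leaves), here is a throwaway sketch:

```
groups = 1
for level in range(1, 11):
    groups *= 2   # each additional level of splits can at most double the number of leaves
    print(f'level {level:2d}: up to {groups} leaves')
```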
We can use a utility function to help compare MAE scores from different values for *max_leaf_nodes*:

```
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
    model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0)
    model.fit(train_X, train_y)
    preds_val = model.predict(val_X)
    mae = mean_absolute_error(val_y, preds_val)
    return(mae)
```

The data is loaded into **train_X**, **val_X**, **train_y** and **val_y** using the code you've already seen (and which you've already written).

```
# Data Loading Code Runs At This Point
import pandas as pd

# Load data
melbourne_file_path = '../input/melbourne-housing-snapshot/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
# Filter rows with missing values
filtered_melbourne_data = melbourne_data.dropna(axis=0)
# Choose target and features
y = filtered_melbourne_data.Price
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea',
                      'YearBuilt', 'Lattitude', 'Longtitude']
X = filtered_melbourne_data[melbourne_features]

from sklearn.model_selection import train_test_split

# split data into training and validation data, for both features and target
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)
```

We can use a for-loop to compare the accuracy of models built with different values for *max_leaf_nodes*.

```
# compare MAE with differing values of max_leaf_nodes
for max_leaf_nodes in [5, 50, 500, 5000]:
    my_mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
    print("Max leaf nodes: %d \t\t Mean Absolute Error: %d" %(max_leaf_nodes, my_mae))
```

Of the options listed, 500 is the optimal number of leaves.

---

# Conclusion

Here's the takeaway: Models can suffer from either:

- **Overfitting:** capturing spurious patterns that won't recur in the future, leading to less accurate predictions, or
- **Underfitting:** failing to capture relevant patterns, again leading to less accurate predictions.

We use **validation** data, which isn't used in model training, to measure a candidate model's accuracy. This lets us try many candidate models and keep the best one.

# Your Turn

Try **[optimizing the model you've previously built](#$NEXT_NOTEBOOK_URL$)**.
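A natural follow-up (not part of the original lesson): once `get_mae` and the train/validation splits above are defined, the best value can be picked programmatically rather than read off by eye. A minimal sketch, assuming the variables from the cells above are still in scope:

```
# assumes get_mae, train_X, val_X, train_y, val_y from the cells above
candidate_max_leaf_nodes = [5, 25, 50, 100, 250, 500, 5000]
scores = {n: get_mae(n, train_X, val_X, train_y, val_y)
          for n in candidate_max_leaf_nodes}
best_tree_size = min(scores, key=scores.get)
print(f'Best max_leaf_nodes: {best_tree_size} (MAE {scores[best_tree_size]:,.0f})')
```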
github_jupyter
from sklearn.metrics import mean_absolute_error from sklearn.tree import DecisionTreeRegressor def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y): model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0) model.fit(train_X, train_y) preds_val = model.predict(val_X) mae = mean_absolute_error(val_y, preds_val) return(mae) # Data Loading Code Runs At This Point import pandas as pd # Load data melbourne_file_path = '../input/melbourne-housing-snapshot/melb_data.csv' melbourne_data = pd.read_csv(melbourne_file_path) # Filter rows with missing values filtered_melbourne_data = melbourne_data.dropna(axis=0) # Choose target and features y = filtered_melbourne_data.Price melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea', 'YearBuilt', 'Lattitude', 'Longtitude'] X = filtered_melbourne_data[melbourne_features] from sklearn.model_selection import train_test_split # split data into training and validation data, for both features and target train_X, val_X, train_y, val_y = train_test_split(X, y,random_state = 0) # compare MAE with differing values of max_leaf_nodes for max_leaf_nodes in [5, 50, 500, 5000]: my_mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y) print("Max leaf nodes: %d \t\t Mean Absolute Error: %d" %(max_leaf_nodes, my_mae))
0.826502
0.987436
# 语言模型和数据集 :label:`sec_language_model` 在 :numref:`sec_text_preprocessing`中, 我们了解了如何将文本数据映射为词元, 以及将这些词元可以视为一系列离散的观测,例如单词或字符。 假设长度为$T$的文本序列中的词元依次为$x_1, x_2, \ldots, x_T$。 于是,$x_t$($1 \leq t \leq T$) 可以被认为是文本序列在时间步$t$处的观测或标签。 在给定这样的文本序列时,*语言模型*(language model)的目标是估计序列的联合概率 $$P(x_1, x_2, \ldots, x_T).$$ 例如,只需要一次抽取一个词元$x_t \sim P(x_t \mid x_{t-1}, \ldots, x_1)$, 一个理想的语言模型就能够基于模型本身生成自然文本。 与猴子使用打字机完全不同的是,从这样的模型中提取的文本 都将作为自然语言(例如,英语文本)来传递。 只需要基于前面的对话片断中的文本, 就足以生成一个有意义的对话。 显然,我们离设计出这样的系统还很遥远, 因为它需要“理解”文本,而不仅仅是生成语法合理的内容。 尽管如此,语言模型依然是非常有用的。 例如,短语“to recognize speech”和“to wreck a nice beach”读音上听起来非常相似。 这种相似性会导致语音识别中的歧义,但是这很容易通过语言模型来解决, 因为第二句的语义很奇怪。 同样,在文档摘要生成算法中, “狗咬人”比“人咬狗”出现的频率要高得多, 或者“我想吃奶奶”是一个相当匪夷所思的语句, 而“我想吃,奶奶”则要正常得多。 ## 学习语言模型 显而易见,我们面对的问题是如何对一个文档, 甚至是一个词元序列进行建模。 假设在单词级别对文本数据进行词元化, 我们可以依靠在 :numref:`sec_sequence`中对序列模型的分析。 让我们从基本概率规则开始: $$P(x_1, x_2, \ldots, x_T) = \prod_{t=1}^T P(x_t \mid x_1, \ldots, x_{t-1}).$$ 例如,包含了四个单词的一个文本序列的概率是: $$P(\text{deep}, \text{learning}, \text{is}, \text{fun}) = P(\text{deep}) P(\text{learning} \mid \text{deep}) P(\text{is} \mid \text{deep}, \text{learning}) P(\text{fun} \mid \text{deep}, \text{learning}, \text{is}).$$ 为了训练语言模型,我们需要计算单词的概率, 以及给定前面几个单词后出现某个单词的条件概率。 这些概率本质上就是语言模型的参数。 这里,我们假设训练数据集是一个大型的文本语料库。 比如,维基百科的所有条目、 [古登堡计划](https://en.wikipedia.org/wiki/Project_Gutenberg), 或者所有发布在网络上的文本。 训练数据集中词的概率可以根据给定词的相对词频来计算。 例如,可以将估计值$\hat{P}(\text{deep})$ 计算为任何以单词“deep”开头的句子的概率。 一种(稍稍不太精确的)方法是统计单词“deep”在数据集中的出现次数, 然后将其除以整个语料库中的单词总数。 这种方法效果不错,特别是对于频繁出现的单词。 接下来,我们可以尝试估计 $$\hat{P}(\text{learning} \mid \text{deep}) = \frac{n(\text{deep, learning})}{n(\text{deep})},$$ 其中$n(x)$和$n(x, x')$分别是单个单词和连续单词对的出现次数。 不幸的是,由于连续单词对“deep learning”的出现频率要低得多, 所以估计这类单词正确的概率要困难得多。 特别是对于一些不常见的单词组合,要想找到足够的出现次数来获得准确的估计可能都不容易。 而对于三个或者更多的单词组合,情况会变得更糟。 许多合理的三个单词组合可能是存在的,但是在数据集中却找不到。 除非我们提供某种解决方案,来将这些单词组合指定为非零计数, 否则将无法在语言模型中使用它们。 如果数据集很小,或者单词非常罕见,那么这类单词出现一次的机会可能都找不到。 一种常见的策略是执行某种形式的*拉普拉斯平滑*(Laplace smoothing), 具体方法是在所有计数中添加一个小常量。 用$n$表示训练集中的单词总数,用$m$表示唯一单词的数量。 此解决方案有助于处理单元素问题,例如通过: $$ \begin{aligned} \hat{P}(x) & = \frac{n(x) + \epsilon_1/m}{n + \epsilon_1}, \\ \hat{P}(x' \mid x) & = \frac{n(x, x') + \epsilon_2 \hat{P}(x')}{n(x) + \epsilon_2}, \\ \hat{P}(x'' \mid x,x') & = \frac{n(x, x',x'') + \epsilon_3 \hat{P}(x'')}{n(x, x') + \epsilon_3}. \end{aligned} $$ 其中,$\epsilon_1,\epsilon_2$和$\epsilon_3$是超参数。 以$\epsilon_1$为例:当$\epsilon_1 = 0$时,不应用平滑; 当$\epsilon_1$接近正无穷大时,$\hat{P}(x)$接近均匀概率分布$1/m$。 上面的公式是 :cite:`Wood.Gasthaus.Archambeau.ea.2011` 的一个相当原始的变形。 然而,这样的模型很容易变得无效,原因如下: 首先,我们需要存储所有的计数; 其次,这完全忽略了单词的意思。 例如,“猫”(cat)和“猫科动物”(feline)可能出现在相关的上下文中, 但是想根据上下文调整这类模型其实是相当困难的。 最后,长单词序列大部分是没出现过的, 因此一个模型如果只是简单地统计先前“看到”的单词序列频率, 那么模型面对这种问题肯定是表现不佳的。 ## 马尔可夫模型与$n$元语法 在讨论包含深度学习的解决方案之前,我们需要了解更多的概念和术语。 回想一下我们在 :numref:`sec_sequence`中对马尔可夫模型的讨论, 并且将其应用于语言建模。 如果$P(x_{t+1} \mid x_t, \ldots, x_1) = P(x_{t+1} \mid x_t)$, 则序列上的分布满足一阶马尔可夫性质。 阶数越高,对应的依赖关系就越长。 这种性质推导出了许多可以应用于序列建模的近似公式: $$ \begin{aligned} P(x_1, x_2, x_3, x_4) &= P(x_1) P(x_2) P(x_3) P(x_4),\\ P(x_1, x_2, x_3, x_4) &= P(x_1) P(x_2 \mid x_1) P(x_3 \mid x_2) P(x_4 \mid x_3),\\ P(x_1, x_2, x_3, x_4) &= P(x_1) P(x_2 \mid x_1) P(x_3 \mid x_1, x_2) P(x_4 \mid x_2, x_3). 
\end{aligned} $$ 通常,涉及一个、两个和三个变量的概率公式分别被称为 “一元语法”(unigram)、“二元语法”(bigram)和“三元语法”(trigram)模型。 下面,我们将学习如何去设计更好的模型。 ## 自然语言统计 我们看看在真实数据上如果进行自然语言统计。 根据 :numref:`sec_text_preprocessing`中介绍的时光机器数据集构建词表, 并打印前$10$个最常用的(频率最高的)单词。 ``` import random from mxnet import np, npx from d2l import mxnet as d2l npx.set_np() tokens = d2l.tokenize(d2l.read_time_machine()) # 因为每个文本行不一定是一个句子或一个段落,因此我们把所有文本行拼接到一起 corpus = [token for line in tokens for token in line] vocab = d2l.Vocab(corpus) vocab.token_freqs[:10] ``` 正如我们所看到的,(**最流行的词**)看起来很无聊, 这些词通常(**被称为*停用词***)(stop words),因此可以被过滤掉。 尽管如此,它们本身仍然是有意义的,我们仍然会在模型中使用它们。 此外,还有个明显的问题是词频衰减的速度相当地快。 例如,最常用单词的词频对比,第$10$个还不到第$1$个的$1/5$。 为了更好地理解,我们可以[**画出的词频图**]: ``` freqs = [freq for token, freq in vocab.token_freqs] d2l.plot(freqs, xlabel='token: x', ylabel='frequency: n(x)', xscale='log', yscale='log') ``` 通过此图我们可以发现:词频以一种明确的方式迅速衰减。 将前几个单词作为例外消除后,剩余的所有单词大致遵循双对数坐标图上的一条直线。 这意味着单词的频率满足*齐普夫定律*(Zipf's law), 即第$i$个最常用单词的频率$n_i$为: $$n_i \propto \frac{1}{i^\alpha},$$ :eqlabel:`eq_zipf_law` 等价于 $$\log n_i = -\alpha \log i + c,$$ 其中$\alpha$是刻画分布的指数,$c$是常数。 这告诉我们想要通过计数统计和平滑来建模单词是不可行的, 因为这样建模的结果会大大高估尾部单词的频率,也就是所谓的不常用单词。 那么[**其他的词元组合,比如二元语法、三元语法等等,又会如何呢?**] 我们来看看二元语法的频率是否与一元语法的频率表现出相同的行为方式。 ``` bigram_tokens = [pair for pair in zip(corpus[:-1], corpus[1:])] bigram_vocab = d2l.Vocab(bigram_tokens) bigram_vocab.token_freqs[:10] ``` 这里值得注意:在十个最频繁的词对中,有九个是由两个停用词组成的, 只有一个与“the time”有关。 我们再进一步看看三元语法的频率是否表现出相同的行为方式。 ``` trigram_tokens = [triple for triple in zip( corpus[:-2], corpus[1:-1], corpus[2:])] trigram_vocab = d2l.Vocab(trigram_tokens) trigram_vocab.token_freqs[:10] ``` 最后,我们[**直观地对比三种模型中的词元频率**]:一元语法、二元语法和三元语法。 ``` bigram_freqs = [freq for token, freq in bigram_vocab.token_freqs] trigram_freqs = [freq for token, freq in trigram_vocab.token_freqs] d2l.plot([freqs, bigram_freqs, trigram_freqs], xlabel='token: x', ylabel='frequency: n(x)', xscale='log', yscale='log', legend=['unigram', 'bigram', 'trigram']) ``` 这张图非常令人振奋!原因有很多: 首先,除了一元语法词,单词序列似乎也遵循齐普夫定律, 尽管公式 :eqref:`eq_zipf_law`中的指数$\alpha$更小 (指数的大小受序列长度的影响)。 其次,词表中$n$元组的数量并没有那么大,这说明语言中存在相当多的结构, 这些结构给了我们应用模型的希望。 第三,很多$n$元组很少出现,这使得拉普拉斯平滑非常不适合语言建模。 作为代替,我们将使用基于深度学习的模型。 ## 读取长序列数据 由于序列数据本质上是连续的,因此我们在处理数据时需要解决这个问题。 在 :numref:`sec_sequence`中我们以一种相当特别的方式做到了这一点: 当序列变得太长而不能被模型一次性全部处理时, 我们可能希望拆分这样的序列方便模型读取。 在介绍该模型之前,我们看一下总体策略。 假设我们将使用神经网络来训练语言模型, 模型中的网络一次处理具有预定义长度 (例如$n$个时间步)的一个小批量序列。 现在的问题是如何[**随机生成一个小批量数据的特征和标签以供读取。**] 首先,由于文本序列可以是任意长的, 例如整本《时光机器》(*The Time Machine*), 于是任意长的序列可以被我们划分为具有相同时间步数的子序列。 当训练我们的神经网络时,这样的小批量子序列将被输入到模型中。 假设网络一次只处理具有$n$个时间步的子序列。 :numref:`fig_timemachine_5gram`画出了 从原始文本序列获得子序列的所有不同的方式, 其中$n=5$,并且每个时间步的词元对应于一个字符。 请注意,因为我们可以选择任意偏移量来指示初始位置,所以我们有相当大的自由度。 ![分割文本时,不同的偏移量会导致不同的子序列](../img/timemachine-5gram.svg) :label:`fig_timemachine_5gram` 因此,我们应该从 :numref:`fig_timemachine_5gram`中选择哪一个呢? 
事实上,他们都一样的好。 然而,如果我们只选择一个偏移量, 那么用于训练网络的、所有可能的子序列的覆盖范围将是有限的。 因此,我们可以从随机偏移量开始划分序列, 以同时获得*覆盖性*(coverage)和*随机性*(randomness)。 下面,我们将描述如何实现*随机采样*(random sampling)和 *顺序分区*(sequential partitioning)策略。 ### 随机采样 (**在随机采样中,每个样本都是在原始的长序列上任意捕获的子序列。**) 在迭代过程中,来自两个相邻的、随机的、小批量中的子序列不一定在原始序列上相邻。 对于语言建模,目标是基于到目前为止我们看到的词元来预测下一个词元, 因此标签是移位了一个词元的原始序列。 下面的代码每次可以从数据中随机生成一个小批量。 在这里,参数`batch_size`指定了每个小批量中子序列样本的数目, 参数`num_steps`是每个子序列中预定义的时间步数。 ``` def seq_data_iter_random(corpus, batch_size, num_steps): #@save """使用随机抽样生成一个小批量子序列""" # 从随机偏移量开始对序列进行分区,随机范围包括num_steps-1 corpus = corpus[random.randint(0, num_steps - 1):] # 减去1,是因为我们需要考虑标签 num_subseqs = (len(corpus) - 1) // num_steps # 长度为num_steps的子序列的起始索引 initial_indices = list(range(0, num_subseqs * num_steps, num_steps)) # 在随机抽样的迭代过程中, # 来自两个相邻的、随机的、小批量中的子序列不一定在原始序列上相邻 random.shuffle(initial_indices) def data(pos): # 返回从pos位置开始的长度为num_steps的序列 return corpus[pos: pos + num_steps] num_batches = num_subseqs // batch_size for i in range(0, batch_size * num_batches, batch_size): # 在这里,initial_indices包含子序列的随机起始索引 initial_indices_per_batch = initial_indices[i: i + batch_size] X = [data(j) for j in initial_indices_per_batch] Y = [data(j + 1) for j in initial_indices_per_batch] yield np.array(X), np.array(Y) ``` 下面我们[**生成一个从$0$到$34$的序列**]。 假设批量大小为$2$,时间步数为$5$,这意味着可以生成 $\lfloor (35 - 1) / 5 \rfloor= 6$个“特征-标签”子序列对。 如果设置小批量大小为$2$,我们只能得到$3$个小批量。 ``` my_seq = list(range(35)) for X, Y in seq_data_iter_random(my_seq, batch_size=2, num_steps=5): print('X: ', X, '\nY:', Y) ``` ### 顺序分区 在迭代过程中,除了对原始序列可以随机抽样外, 我们还可以[**保证两个相邻的小批量中的子序列在原始序列上也是相邻的**]。 这种策略在基于小批量的迭代过程中保留了拆分的子序列的顺序,因此称为顺序分区。 ``` def seq_data_iter_sequential(corpus, batch_size, num_steps): #@save """使用顺序分区生成一个小批量子序列""" # 从随机偏移量开始划分序列 offset = random.randint(0, num_steps) num_tokens = ((len(corpus) - offset - 1) // batch_size) * batch_size Xs = np.array(corpus[offset: offset + num_tokens]) Ys = np.array(corpus[offset + 1: offset + 1 + num_tokens]) Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1) num_batches = Xs.shape[1] // num_steps for i in range(0, num_steps * num_batches, num_steps): X = Xs[:, i: i + num_steps] Y = Ys[:, i: i + num_steps] yield X, Y ``` 基于相同的设置,通过顺序分区[**读取每个小批量的子序列的特征`X`和标签`Y`**]。 通过将它们打印出来可以发现: 迭代期间来自两个相邻的小批量中的子序列在原始序列中确实是相邻的。 ``` for X, Y in seq_data_iter_sequential(my_seq, batch_size=2, num_steps=5): print('X: ', X, '\nY:', Y) ``` 现在,我们[**将上面的两个采样函数包装到一个类中**], 以便稍后可以将其用作数据迭代器。 ``` class SeqDataLoader: #@save """加载序列数据的迭代器""" def __init__(self, batch_size, num_steps, use_random_iter, max_tokens): if use_random_iter: self.data_iter_fn = d2l.seq_data_iter_random else: self.data_iter_fn = d2l.seq_data_iter_sequential self.corpus, self.vocab = d2l.load_corpus_time_machine(max_tokens) self.batch_size, self.num_steps = batch_size, num_steps def __iter__(self): return self.data_iter_fn(self.corpus, self.batch_size, self.num_steps) ``` [**最后,我们定义了一个函数`load_data_time_machine`, 它同时返回数据迭代器和词表**], 因此可以与其他带有`load_data`前缀的函数 (如 :numref:`sec_fashion_mnist`中定义的 `d2l.load_data_fashion_mnist`)类似地使用。 ``` def load_data_time_machine(batch_size, num_steps, #@save use_random_iter=False, max_tokens=10000): """返回时光机器数据集的迭代器和词表""" data_iter = SeqDataLoader( batch_size, num_steps, use_random_iter, max_tokens) return data_iter, data_iter.vocab ``` ## 小结 * 语言模型是自然语言处理的关键。 * $n$元语法通过截断相关性,为处理长序列提供了一种实用的模型。 * 长序列存在一个问题:它们很少出现或者从不出现。 * 齐普夫定律支配着单词的分布,这个分布不仅适用于一元语法,还适用于其他$n$元语法。 * 通过拉普拉斯平滑法可以有效地处理结构丰富而频率不足的低频词词组。 * 读取长序列的主要方式是随机采样和顺序分区。在迭代过程中,后者可以保证来自两个相邻的小批量中的子序列在原始序列上也是相邻的。 ## 练习 1. 
假设训练数据集中有$100,000$个单词。一个四元语法需要存储多少个词频和相邻多词频率? 1. 我们如何对一系列对话建模? 1. 一元语法、二元语法和三元语法的齐普夫定律的指数是不一样的,你能设法估计嘛? 1. 想一想读取长序列数据的其他方法? 1. 考虑一下我们用于读取长序列的随机偏移量。 1. 为什么随机偏移量是个好主意? 1. 它真的会在文档的序列上实现完美的均匀分布吗? 1. 你要怎么做才能使分布更均匀? 1. 如果我们希望一个序列样本是一个完整的句子,那么这在小批量抽样中会带来怎样的问题?如何解决? [Discussions](https://discuss.d2l.ai/t/2096)
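One way to approach the third exercise above (estimating the Zipf exponents) is a least-squares fit on the log-log frequencies already computed in this section. A minimal sketch, assuming `freqs`, `bigram_freqs`, and `trigram_freqs` from the cells above are in scope; the skipped leading ranks and the helper name are choices of this sketch, not part of the chapter:

```
import numpy as onp   # plain NumPy, to avoid clashing with `from mxnet import np` above

def zipf_alpha(freqs, skip=10):
    """Fit log n_i = -alpha * log i + c over ranks i and return alpha."""
    f = onp.asarray(freqs[skip:], dtype=float)        # drop the first few outlier ranks
    ranks = onp.arange(skip + 1, skip + 1 + len(f))
    slope, _ = onp.polyfit(onp.log(ranks), onp.log(f), deg=1)
    return -slope

for name, f in [('unigram', freqs), ('bigram', bigram_freqs), ('trigram', trigram_freqs)]:
    print(name, round(zipf_alpha(f), 2))
```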
github_jupyter
import random from mxnet import np, npx from d2l import mxnet as d2l npx.set_np() tokens = d2l.tokenize(d2l.read_time_machine()) # 因为每个文本行不一定是一个句子或一个段落,因此我们把所有文本行拼接到一起 corpus = [token for line in tokens for token in line] vocab = d2l.Vocab(corpus) vocab.token_freqs[:10] freqs = [freq for token, freq in vocab.token_freqs] d2l.plot(freqs, xlabel='token: x', ylabel='frequency: n(x)', xscale='log', yscale='log') bigram_tokens = [pair for pair in zip(corpus[:-1], corpus[1:])] bigram_vocab = d2l.Vocab(bigram_tokens) bigram_vocab.token_freqs[:10] trigram_tokens = [triple for triple in zip( corpus[:-2], corpus[1:-1], corpus[2:])] trigram_vocab = d2l.Vocab(trigram_tokens) trigram_vocab.token_freqs[:10] bigram_freqs = [freq for token, freq in bigram_vocab.token_freqs] trigram_freqs = [freq for token, freq in trigram_vocab.token_freqs] d2l.plot([freqs, bigram_freqs, trigram_freqs], xlabel='token: x', ylabel='frequency: n(x)', xscale='log', yscale='log', legend=['unigram', 'bigram', 'trigram']) def seq_data_iter_random(corpus, batch_size, num_steps): #@save """使用随机抽样生成一个小批量子序列""" # 从随机偏移量开始对序列进行分区,随机范围包括num_steps-1 corpus = corpus[random.randint(0, num_steps - 1):] # 减去1,是因为我们需要考虑标签 num_subseqs = (len(corpus) - 1) // num_steps # 长度为num_steps的子序列的起始索引 initial_indices = list(range(0, num_subseqs * num_steps, num_steps)) # 在随机抽样的迭代过程中, # 来自两个相邻的、随机的、小批量中的子序列不一定在原始序列上相邻 random.shuffle(initial_indices) def data(pos): # 返回从pos位置开始的长度为num_steps的序列 return corpus[pos: pos + num_steps] num_batches = num_subseqs // batch_size for i in range(0, batch_size * num_batches, batch_size): # 在这里,initial_indices包含子序列的随机起始索引 initial_indices_per_batch = initial_indices[i: i + batch_size] X = [data(j) for j in initial_indices_per_batch] Y = [data(j + 1) for j in initial_indices_per_batch] yield np.array(X), np.array(Y) my_seq = list(range(35)) for X, Y in seq_data_iter_random(my_seq, batch_size=2, num_steps=5): print('X: ', X, '\nY:', Y) def seq_data_iter_sequential(corpus, batch_size, num_steps): #@save """使用顺序分区生成一个小批量子序列""" # 从随机偏移量开始划分序列 offset = random.randint(0, num_steps) num_tokens = ((len(corpus) - offset - 1) // batch_size) * batch_size Xs = np.array(corpus[offset: offset + num_tokens]) Ys = np.array(corpus[offset + 1: offset + 1 + num_tokens]) Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1) num_batches = Xs.shape[1] // num_steps for i in range(0, num_steps * num_batches, num_steps): X = Xs[:, i: i + num_steps] Y = Ys[:, i: i + num_steps] yield X, Y for X, Y in seq_data_iter_sequential(my_seq, batch_size=2, num_steps=5): print('X: ', X, '\nY:', Y) class SeqDataLoader: #@save """加载序列数据的迭代器""" def __init__(self, batch_size, num_steps, use_random_iter, max_tokens): if use_random_iter: self.data_iter_fn = d2l.seq_data_iter_random else: self.data_iter_fn = d2l.seq_data_iter_sequential self.corpus, self.vocab = d2l.load_corpus_time_machine(max_tokens) self.batch_size, self.num_steps = batch_size, num_steps def __iter__(self): return self.data_iter_fn(self.corpus, self.batch_size, self.num_steps) def load_data_time_machine(batch_size, num_steps, #@save use_random_iter=False, max_tokens=10000): """返回时光机器数据集的迭代器和词表""" data_iter = SeqDataLoader( batch_size, num_steps, use_random_iter, max_tokens) return data_iter, data_iter.vocab
0.364099
0.909224
# 参考・引用 [House Prices: Advanced Regression Techniques](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) [Comprehensive data exploration with Python](https://www.kaggle.com/pmarcelino/comprehensive-data-exploration-with-python) [Kaggle Tutorial : House Prices](https://www.kaggle.com/takizawa/kaggle-tutorial-house-prices?scriptVersionId=1081693) # 概要 1. モデルの作成 - モデル作成の為の準備(ライブラリのインポート、データの読込・確認) - 線形回帰によるモデルの作成 - モデルの評価 2. モデルの改善 - データの分析 - データの整備 - アルゴリズムの変更 ※本来ならば最初のモデル作成の前にデータの分析や整備を行うべきだが、学習内容をシンプルにする為に、あえて上記の手順で実施 # 実行環境について このPython 3環境には、多くの有用な分析ライブラリがインストールされていて、Docker Imageで定義されています。[Docker Image 参照先リンク](https://github.com/kaggle/docker-python) # ライブラリについて 機械学習の実装を行うに当たって、役に立つライブラリについて説明。 ## Pandas データ解析を支援する機能を提供するライブラリ。数表や時系列データを操作するためのデータ構造と演算を提供。 ### 特徴 - データ操作のための高速で効率的なDataFrame (行列型) オブジェクト - メモリ内のデータ構造と複数のフォーマット(CSV, TXT, xls, xlsx, )のデータ間で相互に読み書きするためのツール群 - データ処理(カウント, 集計, 変換, 欠損値処理, マージ, 結合, etc...) - 時系列データ処理(日, 週, 月, 四半期, 年) ## Matplotlib(mpl_toolkits含む) グラフ描画の為のライブラリ。 ### グラフの種類 - 折れ線グラフ - 散布図 - ヒストグラム - ヒートマップ - 3Dグラフ ## Seaborn Matplotlibの機能を**より美しく、またより簡単**に実現するためのライブラリ。 ### グラフの種類 - 折れ線グラフ - 散布図 - ヒストグラム - ヒートマップ ## Numpy プログラミング言語Pythonにおいて数値計算を効率的に行うためのライブラリ。 効率的な数値計算を行うための型付きの多次元配列(例えばベクトルや行列など)のサポートをPythonに加えるとともに、それらを操作するための大規模な高水準の数学関数ライブラリを提供。 ## scikit-learn Pythonのオープンソース機械学習ライブラリ。 Pythonの数値計算ライブラリのNumPyとSciPyとやり取りするよう設計されている。 ``` # No.1 # ライブラリのインポート import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import seaborn as sns import numpy as np from sklearn.linear_model import LinearRegression # Jupyter Notebookの中でインライン表示する場合の設定(これが無いと別ウィンドウでグラフが開く) %matplotlib inline ``` # 1. モデルの作成 ## Kaggle上に保存されているデータの確認 画面右上の < ボタンをクリック後、Dataタブをクリックすると、本環境に保存されているデータの一覧が表示される。 ### Kaggle上に保存されているデータの読み込み ファイルパスは ../input/[読み込みたいデータファイル] Pandasでデータを読み込むとDataFrameというオブジェクトができる。 DataFrameとは簡単に言うと行列データで、機械学習を行う際に扱いやすいデータ形式。 ``` # No.2 # データの読み込み df = pd.read_csv("../input/train.csv") # No.3 # 読み込んだデータの確認 df # No.4 # 先頭5行の確認 df.head() ``` ## 線形回帰分析での予測 - カラム[OverallQual]と線形回帰分析を用いて学習処理を実施する - カラム[OverallQual] : Rates the overall material and finish of the house(家全体の材料と仕上げの評価) - 線形回帰分析の数式 : y = θ0 + θ1X1 + θ2X2 + θ3X3 + .... 
+ θX - 本処理の数式 : y[SalesPrice] = θ0[y切片] + θ1[傾き] * X1[OverallQual] ``` # No.5 # XにOverallQual、yにSalePriceをセット X = df[["OverallQual"]].values y = df["SalePrice"].values # アルゴリズムに線形回帰(Linear Regression)を採用 slr = LinearRegression() # fit関数でモデル作成 slr.fit(X,y) # 偏回帰係数(回帰分析において得られる回帰方程式の各説明変数の係数)を出力 # 偏回帰係数はscikit-learnのcoefで取得 print('傾き:{0}'.format(slr.coef_[0])) # y切片(直線とy軸との交点)を出力 # 余談:x切片もあり、それは直線とx軸との交点を指す print('y切片: {0}'.format(slr.intercept_)) ``` ## 線形回帰から導き出される数式 y[SalesPrice] = -96206.07951476038[y切片] + 45435.8025930994[傾き] * X[OverallQual] ## 線形回帰から導き出される数式とデータセットを図示 - plt.scatter(X, y) : 散布図 - plt.plot(X,slr.predict(X),color='red') : 折れ線グラフ ``` # No.6 # 散布図を描画 plt.scatter(X,y) # 折れ線グラフを描画 plt.plot(X,slr.predict(X),color='red') # 表示 plt.show() ``` ## テストデータを用いて予測 ここまでの処理でモデルが作成されましたので、テスト(モデル評価用)データを用いて予測を行います。 ``` # No.7 # テストデータの読込 df_test = pd.read_csv("../input/test.csv") # No.8 # テストデータの内容確認(評価用のデータなので、SalePriceはない) df_test.head() # No.9 # テストデータの OverallQual の値をセット X_test = df_test[["OverallQual"]].values # 学習済みのモデルから予測した結果をセット y_test_pred = slr.predict(X_test) # No.10 # 学習済みのモデルから予測した結果を出力 y_test_pred ``` ## Kaggleにサブミットするファイル形式に変換 作成したモデルをKaggleに提出して評価を受ける為に、[決められた提出フォーマット](https://www.kaggle.com/c/house-prices-advanced-regression-techniques#evaluation)に変換します。 - Id、SalePriceの2列のファイル ``` # No.11 # df_testに SalePrice カラムを追加し、学習済みのモデルから予測した結果をセット df_test["SalePrice"] = y_test_pred # No.12 # df_testの先頭5行を確認 df_test.head() # No.13 # Id, SalePriceの2列だけ表示 df_test[["Id","SalePrice"]].head() # No.14 # Id, SalePriceの2列だけのファイルに変換 df_test[["Id","SalePrice"]].to_csv("submission.csv",index=False) ``` ## Kaggleへの提出と評価 ### 提出ファイルの作成 画面右上の Commit ボタンをクリック。 そうすることで、Kaggleの所定位置にノートブックで作成されたcsvデータがセットされる。 ※csv作成処理が複数コーディングされている場合は、最後にコーディングされたcsv作成処理で作成されたcsvデータがセットされる ### ファイルの提出と評価 **2018.09.28 提出方法の変更に伴う修正** ~~画面右上の > ボタンをクリックして、Versions タブをクリックして、Outputタブをクリック。 その中にある、 Submit to Competitionをクリック。~~ Kernelのトップ画面(エディター画面に遷移する前の画面)に戻りOutputタブをクリック。 その中にある、 Submit to Competitionをクリック。 score:1.16083(誤差の大きさ) # 2. 
モデルの改善 最初に作成したモデルは、工夫もなくデータを読み込み、scikit-learnで学習させたもの。 ここからは、予測したいデータと関連の深いデータは何か?といった調査や、「データの前処理」と呼ばれるデータの加工を行い、モデルの精度をあげる為の作業を実施。 その為にはデータを理解することが必要。各変数を見て、その意味を理解し、この問題との関連性を調査。時間がかかるが大事な作業。 ## カラムの把握 カラムの一覧を表示し、それぞれの内容を把握。そこからSalePriceに影響を与えるデータは何か?という事を、仮説をたてて考える。 今回使用しているデータの詳しい説明は`../input/data_description.txt`にある。 ``` # No.15 # カラムの一覧表示 df.columns ``` ## SalesPriceを知る ### 基本統計量の表示 |項目名|意味| |--|--| |count|データ件数| |mean|平均| |std|標準偏差| |min|最小値| |25%|第1四分位数| |50%|第2四分位数| |75%|第3四分位数| |max|最大値| - 四分位数:データを大きさの順に並べたときに下から25%に位置する値・50%に位置する値・75%に位置する値のことをいう。[詳しくはこちら](https://atarimae.biz/archives/19162) - 標準偏差:データのばらつきの大きさを表わす指標。[詳しくはこちら](https://atarimae.biz/archives/5379) ``` # No.16 # 基本統計量の表示 df.SalePrice.describe() ``` ## SalesPriceをヒストグラムで分析 - 横軸にSalesPrice - 縦軸に割合 ``` # No.17 # ヒストグラムの表示 sns.distplot(df.SalePrice) ``` - 160,000$位のデータの割合が最も多い - 極端に高い金額のものが存在する - 正規分布(グラフにしたときに数値の大半が中央に集中し、左右対称の釣り鐘型に「分布」するデータ)ではない ### 機械学習における正規分布の効果とは - 予測しようとしている値が正規分布に従った方が精度がよくなる ## SalePriceと相関係数の高い上位10個のデータを調査 ### 相関係数とは 相関関係を指し示す係数のこと ### 相関関係とは 片方の変数が変化すれば、もう一方の変数も変化するという、2つの変数間の関係性をあらわしているもの 2種類のデータの(直線的な)関係性の強さを −1 から +1 の間の値で表しており、 正(+)の相関の場合は、片方の変数が大きくなればもう片方の変数も大きくなり、 負(-)の相関の場合は、片方の変数が大きくなればもう片方の変数も小さくなる なお、相関関係の強弱については下記の通り |相関係数|相関の強さ| |--|--| |±0.2 ~ ±0.4|弱い相関がある| |±0.4 ~ ±0.7|相関がある| |±0.7 ~ ±0.9|強い相関がある| |±0.9 ~ ±1.0|(ほぼ)完全な相関がある| ### 注意点 相関関係は因果関係と同じものではない。疑似相関の場合がある - 因果関係 : 2つの変数の間に原因と結果の関係があること - 疑似相関 : 相関係数は高いが、2つの変数の間に因果関係がないこと 疑似相関については[Wiki](https://ja.wikipedia.org/wiki/%E6%93%AC%E4%BC%BC%E7%9B%B8%E9%96%A2)の例が分かりやすい ``` # No.18 # 相関係数を算出 corrmat = df.corr() corrmat # No.19 # 算出した相関係数を相関が高い順に上位10個のデータを表示 # ヒートマップに表示させるカラムの数 k = 10 # SalesPriceとの相関が大きい上位10個のカラム名を取得 cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index # SalesPriceとの相関が大きい上位10個のカラムを対象に相関を算出 # .T(Trancepose[転置行列])を行う理由は、corrcoefで相関を算出する際に、各カラムの値を行毎にまとめなければならない為 cm = np.corrcoef(df[cols].values.T) # ヒートマップのフォントサイズを指定 sns.set(font_scale=1.25) # 算出した相関データをヒートマップで表示 hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values) plt.show() # corrmat.nlargest(k, 'SalePrice') : SalePriceの値が大きい順にkの行数だけ抽出した行と全列の行列データ # corrmat.nlargest(k, 'SalePrice')['SalePrice'] : SalePriceの値が大きい順にkの行数だけ抽出した抽出した行とSalePrice列だけ抽出したデータ # corrmat.nlargest(k, 'SalePrice')['SalePrice'].index : 行の項目を抽出。データの中身はIndex(['SalePrice', 'OverallQual', ... , 'YearBuilt'], dtype='object') # cols.values : カラムの値(SalesPrice, OverallQual, ...) 
``` ヒートマップからSalePriceと相関が高いのは、OverallQual と GrLivArea だと分かる。 ## 相関の強い OverallQual と GrLivArea の散布図の表示 ``` # No.20 # 散布図の表示 sns.set() cols = ['SalePrice', 'OverallQual', 'GrLivArea'] sns.pairplot(df[cols], size = 2.5) plt.show() ``` 一番右上のSalesPriceとGrLivAreaの散布図を見てみると、傾向から大幅に外れているデータが2つ存在。 このデータを不適切な学習データ、もしくは異常値データとみなして削除。 ``` # No.21 # 数値の大きい上位2位のデータを表示 df.sort_values(by = 'GrLivArea', ascending = False)[:2] #ascending=False は'GrLivArea'の高いデータから順に並べ替えられる # No.22 # No.21で判明したデータのIdの値を指定して削除 df = df.drop(index = df[df['Id'] == 1299].index) df = df.drop(index = df[df['Id'] == 524].index) ``` 削除されているかを確認。 ``` # No.23 # 散布図の表示 sns.set() cols = ['SalePrice', 'OverallQual', 'GrLivArea'] sns.pairplot(df[cols], size = 2.5) plt.show() ``` ## GrLivAreaを使用してモデルを学習 ``` # No.24 # XにGrLivArea、yにSalePriceをセット X = df[["GrLivArea"]].values y = df["SalePrice"].values # アルゴリズムに線形回帰(Linear Regression)を採用 slr = LinearRegression() # fit関数で学習開始 slr.fit(X,y) # 偏回帰係数(回帰分析において得られる回帰方程式の各説明変数の係数)を出力 # 偏回帰係数はscikit-learnのcoefで取得 print('傾き:{0}'.format(slr.coef_[0])) # y切片(直線とy軸との交点)を出力 # 余談:x切片もあり、それは直線とx軸との交点を指す print('y切片: {0}'.format(slr.intercept_)) # No.25 # 散布図を描画 plt.scatter(X,y) # 折れ線グラフを描画 plt.plot(X,slr.predict(X),color='red') # 表示 plt.show() # No.26 # テストデータの読込 df_test = pd.read_csv("../input/test.csv") # No.27 # テストデータの内容確認(最初にモデルを作成した際に追加したSalePriceが消えている事) df_test.head() # No.28 # テストデータの GrLivArea の値をセット X_test = df_test[["GrLivArea"]].values # 学習済みのモデルから予測した結果をセット y_test_pred = slr.predict(X_test) # No.29 # 学習済みのモデルから予測した結果を出力 y_test_pred # No.30 # df_testに SalePrice カラムを追加し、学習済みのモデルから予測した結果をセット df_test["SalePrice"] = y_test_pred # No.31 # Id, SalePriceの2列だけ表示 df_test[["Id","SalePrice"]].head() # No.32 # Id, SalePriceの2列だけのファイルに変換 df_test[["Id","SalePrice"]].to_csv("submission.csv",index=False) ``` ## Kaggleへの提出と評価 ### 提出ファイルの作成 画面右上の Commit ボタンをクリック。 そうすることで、Kaggleの所定位置にノートブックで作成されたcsvデータがセットされる。 ※csv作成処理が複数コーディングされている場合は、最後にコーディングされたcsv作成処理で作成されたcsvデータがセットされる ### ファイルの提出と評価 **2018.09.28 提出方法の変更に伴う修正** ~~画面右上の > ボタンをクリックして、Versions タブをクリックして、Outputタブをクリック。 その中にある、 Submit to Competitionをクリック。~~ Kernelのトップ画面(エディター画面に遷移する前の画面)に戻りOutputタブをクリック。 その中にある、 Submit to Competitionをクリック。 score:0.28783(OverallQualの場合は1.16083) ## 重回帰分析の採用 SalePriceと相関の高いOverallQualとGrLivAreaを説明変数に使用 ``` # No.33 # XにGrLivArea、yにSalePriceをセット X = df[["OverallQual", "GrLivArea"]].values y = df["SalePrice"].values # アルゴリズムに線形回帰(Linear Regression)を採用 slr = LinearRegression() # fit関数で学習開始 slr.fit(X,y) # 偏回帰係数(回帰分析において得られる回帰方程式の各説明変数の係数)を出力 # 偏回帰係数はscikit-learnのcoefで取得 print('傾き:{0}'.format(slr.coef_)) a1, a2 = slr.coef_ # y切片(直線とy軸との交点)を出力 # 余談:x切片もあり、それは直線とx軸との交点を指す print('y切片: {0}'.format(slr.intercept_)) b = slr.intercept_ # No.34 # 3D描画(散布図の描画) x, y, z = np.array(df["OverallQual"]), np.array(df["GrLivArea"]), np.array(df["SalePrice"].values) fig = plt.figure() ax = Axes3D(fig) ax.scatter3D(np.ravel(x), np.ravel(y), np.ravel(z)) # 3D描画(回帰平面の描画) # np.arange(0, 10, 2)は# 初項0,公差2で終点が10の等差数列(array([ 2, 4, 6, 8, 10])) X, Y = np.meshgrid(np.arange(0, 12, 2), np.arange(0, 6000, 1000)) Z = a1 * X + a2 * Y + b ax.plot_surface(X, Y, Z, alpha = 0.5, color = "red") #alphaで透明度を指定 ax.set_xlabel("OverallQual") ax.set_ylabel("GrLivArea") ax.set_zlabel("SalePrice") plt.show() # No.35 # テストデータの読込 df_test = pd.read_csv("../input/test.csv") # No.36 # テストデータの内容確認(追加したSalePriceが消えている事) df_test.head() # No.37 # テストデータの OverallQual と GrLivArea の値をセット X_test = df_test[["OverallQual", "GrLivArea"]].values # 学習済みのモデルから予測した結果をセット y_test_pred = slr.predict(X_test) 
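# (Added sketch, not in the original notebook) The fitted plane can also be evaluated by hand
# from the coefficients extracted in No.33: yhat = a1*OverallQual + a2*GrLivArea + b.
# This should reproduce slr.predict(X_test) up to floating-point error.
y_manual = a1 * X_test[:, 0] + a2 * X_test[:, 1] + b
print(np.allclose(y_manual, slr.predict(X_test)))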
# No.38 # 学習済みのモデルから予測した結果を出力 y_test_pred # No.39 # df_testに SalePrice カラムを追加し、学習済みのモデルから予測した結果をセット df_test["SalePrice"] = y_test_pred # No.40 # Id, SalePriceの2列だけ表示 df_test[["Id","SalePrice"]].head() # No.41 # Id, SalePriceの2列だけのファイルに変換 df_test[["Id","SalePrice"]].to_csv("submission.csv",index=False) ``` ## Kaggleへの提出と評価 ### 提出ファイルの作成 画面右上の Commit ボタンをクリック。 そうすることで、Kaggleの所定位置にノートブックで作成されたcsvデータがセットされる。 ※csv作成処理が複数コーディングされている場合は、最後にコーディングされたcsv作成処理で作成されたcsvデータがセットされる ### ファイルの提出と評価 **2018.09.28 提出方法の変更に伴う修正** ~~画面右上の > ボタンをクリックして、Versions タブをクリックして、Outputタブをクリック。 その中にある、 Submit to Competitionをクリック。~~ Kernelのトップ画面(エディター画面に遷移する前の画面)に戻りOutputタブをクリック。 その中にある、 Submit to Competitionをクリック。 score:0.84138(前のscore:0.28783)
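An added note (in English, not part of the original notebook): each evaluation above requires a Kaggle submission, but the competition score — RMSE on the logarithm of SalePrice — can be roughly estimated locally with a hold-out split. A minimal sketch, assuming the same `../input/train.csv` file and the same two outlier rows removed above:

```
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("../input/train.csv")
df = df.drop(index=df[df['Id'] == 1299].index)   # same outliers removed in No.22
df = df.drop(index=df[df['Id'] == 524].index)

X = df[["OverallQual", "GrLivArea"]].values
y = df["SalePrice"].values
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_val)
pred = np.clip(pred, 1, None)                    # guard against non-positive predictions before log
rmsle = np.sqrt(mean_squared_error(np.log(y_val), np.log(pred)))
print(f'hold-out RMSLE: {rmsle:.5f}')
```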
github_jupyter
# No.1 # ライブラリのインポート import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import seaborn as sns import numpy as np from sklearn.linear_model import LinearRegression # Jupyter Notebookの中でインライン表示する場合の設定(これが無いと別ウィンドウでグラフが開く) %matplotlib inline # No.2 # データの読み込み df = pd.read_csv("../input/train.csv") # No.3 # 読み込んだデータの確認 df # No.4 # 先頭5行の確認 df.head() # No.5 # XにOverallQual、yにSalePriceをセット X = df[["OverallQual"]].values y = df["SalePrice"].values # アルゴリズムに線形回帰(Linear Regression)を採用 slr = LinearRegression() # fit関数でモデル作成 slr.fit(X,y) # 偏回帰係数(回帰分析において得られる回帰方程式の各説明変数の係数)を出力 # 偏回帰係数はscikit-learnのcoefで取得 print('傾き:{0}'.format(slr.coef_[0])) # y切片(直線とy軸との交点)を出力 # 余談:x切片もあり、それは直線とx軸との交点を指す print('y切片: {0}'.format(slr.intercept_)) # No.6 # 散布図を描画 plt.scatter(X,y) # 折れ線グラフを描画 plt.plot(X,slr.predict(X),color='red') # 表示 plt.show() # No.7 # テストデータの読込 df_test = pd.read_csv("../input/test.csv") # No.8 # テストデータの内容確認(評価用のデータなので、SalePriceはない) df_test.head() # No.9 # テストデータの OverallQual の値をセット X_test = df_test[["OverallQual"]].values # 学習済みのモデルから予測した結果をセット y_test_pred = slr.predict(X_test) # No.10 # 学習済みのモデルから予測した結果を出力 y_test_pred # No.11 # df_testに SalePrice カラムを追加し、学習済みのモデルから予測した結果をセット df_test["SalePrice"] = y_test_pred # No.12 # df_testの先頭5行を確認 df_test.head() # No.13 # Id, SalePriceの2列だけ表示 df_test[["Id","SalePrice"]].head() # No.14 # Id, SalePriceの2列だけのファイルに変換 df_test[["Id","SalePrice"]].to_csv("submission.csv",index=False) # No.15 # カラムの一覧表示 df.columns # No.16 # 基本統計量の表示 df.SalePrice.describe() # No.17 # ヒストグラムの表示 sns.distplot(df.SalePrice) # No.18 # 相関係数を算出 corrmat = df.corr() corrmat # No.19 # 算出した相関係数を相関が高い順に上位10個のデータを表示 # ヒートマップに表示させるカラムの数 k = 10 # SalesPriceとの相関が大きい上位10個のカラム名を取得 cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index # SalesPriceとの相関が大きい上位10個のカラムを対象に相関を算出 # .T(Trancepose[転置行列])を行う理由は、corrcoefで相関を算出する際に、各カラムの値を行毎にまとめなければならない為 cm = np.corrcoef(df[cols].values.T) # ヒートマップのフォントサイズを指定 sns.set(font_scale=1.25) # 算出した相関データをヒートマップで表示 hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values) plt.show() # corrmat.nlargest(k, 'SalePrice') : SalePriceの値が大きい順にkの行数だけ抽出した行と全列の行列データ # corrmat.nlargest(k, 'SalePrice')['SalePrice'] : SalePriceの値が大きい順にkの行数だけ抽出した抽出した行とSalePrice列だけ抽出したデータ # corrmat.nlargest(k, 'SalePrice')['SalePrice'].index : 行の項目を抽出。データの中身はIndex(['SalePrice', 'OverallQual', ... , 'YearBuilt'], dtype='object') # cols.values : カラムの値(SalesPrice, OverallQual, ...) 
# No.20 # 散布図の表示 sns.set() cols = ['SalePrice', 'OverallQual', 'GrLivArea'] sns.pairplot(df[cols], size = 2.5) plt.show() # No.21 # 数値の大きい上位2位のデータを表示 df.sort_values(by = 'GrLivArea', ascending = False)[:2] #ascending=False は'GrLivArea'の高いデータから順に並べ替えられる # No.22 # No.21で判明したデータのIdの値を指定して削除 df = df.drop(index = df[df['Id'] == 1299].index) df = df.drop(index = df[df['Id'] == 524].index) # No.23 # 散布図の表示 sns.set() cols = ['SalePrice', 'OverallQual', 'GrLivArea'] sns.pairplot(df[cols], size = 2.5) plt.show() # No.24 # XにGrLivArea、yにSalePriceをセット X = df[["GrLivArea"]].values y = df["SalePrice"].values # アルゴリズムに線形回帰(Linear Regression)を採用 slr = LinearRegression() # fit関数で学習開始 slr.fit(X,y) # 偏回帰係数(回帰分析において得られる回帰方程式の各説明変数の係数)を出力 # 偏回帰係数はscikit-learnのcoefで取得 print('傾き:{0}'.format(slr.coef_[0])) # y切片(直線とy軸との交点)を出力 # 余談:x切片もあり、それは直線とx軸との交点を指す print('y切片: {0}'.format(slr.intercept_)) # No.25 # 散布図を描画 plt.scatter(X,y) # 折れ線グラフを描画 plt.plot(X,slr.predict(X),color='red') # 表示 plt.show() # No.26 # テストデータの読込 df_test = pd.read_csv("../input/test.csv") # No.27 # テストデータの内容確認(最初にモデルを作成した際に追加したSalePriceが消えている事) df_test.head() # No.28 # テストデータの GrLivArea の値をセット X_test = df_test[["GrLivArea"]].values # 学習済みのモデルから予測した結果をセット y_test_pred = slr.predict(X_test) # No.29 # 学習済みのモデルから予測した結果を出力 y_test_pred # No.30 # df_testに SalePrice カラムを追加し、学習済みのモデルから予測した結果をセット df_test["SalePrice"] = y_test_pred # No.31 # Id, SalePriceの2列だけ表示 df_test[["Id","SalePrice"]].head() # No.32 # Id, SalePriceの2列だけのファイルに変換 df_test[["Id","SalePrice"]].to_csv("submission.csv",index=False) # No.33 # XにGrLivArea、yにSalePriceをセット X = df[["OverallQual", "GrLivArea"]].values y = df["SalePrice"].values # アルゴリズムに線形回帰(Linear Regression)を採用 slr = LinearRegression() # fit関数で学習開始 slr.fit(X,y) # 偏回帰係数(回帰分析において得られる回帰方程式の各説明変数の係数)を出力 # 偏回帰係数はscikit-learnのcoefで取得 print('傾き:{0}'.format(slr.coef_)) a1, a2 = slr.coef_ # y切片(直線とy軸との交点)を出力 # 余談:x切片もあり、それは直線とx軸との交点を指す print('y切片: {0}'.format(slr.intercept_)) b = slr.intercept_ # No.34 # 3D描画(散布図の描画) x, y, z = np.array(df["OverallQual"]), np.array(df["GrLivArea"]), np.array(df["SalePrice"].values) fig = plt.figure() ax = Axes3D(fig) ax.scatter3D(np.ravel(x), np.ravel(y), np.ravel(z)) # 3D描画(回帰平面の描画) # np.arange(0, 10, 2)は# 初項0,公差2で終点が10の等差数列(array([ 2, 4, 6, 8, 10])) X, Y = np.meshgrid(np.arange(0, 12, 2), np.arange(0, 6000, 1000)) Z = a1 * X + a2 * Y + b ax.plot_surface(X, Y, Z, alpha = 0.5, color = "red") #alphaで透明度を指定 ax.set_xlabel("OverallQual") ax.set_ylabel("GrLivArea") ax.set_zlabel("SalePrice") plt.show() # No.35 # テストデータの読込 df_test = pd.read_csv("../input/test.csv") # No.36 # テストデータの内容確認(追加したSalePriceが消えている事) df_test.head() # No.37 # テストデータの OverallQual と GrLivArea の値をセット X_test = df_test[["OverallQual", "GrLivArea"]].values # 学習済みのモデルから予測した結果をセット y_test_pred = slr.predict(X_test) # No.38 # 学習済みのモデルから予測した結果を出力 y_test_pred # No.39 # df_testに SalePrice カラムを追加し、学習済みのモデルから予測した結果をセット df_test["SalePrice"] = y_test_pred # No.40 # Id, SalePriceの2列だけ表示 df_test[["Id","SalePrice"]].head() # No.41 # Id, SalePriceの2列だけのファイルに変換 df_test[["Id","SalePrice"]].to_csv("submission.csv",index=False)
0.22194
0.942454
IMPORTING PACKAGES ``` import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt import sklearn import seaborn as sns sns.set_style("whitegrid") from sklearn import preprocessing from sklearn import datasets, linear_model from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.model_selection import cross_val_score from pandas import datetime import h5py from matplotlib.font_manager import FontProperties %matplotlib inline ``` READING DATA ``` train=pd.read_csv('train.csv') test=pd.read_csv('test.csv') store=pd.read_csv('stores.csv') feature=pd.read_csv('features.csv') ``` # EXPLORING DATA EXPLORING stores.csv ``` print(store.head().append(store.tail()),"\n") print("Structure of Store:\n",store.shape, "\n") print("Number of missing values:\n",store.isnull().sum().sort_values(ascending=False),"\n") ``` EXPLORING features.csv ``` print(feature.head().append(feature.tail()),"\n") print("Structure of Feature: ",feature.shape,"\n") print("Summary Statistic:\n",feature.describe(),"\n") print("Number of missing values:\n",feature.isnull().sum().sort_values(ascending=False),"\n") ``` FINDING OUT THE MISSING PERCENTAGE OF DATA ACROSS EACH FEATURE ``` feature_percent_missing = feature.isnull().sum()*100/len(feature) feature_data_type = feature.dtypes feature_summary = pd.DataFrame({"Percent_missing": feature_percent_missing.round(2), "Datatypes": feature_data_type}) feature_summary ``` EXPLORING train.csv ``` print(train.head().append(train.tail()),"\n") print("Structure of train:\n",train.shape,"\n") print("Summary Statistic:\n",train.describe(),"\n") print("Number of missing values:\n",train.isnull().sum().sort_values(ascending=False),"\n") ``` EXPLORING test.csv ``` print(test.head().append(test.tail()),"\n") print("Structure of test:\n",test.shape,"\n") print("Summary Statistic:\n",test.describe(),"\n") print("Number of missing values:\n",test.isnull().sum().sort_values(ascending=False),"\n") ``` # JOINING TABLES ``` combined_train = pd.merge(train, store, how="left", on="Store") combined_test = pd.merge(test, store, how="left", on="Store") print(combined_train.head(),"\n", combined_train.shape,"\n") print(combined_test.head(),"\n", combined_test.shape,"\n") combined_train = pd.merge(combined_train, feature, how = "inner", on=["Store","Date"]) combined_test = pd.merge(combined_test, feature, how = "inner", on=["Store","Date"]) print(combined_train.head()) print(combined_test.head()) ``` DROPING THE EXTRA IsHoliday_y FROM BOTH THE COMBINED DATASETS ``` combined_train = combined_train.drop(["IsHoliday_y"], axis=1) combined_test = combined_test.drop(["IsHoliday_y"], axis=1) print(combined_train.head()) print(combined_test.head()) combined_train.describe() combined_test.describe() ``` # DATA PREPROCESSING ### REPLACING MISSING VALUES BY 0 CHECKING FOR THE TOTAL NUMBER OF MISSING VALUES IN combined_train AND combined_test AND THEN REPLACING THEM WITH 0 ``` print(combined_test.isnull().sum()) print(combined_train.isnull().sum()) processed_train = combined_train.fillna(0) processed_test = combined_test.fillna(0) ``` ### REPLACING NEGATIVE MARKDOWN EVENTS BY 0 IN processed_train AND processed_test ``` processed_train.loc[processed_train['Weekly_Sales'] < 0.0,'Weekly_Sales'] = 0.0 processed_train.loc[processed_train['MarkDown2'] < 0.0,'MarkDown2'] = 0.0 processed_train.loc[processed_train['MarkDown3'] < 0.0,'MarkDown3'] = 0.0 processed_train.describe() processed_test.loc[processed_test['MarkDown1'] < 
0.0,'MarkDown1'] = 0.0 processed_test.loc[processed_test['MarkDown2'] < 0.0,'MarkDown2'] = 0.0 processed_test.loc[processed_test['MarkDown3'] < 0.0,'MarkDown3'] = 0.0 processed_test.loc[processed_test['MarkDown5'] < 0.0,'MarkDown5'] = 0.0 processed_test.describe() ``` ### PERFORMING ONE HOT ENCODING FOR CATEGORICAL DATA AND BOOLEAN DATA ``` print(processed_train.dtypes, processed_test.dtypes) cat_col = ['IsHoliday_x','Type'] for col in cat_col: lbl = preprocessing.LabelEncoder() lbl.fit(processed_train[col].values.astype('str')) processed_train[col] = lbl.transform(processed_train[col].values.astype('str')) for col in cat_col: lbl = preprocessing.LabelEncoder() lbl.fit(processed_test[col].values.astype('str')) processed_test[col] = lbl.transform(processed_test[col].values.astype('str')) processed_train.to_csv("Processed_data/processed_train.csv", index=False) processed_test.to_csv("Processed_data/processed_test.csv", index=False) processed_train.head() ``` REARRANGING THE RESPONSE COLUMN (Weekly_Sales) ``` processed_train = processed_train[['Store', 'Dept', 'Date', 'Unemployment', 'IsHoliday_x', 'Type', 'Size', 'Temperature', 'Fuel_Price', 'MarkDown1', 'MarkDown2', 'MarkDown3', 'MarkDown4', 'MarkDown5', 'CPI', 'Weekly_Sales']] processed_train.to_csv("Processed_data/processed_train.csv", index=False) processed_train.head() ``` # VISUALIZATION OF HISTORIC DATA ``` store['Type'].value_counts().plot(kind='bar') plt.title('Total number of each type of stores') plt.xlabel('Type') plt.ylabel('Number of Stores') plt.show() a=sns.catplot(x="Type", y="Size", data=store); a.fig.suptitle('Sizes of each type of store') a=train[['Store', 'Dept']].drop_duplicates() a.plot(kind='scatter', x='Store',y='Dept') plt.title('Departments across every store') a=processed_train[['Weekly_Sales', 'Size']].drop_duplicates() a.plot(kind='scatter', x='Size',y='Weekly_Sales',color='red') plt.title('Weekly Sales for stores of every size') a=sns.catplot(x="Type", y="Weekly_Sales", data=processed_train); a.fig.suptitle('Weekly Sales for stores of every type') a=sns.catplot(x="IsHoliday_x", y="Weekly_Sales", data=processed_train); a.fig.suptitle('Weekly Sales on Holiday and Non-Holiday weeeks') a=sns.catplot(x="Dept", y="Weekly_Sales", data=processed_train); a.fig.suptitle('Weekly Sales across every department') a=sns.catplot(x="Fuel_Price", y="Weekly_Sales", data=processed_train); a.fig.suptitle('Weekly Sales across Fuel Price') a=sns.catplot(x="Temperature", y="Weekly_Sales", data=processed_train); a.fig.suptitle('Weekly Sales across average temperature of the week') a=sns.catplot(x="CPI", y="Weekly_Sales", data=processed_train); a.fig.suptitle('Weekly Sales across Consumer Price Index') a=sns.catplot(x="Unemployment", y="Weekly_Sales", data=processed_train); a.fig.suptitle('Weekly Sales across Unemployment Rates') corr=processed_train.corr() sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns) cmap = cmap=sns.diverging_palette(5, 250, as_cmap=True) def magnify(): return [dict(selector="th", props=[("font-size", "7pt")]), dict(selector="td", props=[('padding', "0em 0em")]), dict(selector="th:hover", props=[("font-size", "12pt")]), dict(selector="tr:hover td:hover", props=[('max-width', '200px'), ('font-size', '12pt')]) ] corr.style.background_gradient(cmap, axis=1)\ .set_properties(**{'max-width': '80px', 'font-size': '10pt'})\ .set_caption("Hover to magify")\ .set_precision(2)\ .set_table_styles(magnify()) 
dfabc=processed_train[['Date','Store','Dept','IsHoliday_x','Unemployment','Fuel_Price','Temperature','Type','MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5','CPI','Weekly_Sales']] dfabc.head() dfabc["MarkDownValue"] = dfabc["MarkDown1"].add(dfabc["MarkDown2"]) dfabc["MarkDownValue"] = dfabc["MarkDownValue"].add(dfabc["MarkDown3"]) dfabc["MarkDownValue"] = dfabc["MarkDownValue"].add(dfabc["MarkDown4"]) dfabc["MarkDownValue"] = dfabc["MarkDownValue"].add(dfabc["MarkDown5"]) dfabc.head() dfabc = dfabc[dfabc.MarkDownValue != 0.0] dfabc.head() dfabc.shape processed_test.head() processed_test.shape dfdef=processed_test[['Store','Dept','IsHoliday_x','Type','MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5','CPI']] dfdef["MarkDownValue"] = dfdef["MarkDown1"].add(dfdef["MarkDown2"]) dfdef["MarkDownValue"] = dfdef["MarkDownValue"].add(dfdef["MarkDown3"]) dfdef["MarkDownValue"] = dfdef["MarkDownValue"].add(dfdef["MarkDown4"]) dfdef["MarkDownValue"] = dfdef["MarkDownValue"].add(dfdef["MarkDown5"]) dfdef.head() dfdef = dfdef[dfdef.MarkDownValue != 0.0] dfdef.head() dfdef.shape dfx=dfabc dfx=pd.get_dummies(dfx, columns=['Dept','Store','Type']) dfx['Day']=dfx['Date'].str[0:2] dfx['Month']=dfx['Date'].str[3:5] dfx['Year']=dfx['Date'].str[6:10] dfx.head() dfx['Day']=pd.to_numeric(dfx['Day']) dfx['Month']=pd.to_numeric(dfx['Month']) dfx['Year']=pd.to_numeric(dfx['Year']) dfx.dtypes from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor regressor = DecisionTreeRegressor(max_depth=32, random_state=0) ptr=pd.get_dummies(processed_train, columns=['Dept','Store','Type']) ptr['Day']=ptr['Date'].str[0:2] ptr['Month']=ptr['Date'].str[3:5] ptr['Year']=ptr['Date'].str[6:10] ptr['Day']=pd.to_numeric(ptr['Day']) ptr['Month']=pd.to_numeric(ptr['Month']) ptr['Year']=pd.to_numeric(ptr['Year']) ptr.head() dfx.shape ptr.shape dfx.head() x=dfx[[#'Unemployment', 'IsHoliday_x', #'Size', #'Temperature', #'Fuel_Price', 'MarkDown1', 'MarkDown2', 'MarkDown3', 'MarkDown4', 'MarkDown5', #'CPI', #'Weekly_Sales', 'Dept_1', 'Dept_2', 'Dept_3', 'Dept_4', 'Dept_5', 'Dept_6', 'Dept_7', 'Dept_8', 'Dept_9', 'Dept_10', 'Dept_11', 'Dept_12', 'Dept_13', 'Dept_14', 'Dept_16', 'Dept_17', 'Dept_18', 'Dept_19', 'Dept_20', 'Dept_21', 'Dept_22', 'Dept_23', 'Dept_24', 'Dept_25', 'Dept_26', 'Dept_27', 'Dept_28', 'Dept_29', 'Dept_30', 'Dept_31', 'Dept_32', 'Dept_33', 'Dept_34', 'Dept_35', 'Dept_36', 'Dept_37', 'Dept_38', 'Dept_39', 'Dept_40', 'Dept_41', 'Dept_42', 'Dept_43', 'Dept_44', 'Dept_45', 'Dept_46', 'Dept_47', 'Dept_48', 'Dept_49', 'Dept_50', 'Dept_51', 'Dept_52', 'Dept_54', 'Dept_55', 'Dept_56', 'Dept_58', 'Dept_59', 'Dept_60', 'Dept_65', 'Dept_67', 'Dept_71', 'Dept_72', 'Dept_74', 'Dept_77', 'Dept_78', 'Dept_79', 'Dept_80', 'Dept_81', 'Dept_82', 'Dept_83', 'Dept_85', 'Dept_87', 'Dept_90', 'Dept_91', 'Dept_92', 'Dept_93', 'Dept_94', 'Dept_95', 'Dept_96', 'Dept_97', 'Dept_98', 'Dept_99', 'Store_1', 'Store_2', 'Store_3', 'Store_4', 'Store_5', 'Store_6', 'Store_7', 'Store_8', 'Store_9', 'Store_10', 'Store_11', 'Store_12', 'Store_13', 'Store_14', 'Store_15', 'Store_16', 'Store_17', 'Store_18', 'Store_19', 'Store_20', 'Store_21', 'Store_22', 'Store_23', 'Store_24', 'Store_25', 'Store_26', 'Store_27', 'Store_28', 'Store_29', 'Store_30', 'Store_31', 'Store_32', 'Store_33', 'Store_34', 'Store_35', 'Store_36', 'Store_37', 'Store_38', 'Store_39', 'Store_40', 'Store_41', 'Store_42', 'Store_43', 'Store_44', 'Store_45', 'Type_0', 'Type_1', 'Type_2', 'Day', 'Month', 
'Year']] y=dfx[['Weekly_Sales']] rf=RandomForestRegressor() scores=cross_val_score(rf,x,y,scoring='r2',cv=5) np.mean(scores) X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0) rf.fit(X_train,y_train) yhat=rf.predict(X_test) yhat y_test=np.asarray(y_test['Weekly_Sales']) y_test dataset = pd.DataFrame({'y_test': y_test, 'yhat': yhat}, columns=['y_test', 'yhat']) dataset.head() dataset.to_excel('result.xlsx') dataset.shape ```
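An added note: the notebook stops at exporting the `y_test`/`yhat` pairs without reporting an error metric for the hold-out split. A minimal sketch of closing that loop, assuming the `dataset` DataFrame and `X_test` built above are in scope; the weighted variant assumes the competition's weighted MAE gives holiday weeks a weight of 5, which should be verified against the competition rules:

```
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# plain hold-out metrics from the y_test / yhat pairs collected above
mae = mean_absolute_error(dataset['y_test'], dataset['yhat'])
rmse = np.sqrt(mean_squared_error(dataset['y_test'], dataset['yhat']))
r2 = r2_score(dataset['y_test'], dataset['yhat'])
print(f'MAE: {mae:,.2f}  RMSE: {rmse:,.2f}  R^2: {r2:.3f}')

# weighted MAE, assuming holiday weeks carry a weight of 5 (IsHoliday_x == 1 after label encoding)
weights = np.where(X_test['IsHoliday_x'].values == 1, 5, 1)
wmae = np.sum(weights * np.abs(dataset['y_test'].values - dataset['yhat'].values)) / weights.sum()
print(f'weighted MAE (holiday weight 5): {wmae:,.2f}')
```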
github_jupyter
0.455441
0.76649
# Chaikin Oscillator https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:chaikin_oscillator ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") # fix_yahoo_finance is used to fetch data import fix_yahoo_finance as yf yf.pdr_override() # input symbol = 'AMD' start = '2016-01-01' end = '2019-01-01' # Read data df = yf.download(symbol,start,end) # View Columns df.head() df['MF_Multiplier'] = (2*df['Adj Close']-df['Low']-df['High'])/(df['High']-df['Low']) df['MF_Volume'] = df['MF_Multiplier']*df['Volume'] df['ADL'] = df['MF_Volume'].cumsum() df = df.drop(['MF_Multiplier','MF_Volume'],axis=1) df['ADL_3_EMA'] = df['ADL'].ewm(ignore_na=False,span=3,min_periods=2,adjust=True).mean() df['ADL_10_EMA'] = df['ADL'].ewm(ignore_na=False,span=10,min_periods=9,adjust=True).mean() df['Chaikin_Oscillator'] = df['ADL_3_EMA'] - df['ADL_10_EMA'] df = df.drop(['ADL','ADL_3_EMA','ADL_10_EMA'],axis=1) df.head(20) fig = plt.figure(figsize=(14,10)) ax1 = plt.subplot(2, 1, 1) ax1.plot(df['Adj Close']) ax1.set_title('Stock '+ symbol +' Closing Price') ax1.set_ylabel('Price') ax1.legend(loc='best') ax2 = plt.subplot(2, 1, 2) ax2.plot(df['Chaikin_Oscillator'], label='Chaikin Oscillator', color='black') ax2.axhline(y=0, color='darkblue') ax2.text(s='Positive', x=df.index[0], y=1, verticalalignment='bottom', fontsize=14, color='green') ax2.text(s='Negative', x=df.index[0], y=1, verticalalignment='top', fontsize=14, color='red') ax2.grid() ax2.legend(loc='best') ax2.set_ylabel('Chaikin Oscillator') ax2.set_xlabel('Date') ``` ## Candlestick with Chaikin Oscillator ``` from matplotlib import dates as mdates import datetime as dt dfc = df.copy() dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close'] #dfc = dfc.dropna() dfc = dfc.reset_index() dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date)) dfc.head() from mpl_finance import candlestick_ohlc fig = plt.figure(figsize=(14,10)) ax1 = plt.subplot(2, 1, 1) candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0) ax1.xaxis_date() ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y')) ax1.grid(True, which='both') ax1.minorticks_on() ax1v = ax1.twinx() colors = dfc.VolumePositive.map({True: 'g', False: 'r'}) ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4) ax1v.axes.yaxis.set_ticklabels([]) ax1v.set_ylim(0, 3*df.Volume.max()) ax1.set_title('Stock '+ symbol +' Closing Price') ax1.set_ylabel('Price') ax2 = plt.subplot(2, 1, 2) ax2.plot(df['Chaikin_Oscillator'], label='Chaikin Oscillator', color='black') ax2.axhline(y=0, color='darkblue') ax2.text(s='Positive', x=dfc.Date[0], y=1, verticalalignment='bottom', fontsize=14, color='green') ax2.text(s='Negative', x=dfc.Date[0], y=1, verticalalignment='top', fontsize=14, color='red') ax2.grid() ax2.legend(loc='best') ax2.set_ylabel('Chaikin Oscillator') ax2.set_xlabel('Date') ```
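The indicator above is built step by step across several cells. A minimal sketch of the same computation wrapped in one reusable function — money-flow multiplier, money-flow volume, cumulative ADL, then the fast EMA of the ADL minus the slow EMA — where the tiny OHLCV frame and its column names are assumptions made up for illustration (the cells above apply the multiplier to the adjusted close in the same way):

```
import pandas as pd

def chaikin_oscillator(high, low, close, volume, fast=3, slow=10):
    """Fast EMA of the accumulation/distribution line minus its slow EMA."""
    mf_multiplier = (2 * close - low - high) / (high - low)
    mf_volume = mf_multiplier * volume
    adl = mf_volume.cumsum()
    return adl.ewm(span=fast, adjust=True).mean() - adl.ewm(span=slow, adjust=True).mean()

# Tiny synthetic frame, purely to show the call signature.
ohlcv = pd.DataFrame({
    "High":   [11.0, 11.5, 12.0, 11.8, 12.2],
    "Low":    [10.0, 10.8, 11.1, 11.0, 11.5],
    "Close":  [10.5, 11.2, 11.9, 11.2, 12.0],
    "Volume": [1000, 1500, 1200,  900, 1800],
})
print(chaikin_oscillator(ohlcv["High"], ohlcv["Low"], ohlcv["Close"], ohlcv["Volume"]))
```

Keeping the calculation in a function avoids mutating and re-dropping columns on `df` when the oscillator is needed for more than one symbol.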
github_jupyter
0.362518
0.729664
``` import pandas as pd from sklearn.feature_extraction.text import CountVectorizer from sklearn import svm from sklearn.svm import SVC # Read in the data #Data = pd.read_csv('Full_Data.csv', encoding = "ISO-8859-1") #Data.head(1) data = pd.read_csv('Full_Data.csv', encoding = "ISO-8859-1") data.head(1) train = data[data['Date'] < '20150101'] test = data[data['Date'] > '20141231'] # Removing punctuations slicedData= train.iloc[:,2:27] slicedData.replace(to_replace="[^a-zA-Z]", value=" ", regex=True, inplace=True) # Renaming column names for ease of access list1= [i for i in range(25)] new_Index=[str(i) for i in list1] slicedData.columns= new_Index slicedData.head(5) # Convertng headlines to lower case for index in new_Index: slicedData[index]=slicedData[index].str.lower() slicedData.head(1) headlines = [] for row in range(0,len(slicedData.index)): headlines.append(' '.join(str(x) for x in slicedData.iloc[row,0:25])) headlines[0] basicvectorizer = CountVectorizer(ngram_range=(1,1)) basictrain = basicvectorizer.fit_transform(headlines) basicmodel = svm.SVC(C=1, class_weight='balanced',kernel='rbf', gamma=0.100000000000000000000001, tol=1e-10) basicmodel = basicmodel.fit(basictrain, train["Label"]) testheadlines = [] for row in range(0,len(test.index)): testheadlines.append(' '.join(str(x) for x in test.iloc[row,2:27])) basictest = basicvectorizer.transform(testheadlines) predictions = basicmodel.predict(basictest) predictions pd.crosstab(test["Label"], predictions, rownames=["Actual"], colnames=["Predicted"]) print(basictrain.shape) from sklearn.metrics import classification_report from sklearn.metrics import f1_score from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix print (classification_report(test["Label"], predictions)) print (accuracy_score(test["Label"], predictions)) basicvectorizer2 = CountVectorizer(ngram_range=(2,2)) basictrain2 = basicvectorizer2.fit_transform(headlines) basicmodel2 = svm.SVC(C=1, class_weight='balanced',kernel='rbf', gamma=0.100000000000000000000001, tol=1e-10) basicmodel2 = basicmodel2.fit(basictrain2, train["Label"]) basictest2 = basicvectorizer2.transform(testheadlines) predictions2 = basicmodel2.predict(basictest2) pd.crosstab(test["Label"], predictions2, rownames=["Actual"], colnames=["Predicted"]) print(basictrain2.shape) print (classification_report(test["Label"], predictions2)) print (accuracy_score(test["Label"], predictions2)) basicvectorizer3 = CountVectorizer(ngram_range=(3,3)) basictrain3 = basicvectorizer3.fit_transform(headlines) basicmodel3 = svm.SVC(C=1, class_weight='balanced',kernel='rbf', gamma=0.100000000000000000000001, tol=1e-10) basicmodel3 = basicmodel3.fit(basictrain3, train["Label"]) basictest3 = basicvectorizer3.transform(testheadlines) predictions3 = basicmodel3.predict(basictest3) pd.crosstab(test["Label"], predictions3, rownames=["Actual"], colnames=["Predicted"]) print(basictrain3.shape) print (classification_report(test["Label"], predictions3)) print (accuracy_score(test["Label"], predictions3)) ```
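The three models above differ only in the `ngram_range` passed to `CountVectorizer`, which is why the printed shapes of `basictrain`, `basictrain2`, and `basictrain3` grow so quickly. A small sketch on a made-up two-sentence corpus showing what each setting actually extracts:

```
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["stocks rally after strong earnings",
          "stocks fall after weak earnings"]

for n in (1, 2, 3):
    vec = CountVectorizer(ngram_range=(n, n))
    X = vec.fit_transform(corpus)
    # Each row is a document, each column a distinct n-gram.
    print(n, X.shape, sorted(vec.vocabulary_))
```

Unigrams are shared across many headlines, while bigrams and trigrams are mostly unique to a single headline, which is one reason the higher-order feature matrices above become much larger and sparser.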
github_jupyter
0.439266
0.448366
``` # Allow us to load `open_cp` without installing import sys, os.path sys.path.insert(0, os.path.abspath("..")) ``` # The EM algorithm for Hawkes processes Here we explore the optimisation algorithm for parameter estimation given in 1. Mohler et al. "Randomized Controlled Field Trials of Predictive Policing". Journal of the American Statistical Association (2015) DOI:10.1080/01621459.2015.1077710 2. Lewis, Mohler, "A Nonparametric EM algorithm for Multiscale Hawkes Processes", preprint (2011) see http://math.scu.edu/~gmohler/EM_paper.pdf 3. Laub et al "Hawkes Processes" arXiv:150702822v1 [math.PR] ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np ``` # Checking the simulation ``` import open_cp.sources.sepp as source_sepp process = source_sepp.SelfExcitingPointProcess( background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1), trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=0.05)) ``` ### Background rate Should be a homogeneous Poisson process of rate 0.1 ``` totality = [] trials = 50000 for _ in range(trials): result = process.sample_with_details(0,100) totality.extend(result.backgrounds) bins = np.linspace(0, 100, 21) counts = np.histogram(totality, bins)[0] counts = counts / (trials * (bins[1] - bins[0])) fig, ax = plt.subplots(ncols=2, figsize=(16,5)) ax[0].plot((bins[:-1] + bins[1:])/2, counts) ax[0].set(xlim=[0,100], ylim=[0,.15]) ax[1].plot((bins[:-1] + bins[1:])/2, counts) ax[1].set(xlim=[0,100]) for i in range(2): ax[i].set_xlabel("time") ax[i].set_ylabel("intensity") None ``` ### Aftershocks Should be exponential, with an "exponential weight" on 0.05, and "intensity" of 1.0. So the conditional intensity function should be $\theta \omega e^{-\omega t}$ with $\theta=1, \omega=0.05$. ``` totality = [] trials = 10000 for _ in range(trials): result = process.sample_with_details(0,100) totality.extend(result.trigger_deltas) fig, ax = plt.subplots(ncols=2, figsize=(16,5)) bins = np.linspace(0,1,11) xcoords = (bins[:-1] + bins[1:]) / 2 y = np.random.exponential(1 / 0.05, size=100000) x = 1 - np.exp(-0.05 * np.asarray(y)) c = np.histogram(x, bins)[0] ax[0].scatter(xcoords, c / 10000) ax[0].set(xlim=[0,1], ylim=[0,1.1], title="Direct simulation from numpy") y = np.asarray(totality) x = 1 - np.exp(-0.05 * np.asarray(y)) c = np.histogram(x, bins)[0] ax[1].scatter(xcoords, c / trials * 10) ax[1].set(xlim=[0,1], title="From our process, showing edge effects") None ``` We only sample the process in a finite time interval, so we'll miss aftershocks which occur after the end of our time window. To correct for this, using the extra (normally hidden) information we have about our simulation, we can discard aftershocks which occur near the end of our time window. ``` totality = [] trials = 1000 total_points = 0 for _ in range(trials): result = process.sample_with_details(0,1000) mask = result.trigger_points <= 900 totality.extend( result.trigger_deltas[mask] ) count = np.sum(result.points <= 900) total_points += count fig, ax = plt.subplots(ncols=2, figsize=(16,5)) y = np.asarray(totality) x = 1 - np.exp(-0.05 * np.asarray(y)) c = np.histogram(x, bins)[0] ax[0].scatter(xcoords, c / trials) ax[0].set(xlim=[0,1], title="From our process, corrected for edge effects") scale = 1 / ((bins[1] - bins[0]) * total_points) ax[1].scatter(xcoords, c * scale) ax[1].set(xlim=[0,1], title="Normalised to estimate $\\theta$") None ``` To get the normalisation correct above, we need to think about what the "intensity" parameter $\theta$ represents. 
- Each background event gives rise to $n$ aftershock events, where $n$ is distributed as a Poisson random variable with mean $\theta$. - Each of these initial aftershocks gives rise to further aftershocks, again with the same distribution. - And so on. The total number of events, counting also the initial event, is then $$ 1 + \theta + \theta^2 + \cdots = \frac{1}{1-\theta} $$ supposing $\theta<1$. This is a standard result in the theory of Branching Processes, see e.g. [Lectures notes, page 9](http://wwwf.imperial.ac.uk/~ejm/M3S4/NOTES2.PDF) So let $\alpha$ be the count of aftershock events, and $\beta$ the count of background events, $\alpha + \beta$ is the total number of events. Then $$ \frac{\alpha}{\beta} = \frac{1}{1-\theta} - 1 = \frac{\theta}{1-\theta} \implies \theta = \frac{\alpha / \beta}{1+\alpha/\beta} =\frac{\alpha}{\alpha+\beta} $$ # The EM algorithm Here we implement the "EM" algorithm described by Mohler et al (2015). ``` total_time = 10000 result = process.sample_with_details(0, total_time) points = result.points points.shape, result.trigger_deltas.shape def p_matrix_col(points, col, theta=0.5, omega=0.05, mu=0.1): p = np.empty(col + 1) p[col] = mu dt = points[col] - points[0:col] p[0:col] = theta * omega * np.exp(-omega * dt) return p, np.sum(p), dt def m_step(points, total_time, theta=0.5, omega=0.05, mu=0.1): omega_1, omega_2, mu_e = 0.0, 0.0, 0.0 for col in range(len(points)): p, norm, dt = p_matrix_col(points, col, theta, omega, mu) wp = p[0:col] * dt omega_1 += np.sum(p[0:col]) / norm omega_2 += np.sum(wp) / norm mu_e += p[-1] / norm return omega_1 / len(points), omega_1 / omega_2, mu_e / total_time theta, omega, mu = m_step(points, total_time) theta, omega, mu def apply_algorithm(points, total_time, steps=200, theta_in=0.5, omega_in=0.05, mu_in=0.1, convergence_criteria=None, func=m_step): theta, omega, mu = theta_in, omega_in, mu_in thetas, omegas, mus = [theta], [omega], [mu] for _ in range(steps): theta, omega, mu = func(points, total_time, theta, omega, mu) diff = max(abs(thetas[-1] - theta), abs(omegas[-1] - omega), abs(mus[-1] - mu)) thetas.append(theta) omegas.append(omega) mus.append(mu) if convergence_criteria is not None and diff <= convergence_criteria: break return thetas, omegas, mus thetas, omegas, mus = apply_algorithm(points, total_time, 100) thetas[-1], omegas[-1], mus[-1] def plot_convergence(thetas, omegas, mus, inv_omega=False): fig, ax = plt.subplots(figsize=(16,5)) x = list(range(len(thetas))) ax.plot(x, thetas) legend_txt = ["$\\theta$"] if inv_omega: legend_txt.append("$\\omega^{-1}$") ax.plot(x, 1 / np.asarray(omegas)) else: legend_txt.append("$\\omega$") ax.plot(x, omegas) ax.plot(x, mus) legend_txt.append("$\\mu$") ax.legend(legend_txt) ax.set(xlabel="Iteration") plot_convergence(thetas, omegas, mus) ``` For a largish sample size, this is typical behaviour-- we converge to an estimate which under-estimates $\theta$ and slightly over-estimates $\omega$ and $\mu$. - So we think there are slightly more background events than there really are - For the aftershocks, we underestimate the mean value of the exponential (so believe aftershocks occur closer in time than they really do) and underestimate the total aftershock intensity. 
I cannot rigourously justisfy this, but I believe this is due to edge effects-- events which occur near the end of our time window are likely to trigger aftershocks which we do not observe (because they fall after the time window) and so we underestimate the intensity of aftershocks, and so compensate by overestimating the background rate. It must be said that with repeated runs, you can also get the opposite behaviour-- $\theta$ being over-estimated. ### For a small sample ``` total_time = 1000 result = process.sample_with_details(0, total_time) points = result.points points.shape, result.trigger_deltas.shape plot_convergence(*apply_algorithm(points, total_time, 200)) ``` This is a slightly extreme graph, but it can certainly occur. ## Using a window of data If our model is correct, then note that "real-world" will not start from time 0, but will instead be a window onto an on-going process. ``` total_time = 2000 result = process.sample_with_details(0, total_time) points = result.points points = points[points>=1000] plot_convergence(*apply_algorithm(points, 1000, 200)) ``` This is a typical graph, though more extreme behaviour can be observed as well! ``` total_time = 11000 result = process.sample_with_details(0, total_time) points = result.points points = points[points>=1000] plot_convergence(*apply_algorithm(points, 10000, 50)) ``` # Choosing different parameters The above work is all performed with $\mu=0.1, \omega=0.05, \theta=0.5$. The choice of $\omega$ is problematic, as this means that the average time between event and aftershock is $20$ time units, 2 times longer than the expected time to the next background event. So let's repeat the work, but with $\omega=1$ say. ``` process = source_sepp.SelfExcitingPointProcess( background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1), trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1)) points = process.sample(0, 1000) fig, ax = plt.subplots(figsize=(16,1)) ax.scatter(points, np.random.random(len(points))*.02, alpha=0.3) None total_time = 10000 result = process.sample_with_details(0, total_time) points = result.points points.shape, result.trigger_deltas.shape plot_convergence(*apply_algorithm(points, total_time, 200, theta_in=0.5, omega_in=1, mu_in=0.1), True) ``` Finally try with $\omega=10$ and a small(er) sample. ``` process = source_sepp.SelfExcitingPointProcess( background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1), trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=10)) points = process.sample(0, 1000) fig, ax = plt.subplots(figsize=(16,1)) ax.scatter(points, np.random.random(len(points))*.02, alpha=0.3) None total_time = 1000 result = process.sample_with_details(0, total_time) points = result.points points.shape, result.trigger_deltas.shape plot_convergence(*apply_algorithm(points, total_time, 200, theta_in=0.5, omega_in=10, mu_in=0.1), True) ``` # Recreation of Lewis and Mohler The EM algorithm which we're using originates in: Lewis, Mohler, "A Nonparametric EM algorithm for Multiscale Hawkes Processes", preprint 2010, see http://math.scu.edu/~gmohler/EM_paper.pdf In Figure 1 of that paper, they carry out simulations (10 times) with $\mu=1, \theta=0.5$ time in $[0,2000]$ and with $\omega^{-1} = 0.01, 0.1, 1$ and $10$. By comparison, if we rescale to $\mu=0.1$ as used above, the paper considers time in $[0,20000]$ and $\omega^{-1} = 0.001, 0.01, 0.1$ and $1$ which are more extreme values than we considered before! 
As in the paper, here we run the algorithm until the difference (in $L^\infty$ norm) between iterations is less than $10^{-5}$. We are only varying $\omega$ which controls the "scale" of the aftershocks. As $\omega^{-1}$ increases, aftershocks becomes more spread-out in time, and thus, at least intuitively, it becomes harder to tell background events apart from aftershock events. ``` all_results_dict = {} for omega_inv in [0.01, 0.1, 1, 10]: results = [] for _ in range(10): process = source_sepp.SelfExcitingPointProcess( background_sampler = source_sepp.HomogeneousPoissonSampler(rate=1), trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1/omega_inv)) points = process.sample(0, 2000) convergents = apply_algorithm(points, 2000, 2000, theta_in=0.5, omega_in=1/omega_inv, mu_in=1, convergence_criteria = 10**(-5), func=m_step) convergents = np.asarray(convergents) # (theta, omega, mu), iterations results.append((convergents[:,-1], convergents.shape[-1])) all_results_dict[omega_inv] = results thetas = {key: [result[0][0] for result in all_results_dict[key]] for key in all_results_dict} mus = {key: [result[0][2] for result in all_results_dict[key]] for key in all_results_dict} fig, ax = plt.subplots(ncols=2, figsize=(16,6)) def plot(ax, data, true_value): x = list(data.keys()) y = np.asarray([ np.mean(data[k]) for k in x ]) ax.scatter(x, y, color="black") yy = np.asarray([ np.std(data[k], ddof=1) for k in x ]) for x, y, dy in zip(x,y,yy): ax.plot([x]*2, [y-dy,y+dy], color="black") ax.plot([0.001,100], [true_value]*2, color="black", linestyle="--", linewidth=1) ax.set(xscale="log", xlim=[0.001,100]) plot(ax[0], thetas, 0.5) plot(ax[1], mus, 1) ax[0].set(xlabel="$\\omega^{-1}$", ylabel="$\\theta$") ax[1].set(xlabel="$\\omega^{-1}$", ylabel="$\\mu$") None ``` Except when $\omega^{-1}=10$ these look like the graphs from the paper. Reading more closely, we see from the paper: > To investigate the convergence of (14) we simulate realizations ... run the EM algorithm (with boundary correction) This is misleading, as (14) refers to the uncorrected algorithm. By "boundary correction" we mean computing the likelihood without applying certain approximations. 
This leads to a variant on the "M step": $$ \theta = \frac{\sum_{i<j} p_{ij}}{n - \sum_{i=1}^n e^{-\omega (T-t_i)}}, \qquad \omega = \frac{\sum_{i<j} p_{ij}}{\sum_{i<j}(t_j-t_i)p_{ij} + \theta \sum_{i=1}^n(T-t_i)e^{-\omega(T-t_i)}} $$ ``` def corrected_m_step(points, total_time, theta=0.5, omega=0.05, mu=0.1): omega_1, omega_2, mu_e = 0.0, 0.0, 0.0 for col in range(len(points)): p, norm, dt = p_matrix_col(points, col, theta, omega, mu) wp = p[0:col] * dt omega_1 += np.sum(p[0:col]) / norm omega_2 += np.sum(wp) / norm mu_e += p[-1] / norm from_end = total_time - points exp_from_end = np.exp(-omega * from_end) corrected_n = len(points) - np.sum(exp_from_end) corrected_omega_2 = omega_2 + theta * np.sum(from_end * exp_from_end) return omega_1 / corrected_n, omega_1 / corrected_omega_2, mu_e / total_time all_results_dict = {} for omega_inv in [0.01, 0.1, 1, 10]: results = [] for _ in range(10): process = source_sepp.SelfExcitingPointProcess( background_sampler = source_sepp.HomogeneousPoissonSampler(rate=1), trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1/omega_inv)) points = process.sample(0, 2000) convergents = apply_algorithm(points, 2000, 2000, theta_in=0.5, omega_in=1/omega_inv, mu_in=1, convergence_criteria = 10**(-5), func=corrected_m_step) convergents = np.asarray(convergents) # (theta, omega, mu), iterations results.append((convergents[:,-1], convergents.shape[-1])) all_results_dict[omega_inv] = results thetas = {key: [result[0][0] for result in all_results_dict[key]] for key in all_results_dict} mus = {key: [result[0][2] for result in all_results_dict[key]] for key in all_results_dict} fig, ax = plt.subplots(ncols=2, figsize=(16,6)) plot(ax[0], thetas, 0.5) plot(ax[1], mus, 1) ax[0].set(xlabel="$\\omega^{-1}$", ylabel="$\\theta$") ax[1].set(xlabel="$\\omega^{-1}$", ylabel="$\\mu$") None ``` This is definitely an improvement, but despite having tried this a number of times, I cannot reproduce the graphs from Lewis and Mohler. # Computing the likelihood From (3) Theorem 3 we know that if $\lambda^*(t)$ is the conditional intensity function, then the likelihood of the process is: $$ L = \Big( \prod_{i=1}^n \lambda^*(t_i) \Big) \exp\Big( -\int_0^T \lambda^*(s) \ ds \Big) $$ were $(t_1,\cdots,t_n)$ is a sample of the process in the time window $[0,T]$. Taking logs, this expands to $$ l = \log L = \sum_{i=1}^n \log\Big( \mu + \sum_{j=1}^{i-1} \theta\omega e^{-\omega (t_i-t_j)} \Big) - \int_0^T \lambda^*(s) \ ds. 
$$ The integral can be split into parts: \begin{align*} \int_0^T \lambda^*(s) \ ds &= \mu T + \sum_{i=1}^{n-1} \int_{t_i}^{t_{i+1}} \sum_{j=1}^i \theta\omega e^{-\omega (t-t_j)} \ dt + \int_{t_n}^T \sum_{j=1}^n \theta\omega e^{-\omega (t-t_j)} \ dt \\ &= \mu T + \theta \sum_{i=1}^{n-1} \sum_{j=1}^i \Big( e^{-\omega (t_i-t_j)} - e^{-\omega (t_{i+1}-t_j)} \Big) + \theta \sum_{j=1}^n \Big( e^{-\omega (t_n-t_j)} - e^{-\omega (T-t_j)} \Big) \end{align*} Many terms in the double sum cancel out, leaving \begin{align*} & \mu T + \theta \sum_{i=1}^{n-1} e^{-\omega (t_i-t_i)} - \theta \sum_{j=1}^{n-1} e^{-\omega (t_n-t_j)} + \theta \sum_{j=1}^n \Big( e^{-\omega (t_n-t_j)} - e^{-\omega (T-t_j)} \Big) \\ &= \mu T + n \theta - \theta \sum_{j=1}^n e^{-\omega (T-t_j)} \end{align*} (The first sum contributes $(n-1)\theta$, and the surviving $j=n$ term from the cross terms contributes one more $\theta$, giving $n\theta$ in total.) ``` def likelihood(points, time_range, theta, omega, mu): n = len(points) first_sum = np.empty(n) first_sum[0] = mu for i in range(1, n): dt = points[i] - points[:i] first_sum[i] = mu + theta * omega * np.sum(np.exp(-omega * dt)) second_sum = np.sum(np.exp(-omega * (time_range - points))) return np.sum(np.log(first_sum)) - (mu * time_range + n * theta - theta * second_sum) omega_inv = 10 process = source_sepp.SelfExcitingPointProcess( background_sampler = source_sepp.HomogeneousPoissonSampler(rate=1), trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=1/omega_inv)) points = process.sample(0, 2000) convergents = apply_algorithm(points, 2000, 2000, theta_in=0.5, omega_in=1/omega_inv, mu_in=1, convergence_criteria = 10**(-5), func=corrected_m_step) convergents = np.asarray(convergents) theta, omega, mu = convergents[:,-1] theta, omega, mu likelihood(points, 2000, theta, omega, mu) likelihood(points, 2000, 0.5, 0.1, 1) ``` So the algorithm is doing what it should be: finding a maximal likelihood!
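One way to see this more directly is to hold two of the fitted parameters fixed and scan the third, checking that the log-likelihood peaks near the EM output. A minimal sketch, assuming the `likelihood` function, the sampled `points`, and the fitted `theta`, `omega`, `mu` from the cells above are still in scope:

```
# Scan theta around the EM estimate, holding omega and mu at their fitted values.
theta_grid = np.linspace(max(theta - 0.2, 0.01), min(theta + 0.2, 0.99), 21)
ll = [likelihood(points, 2000, t, omega, mu) for t in theta_grid]

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(theta_grid, ll)
ax.axvline(theta, color="black", linestyle="--", linewidth=1)
ax.set(xlabel="$\\theta$", ylabel="log-likelihood")
None
```

The same scan can be repeated for $\omega$ or $\mu$; a flat ridge rather than a sharp peak is a useful hint that the data window is too short to pin the parameter down.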
github_jupyter
0.379608
0.902309
# SLU03 - Git Basics - Exercise notebook ``` # Import for evaluating the results. import hashlib from submit import submit ``` I hope you enjoyed the **Learning Notebook**! The Exercise Notebook is multiple choice so it's really important that you have a good grasp of everything before starting it. **Good luck!** ## Exercise 1: Version Control What is the main principle behind a VCS: - Option **A**: Changes are tracked in a given location, allowing you to track, revert and compare all changes to the files and folders within it - Option **B**: All files and tracked changes are shared publicly in a remote repository - Option **C**: Changes are tracked in all files of a repository - Option **D**: You cannot start a repository without a README ``` # Assign to the variable exercise_1 the option letter that you think is correct. ### BEGIN SOLUTION exercise_1 = "A" ### END SOLUTION # exercise_1 = exercise_1_hash = "559aead08264d5795d3909718cdd05abd49572e84fe55590eef31a88a08fdffd" options = ["A", "B", "C", "D"] assert isinstance(exercise_1,str), "The variable should be a string." assert exercise_1.isupper(), "Please use uppercase for your answer." assert exercise_1.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_1_hash == hashlib.sha256(bytes(exercise_1, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 2: Git Which one of the following statements is false: - Option **A**: Git is a decentralized version control tool - Option **B**: Users can clone entire repositories to their local systems - Option **C**: Commits are possible even if offline - Option **D**: Works are shared to the remote repository automatically by commit ``` # Assign the option letter of the option that you think is correct to the variable exercise_2. ### BEGIN SOLUTION exercise_2 = "D" ### END SOLUTION # exercise_2 = exercise_2_hash = "3f39d5c348e5b79d06e842c114e6cc571583bbf44e4b0ebfda1a01ec05745d43" options = ["A", "B", "C", "D"] assert isinstance(exercise_2,str), "The variable should be a string." assert exercise_2.isupper(), "Please use uppercase for your answer." assert exercise_2.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_2_hash == hashlib.sha256(bytes(exercise_2, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 3: Staging Which of the following statements is true: - Option **A**: You can only have one file staged at a time - Option **B**: You can only stage files after a commit - Option **C**: The staging area is an intermediate storage for changes that will be part of the next commit - Option **D**: All files are added to the staging area automatically ``` # Assign the option letter of the option that you think is correct to the variable exercise_3. ### BEGIN SOLUTION exercise_3 = "C" ### END SOLUTION # exercise_3 = exercise_3_hash = "6b23c0d5f35d1b11f9b683f0b0a617355deb11277d91ae091d399c655b87940d" options = ["A", "B", "C", "D"] assert isinstance(exercise_3,str), "The variable should be a string." assert exercise_3.isupper(), "Please use uppercase for your answer." assert exercise_3.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_3_hash == hashlib.sha256(bytes(exercise_3, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 4: Git add What command should you use when you want to stage all text files changes in the workspace? 
- Option **A**: `git add --all` - Option **B**: `git add .` - Option **C**: `git add *.txt` ``` # Assign the option letter of the option that you think is correct to the variable exercise_4. ### BEGIN SOLUTION exercise_4 = "C" ### END SOLUTION # exercise_4 = exercise_4_hash = "6b23c0d5f35d1b11f9b683f0b0a617355deb11277d91ae091d399c655b87940d" options = ["A", "B", "C"] assert isinstance(exercise_4,str), "The variable should be a string." assert exercise_4.isupper(), "Please use uppercase for your answer." assert exercise_4.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_4_hash == hashlib.sha256(bytes(exercise_4, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 5: Git add and Git commit What is the difference between `git add .`; `git commit` and `git commit -a -m`? - Option **A**: The first tracks and stages all files in the repo, the second commits all staged files in the repo, and the last stages and commits all changed files, followed by a message - Option **B**: The first stages all files that start with '.', the second commits all staged files in the repo, and the last stages and commits all changed files, followed by a message - Option **C**: The first stages all files in the repo, the second commits all modified files in the repo, and the last stages and commits all changed files, followed by a message ``` # Assign the option letter of the option that you think is correct to the variable exercise_5. ### BEGIN SOLUTION exercise_5 = "A" ### END SOLUTION # exercise_5 = exercise_5_hash = "559aead08264d5795d3909718cdd05abd49572e84fe55590eef31a88a08fdffd" options = ["A", "B", "C"] assert isinstance(exercise_5,str), "The variable should be a string." assert exercise_5.isupper(), "Please use uppercase for your answer." assert exercise_5.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_5_hash == hashlib.sha256(bytes(exercise_5, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 6: Git status The `git status` command shows you the difference between the working directory and the index, where changes on tracked files are kept. In a repository where 5 files are being tracked, a new file is added. Additionally, the first line of one of the files is changed. What would the `git status` command show? - Option **A**: Staged: 1, Modified: 1, Untracked: 0 - Option **B**: Staged: 0, Modified: 1, Untracked: 1 - Option **C**: Staged: 5, Modified: 1, Untracked: 1 - Option **D**: Staged: 5, Modified: 0, Untracked: 1 ``` # Assign the option letter of the option that you think is correct to the variable exercise_6. ### BEGIN SOLUTION exercise_6 = "B" ### END SOLUTION # exercise_6 = exercise_6_hash = "df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c" options = ["A", "B", "C", "D"] assert isinstance(exercise_6,str), "The variable should be a string." assert exercise_6.isupper(), "Please use uppercase for your answer." 
assert exercise_6.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_6_hash == hashlib.sha256(bytes(exercise_6, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 7: .gitignore Which one of the following statements is false: - Option **A**: You can stage a file that is on the .gitignore if you add it explicitly to the `git add` command - Option **B**: You can't ignore the .gitignore file itself - Option **C**: You can use patterns, such as wildcards, in the .gitignore ``` # Assign the option letter of the option that you think is correct to the variable exercise_7. ### BEGIN SOLUTION exercise_7 = "B" ### END SOLUTION # exercise_7 = exercise_7_hash = "df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c" options = ["A", "B", "C"] assert isinstance(exercise_7,str), "The variable should be a string." assert exercise_7.isupper(), "Please use uppercase for your answer." assert exercise_7.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_7_hash == hashlib.sha256(bytes(exercise_7, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 8: git push What would happen if you tried to `git push` changes to a remote repository that is ahead on commits compared to your local repository? - Option **A**: The remote repository will handle everything like magic - Option **B**: The remote repository will throw an error because versions don't match - Option **C**: The remote repository will throw an error because the remote repository is ahead on commits ``` # Assign the option letter of the option that you think is correct to the variable exercise_8. ### BEGIN SOLUTION exercise_8 = "C" ### END SOLUTION # exercise_8 = exercise_8_hash = "6b23c0d5f35d1b11f9b683f0b0a617355deb11277d91ae091d399c655b87940d" options = ["A", "B", "C"] assert isinstance(exercise_8,str), "The variable should be a string." assert exercise_8.isupper(), "Please use uppercase for your answer." assert exercise_8.upper() in options, "Please choose one of the options: %s" % ', '.join(str(opt) for opt in options) assert exercise_8_hash == hashlib.sha256(bytes(exercise_8, encoding='utf8')).hexdigest(), "Wrong answer" ``` ## Exercise 9: Git Workflow This exercise is ungraded. It's a set of instructions for you to follow and see for yourself how some git commands work and have some fun with it! - Save this file - On the root of your Prep Course repository, run the `git status` command - Check the output, you should see that the **Exercise Notebook** file is modified - Run the `git add` command to add **just** the Exercise Notebook file - Run the `git commit` command with the option that allows you to write the commit message in the command. Write the following message: "I've conquered the power of git!" Congratulations! You've just finished the first steps of your journey into the *wonderful world of version control*! These skills will accompany you for the rest of the Prep Course, so make good use of them! **See you in the Git Intermediate SLU for some more VCS fun!** ## Submit your work! To submit your work, [get your slack id](https://moshfeu.medium.com/how-to-find-my-member-id-in-slack-workspace-d4bba942e38c) and fill it in the `slack_id` variable. Example: `slack_id = "UTS63FC02"` ``` #slack_id = ... ### BEGIN SOLUTION slack_id = "U01RTPQGEH0" ### END SOLUTION assert isinstance(slack_id,str) submit(slack_id, 3) ```
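The grading cells above compare the SHA-256 digest of your answer string against a hard-coded digest rather than checking the letter itself, which is also why exercises with the same correct letter share the same hash string. A minimal sketch of that mechanism in isolation:

```
import hashlib

def answer_hash(answer):
    """Hex SHA-256 digest of an answer string, as used by the asserts above."""
    return hashlib.sha256(bytes(answer, encoding="utf8")).hexdigest()

# The same input always produces the same digest, so an answer can be
# verified without being written out directly in the assert.
expected = answer_hash("A")
print(answer_hash("A") == expected)   # True
print(answer_hash("B") == expected)   # False
```

Because the digest is deterministic and there are only a few options, anyone can recover an answer by hashing each letter, so this is a convenience check rather than a security measure.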
github_jupyter
0.419886
0.910784
``` import cuml import cudf import nvcategory import xgboost as xgb import numpy as np from sklearn.metrics import mean_squared_error, roc_auc_score import os import urllib.request data_dir = '../../data/blackfriday/' if not os.path.exists(data_dir): print('creating black friday data directory') os.system('mkdir ../../data/blackfriday') base_url = 'https://datahack-prod.s3.amazonaws.com/train_zip/' ofn = 'train_oSwQCTC.zip' fn = 'train.zip' if not os.path.isfile(data_dir+ofn): print(f'Downloading {base_url+ofn} to {data_dir+fn}') urllib.request.urlretrieve(base_url+ofn, data_dir+fn) #Read in the data. Notice how it decompresses as it reads the data into memory. gdf = cudf.read_csv(data_dir+fn) len(gdf) #Taking a look at the data. We use "to_pandas()" to get the pretty printing. gdf.head().to_pandas() #grabbing the first character of the years in city string to get rid of plus sign, and converting to int gdf['city_years'] = gdf.Stay_In_Current_City_Years.str.get(0) #Here we can see how we can control the value of our category variables with the replace method and turn strings to ints gdf['City_Category'] = gdf.City_Category.str.replace('A', '1') gdf['City_Category'] = gdf.City_Category.str.replace('B', '2') gdf['City_Category'] = gdf.City_Category.str.replace('C', '3') gdf['City_Category'] = gdf['City_Category'].str.stoi() gdf['Gender'] = gdf.Gender.str.replace('F', '1') gdf['Gender'] = gdf.Gender.str.replace('M', '0') gdf['Gender'] = gdf.Gender.str.stoi() #Let's take a look at how many products we have prod_count = cudf.Series(nvcategory.from_strings(gdf.Product_ID.data).values()).unique().count() #hideous one-liner print("Unique Products: {}".format(prod_count)) #Let's take a look at how many primary product categories we have #We do it differently here because the variable is a number, not a string prod1_count = gdf.Product_Category_1.unique().count() print("Unique Product Categories: {}".format(prod1_count)) #Dummy for multi-category products gdf['Product_Category_2'] = gdf['Product_Category_2'].fillna(0) gdf['Product_Category_3'] = gdf['Product_Category_3'].fillna(0) gdf['multi'] = ((gdf['Product_Category_2'] + gdf['Product_Category_3'])>0).astype('int') #Gender/Marital Status interaction variable gdf['gen_mar_interaction'] = gdf['Gender']*gdf['Marital_Status'] #Because Occupation is a code, it should converted into indicator variables gdf = gdf.one_hot_encoding('Occupation', 'occ_dummy', gdf.Occupation.unique()) #Dummy variable from Int gdf = gdf.one_hot_encoding('City_Category', 'city_cat', gdf.City_Category.unique()) #Dummy from string cat = nvcategory.from_strings(gdf.Age.data) gdf['Age'] = cudf.Series(cat.values()) gdf = gdf.one_hot_encoding('Age', 'age', gdf.Age.unique()) #Solution: gdf = gdf.one_hot_encoding('Product_Category_1', 'product', gdf.Product_Category_1.unique()) #We're going to drop the variables we've transformed drop_list = ['User_ID', 'Age', 'Stay_In_Current_City_Years', 'City_Category','Product_ID', 'Product_Category_1', 'Product_Category_2', 'Product_Category_3'] gdf = gdf.drop(drop_list) #We're going to make a list of all the first indicator variables in a series now so it will be #easier to exclude them when we're doing regressions later dummy_list = ['occ_dummy_0', 'city_cat_1', 'age_0', 'product_1', 'Purchase'] #All variables currently have to have the same type for some methods in cuML for col in gdf.columns.tolist(): gdf[col] = gdf[col].astype('float32') #cuml.preprocessing.model_selection.train_test_split test_size = round(len(gdf)*0.2) train_size = 
round(len(gdf)-test_size) test = gdf.iloc[0:test_size] gdf_train = gdf.iloc[train_size:] #Deleting the main gdf because we're going to be making other subsets and other stuff, so it will be nice to have the memory. del(gdf) y_train = gdf_train['Purchase'].log() X_reg = gdf_train.drop(dummy_list) # # I'm going to perform a hyperparameter search for alpha in a ridge regression output_ridge = {} for alpha in np.around(np.arange(0.01, 1, 0.01), decimals=2): Ridge = cuml.Ridge(alpha=alpha, fit_intercept=False, normalize=True) _fit = Ridge.fit(X_reg, y_train) _y_hat = _fit.predict(X_reg) _mse = np.sqrt((y_train.reset_index(drop=True).sub(_y_hat)**2).sum()) output_ridge['RMSE_RIDGE_{}'.format(alpha)] = _mse print('MAX RMSE: {}'.format(min(output_ridge, key=output_ridge.get))) Ridge = cuml.Ridge(alpha=.1, fit_intercept=False, normalize=True) _fit = Ridge.fit(X_reg, y_train) _y_hat = _fit.predict(X_reg) _mse = np.sqrt((y_train.reset_index(drop=True).sub(_y_hat)**2).sum()) print('{:,}'.format(_mse)) ##Lasso output_lasso = {} for alpha in np.around(np.arange(0.1, 10, 0.1), decimals=2): Lasso = cuml.Lasso(alpha=alpha, fit_intercept=False, normalize=True) _fit = Lasso.fit(X_reg, y_train) _y_hat = _fit.predict(X_reg) _mse = np.sqrt((y_train.reset_index(drop=True).sub(_y_hat)**2).sum()) output_lasso['RMSE_Lasso_{}'.format(alpha)] = _mse print('MAX RMSE: {}'.format(min(output_lasso, key=output_lasso.get))) Lasso = cuml.Lasso(alpha=.1, fit_intercept=False, normalize=True) _fit = Lasso.fit(X_reg, y_train) _y_hat = _fit.predict(X_reg) _mse = np.sqrt((y_train.reset_index(drop=True).sub(_y_hat)**2).sum()) print('{:,}'.format(_mse)) ##Elastic Net output_en = {} for alpha in np.around(np.arange(0.1, 10, 0.1), decimals=2): for ratio in np.around(np.arange(0.1, 1, 0.1), decimals=2): ElasticNet = cuml.ElasticNet(alpha=alpha, l1_ratio=ratio, fit_intercept=False, normalize=True) _fit = ElasticNet.fit(X_reg, y_train) _y_hat = _fit.predict(X_reg) _mse = np.sqrt((y_train.reset_index(drop=True).sub(_y_hat)**2).sum()) output_en['RMSE_ElasticNet_{}_{}'.format(alpha, ratio)] = _mse print('MAX RMSE: {}'.format(min(output_en, key=output_en.get))) ElasticNet = cuml.ElasticNet(alpha=.1, l1_ratio=.1, fit_intercept=False, normalize=True) _fit = ElasticNet.fit(X_reg, y_train) _y_hat = _fit.predict(X_reg) _mse = np.sqrt((y_train.reset_index(drop=True).sub(_y_hat)**2).sum()) print('{:,}'.format(_mse)) y_xgb = gdf_train[['Purchase']].log() X_xgb = gdf_train.drop('Purchase') xgb_train_set = xgb.DMatrix(data=X_xgb, label=y_xgb) xgb_params = { 'nround':100, 'max_depth':4, 'max_leaves':2**4, 'tree_method':'gpu_hist', 'n_gpus':1, 'loss':'ls', 'objective':'reg:squarederror', 'max_features':'auto', 'criterion':'friedman_mse', 'grow_policy':'lossguide', 'verbose':True } xgb_model = xgb.train(xgb_params, dtrain=xgb_train_set) y_hat_xgb = xgb_model.predict(xgb_train_set) RMSE = np.sqrt(mean_squared_error(y_xgb['Purchase'].to_pandas(), y_hat_xgb)) print(RMSE) ```
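All of the RMSE figures above are computed on the training split. Below is a minimal sketch of scoring the trained XGBoost model on the held-out `test` slice created earlier; it assumes `test` has the same column layout as `gdf_train` (it was sliced from the same frame) and reuses only calls already used in this notebook (`xgb.DMatrix`, `mean_squared_error`).

```
# Hedged sketch: evaluate the XGBoost model on the held-out test split.
y_test = test[['Purchase']].log()    # same log transform as the training target
X_test = test.drop('Purchase')       # drop the label column

xgb_test_set = xgb.DMatrix(data=X_test, label=y_test)
y_hat_test = xgb_model.predict(xgb_test_set)

test_rmse = np.sqrt(mean_squared_error(y_test['Purchase'].to_pandas(), y_hat_test))
print('Test RMSE: {}'.format(test_rmse))
```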
<a href="https://colab.research.google.com/github/mbk-dev/okama/blob/master/examples/05%20efficient%20frontier%20multi-period.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>

```
!pip install okama

import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12.0, 6.0]

import okama as ok
```

A single period approach is used in classic Markowitz Mean-Variance Analysis (MVA), where a portfolio is always rebalanced back to its original weights. In `okama` it's equivalent to monthly rebalanced portfolios, as monthly historical data is used.

However, in real life portfolios are not rebalanced every day or every month. In the **multi-period** approach a portfolio is rebalanced to the original allocation with a certain frequency (annually, quarterly etc.) or not rebalanced at all.

The **EfficientFrontierReb** class can be used for multi-period optimization. Two rebalancing frequencies can be used (**rebalancing_period** parameter):
- 'year' - one year (default)
- 'none' - not rebalanced portfolios

## 2 Assets

The first example is a simple USD portfolio with 2 ETFs (S&P 500 and physical gold spot prices).

```
ls = ['SPY.US', 'GLD.US']
curr = 'USD'
y = ok.EfficientFrontierReb(assets=ls,
                            first_date='2004-12',
                            last_date='2020-10',
                            ccy=curr,
                            rebalancing_period='year',  # set rebalancing period to one year
                            ticker_names=True,  # use tickers in DataFrame column names (can be set to False to show full asset names instead of tickers)
                            n_points=20,  # number of points in the Efficient Frontier
                            verbose=True)  # verbose mode is ON to show progress while the EF points are calculated
y
y.names
```

We need to get all the points of the optimized portfolios to plot the Efficient Frontier. As the algorithm uses a "black box" objective function (there are no algebraic functions as in classic MVA), the optimization process is slower than in single period optimization. Using "verbose mode" is recommended to see the progress.

```
df_reb_year = y.ef_points
```

The **ef_points** property returns a dataframe (table). Each row has the risk and return properties of a portfolio, which can be used to plot the points of the frontier:
- _Risk_ - the volatility or annual standard deviation
- _CAGR_ - Compound annual growth rate

... and the asset weights.

```
df_reb_year.head(5)
```

It's interesting to compare annually rebalanced portfolios with not rebalanced portfolios.

```
y.rebalancing_period = 'none'  # Set rebalancing period to NONE
df_not_reb = y.ef_points
```

Finally we can plot both Efficient Frontiers...

```
fig = plt.figure(figsize=(12,6))
fig.subplots_adjust(bottom=0.2, top=1.5)
ok.Plots(ls, ccy=curr, first_date='2004-12', last_date='2020-10').plot_assets(kind='cagr')  # should be the same history period and the currency
ax = plt.gca()
ax.plot(df_reb_year.Risk, df_reb_year.CAGR, label='Annually rebalanced')
ax.plot(df_not_reb.Risk, df_not_reb.CAGR, label='Not rebalanced')
ax.set_xlabel('Risk (Standard Deviation)')
ax.set_ylabel('CAGR')
ax.legend();
```

As one can see, rebalanced portfolios have more options with higher CAGR. The difference is up to 0.5%.

### Multiple assets

The second example has an asset list composed of US ETFs and Russian indexes for bonds (RGBITR.INDX) and stocks (MCFTR.INDX). The currency of the portfolios is the Russian Ruble (RUB).
```
ls_m = ['SPY.US', 'GLD.US', 'PGJ.US', 'RGBITR.INDX', 'MCFTR.INDX']
curr_rub = 'RUB'
x = ok.EfficientFrontierReb(assets=ls_m,
                            first_date='2005-01',
                            last_date='2020-11',
                            ccy=curr_rub,
                            rebalancing_period='year',  # set rebalancing period to one year
                            n_points=20,
                            verbose=True
                            )
x
x.names
```

As in the first example, we will draw two Efficient Frontiers: not rebalanced portfolios alongside annually rebalanced portfolios.

```
df1_reb_year = x.ef_points

x.rebalancing_period = 'none'
df1_not_reb = x.ef_points

fig = plt.figure(figsize=(12,6))
fig.subplots_adjust(bottom=0.2, top=1.5)
ok.Plots(ls_m, ccy=curr_rub, first_date='2005-01', last_date='2020-11').plot_assets(kind='cagr')  # should be the same history period and the currency
ax = plt.gca()
ax.plot(df1_reb_year.Risk, df1_reb_year.CAGR, label='Annually rebalanced')
ax.plot(df1_not_reb.Risk, df1_not_reb.CAGR, label='Not rebalanced')
ax.set_xlabel('Risk (Standard Deviation)')
ax.set_ylabel('CAGR')
ax.legend();
```

### Compare with monthly rebalanced portfolios

We can also add the monthly rebalanced portfolio Frontier (single period optimization) to the chart. Single period optimization can be achieved with the **EfficientFrontier** class.

```
z = ok.EfficientFrontier(assets=ls_m,
                         first_date='2005-01',
                         last_date='2020-11',
                         ccy=curr_rub,
                         n_points=20,
                         )
df2 = z.ef_points  # Create EF points

fig = plt.figure(figsize=(12,6))
fig.subplots_adjust(bottom=0.2, top=1.5)
ok.Plots(ls_m, ccy=curr_rub, first_date='2005-01', last_date='2020-11').plot_assets(kind='cagr')  # should be the same history period and the currency
ax = plt.gca()
ax.plot(df1_reb_year.Risk, df1_reb_year.CAGR, label='Annually rebalanced')
ax.plot(df2.Risk, df2.CAGR, linestyle='--', label='Monthly rebalanced')
ax.plot(df1_not_reb.Risk, df1_not_reb.CAGR, label='Not rebalanced')
ax.set_xlabel('Risk (Standard Deviation)')
ax.set_ylabel('CAGR')
ax.legend();
```

### Monte Carlo simulation for multi-period optimization

As in the EfficientFrontier class, there is a **get_monte_carlo** method which allows you to generate N random multi-period portfolios for the selected rebalancing frequency.

```
x.rebalancing_period = 'year'  # we must set the rebalancing frequency first (annually rebalanced portfolios in this case)
monte_carlo = x.get_monte_carlo(n=1000)  # generate n random portfolios
monte_carlo.head(5)  # table of random portfolio properties

fig = plt.figure(figsize=(12,6))
fig.subplots_adjust(bottom=0.2, top=1.5)
ok.Plots(ls_m, ccy=curr_rub, first_date='2005-01', last_date='2020-11').plot_assets(kind='cagr')  # should be the same history period and the currency
ax = plt.gca()
ax.plot(df1_reb_year.Risk, df1_reb_year.CAGR)
ax.scatter(monte_carlo.Risk, monte_carlo.CAGR)  # draw the random portfolios points
ax.set_xlabel('Risk (Standard Deviation)')
ax.set_ylabel('CAGR');
```

To cover more of the space inside the Efficient Frontier with the Monte Carlo simulation, a bigger number of random portfolios is required, but it will take more time to generate them.
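The `ef_points` dataframes used above also contain the asset weights of every optimized portfolio, so specific portfolios can be read straight out of them. A small sketch, assuming `df1_reb_year` is a pandas DataFrame with the `Risk` and `CAGR` columns shown earlier:

```
# Hedged sketch: pull two portfolios of interest out of the annually rebalanced frontier.
max_cagr_portfolio = df1_reb_year.loc[df1_reb_year['CAGR'].idxmax()]  # highest CAGR point
min_risk_portfolio = df1_reb_year.loc[df1_reb_year['Risk'].idxmin()]  # lowest volatility point
print(max_cagr_portfolio)
print(min_risk_portfolio)
```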
# Exploring `.bag` Bathymetry Data Files An exploration of data and metadata in Bathymetric Attributed Grid (BAG) files. References: * BAG website: https://marinemetadata.org/references/bag * Format Specification Document: http://www.opennavsurf.org/papers/ons_fsd.pdf * A slightly dated, Python 2 based video lesson on accessing BAG files: https://www.youtube.com/watch?v=dEtC6bRcjvc Working environment for this notebook: * Python 3 * `conda` packages: * `h5py` - Python interface to HDF5 format used by BAG * `lxml` - XML parser and manipulation library to access BAG metadata * `numpy` - for n-dimensional arrays * `matplotlib` - for plotting * `notebook` - Jupyter notebook "Keep Calm and Conda Install" If you are looking at this in the Salish Sea Tools docs at http://salishsea-meopar-tools.readthedocs.io/en/latest/bathymetry/ExploringBagFiles.html, you can find the source notebook that generated the page in the Salish Sea project [tools repo](https://bitbucket.org/salishsea/tools) at `tools/bathymetry/ExploringBagFiles.ipynb` or download the notebook by itself (instead of cloning the [tools repo](https://bitbucket.org/salishsea/tools) to get it) from http://nbviewer.jupyter.org/urls/bitbucket.org/salishsea/tools/raw/tip/bathymetry/ExploringBagFiles.ipynb. ``` from io import BytesIO import h5py from lxml import etree import matplotlib.pyplot as plt import numpy as np %matplotlib inline ``` ## BAG Dataset Load the BAG dataset and explore some of its basic attributes: ``` bag = h5py.File('/ocean/sallen/allen/research/MEOPAR/chs_bathy/092B.bag') print(type(bag)) print(bag.name) print(bag.filename) for item in bag.items(): print(item) for value in bag.values(): print(value) list(bag['BAG_root'].items()) ``` The list above contains the 4 elements that the BAG specification tells us should be in the file: * `elevation` is the depths as negative 32-bit floats, with `1.0e6` as the "no data" value (land, typically) * `metadata` is the BAG metadata, a blob of XML * `tracking_list` is adjustments to the `elevation` data values made by a hydrographer * `uncertainty` is the vertical uncertainty in the `elevation` data values Note that under Python 3 the `h5py` library maked heavy use of `memoryview` objects which are iterators. The transformation to a `list` object above, or the use of a `for` loop above that collects the items from the `memoryview`. One odd thing to note is that the metadata is stored as a collection of 1-character strings which turn out to be single bytes in Python 3. We're going to have to do something about that... Peeling away the HDF5 group layer: ``` root = bag['BAG_root'] print(root.name) print(root.parent) list(root.items()) ``` ## The `elevation` Element Pulling the `elevation` dataset out of the BAG, and the depths data out of the dataset: ``` elev_node = root['elevation'] print(type(elev_node)) elev = elev_node.value print(type(elev)) print(elev.min(), elev.max()) ``` As noted above `1e+06` indicates no data at a point, typically meaning land. Let's replace those with NumPy `NaN`s so that we can work with the data more easily: ``` elev[elev > 9e5] = np.NAN print(np.nanmin(elev), np.nanmax(elev)) fig, ax = plt.subplots(1, 1) ax.imshow(elev) ax.invert_yaxis() ``` ## The `metadata` Element Pulling the `metadata` element out of the BAG, and getting it into a form that we can work with: ``` metadata_node = root['metadata'] print(type(metadata_node)) print(metadata_node) ``` As noted above, the metadata is a collection of single characters in the form of bytes. 
We need to collect those bytes into a buffer and parse them to get an XML tree object that we can work with in code: ``` buffer = BytesIO(metadata_node.value) tree = etree.parse(buffer) root = tree.getroot() ``` Now we can get a somewhat readable rendering of the metadata in all its verbose XML glory: ``` print(etree.tostring(root, pretty_print=True).decode('ascii')) ``` To get information out of the tree we need to deal with the namespaces that are used for the various tags: ``` root.nsmap ``` Building the tags that we need to get to the resolution, and then walking the tree to get the resolution and its units: ``` sri = etree.QName(root.nsmap['gmd'], 'spatialRepresentationInfo').text adp = etree.QName(root.nsmap['gmd'], 'axisDimensionProperties').text dim = etree.QName(root.nsmap['gmd'], 'MD_Dimension').text res = etree.QName(root.nsmap['gmd'], 'resolution').text res_meas = etree.QName(root.nsmap['gco'], 'Measure').text resolution = ( root .find('.//{}'.format(sri)) .find('.//{}'.format(adp)) .find('.//{}'.format(dim)) .find('.//{}'.format(res)) .find('.//{}'.format(res_meas)) ) print(resolution.text, resolution.get('uom')) ``` There might be a more elegant way of doing the sequence of `find`s above if one were to dig more deeply into XPATH syntax. Similarily for the data region boundaries: ``` id_info = etree.QName(root.nsmap['gmd'], 'identificationInfo').text bag_data_id = etree.QName(root.nsmap['bag'], 'BAG_DataIdentification').text extent = etree.QName(root.nsmap['gmd'], 'extent').text ex_extent = etree.QName(root.nsmap['gmd'], 'EX_Extent').text geo_el = etree.QName(root.nsmap['gmd'], 'geographicElement').text geo_bb = etree.QName(root.nsmap['gmd'], 'EX_GeographicBoundingBox').text west_bound_lon = etree.QName(root.nsmap['gmd'], 'westBoundLongitude').text east_bound_lon = etree.QName(root.nsmap['gmd'], 'eastBoundLongitude').text north_bound_lat = etree.QName(root.nsmap['gmd'], 'northBoundLatitude').text south_bound_lat = etree.QName(root.nsmap['gmd'], 'southBoundLatitude').text decimal = etree.QName(root.nsmap['gco'], 'Decimal').text bbox = ( root .find('.//{}'.format(id_info)) .find('.//{}'.format(bag_data_id)) .find('.//{}'.format(extent)) .find('.//{}'.format(ex_extent)) .find('.//{}'.format(geo_el)) .find('.//{}'.format(geo_bb)) ) west_lon = ( bbox .find('.//{}'.format(west_bound_lon)) .find('.//{}'.format(decimal)) ) print('west:', west_lon.text) east_lon = ( bbox .find('.//{}'.format(east_bound_lon)) .find('.//{}'.format(decimal)) ) print('east:', east_lon.text) north_lat = ( bbox .find('.//{}'.format(north_bound_lat)) .find('.//{}'.format(decimal)) ) print('north:', north_lat.text) south_lat = ( bbox .find('.//{}'.format(south_bound_lat)) .find('.//{}'.format(decimal)) ) print('south:', south_lat.text) ```
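As hinted above, the chains of `find` calls can be shortened with XPath. Below is a sketch of the same lookups using `lxml`'s `xpath` method; the element paths are assumptions based on the structure navigated above, and any default (un-prefixed) namespace entry has to be dropped because `xpath` does not accept a `None` prefix.

```
# Hedged sketch: the same lookups via XPath instead of chained find() calls.
# Drop any default namespace entry; lxml's xpath() rejects a None prefix.
ns = {prefix: uri for prefix, uri in root.nsmap.items() if prefix}

resolutions = root.xpath(
    './/gmd:spatialRepresentationInfo//gmd:resolution//gco:Measure', namespaces=ns)
if resolutions:
    print(resolutions[0].text, resolutions[0].get('uom'))

west_lons = root.xpath(
    './/gmd:EX_GeographicBoundingBox//gmd:westBoundLongitude//gco:Decimal', namespaces=ns)
if west_lons:
    print('west:', west_lons[0].text)
```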
``` # Source: # http://blog.nextgenetics.net/?e=102 from IPython.display import HTML HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> The raw code for this IPython notebook is by default hidden for easier reading. To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''') ``` # **Differences in Squirrel Behavior by Fur Color in Central Park** This research will be testing the hypothesis that there is no difference between Eastern gray squirrels (*Sciurus carolinensis*) in New York City’s Central Park with different primary fur colors with regards to their willingness to approach humans. The dataset comprises 3,023 rows, each representing an individual squirrel, and I will be looking at four of the 31 columns in the dataset (primary fur color, approaches, indifferent, and runs from). It was collected in October 2018 by the Squirrel Census, with the help of 323 volunteer Squirrel Sighters, as well as The Explorers Club, NYU Department of Environmental Studies, Macaulay Honors College, the Central Park Conservancy, and New York City Department of Parks & Recreation. It is located at https://data.cityofnewyork.us/Environment/2018-Squirrel-Census-Fur-Color-Map/fak5-wcft. # **Differences in Squirrel Behavior by Fur Color in Central Park** ### **Glenn Schexnayder** ### **January 15, 2020** - What? - Willingness to approach according to primary fur color. - Where? - New York City's Central Park - When? - October 2018 - Who? - Squirrel Census - How? - Various volunteers, including student groups ``` # Import dependencies and read in the data. import pandas import numpy from matplotlib import pyplot from scipy import stats import math import seaborn from matplotlib.ticker import FuncFormatter squirrels = pandas.read_csv('C:\\Users\\gsche\\Downloads\\2018_Central_Park_Squirrel_Census_-_Squirrel_Data.csv') # Get information about the data squirrels.info() squirrels.head() # Get the relevant columns with only rows that have non-null values for fur color # and rename the columns to fit with Python naming conventions. squirrels_trimmed = squirrels.rename(columns={'Primary Fur Color': 'primary_fur_color', 'Approaches': 'approaches', 'Indifferent': 'indifferent', 'Runs from': 'runs_from'})[['primary_fur_color', 'approaches', 'indifferent', 'runs_from']].dropna() print(squirrels_trimmed.info()) squirrels_trimmed.head() ``` ### **Methodology** I will first explore the data through descriptive statistics and data visualizations. From there I will conduct a series of pairwise t-tests to test the hypothesis that there is no difference in the Eastern gray squrirrel population between different fur colors in how willing they are to approach humans. - Why? - To determine what behaviors are different between subgroups of Eastern gray squirrels due to reports of squirrels being a nuisance in Central Park. ### **Audience** This study will be of interest to anyone who would like to understand more about the Eastern gray squirrel, including biologists who might want to see what role fur color might play in their evolution. A correlation between fur color and willingness to approach humans could mean something about how fur color was selected for by the environment, which would lead to interesting questions about what factors would be involved. Primary Fur Color has some null values, so I'll have to exclude those from the analysis. 
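A quick check of how many rows are affected by those missing fur color values, using the column name from the raw dataframe loaded above:

```
# Hedged sketch: count the rows with a missing primary fur color in the raw data.
missing_fur = squirrels['Primary Fur Color'].isna().sum()
print('Rows with missing Primary Fur Color: {} of {}'.format(missing_fur, len(squirrels)))
```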
## What does the distribution of primary fur color for Eastern gray squirrels look like? Below are the little guys we're talking about. They're all Eastern gray squirrels, but they have different pigmentation that leads to the different primary fur colors. ``` import matplotlib.image as mpimg img_A = mpimg.imread('C:\\Users\\gsche\\Downloads\\squirrel.jfif') img_B = mpimg.imread('C:\\Users\\gsche\\Downloads\\cinnamon squirrel.jfif') # display images fig, ax = pyplot.subplots(1,2) ax[0].axis('off') ax[0].imshow(img_A); ax[1].axis('off') ax[1].imshow(img_B); ``` ![image.png](attachment:image.png) ``` fur_counts = pandas.DataFrame(squirrels_trimmed['primary_fur_color'].value_counts()) fur_counts # Get the unique values for Primary Fur Color and check the distribution. seaborn.set(rc={'figure.figsize':(15, 8)}) seaborn.set_style('white') x = numpy.arange(3) y = numpy.arange(0, 3000, 500) fig, ax = pyplot.subplots() pyplot.bar(x, squirrels_trimmed['primary_fur_color'].value_counts(), 0.65, color='teal', edgecolor='black') pyplot.xticks(x, ('Gray', 'Cinnamon', 'Black'), fontsize=16) pyplot.yticks(y, fontsize=16) pyplot.title('Eastern Gray Squirrel Distribution by Primary Fur Color', fontsize=20, color='black') pyplot.show() ``` Now that I've dropped the null values from the primary_fur_color, I can see that there are three unique values: gray, cinnamon, and black. I've also gotten the counts of each, and can see that by far, gray is the most common primary fur color. The other two are still greater than 60, however, so they easily meet the threshold for comparing means with t-tests. ## What do the ratios of behaviors in each of these subgroups look like? ``` # Convert the boolean values to integer values. squirrels_trimmed['approaches'] = squirrels_trimmed['approaches'].astype(int) squirrels_trimmed['indifferent'] = squirrels_trimmed['indifferent'].astype(int) squirrels_trimmed['runs_from'] = squirrels_trimmed['runs_from'].astype(int) # Create separate dataframes for the different primary fur colors. squirrels_gray = squirrels_trimmed[squirrels_trimmed.primary_fur_color=='Gray'] squirrels_cinnamon = squirrels_trimmed[squirrels_trimmed.primary_fur_color=='Cinnamon'] squirrels_black = squirrels_trimmed[squirrels_trimmed.primary_fur_color=='Black'] seaborn.set(rc={'figure.figsize':(15, 8)}) seaborn.set_style('white') labels = ['Gray', 'Cinnamon', 'Black'] approaches_means = [squirrels_gray['approaches'].mean(), squirrels_cinnamon['approaches'].mean(), squirrels_black['approaches'].mean()] indifferent_means = [squirrels_gray['indifferent'].mean(), squirrels_cinnamon['indifferent'].mean(), squirrels_black['indifferent'].mean()] runs_from_means = [squirrels_gray['runs_from'].mean(), squirrels_cinnamon['runs_from'].mean(), squirrels_black['runs_from'].mean()] x = numpy.arange(len(labels)) y = numpy.arange(0, 0.6, 0.1).round(decimals=1) fig, ax = pyplot.subplots() rects1 = ax.bar(x - 0.3, approaches_means, 0.3, label='Approaches', edgecolor='black') rects2 = ax.bar(x, indifferent_means, 0.3, label='Indifferent', edgecolor='black') rects3 = ax.bar(x + 0.3, runs_from_means, 0.3, label='Runs From', edgecolor='black') ax.set_xticks(x) ax.set_xticklabels(labels, fontsize=16) ax.set_yticklabels(y, fontsize=16) ax.legend(fontsize=16) ax.set_title('Ratio of Behavior by Primary Fur Color', fontsize=20) fig.tight_layout() pyplot.show() ``` Interestingly, at first glance, it appears as though cinnamon-colored squirrels are the most likely to approach humans, and black-colored squirrels are most likely to run away. 
Now I'll perform the t-tests to check for statistical significance. ## Are these differences significant? ``` print(stats.ttest_ind(squirrels_gray['approaches'], squirrels_cinnamon['approaches'])) print(stats.ttest_ind(squirrels_gray['approaches'], squirrels_black['approaches'])) print(stats.ttest_ind(squirrels_cinnamon['approaches'], squirrels_black['approaches']),'\n') print(stats.ttest_ind(squirrels_gray['indifferent'], squirrels_cinnamon['indifferent'])) print(stats.ttest_ind(squirrels_gray['indifferent'], squirrels_black['indifferent'])) print(stats.ttest_ind(squirrels_cinnamon['indifferent'], squirrels_black['indifferent']),'\n') print(stats.ttest_ind(squirrels_gray['runs_from'], squirrels_cinnamon['runs_from'])) print(stats.ttest_ind(squirrels_gray['runs_from'], squirrels_black['runs_from'])) print(stats.ttest_ind(squirrels_cinnamon['runs_from'], squirrels_black['runs_from']),'\n') def get_95_ci(array_1, array_2): sample_1_n = array_1.shape[0] sample_2_n = array_2.shape[0] sample_1_mean = array_1.mean() sample_2_mean = array_2.mean() sample_1_var = array_1.var() sample_2_var = array_2.var() mean_difference = sample_2_mean - sample_1_mean std_err_difference = math.sqrt((sample_1_var/sample_1_n)+(sample_2_var/sample_2_n)) margin_of_error = 1.96 * std_err_difference ci_lower = mean_difference - margin_of_error ci_upper = mean_difference + margin_of_error return("The difference in means at the 95% confidence interval (two-tail) is between "+str(ci_lower)+" and "+str(ci_upper)+".") print(get_95_ci(squirrels_gray['approaches'], squirrels_cinnamon['approaches'])) print(get_95_ci(squirrels_gray['runs_from'], squirrels_black['runs_from'])) ``` In most cases, no, but... 3 to 9 point difference at 95% confidence between gray-colored and cinnamon-colored squirrels as to whether they'll approach. From the series of pairwise t-tests, we can see that at the 0.05 significance level, there are differences between gray-colored and cinnamon-colored squirrels in their propensity to approach humans, and between gray-colored and black-colored squirrels in their propensity to run away from humans, but there are no significant differences between any of the other groups in any of the other categories. Generating 95% confidence intervals, however, shows that there is not a difference between gray- and black-colored squirrels with regard to their propensity to run away from humans, so it appears that we only have evidence between the gray- and cinnamon-colored squirrels with regard to their willingness to approach humans. ## **What does this mean?** There may be differences in the general population of Eastern gray squirrels as to how willing they are to approach humans depending on their fur color. The statistical tests show that there are reasons to believe that Eastern gray squirrels with a primary fur color of gray are more likely to approach humans than those with cinnamon as a primary fur color, and this may be due to some selection pressures based on the environment and how it interacts with their primary fur colors. On the other hand, it could be due to issues with the way the data was collected, since there were rows where all three values were false, which doesn't seem logically possible. It may be worth looking into further, just to make sure that the differences weren't due to anomalies in the data collection methods, but it seems like it would be pretty far down in priority. 
## **What does this mean?** - Statistically significant difference between gray-colored and cinnamon-colored Eastern gray squirrels in willingness to approach humans. - Gray-colored squirrels are more likely to approach. - Focus on what causes gray squirrels to approach more, and study causes to determine possible mitigation strategies that park management can enact that will work for both humans and squirrels. # **Questions?**
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png)

### Egeria Hands-On Lab
# Welcome to the Configuring Egeria Servers Lab

## Introduction

Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.

In this hands-on lab you will learn how to configure the metadata servers used by [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).

## The scenario

<img src="https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png" style="float:left">

Coco Pharmaceuticals is going through a major business transformation that requires them to drastically reduce their cycle times, collaborate laterally across the different parts of the business and react quickly to the changing needs of their customers. (See [this link](https://opengovernance.odpi.org/coco-pharmaceuticals/) for the background to this transformation.)

Part of the changes needed to the IT systems that support the business is the roll-out of a distributed open metadata and governance capability that is provided by Egeria.

[Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at Coco Pharmaceuticals. In this hands-on lab Gary is configuring the servers that support this open ecosystem. These servers are collectively called Open Metadata and Governance (OMAG) Servers.

Gary's userId is `garygeeke`.

```
import requests

adminUserId = "garygeeke"
```

He needs to define the OMAG servers for Coco Pharmaceuticals.

```
organizationName = "Coco Pharmaceuticals"
```

## Open Metadata and Governance (OMAG) management landscape

At the heart of an open metadata and governance landscape are the servers that store and exchange metadata in a peer-to-peer exchange called the [open metadata repository cohort](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/cohort-member.html). These servers are collectively called **cohort members**. There are three types of cohort member that Gary needs to consider:

* A [Metadata Server](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/metadata-server.html) that uses a native Egeria repository to store open metadata. There should be at least one of these servers in a cohort. It is used either to support a community of users that are using the Egeria functions directly or to fill in any gaps in the metadata support provided by the third party tools that are connected to the cohort.

* A [Metadata Access Point](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/metadata-access-point.html) that has no metadata repository of its own and uses federated queries to retrieve and store metadata in the other repositories connected to the cohort.

* A [Repository Proxy](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/repository-proxy.html) that connects a third party metadata server into the cohort.

Gary has decided to deploy a separate cohort member server for each part of the organization that owns [assets](https://egeria.odpi.org/open-metadata-implementation/access-services/docs/concepts/assets/). You can think of each of these servers as supporting a community of users within Coco Pharmaceuticals.
The servers are as follows:

* cocoMDS1 - Data Lake Operations - a **metadata server** used to manage the data in the data lake.
* cocoMDS2 - Governance - a **metadata server** used by all of the governance teams to operate the governance programs.
* cocoMDS3 - Research - a **metadata server** used by the research teams who are developing new treatments.
* cocoMDS4 - Data Lake Users - a **metadata access point** used by general business users and the executive team to access data from the data lake.
* cocoMDS5 - Business Systems - a **repository proxy** used to connect to the existing ETL tool that manages data movement amongst the business systems. It has a metadata record of the operational business systems such as procurement, sales, human resources and finance, and of the movement of data between them. This tool is also loading data from the business systems into the data lake. Its metadata is critical for providing lineage for the data used to run the business.
* cocoMDS6 - Manufacturing - a **metadata server** used by the supplies warehouse, manufacturing and distribution teams.
* cocoMDSx - Development - a **metadata server** used by the software development teams building new IT capability.
* cocoEDGEi - Manufacturing sensors edge node servers (many of them) - these **metadata servers** catalog the collected sensor data.

In addition, Coco Pharmaceuticals needs servers to support Egeria's user interface and automated metadata processing:

* cocoView1 - a [View Server](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/view-server.html) that runs the services for the user interface.
* exchangeDL01 - an [Integration Daemon](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/integration-daemon.html) server that supports the automatic exchange of metadata with third party technologies.
* governDL01 - an [Engine Host](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/engine-host.html) server that runs governance functions that monitor, validate, correct and enrich metadata for use by all of the technologies in the connected open metadata ecosystem.

These servers will each be configured in later parts of this hands-on lab, but first there are decisions to be made about the platforms that the servers will run on and how they will be connected together.

### Open Metadata and Governance (OMAG) Server Platforms

Coco Pharmaceuticals' servers must be hosted on at least one OMAG Server Platform. This is a single executable (application) that can be started from the command line or a script, or as part of a pre-built container environment such as `docker-compose` or `kubernetes`.

If you are running this notebook as part of an Egeria hands-on lab then the server platforms you need are already started. Run the following command to check that the platforms are running.

```
%run common/environment-check.ipynb
```

----
If one of the platforms is not running, follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/). Once the platforms are running you are ready to proceed.

----
Most of the servers support a fairly stable environment and can share an OMAG Server Platform because the workload they are supporting is predictable. The data lake, however, requires a lot of active governance and is evolving rapidly. To isolate this churn, Gary chooses to put all of the metadata and governance servers for the data lake onto their own platform.
The development team requested that their infrastructure be completely separate from the operational systems, so they are given their own server platform. Finally, each of the edge servers will run its own OMAG Server Platform to support its own metadata server.

Figure 1 shows which servers will sit in each platform. The cohort members are shown in white, governance servers in orange and the view server (that supports the UI) is in green.

![Figure 1](images/coco-pharmaceuticals-systems-omag-server-platforms.png)
> **Figure 1:** Coco Pharmaceuticals' OMAG Server Platforms

The sensor edge node servers used to monitor the warehouse operation and manufacturing process each have their own platform and are not yet included in this notebook.

### Open Metadata Repository Cohorts

A metadata server, metadata access point and repository proxy can become a member of none, one or many cohorts. Once a server has joined a cohort it can exchange metadata with the other members of that cohort. So the cohorts define scopes of sharing.

Gary decides to begin with three open metadata repository cohorts:

* **cocoCohort** - The production cohort contains all of the servers that are used to run, coordinate and govern the business.
* **devCohort** - The development cohort where the development teams are building and testing new capability. Much of their metadata describes the software components under construction and the governance of the software development lifecycle.
* **iotCohort** - The IoT cohort used to manage the sensors and robots in the manufacturing systems. The metadata produced by the sensors and robots is only of interest to the manufacturing and governance teams.

Figure 2 shows which servers belong to each cohort.

![Figure 2](images/coco-pharmaceuticals-systems-cohorts.png)
> **Figure 2:** Membership of Coco Pharmaceuticals' cohorts

Below are the names of the three cohorts.

```
cocoCohort = "cocoCohort"
devCohort = "devCohort"
iotCohort = "iotCohort"
```

At the heart of each cohort is an event topic. By default, Egeria uses [Apache Kafka](https://kafka.apache.org/) topics. The servers that will join a cohort need to be configured with the host name and port where Kafka is running. The command below pulls the value from an environment variable called `eventBusURLroot` with a default value of `localhost:9092`. It is used in all of the server configuration documents to connect the servers to Kafka.

```
eventBusURLroot = os.environ.get('eventBusURLroot', 'localhost:9092')

jsonContentHeader = {'content-type':'application/json'}
eventBusBody = {
    "producer": {
        "bootstrap.servers": eventBusURLroot
    },
    "consumer": {
        "bootstrap.servers": eventBusURLroot
    }
}
```

## Access services

[The Open Metadata Access Services (OMAS)](https://egeria.odpi.org/open-metadata-implementation/access-services/) provide domain-specific services for data tools, engines and platforms to integrate with open metadata. These are the different types of access service.

```
getAccessServices(cocoMDS1PlatformName, cocoMDS1PlatformURL)
```

The table below shows which access services are needed by each server.
| Access Service       | cocoMDS1 | cocoMDS2 | cocoMDS3 | cocoMDS4 | cocoMDS5 | cocoMDS6 | cocoMDSx | cocoEDGE*i* |
| :------------------- | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :---------: |
| asset-catalog        | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
| asset-consumer       | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
| asset-owner          | Yes | Yes | Yes | No  | No | Yes | Yes | No |
| community-profile    | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
| glossary-view        | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| data-science         | No  | No  | Yes | Yes | No | Yes | Yes | No |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| subject-area         | No  | Yes | Yes | No  | No | Yes | Yes | No |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| governance-program   | No | Yes | No | No | No | No | No | No |
| data-privacy         | No | Yes | No | No | No | No | No | No |
| security-officer     | No | Yes | No | No | No | No | No | No |
| asset-lineage        | No | Yes | No | No | No | No | No | No |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| discovery-engine     | Yes | No  | Yes | No | No | Yes | Yes | No |
| governance-engine    | Yes | Yes | Yes | No | No | Yes | Yes | No |
| asset-manager        | Yes | No  | Yes | No | No | Yes | Yes | No |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| data-engine          | Yes | No | No | No | No | Yes | No | Yes |
| data-manager         | Yes | No | No | No | No | Yes | No | Yes |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| it-infrastructure    | No | Yes | No  | No | No | Yes | Yes | No |
| project-management   | No | Yes | Yes | No | No | Yes | Yes | No |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| software-developer   | No | No | No | No | No | No | Yes | No |
| devops               | No | No | No | No | No | No | Yes | No |
| digital-architecture | No | No | No | No | No | No | Yes | No |
| design-model         | No | No | No | No | No | No | Yes | No |
| -------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |

## Egeria Server Configuration Overview

Open metadata servers are configured using REST API calls to an OMAG Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.

As each configuration call is made, the OMAG Server Platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.

The configuration document will then be deployed to the OMAG Server Platform that is to host the server. When a request is made to this OMAG Server Platform to start the server, it reads the configuration document and initializes the server with the appropriate services.

## Configuration Set Up

A server can be configured by any OMAG Server Platform - it does not have to be the same platform where the server will run.
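The helper functions loaded earlier by `%run common/environment-check.ipynb` issue these configuration REST calls for you. If you would like to peek at the configuration document that a platform is building up for a particular server, a minimal sketch is shown below. It is illustrative only and not part of the lab flow: it assumes the admin services expose a `GET .../servers/{serverName}/configuration` endpoint following the same URL pattern as the configuration calls in this notebook, and that the lab platforms use self-signed certificates (hence `verify=False`).

```
# Illustrative sketch only (assumed endpoint) - fetch the configuration document
# that a platform has accumulated for a server so far.
platformURL = "https://localhost:9443"   # assumed platform root URL for this sketch
serverName = "cocoMDS1"

configURL = platformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/' + serverName + '/configuration'

response = requests.get(configURL, verify=False)   # lab platforms use self-signed certificates
print(response.json())
```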
For this hands-on lab we will use the development team's OMAG Server Platform to create the servers' configuration documents and then deploy them to the platforms where they will run.

```
adminPlatformURL = devPlatformURL
```

The URLs for the configuration REST APIs have a common structure and begin with the following root:

```
adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/'
```

Many of Coco Pharmaceuticals' metadata servers need a local repository to store metadata about the data and processing occurring in the data lake.

Egeria includes two types of repositories natively. One is an **in-memory repository** that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the **local graph repository**.

The choice of local repository is made by specifying the local repository mode. The variables below show the two options. The `metadataRepositoryType` identifies which one is going to be used in the configuration.

```
inMemoryRepositoryOption = "in-memory-repository"
graphRepositoryOption = "local-graph-repository"

# Pick up which repository type to use from the environment if set, otherwise default to in-memory
metadataRepositoryType = os.environ.get('repositoryType', inMemoryRepositoryOption)
```

Egeria supports instance-based security. These checks can be customized through an [Open Metadata Security Connector](https://egeria.odpi.org/open-metadata-implementation/common-services/metadata-security/). Coco Pharmaceuticals have written their own connector to support the specific rules of their industry. The Connection definition below tells a server how to load this connector. It needs to be included in each server's configuration document.

```
serverSecurityConnectionBody = {
    "class": "Connection",
    "connectorType": {
        "class": "ConnectorType",
        "connectorProviderClassName": "org.odpi.openmetadata.metadatasecurity.samples.CocoPharmaServerSecurityProvider"
    }
}
```

Finally, to ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The variable below defines the value that will be added to the configuration document for each server.

```
maxPageSize = '100'
```

## Configuring cocoMDS1 - Data Lake Operations metadata server

This section configures the `cocoMDS1` server. The server name is passed on every configuration call to identify which configuration document to update with the new configuration. The configuration document is created automatically on first use.
```
mdrServerName = "cocoMDS1"
mdrServerUserId = "cocoMDS1npa"
mdrServerPassword = "cocoMDS1passw0rd"
mdrServerPlatform = dataLakePlatformURL
metadataCollectionName = "Data Lake Catalog"

print("Configuring " + mdrServerName + "...")

configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, mdrServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId)
configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody)
configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody)
configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType)
configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName)
configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort)

print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...")

accessServiceOptions = {
    "SupportedZones": ["quarantine", "clinical-trials", "research", "data-lake", "trash-can"]
}

configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions)

accessServiceOptions["DefaultZones"] = [ "quarantine" ]

configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {})
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-engine', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-manager', accessServiceOptions)
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions)

print("\nDone.")
```

----
## Configuring cocoMDS2 - Governance metadata server

This section configures the `cocoMDS2` server. This server is configured in a similar way to cocoMDS1 except that it has different Open Metadata Access Services (OMASs) enabled and it joins all of the cohorts. The code below covers the basic set up of the server properties, security, event bus and local repository.
``` mdrServerName = "cocoMDS2" mdrServerUserId = "cocoMDS2npa" mdrServerPassword = "cocoMDS2passw0rd" mdrServerPlatform = corePlatformURL metadataCollectionName = "Governance Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType) configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) # Note: cohort membership is configured for all of the cohorts here configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, devCohort) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, iotCohort) print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...") configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-program', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-privacy', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'security-officer', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-lineage', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', {}) print("\nDone.") ``` ---- ## Configuring cocoMDS3 - Research Server cocoMDS3 is used by the research teams who are developing new treatments. These teams are working with their own assets as well as assets coming from the data lake. So they have their own repository and connector to the core cohort to access all of the operational metadata. This is one of the big changes brought by Coco Pharaceuticals' business transformation. In their old business model, the research teams were completely separate from the operational part of the organization. Now they need to be an active member of the day to day running of the organization, supporting the development of personalized medicines and their use in treating patients. 
``` mdrServerName = "cocoMDS3" mdrServerUserId = "cocoMDS3npa" mdrServerPassword = "cocoMDS3passw0rd" mdrServerPlatform = corePlatformURL metadataCollectionName = "Research Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType) configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort) print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...") accessServiceOptions = { "SupportedZones": ["personal-files", "clinical-trials", "research", "data-lake", "trash-can"] } configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions) accessServiceOptions["DefaultZones"] = [ "personal-files" ] configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions) print("\nDone.") ``` ---- ## Configuring cocoMDS4 - Data Lake Users Server cocoMDS4 used by general business users and the executive team to access data from the data lake. It does not have a repository of its own. Instead it issues federated queries to the other repositories in the `cocoCohort`. 
``` mdrServerName = "cocoMDS4" mdrServerUserId = "cocoMDS4npa" mdrServerPassword = "cocoMDS4passw0rd" mdrServerPlatform = dataLakePlatformURL metadataCollectionName = "Data Lake Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) # Note: no metadata repository or collection configuration here configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort) print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...") accessServiceOptions = { "SupportedZones": [ "data-lake" ] } configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions) print("\nDone.") ``` ---- ## Configuring cocoMDS5 - Business Systems Server cocoMDS5 is a repository proxy to an ETL tool called `iisCore01`. This ETL tool is well established in Coco Pharmaceuticals and has a built-in metadata repository that contains information about their operational business systems such as procurement, sales, human resources and finance. This ETL tool has its own user interface and services so the OMASs are not enabled. 
``` mdrServerName = "cocoMDS5" mdrServerUserId = "cocoMDS5npa" mdrServerPassword = "cocoMDS5passw0rd" mdrServerPlatform = corePlatformURL metadataCollectionName = "Business Systems Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureRepositoryProxyDetails(adminPlatformURL, adminUserId, mdrServerName, "org.odpi.openmetadata.adapters.repositoryservices.readonly.repositoryconnector.ReadOnlyOMRSRepositoryConnectorProvider") configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort) # Note: no access service configuration here # Still need to add startup Archive print("\nDone.") ``` ---- ## Configuring cocoMDS6 - Manufacturing Server cocoMDS6 is the repository server used by the warehouse, manufacturing and distribution teams. It supports the systems for this part of the organization and acts as a hub for monitoring the IoT environment. ``` mdrServerName = "cocoMDS6" mdrServerUserId = "cocoMDS6npa" mdrServerPassword = "cocoMDS6passw0rd" mdrServerPlatform = corePlatformURL metadataCollectionName = "Manufacturing Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType) configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, iotCohort) print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...") accessServiceOptions = { "SupportedZones": [ "manufacturing" ], "DefaultZones" : [ "manufacturing"] } configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"}) configureAccessService(adminPlatformURL, 
adminUserId, mdrServerName, 'glossary-view', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-manager', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions) print("\nDone.") ``` ---- ## Configuring cocoMDSx - Development Server cocoMDSx is used by the development teams building new IT capablity. It will hold all of the software component assets and servers used for development and devOps. The development teams have their own OMAG Server Platform and cohort called 'devCohort'. ``` mdrServerName = "cocoMDSx" mdrServerUserId = "cocoMDSxnpa" mdrServerPassword = "cocoMDSxpassw0rd" mdrServerPlatform = devPlatformURL metadataCollectionName = "Development Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType) configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, devCohort) print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...") accessServiceOptions = { "SupportedZones": [ "sdlc" ], "DefaultZones": [ "sdlc" ] } configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {}) configureAccessService(adminPlatformURL, 
adminUserId, mdrServerName, 'asset-manager', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'software-developer', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'devops', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'digital-architecture', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'design-model', accessServiceOptions) print("\nDone.") ``` ---- ## Configuring the exchangeDL01 Integration Daemon The **exchangeDL01** integration daemon server supports the automatic exchange of metadata with third party technologies. It runs [integration connectors](https://egeria.odpi.org/open-metadata-implementation/governance-servers/integration-daemon-services/docs/integration-connector.html) that each connect to a particular third party technology to exchange metadata. Egeria offers the following Open Metadata Integration Services (OMIS), or integration services for short. These integration services provide specialist services for an integration connector. The command below lists the different types of integration services. ``` getIntegrationServices(exchangeDL01PlatformName, exchangeDL01PlatformURL) ``` An integration connector depends on a single integration service. Gary plans to use two integration connectors supplied by Egeria: The **DataFilesMonitorIntegrationConnector** maintains a DataFile asset for each file in the directory (or any subdirectory). When a new file is created, a new DataFile asset is created. If a file is modified, the lastModified property of the corresponding DataFile asset is updated. When a file is deleted, its corresponding DataFile asset is also deleted. The **DataFolderMonitorIntegrationConnector** maintains a DataFolder asset for the directory. The files and directories underneath it are assumed to be elements/records in the DataFolder asset and so each time there is a change to the files and directories under the monitored directory, it results in an update to the lastModified property of the corresponding DataFolder asset. They will be used to automatically catalog data files provided by the different partner hospitals and move them from the landing area to the data lake once the cataloguing is complete. Figure 3 shows the integration daemon with its two connectors. It uses cocoMDS1 to store and retrieve metadata, since that is where the assets for the data lake are catalogued. ![Figure 3](images/integration-daemon.png) > **Figure 3:** exchangeDL01 with its partner metadata server ### Configuring the server The commands below configure the integration daemon with the Files Integrator OMIS and the two connectors. 
``` daemonServerName = "exchangeDL01" daemonServerPlatform = dataLakePlatformURL daemonServerUserId = "exchangeDL01npa" daemonServerPassword = "exchangeDL01passw0rd" mdrServerName = "cocoMDS1" mdrServerPlatform = dataLakePlatformURL OakDeneConnectorName = "OakDeneLandingAreaFilesMonitor" OakDeneConnectorUserId = "onboardDL01npa" OakDeneConnectorSourceName = "HospitalLandingArea" OakDeneConnectorFolder = fileSystemRoot + '/landing-area/hospitals/oak-dene/clinical-trials/drop-foot' OakDeneConnectorConnection = { "class" : "Connection", "connectorType" : { "class" : "ConnectorType", "connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFilesMonitorIntegrationProvider" }, "endpoint" : { "class" : "Endpoint", "address" : OakDeneConnectorFolder } } OldMarketConnectorName = "OldMarketLandingAreaFilesMonitor" OldMarketConnectorUserId = "onboardDL01npa" OldMarketConnectorSourceName = "HospitalLandingArea" OldMarketConnectorFolder = fileSystemRoot + '/landing-area/hospitals/old-market/clinical-trials/drop-foot' OldMarketConnectorConnection = { "class" : "Connection", "connectorType" : { "class" : "ConnectorType", "connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFilesMonitorIntegrationProvider" }, "endpoint" : { "class" : "Endpoint", "address" : OldMarketConnectorFolder } } folderConnectorName = "DropFootClinicalTrialResultsFolderMonitor" folderConnectorUserId = "monitorDL01npa" folderConnectorSourceName = "DropFootClinicalTrialResults" folderConnectorFolder = fileSystemRoot + '/data-lake/research/clinical-trials/drop-foot/weekly-measurements' folderConnectorConnection = { "class" : "Connection", "connectorType" : { "class" : "ConnectorType", "connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFolderMonitorIntegrationProvider" }, "endpoint" : { "class" : "Endpoint", "address" : folderConnectorFolder } } print("Configuring " + daemonServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, daemonServerName, daemonServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, daemonServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, daemonServerName) configureOwningOrganization(adminPlatformURL, adminUserId, daemonServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, daemonServerName, daemonServerUserId) configurePassword(adminPlatformURL, adminUserId, daemonServerName, daemonServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, daemonServerName, serverSecurityConnectionBody) configureDefaultAuditLog(adminPlatformURL, adminUserId, daemonServerName) print("\nConfiguring " + daemonServerName + " integration connectors ...") connectorConfigs = [ { "class" : "IntegrationConnectorConfig", "connectorName" : OakDeneConnectorName, "connectorUserId" : OakDeneConnectorUserId, "connection" : OakDeneConnectorConnection, "metadataSourceQualifiedName" : OakDeneConnectorSourceName, "refreshTimeInterval" : 10, "usesBlockingCalls" : "false" }, { "class" : "IntegrationConnectorConfig", "connectorName" : OldMarketConnectorName, "connectorUserId" : OldMarketConnectorUserId, "connection" : OldMarketConnectorConnection, "metadataSourceQualifiedName" : OldMarketConnectorSourceName, "refreshTimeInterval" : 10, "usesBlockingCalls" : "false" }, { "class" : "IntegrationConnectorConfig", "connectorName" : folderConnectorName, "connectorUserId" : folderConnectorUserId, "connection" : 
folderConnectorConnection, "metadataSourceQualifiedName" : folderConnectorSourceName, "refreshTimeInterval" : 10, "usesBlockingCalls" : "false" }]

configureIntegrationService(adminPlatformURL, adminUserId, daemonServerName, mdrServerName, mdrServerPlatform, "files-integrator", {}, connectorConfigs)

print ("\nDone.")
```

----
## Configuring governDL01 Governance Engine Hosting Server

The Engine Host OMAG server is a special kind of governance server that hosts one or more governance engines. A governance engine is a set of specialized services that perform specific functions to manage the digital landscape and the metadata that describes it.

### Automated metadata discovery

One example of a type of governance engine is a discovery engine. The discovery engine runs discovery services. Discovery services analyze the content of a real-world artifact or resource. For example, a discovery service may open up a data set and assess the quality of the data inside.

The result of a discovery service's analysis is stored in a metadata server as a discovery analysis report that is chained off of the asset's definition. This report can be retrieved either through the engine host server's API or through the metadata server's APIs, specifically the Discovery Engine OMAS and the Asset Owner OMAS.

The interfaces used by discovery services are defined in the [Open Discovery Framework (ODF)](https://egeria.odpi.org/open-metadata-implementation/frameworks/open-discovery-framework/). This framework enables new implementations of discovery services to be deployed to the discovery engines.

### Automated governance

Another type of governance engine is a governance action engine. The governance action engine runs governance action services. Governance action services monitor the asset metadata and verify that it is set up correctly, determine how to fix anomalies, errors and omissions, make the necessary changes, and provision real-world artifacts and resources based on the resulting metadata.

### Understanding the engine services

Coco Pharmaceuticals runs one engine host server for its data lake. It is called `governDL01` and it runs on the data lake platform. Within the engine host server there are engine services. Each engine service supports a specific type of governance engine. The command below shows you the different types of engine services.

```
getEngineServices(governDL01PlatformName, governDL01PlatformURL)
```

----

The governDL01 server is the Engine Host server that runs governance functions that monitor, validate, correct and enrich metadata for use by all of the technologies in the connected open metadata ecosystem.

The **Asset Analysis** Open Metadata Engine Service (OMES) is responsible for running discovery engines from the [Open Discovery Framework (ODF)](https://egeria.odpi.org/open-metadata-implementation/frameworks/open-discovery-framework/docs/). Coco Pharmaceuticals has two discovery engines:

* **AssetDiscovery** - extracts metadata about different types of assets on request.
* **AssetQuality** - assesses the quality of the content of assets on request.

The **Governance Action** Open Metadata Engine Service (OMES) is responsible for running governance action engines from the [Governance Action Framework (GAF)](https://egeria.odpi.org/open-metadata-implementation/frameworks/governance-action-framework/). Coco Pharmaceuticals has one governance action engine:

* **AssetGovernance** - monitors for new assets in the landing areas, automatically curates them and provisions them in the data lake.
Figure 4 shows the governDL01 engine host server together with the metadata servers it works with. It retrieves its engine definitions from cocoMDS2 and uses cocoMDS1 to store and retrieve metadata about the data lake's assets, since that is where they are catalogued.

![Figure 4](images/engine-host.png)
> **Figure 4:** Metadata servers for governDL01

### Configuring the server

The commands below configure the engine host server with the Asset Analysis OMES and Governance Action OMES. The definitions of the named governance engines and their services are retrieved from the `cocoMDS2` metadata server through its Governance Engine OMAS, while the engine services themselves work with the `cocoMDS1` metadata server.

```
engineServerName = "governDL01"
engineServerPlatform = dataLakePlatformURL
engineServerUserId = "governDL01npa"
engineServerPassword = "governDL01passw0rd"

engineServerMDRName = "cocoMDS2"
engineServerMDRPlatform = corePlatformURL

mdrServerName = "cocoMDS1"
mdrServerPlatform = dataLakePlatformURL

print("Configuring " + engineServerName + "...")

configurePlatformURL(adminPlatformURL, adminUserId, engineServerName, engineServerPlatform)
configureMaxPageSize(adminPlatformURL, adminUserId, engineServerName, maxPageSize)
clearServerType(adminPlatformURL, adminUserId, engineServerName)
configureOwningOrganization(adminPlatformURL, adminUserId, engineServerName, organizationName)
configureUserId(adminPlatformURL, adminUserId, engineServerName, engineServerUserId)
configurePassword(adminPlatformURL, adminUserId, engineServerName, engineServerPassword)
configureSecurityConnection(adminPlatformURL, adminUserId, engineServerName, serverSecurityConnectionBody)
configureDefaultAuditLog(adminPlatformURL, adminUserId, engineServerName)

print("\nConfiguring " + engineServerName + " engines ...")

configureEngineDefinitionServices(adminPlatformURL, adminUserId, engineServerName, engineServerMDRName, engineServerMDRPlatform)

discoveryEngines = [
    {
        "class" : "EngineConfig",
        "engineQualifiedName" : "AssetDiscovery",
        "engineUserId" : "findItDL01npa"
    },
    {
        "class" : "EngineConfig",
        "engineQualifiedName" : "AssetQuality",
        "engineUserId" : "findItDL01npa"
    }]

governanceActionEngines = [
    {
        "class" : "EngineConfig",
        "engineQualifiedName" : "AssetGovernance",
        "engineUserId" : "findItDL01npa"
    }]

configureGovernanceEngineService(adminPlatformURL, adminUserId, engineServerName, mdrServerName, mdrServerPlatform, "asset-analysis", discoveryEngines)
configureGovernanceEngineService(adminPlatformURL, adminUserId, engineServerName, mdrServerName, mdrServerPlatform, "governance-action", governanceActionEngines)

print ("\nDone.")
```

----
# Configuring the View Server and View Services

Egeria's UI allows Coco Pharmaceuticals' employees to understand more about their metadata environment. This UI uses special services, called view services, that run in an Egeria View Server.

```
getViewServices(cocoView1PlatformName, cocoView1PlatformURL)
```

This is an initial version of an example to configure the view services. Since this area is still in development the configuration is likely to change, and so all of the functions are in this section of the notebook rather than consolidated with our common functions.

The new UI is deployed in the k8s and docker-compose environments (nodeport 30091 in k8s, 18091 in compose). The tenant (`coco` in this case) must be explicitly provided in the URL, as must navigation to the login page. For example, in compose go to https://localhost:18091/coco/login. Further docs will be added in future releases.
Please use http://slack.lfai.foundation to get further help ``` # Common functions def configureGovernanceSolutionViewService(adminPlatformURL, adminUserId, viewServerName, viewService, remotePlatformURL,remoteServerName): adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/' print (" ... configuring the " + viewService + " Governance Solution View Service for this server...") url = adminCommandURLRoot + viewServerName + '/view-services/' + viewService jsonContentHeader = {'content-type':'application/json'} viewBody = { "class": "ViewServiceConfig", "omagserverPlatformRootURL": remotePlatformURL, "omagserverName" : remoteServerName } postAndPrintResult(url, json=viewBody, headers=jsonContentHeader) def configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, configBody): adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/' print (" ... configuring the " + viewService + " Integration View Service for this server...") url = adminCommandURLRoot + viewServerName + '/view-services/' + viewService jsonContentHeader = {'content-type':'application/json'} postAndPrintResult(url, json=configBody, headers=jsonContentHeader) # A view server supports the presentation server UI (a node based app). Here we run it on the datalake platform viewServerName = "cocoView1" viewServerUserId = "cocoView1npa" viewServerPassword = "cocoView1passw0rd" viewServerPlatform = dataLakePlatformURL viewServerType = "View Server" # Configuration is similar to most servers print("Configuring " + viewServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, viewServerName, viewServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, viewServerName) configureServerType(adminPlatformURL,adminUserId,viewServerName,viewServerType) configureOwningOrganization(adminPlatformURL, adminUserId, viewServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, viewServerName, viewServerUserId) configurePassword(adminPlatformURL, adminUserId, viewServerName, viewServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, viewServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, viewServerName, eventBusBody) configureDefaultAuditLog(adminPlatformURL, adminUserId, viewServerName) # The governance solution view services currently only consist of glossary author print ("Configuring the Governance Solution View Services") remotePlatformURL=corePlatformURL remoteServerName="cocoMDS2" viewService="glossary-author" configureGovernanceSolutionViewService(adminPlatformURL, adminUserId, viewServerName, viewService, remotePlatformURL,remoteServerName) print ("Configuring the Integration View Services") # repository explorer integration view service viewService="rex" rexConfigBody = { "class":"IntegrationViewServiceConfig", "viewServiceAdminClass":"org.odpi.openmetadata.viewservices.rex.admin.RexViewAdmin", "viewServiceFullName":"Repository Explorer", "viewServiceOperationalStatus":"ENABLED", "omagserverPlatformRootURL": "UNUSED", "omagserverName" : "UNUSED", "resourceEndpoints" : [ { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Core Platform", "platformName" : "Core", "platformRootURL" : corePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "DataLake 
Platform", "platformName" : "DataLake", "platformRootURL" : dataLakePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Development Platform", "platformName" : "Development", "platformRootURL" : devPlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS1", "description" : "Data Lake Operations", "platformName" : "DataLake", "serverName" : "cocoMDS1" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS2", "description" : "Governance", "platformName" : "Core", "serverName" : "cocoMDS2" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS3", "description" : "Research", "platformName" : "Core", "serverName" : "cocoMDS3" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS5", "description" : "Business Systems", "platformName" : "Core", "serverName" : "cocoMDS5" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS6", "description" : "Manufacturing", "platformName" : "Core", "serverName" : "cocoMDS6" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDSx", "description" : "Development", "platformName" : "Development", "serverName" : "cocoMDSx" }, ] } configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, rexConfigBody) # type-explorer has endpoints viewService="tex" texConfigBody = { "class":"IntegrationViewServiceConfig", "viewServiceAdminClass":"org.odpi.openmetadata.viewservices.tex.admin.TexViewAdmin", "viewServiceFullName":"Type Explorer", "viewServiceOperationalStatus":"ENABLED", "omagserverPlatformRootURL": "UNUSED", "omagserverName" : "UNUSED", "resourceEndpoints" : [ { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Core Platform", "platformName" : "Core", "platformRootURL" : corePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "DataLake Platform", "platformName" : "DataLake", "platformRootURL" : dataLakePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Development Platform", "platformName" : "Development", "platformRootURL" : devPlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS1", "description" : "Data Lake Operations", "platformName" : "DataLake", "serverName" : "cocoMDS1" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS2", "description" : "Governance", "platformName" : "Core", "serverName" : "cocoMDS2" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS3", "description" : "Research", "platformName" : "Core", "serverName" : "cocoMDS3" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS5", "description" : "Business Systems", "platformName" : "Core", "serverName" : "cocoMDS5" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS6", "description" : "Manufacturing", "platformName" : "Core", "serverName" : "cocoMDS6" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDSx", "description" : "Development", "platformName" : "Development", 
"serverName" : "cocoMDSx" }, ] } configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, texConfigBody) # Dino provides insight into the operational environment of egeria - this config body allows coco's platforms & servers to be accessed viewService="dino" DinoConfigBody = { "class":"IntegrationViewServiceConfig", "viewServiceAdminClass":"org.odpi.openmetadata.viewservices.dino.admin.DinoViewAdmin", "viewServiceFullName":"Dino", "viewServiceOperationalStatus":"ENABLED", "omagserverPlatformRootURL": "UNUSED", "omagserverName" : "UNUSED", "resourceEndpoints" : [ { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Core Platform", "platformName" : "Core", "platformRootURL" : corePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "DataLake Platform", "platformName" : "DataLake", "platformRootURL" : dataLakePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Development Platform", "platformName" : "Development", "platformRootURL" : devPlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS1", "description" : "Data Lake Operations", "platformName" : "DataLake", "serverName" : "cocoMDS1" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS2", "description" : "Governance", "platformName" : "Core", "serverName" : "cocoMDS2" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS3", "description" : "Research", "platformName" : "Core", "serverName" : "cocoMDS3" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS4", "description" : "Data Lake Users", "platformName" : "DataLake", "serverName" : "cocoMDS4" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS5", "description" : "Business Systems", "platformName" : "Core", "serverName" : "cocoMDS5" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS6", "description" : "Manufacturing", "platformName" : "Core", "serverName" : "cocoMDS6" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDSx", "description" : "Development", "platformName" : "Development", "serverName" : "cocoMDSx" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoView1", "description" : "View Server", "platformName" : "DataLake", "serverName" : "cocoView1" }, ] } configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, DinoConfigBody) print ("\nDone.") ``` # Deploying server configuration The commands that have been issued so far have created a configuration document for each server. These configuration documents are currently local to the Development OMAG Server Platform where the adminstration commands were issued (figure 3). ![Figure 3](images/creating-configuration-documents.png) > **Figure 3:** Creating configuration documents using administration commands If servers are to be started on the other server platforms then their configuration documents need to be deployed (copied) to these platforms (figure 4). 
![Figure 4](images/deploying-configuration-documents.png)
> **Figure 4:** Deploying configuration documents

However, before deploying the configuration documents, the receiving OMAG Server Platforms need to be running. The code below checks that the Core, Data Lake and Development OMAG Server Platforms are running.

```
print("\nChecking OMAG Server Platform availability...")

checkServerPlatform("Data Lake Platform", dataLakePlatformURL)
checkServerPlatform("Core Platform", corePlatformURL)
checkServerPlatform("Dev Platform", devPlatformURL)

print ("\nDone.")
```

----
Make sure each of the platforms is running.

----
The commands below deploy the server configuration documents to the server platforms where the servers will run.

```
print("\nDeploying server configuration documents to appropriate platforms...")

deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS1", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS2", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS3", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS4", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS5", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS6", corePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDSx", devPlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "exchangeDL01", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "governDL01", dataLakePlatformURL)
deployServerToPlatform(adminPlatformURL, adminUserId, "cocoView1", dataLakePlatformURL)

print("\nDone.")
```

----
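If you would like to confirm that the configuration documents really have been copied, one option is to ask each target platform for the document it now holds. The sketch below is illustrative only and rests on the assumption that the admin services expose a `GET .../servers/{serverName}/configuration` endpoint following the same URL pattern as the configuration calls above; it is not a required step of this lab.

```
# Optional verification sketch (assumed endpoint, not part of the lab flow):
# ask each target platform for the configuration document it now holds.
deployedServers = {
    "cocoMDS1": dataLakePlatformURL,
    "cocoMDS2": corePlatformURL,
    "cocoMDSx": devPlatformURL
}

for serverName, platformURL in deployedServers.items():
    configURL = platformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/' + serverName + '/configuration'
    response = requests.get(configURL, verify=False)   # lab platforms use self-signed certificates
    print(serverName + " on " + platformURL + " : HTTP " + str(response.status_code))
```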
'data-science', accessServiceOptions) print("\nDone.") mdrServerName = "cocoMDS5" mdrServerUserId = "cocoMDS5npa" mdrServerPassword = "cocoMDS5passw0rd" mdrServerPlatform = corePlatformURL metadataCollectionName = "Business Systems Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureRepositoryProxyDetails(adminPlatformURL, adminUserId, mdrServerName, "org.odpi.openmetadata.adapters.repositoryservices.readonly.repositoryconnector.ReadOnlyOMRSRepositoryConnectorProvider") configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort) # Note: no access service configuration here # Still need to add startup Archive print("\nDone.") mdrServerName = "cocoMDS6" mdrServerUserId = "cocoMDS6npa" mdrServerPassword = "cocoMDS6passw0rd" mdrServerPlatform = corePlatformURL metadataCollectionName = "Manufacturing Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType) configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, cocoCohort) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, iotCohort) print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...") accessServiceOptions = { "SupportedZones": [ "manufacturing" ], "DefaultZones" : [ "manufacturing"] } configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, 
mdrServerName, 'subject-area', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-manager', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions) print("\nDone.") mdrServerName = "cocoMDSx" mdrServerUserId = "cocoMDSxnpa" mdrServerPassword = "cocoMDSxpassw0rd" mdrServerPlatform = devPlatformURL metadataCollectionName = "Development Catalog" print("Configuring " + mdrServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, mdrServerName, mdrServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, mdrServerName) configureOwningOrganization(adminPlatformURL, adminUserId, mdrServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, mdrServerName, mdrServerUserId) configurePassword(adminPlatformURL, adminUserId, mdrServerName, mdrServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, mdrServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, mdrServerName, eventBusBody) configureMetadataRepository(adminPlatformURL, adminUserId, mdrServerName, metadataRepositoryType) configureDescriptiveName(adminPlatformURL, adminUserId, mdrServerName, metadataCollectionName) configureCohortMembership(adminPlatformURL, adminUserId, mdrServerName, devCohort) print("\nConfiguring " + mdrServerName + " Access Services (OMAS)...") accessServiceOptions = { "SupportedZones": [ "sdlc" ], "DefaultZones": [ "sdlc" ] } configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-catalog', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-consumer', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-owner', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'community-profile', {"KarmaPointPlateau":"500"}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'glossary-view', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'data-science', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'subject-area', {}) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'asset-manager', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'governance-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'discovery-engine', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'it-infrastructure', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'project-management', accessServiceOptions) 
configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'software-developer', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'devops', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'digital-architecture', accessServiceOptions) configureAccessService(adminPlatformURL, adminUserId, mdrServerName, 'design-model', accessServiceOptions) print("\nDone.") getIntegrationServices(exchangeDL01PlatformName, exchangeDL01PlatformURL) daemonServerName = "exchangeDL01" daemonServerPlatform = dataLakePlatformURL daemonServerUserId = "exchangeDL01npa" daemonServerPassword = "exchangeDL01passw0rd" mdrServerName = "cocoMDS1" mdrServerPlatform = dataLakePlatformURL OakDeneConnectorName = "OakDeneLandingAreaFilesMonitor" OakDeneConnectorUserId = "onboardDL01npa" OakDeneConnectorSourceName = "HospitalLandingArea" OakDeneConnectorFolder = fileSystemRoot + '/landing-area/hospitals/oak-dene/clinical-trials/drop-foot' OakDeneConnectorConnection = { "class" : "Connection", "connectorType" : { "class" : "ConnectorType", "connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFilesMonitorIntegrationProvider" }, "endpoint" : { "class" : "Endpoint", "address" : OakDeneConnectorFolder } } OldMarketConnectorName = "OldMarketLandingAreaFilesMonitor" OldMarketConnectorUserId = "onboardDL01npa" OldMarketConnectorSourceName = "HospitalLandingArea" OldMarketConnectorFolder = fileSystemRoot + '/landing-area/hospitals/old-market/clinical-trials/drop-foot' OldMarketConnectorConnection = { "class" : "Connection", "connectorType" : { "class" : "ConnectorType", "connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFilesMonitorIntegrationProvider" }, "endpoint" : { "class" : "Endpoint", "address" : OldMarketConnectorFolder } } folderConnectorName = "DropFootClinicalTrialResultsFolderMonitor" folderConnectorUserId = "monitorDL01npa" folderConnectorSourceName = "DropFootClinicalTrialResults" folderConnectorFolder = fileSystemRoot + '/data-lake/research/clinical-trials/drop-foot/weekly-measurements' folderConnectorConnection = { "class" : "Connection", "connectorType" : { "class" : "ConnectorType", "connectorProviderClassName" : "org.odpi.openmetadata.adapters.connectors.integration.basicfiles.DataFolderMonitorIntegrationProvider" }, "endpoint" : { "class" : "Endpoint", "address" : folderConnectorFolder } } print("Configuring " + daemonServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, daemonServerName, daemonServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, daemonServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, daemonServerName) configureOwningOrganization(adminPlatformURL, adminUserId, daemonServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, daemonServerName, daemonServerUserId) configurePassword(adminPlatformURL, adminUserId, daemonServerName, daemonServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, daemonServerName, serverSecurityConnectionBody) configureDefaultAuditLog(adminPlatformURL, adminUserId, daemonServerName) print("\nConfiguring " + daemonServerName + " integration connectors ...") connectorConfigs = [ { "class" : "IntegrationConnectorConfig", "connectorName" : OakDeneConnectorName, "connectorUserId" : OakDeneConnectorUserId, "connection" : OakDeneConnectorConnection, "metadataSourceQualifiedName" : 
OakDeneConnectorSourceName, "refreshTimeInterval" : 10, "usesBlockingCalls" : "false" }, { "class" : "IntegrationConnectorConfig", "connectorName" : OldMarketConnectorName, "connectorUserId" : OldMarketConnectorUserId, "connection" : OldMarketConnectorConnection, "metadataSourceQualifiedName" : OldMarketConnectorSourceName, "refreshTimeInterval" : 10, "usesBlockingCalls" : "false" }, { "class" : "IntegrationConnectorConfig", "connectorName" : folderConnectorName, "connectorUserId" : folderConnectorUserId, "connection" : folderConnectorConnection, "metadataSourceQualifiedName" : folderConnectorSourceName, "refreshTimeInterval" : 10, "usesBlockingCalls" : "false" }] configureIntegrationService(adminPlatformURL, adminUserId, daemonServerName, mdrServerName, mdrServerPlatform, "files-integrator", {}, connectorConfigs) print ("\nDone.") getEngineServices(governDL01PlatformName, governDL01PlatformURL) engineServerName = "governDL01" engineServerPlatform = dataLakePlatformURL engineServerUserId = "governDL01npa" engineServerPassword = "governDL01passw0rd" engineServerMDRName = "cocoMDS2" engingServerMDRPlatform = corePlatformURL mdrServerName = "cocoMDS1" mdrServerPlatform = dataLakePlatformURL print("Configuring " + engineServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, engineServerName, engineServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, engineServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, engineServerName) configureOwningOrganization(adminPlatformURL, adminUserId, engineServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, engineServerName, engineServerUserId) configurePassword(adminPlatformURL, adminUserId, engineServerName, engineServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, engineServerName, serverSecurityConnectionBody) configureDefaultAuditLog(adminPlatformURL, adminUserId, engineServerName) print("\nConfiguring " + engineServerName + " engines ...") configureEngineDefinitionServices(adminPlatformURL, adminUserId, engineServerName, engineServerMDRName, engingServerMDRPlatform) discoveryEngines = [ { "class" : "EngineConfig", "engineQualifiedName" : "AssetDiscovery", "engineUserId" : "findItDL01npa" }, { "class" : "EngineConfig", "engineQualifiedName" : "AssetQuality", "engineUserId" : "findItDL01npa" }] governanceActionEngines = [ { "class" : "EngineConfig", "engineQualifiedName" : "AssetGovernance", "engineUserId" : "findItDL01npa" }] configureGovernanceEngineService(adminPlatformURL, adminUserId, engineServerName, mdrServerName, mdrServerPlatform, "asset-analysis", discoveryEngines) configureGovernanceEngineService(adminPlatformURL, adminUserId, engineServerName, mdrServerName, mdrServerPlatform, "governance-action", governanceActionEngines) print ("\nDone.") getViewServices(cocoView1PlatformName, cocoView1PlatformURL) # Common functions def configureGovernanceSolutionViewService(adminPlatformURL, adminUserId, viewServerName, viewService, remotePlatformURL,remoteServerName): adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/' print (" ... 
configuring the " + viewService + " Governance Solution View Service for this server...") url = adminCommandURLRoot + viewServerName + '/view-services/' + viewService jsonContentHeader = {'content-type':'application/json'} viewBody = { "class": "ViewServiceConfig", "omagserverPlatformRootURL": remotePlatformURL, "omagserverName" : remoteServerName } postAndPrintResult(url, json=viewBody, headers=jsonContentHeader) def configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, configBody): adminCommandURLRoot = adminPlatformURL + '/open-metadata/admin-services/users/' + adminUserId + '/servers/' print (" ... configuring the " + viewService + " Integration View Service for this server...") url = adminCommandURLRoot + viewServerName + '/view-services/' + viewService jsonContentHeader = {'content-type':'application/json'} postAndPrintResult(url, json=configBody, headers=jsonContentHeader) # A view server supports the presentation server UI (a node based app). Here we run it on the datalake platform viewServerName = "cocoView1" viewServerUserId = "cocoView1npa" viewServerPassword = "cocoView1passw0rd" viewServerPlatform = dataLakePlatformURL viewServerType = "View Server" # Configuration is similar to most servers print("Configuring " + viewServerName + "...") configurePlatformURL(adminPlatformURL, adminUserId, viewServerName, viewServerPlatform) configureMaxPageSize(adminPlatformURL, adminUserId, mdrServerName, maxPageSize) clearServerType(adminPlatformURL, adminUserId, viewServerName) configureServerType(adminPlatformURL,adminUserId,viewServerName,viewServerType) configureOwningOrganization(adminPlatformURL, adminUserId, viewServerName, organizationName) configureUserId(adminPlatformURL, adminUserId, viewServerName, viewServerUserId) configurePassword(adminPlatformURL, adminUserId, viewServerName, viewServerPassword) configureSecurityConnection(adminPlatformURL, adminUserId, viewServerName, serverSecurityConnectionBody) configureEventBus(adminPlatformURL, adminUserId, viewServerName, eventBusBody) configureDefaultAuditLog(adminPlatformURL, adminUserId, viewServerName) # The governance solution view services currently only consist of glossary author print ("Configuring the Governance Solution View Services") remotePlatformURL=corePlatformURL remoteServerName="cocoMDS2" viewService="glossary-author" configureGovernanceSolutionViewService(adminPlatformURL, adminUserId, viewServerName, viewService, remotePlatformURL,remoteServerName) print ("Configuring the Integration View Services") # repository explorer integration view service viewService="rex" rexConfigBody = { "class":"IntegrationViewServiceConfig", "viewServiceAdminClass":"org.odpi.openmetadata.viewservices.rex.admin.RexViewAdmin", "viewServiceFullName":"Repository Explorer", "viewServiceOperationalStatus":"ENABLED", "omagserverPlatformRootURL": "UNUSED", "omagserverName" : "UNUSED", "resourceEndpoints" : [ { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Core Platform", "platformName" : "Core", "platformRootURL" : corePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "DataLake Platform", "platformName" : "DataLake", "platformRootURL" : dataLakePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Development Platform", "platformName" : "Development", "platformRootURL" : devPlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", 
"serverInstanceName" : "cocoMDS1", "description" : "Data Lake Operations", "platformName" : "DataLake", "serverName" : "cocoMDS1" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS2", "description" : "Governance", "platformName" : "Core", "serverName" : "cocoMDS2" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS3", "description" : "Research", "platformName" : "Core", "serverName" : "cocoMDS3" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS5", "description" : "Business Systems", "platformName" : "Core", "serverName" : "cocoMDS5" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS6", "description" : "Manufacturing", "platformName" : "Core", "serverName" : "cocoMDS6" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDSx", "description" : "Development", "platformName" : "Development", "serverName" : "cocoMDSx" }, ] } configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, rexConfigBody) # type-explorer has endpoints viewService="tex" texConfigBody = { "class":"IntegrationViewServiceConfig", "viewServiceAdminClass":"org.odpi.openmetadata.viewservices.tex.admin.TexViewAdmin", "viewServiceFullName":"Type Explorer", "viewServiceOperationalStatus":"ENABLED", "omagserverPlatformRootURL": "UNUSED", "omagserverName" : "UNUSED", "resourceEndpoints" : [ { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Core Platform", "platformName" : "Core", "platformRootURL" : corePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "DataLake Platform", "platformName" : "DataLake", "platformRootURL" : dataLakePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Development Platform", "platformName" : "Development", "platformRootURL" : devPlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS1", "description" : "Data Lake Operations", "platformName" : "DataLake", "serverName" : "cocoMDS1" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS2", "description" : "Governance", "platformName" : "Core", "serverName" : "cocoMDS2" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS3", "description" : "Research", "platformName" : "Core", "serverName" : "cocoMDS3" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS5", "description" : "Business Systems", "platformName" : "Core", "serverName" : "cocoMDS5" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS6", "description" : "Manufacturing", "platformName" : "Core", "serverName" : "cocoMDS6" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDSx", "description" : "Development", "platformName" : "Development", "serverName" : "cocoMDSx" }, ] } configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, texConfigBody) # Dino provides insight into the operational environment of egeria - this config body allows coco's platforms & servers to be accessed viewService="dino" DinoConfigBody = { 
"class":"IntegrationViewServiceConfig", "viewServiceAdminClass":"org.odpi.openmetadata.viewservices.dino.admin.DinoViewAdmin", "viewServiceFullName":"Dino", "viewServiceOperationalStatus":"ENABLED", "omagserverPlatformRootURL": "UNUSED", "omagserverName" : "UNUSED", "resourceEndpoints" : [ { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Core Platform", "platformName" : "Core", "platformRootURL" : corePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "DataLake Platform", "platformName" : "DataLake", "platformRootURL" : dataLakePlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Platform", "description" : "Development Platform", "platformName" : "Development", "platformRootURL" : devPlatformURL }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS1", "description" : "Data Lake Operations", "platformName" : "DataLake", "serverName" : "cocoMDS1" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS2", "description" : "Governance", "platformName" : "Core", "serverName" : "cocoMDS2" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS3", "description" : "Research", "platformName" : "Core", "serverName" : "cocoMDS3" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS4", "description" : "Data Lake Users", "platformName" : "DataLake", "serverName" : "cocoMDS4" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS5", "description" : "Business Systems", "platformName" : "Core", "serverName" : "cocoMDS5" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDS6", "description" : "Manufacturing", "platformName" : "Core", "serverName" : "cocoMDS6" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoMDSx", "description" : "Development", "platformName" : "Development", "serverName" : "cocoMDSx" }, { "class" : "ResourceEndpointConfig", "resourceCategory" : "Server", "serverInstanceName" : "cocoView1", "description" : "View Server", "platformName" : "DataLake", "serverName" : "cocoView1" }, ] } configureIntegrationViewService(adminPlatformURL, adminUserId, viewServerName, viewService, DinoConfigBody) print ("\nDone.") print("\nChecking OMAG Server Platform availability...") checkServerPlatform("Data Lake Platform", dataLakePlatformURL) checkServerPlatform("Core Platform", corePlatformURL) checkServerPlatform("Dev Platform", devPlatformURL) print ("\nDone.") print("\nDeploying server configuration documents to appropriate platforms...") deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS1", dataLakePlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS2", corePlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS3", corePlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS4", dataLakePlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS5", corePlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDS6", corePlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "cocoMDSx", devPlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "exchangeDL01", dataLakePlatformURL) deployServerToPlatform(adminPlatformURL, 
adminUserId, "governDL01", dataLakePlatformURL) deployServerToPlatform(adminPlatformURL, adminUserId, "cocoView1", dataLakePlatformURL) print("\nDone.")
0.233619
0.956957
# Chapter 1: Introduction to Python ## 1.1 What is Python? Omitted here. ## 1.2 Installing Python My own environment setup is listed here: macOS Sierra, Python 3.6.1. See requirements.txt for the libraries. ## 1.3 The Python Interpreter Check the Python version. Because the code is run on Jupyter here, an `!` is prepended to the line. ``` !python --version ``` ### 1.3.1 Arithmetic Operations From here on, the code is executed as scripts rather than in interactive mode. ``` print(1 + 2) print(1 - 2) print(4 * 5) print(7 / 5) print(3 ** 2) ``` ### 1.3.2 Data Types ``` print(type(10)) print(type(2.718)) print(type("hello")) ``` ### 1.3.3 Variables ``` x = 10 print(x) x = 100 print(x) y = 3.14 print(x * y) print(type(x * y)) ``` ### 1.3.4 Lists ``` a = [1, 2, 3, 4, 5] print(a) print(len(a)) print(a[0]) print(a[4]) a[4] = 99 print(a) # Slicing print(a) # Get elements from index 0 up to (but not including) index 2 print(a[0:2]) # Get elements from index 1 to the end print(a[1:]) # Get elements from the start up to (but not including) index 3 print(a[:3]) # Get everything up to the last element (exclusive) print(a[:-1]) # Get everything up to the second-to-last element (exclusive) print(a[:-2]) ``` ### 1.3.5 Dictionaries ``` me = {'height':180} print(me["height"]) me["weight"] = 70 print(me) ``` ### 1.3.6 Booleans ``` hungry = True sleepy = False print(type(hungry)) print(not hungry) print(hungry and sleepy) print(hungry or sleepy) ``` ### 1.3.7 if Statements ``` hungry = True if hungry: print("I'm hungry") hungry = False if hungry: print("I'm hungry") else: print("I'm not hungry") print("I'm sleepy") ``` ### 1.3.8 for Statements ``` for i in [1, 2, 3]: print(i) ``` ### 1.3.9 Functions ``` def hello(): print("Hello World!") hello() ``` ## 1.4 Python Script Files ### 1.4.1 Saving to a File Omitted here. ### 1.4.2 Classes ``` class Man: def __init__(self, name): self.name = name print("Initialized!") def hello(self): print("Hello " + self.name + "!") def goodbye(self): print("Good-bye " + self.name + "!") m = Man("David") m.hello() m.goodbye() ``` ## 1.5 NumPy ### 1.5.1 Importing NumPy ``` import numpy as np ``` ### 1.5.2 Creating NumPy Arrays ``` x = np.array([1.0, 2.0, 3.0]) print(x) print(type(x)) ``` ### 1.5.3 Arithmetic Operations in NumPy ``` x = np.array([1.0, 2.0, 3.0]) y = np.array([2.0, 4.0, 6.0]) ## Element-wise operations # Element-wise addition print(x + y) print(x - y) print(x * y) print(x / y) # Operations between an array and a scalar (broadcasting) x = np.array([1.0, 2.0, 3.0]) print(x / 2.0) ``` ### 1.5.4 N-Dimensional NumPy Arrays A 1-dimensional array is called a vector and a 2-dimensional array is called a matrix. The generalization of vectors and matrices is called a tensor. Here, a 2-dimensional array is referred to as a matrix, and arrays with 3 or more dimensions as tensors or multi-dimensional arrays. ``` A = np.array([[1, 2], [3, 4]]) print(A) print(A.shape) print(A.dtype) B = np.array([[3, 0], [0, 6]]) print(A + B) print(A * B) print(A) print(A * 10) ``` ### 1.5.5 Broadcasting A scalar value is expanded to match the elements of a matrix (or other array) before the operation is performed (broadcasting). ``` A = np.array([[1, 2], [3, 4]]) B = np.array([10, 20]) print(A * B) ``` ### 1.5.6 Accessing Elements ``` X = np.array([[51, 55], [14, 19], [0, 4]]) print(X) print(X[0]) print(X[0][1]) for row in X: print(row) X = X.flatten() print(X) print(X[np.array([0, 2, 4])]) print(X > 15) print(X[X>15]) ``` ## 1.6 Matplotlib ### 1.6.1 Drawing a Simple Graph ``` import numpy as np import matplotlib.pyplot as plt # Create the data x = np.arange(0, 6, 0.1) # generate values from 0 to 6 in steps of 0.1 y = np.sin(x) plt.plot(x, y) plt.show() ``` ### 1.6.2 Features of pyplot ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 6, 0.1) y1 = np.sin(x) y2 = np.cos(x) plt.plot(x, y1, label="sin") plt.plot(x, y2, linestyle = "--", label="cos") plt.xlabel("x") plt.ylabel("y") plt.title("sin & cos") plt.legend() plt.show() ``` ### 1.6.3 Displaying Images ``` import matplotlib.pyplot as plt from matplotlib.image import imread img = imread('../docs/lena.png') plt.imshow(img) plt.show() ```
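As a small supplement to section 1.5.4 above (not part of the original chapter), the sketch below shows how NumPy's `ndim` and `shape` attributes distinguish vectors, matrices, and higher-dimensional tensors mentioned in that section.

```
import numpy as np

v = np.array([1.0, 2.0, 3.0])       # 1-dimensional array: a vector
M = np.array([[1, 2], [3, 4]])      # 2-dimensional array: a matrix
T = np.arange(24).reshape(2, 3, 4)  # 3-dimensional array: a tensor

for name, arr in [("vector", v), ("matrix", M), ("tensor", T)]:
    print(name, "ndim =", arr.ndim, "shape =", arr.shape)
```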
github_jupyter
!python --version print(1 + 2) print(1 - 2) print(4 * 5) print(7 / 5) print(3 ** 2) print(type(10)) print(type(2.718)) print(type("hello")) x = 10 print(x) x = 100 print(x) y = 3.14 print(x * y) print(type(x * y)) a = [1, 2, 3, 4, 5] print(a) print(len(a)) print(a[0]) print(a[4]) a[4] = 99 print(a) # スライシング print(a) # インデックスの0番目以上~2番目未満まで取得 print(a[0:2]) # インデックスの1番目から最後まで取得 print(a[1:]) # 最初からインデックスの3番目未満まで取得 print(a[:3]) # 最初から最後の要素の1つ前まで取得 print(a[:-1]) # 最初から最後の要素の2つ前まで取得 print(a[:-2]) me = {'height':180} print(me["height"]) me["weight"] = 70 print(me) hungry = True sleepy = False print(type(hungry)) print(not hungry) print(hungry and sleepy) print(hungry or sleepy) hungry = True if hungry: print("I'm hungry") hungry = False if hungry: print("I'm hungry") else: print("I'm not hungry") print("I'm sleepy") for i in [1, 2, 3]: print(i) def hello(): print("Hello World!") hello() class Man: def __init__(self, name): self.name = name print("Initialized!") def hello(self): print("Hello " + self.name + "!") def goodbye(self): print("Good-bye " + self.name + "!") m = Man("David") m.hello() m.goodbye() import numpy as np x = np.array([1.0, 2.0, 3.0]) print(x) print(type(x)) x = np.array([1.0, 2.0, 3.0]) y = np.array([2.0, 4.0, 6.0]) ## 要素ごと(element-wise)の計算 # 要素ごとの足し算 print(x + y) print(x - y) print(x * y) print(x / y) # 配列とスカラ値の計算(ブロードキャスト) x = np.array([1.0, 2.0, 3.0]) print(x / 2.0) A = np.array([[1, 2], [3, 4]]) print(A) print(A.shape) print(A.dtype) B = np.array([[3, 0], [0, 6]]) print(A + B) print(A * B) print(A) print(A * 10) A = np.array([[1, 2], [3, 4]]) B = np.array([10, 20]) print(A * B) X = np.array([[51, 55], [14, 19], [0, 4]]) print(X) print(X[0]) print(X[0][1]) for row in X: print(row) X = X.flatten() print(X) print(X[np.array([0, 2, 4])]) print(X > 15) print(X[X>15]) import numpy as np import matplotlib.pyplot as plt # データの作成 x = np.arange(0, 6, 0.1) # 0から6まで0.1刻みで生成 y = np.sin(x) plt.plot(x, y) plt.show() import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 6, 0.1) y1 = np.sin(x) y2 = np.cos(x) plt.plot(x, y1, label="sin") plt.plot(x, y2, linestyle = "--", label="cos") plt.xlabel("x") plt.ylabel("y") plt.title("sin & cos") plt.legend() plt.show() import matplotlib.pyplot as plt from matplotlib.image import imread img = imread('../docs/lena.png') plt.imshow(img) plt.show()
0.226612
0.895202
## Scheduled Scoring with Prefect Cloud on Saturn Cloud <table> <tr> <td> <img src="https://docs.dask.org/en/latest/_images/dask_horizontal.svg" width="300"> </td> <td> <img src="https://dask.org/_images/prefect-logo.svg" width="150"> </td> </tr> </table> This notebook contains sample code to take a `prefect` flow and distribute its work with a `Dask` cluster. This is similar to `prefect-scheduled-scoring.ipynb` with one important addition: instead of deploying the flow as a Saturn Cloud deployment, this notebook describes how to register a flow with [Prefect Cloud](https://www.prefect.io/cloud/) so that Prefect Cloud can be used to orchestrate when the flow runs. **NOTE**: This notebook assumes that you have already done the following: * created a Prefect Cloud account * set up the appropriate credentials in Saturn * set up a Prefect Cloud agent in Saturn Cloud Details on these prerequisites can be found in ["Fault-Tolerant Data Pipelines with Prefect Cloud"](https://www.saturncloud.io/docs/tutorials/prefect-cloud). The flow below mocks the process of measuring the effectiveness of a deployed statistical model. ### Model Details The data used for this example is the **"Incident management process enriched event log"** dataset [from the UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Incident+management+process+enriched+event+log). That dataset contains tickets from an IT support system, including characteristics like the priority of the ticket, the time it was opened, and the time it was closed. This dataset can be used to solve a regression task: > Given the characteristics of a ticket, how long will it be until it is closed? ### Environment Setup The code in this notebook uses `prefect` for orchestration *(figuring out what to do, and in what order)* and `dask` for execution *(doing the things)*. It relies on the following additional non-builtin libraries: * `numpy`: data manipulation * `pandas`: data manipulation * `requests`: read in data from the internet * `scikit-learn`: evaluation metric functions * `dask-saturn`: create and interact with Saturn Cloud `Dask` clusters ([link](https://github.com/saturncloud/dask-saturn)) * `prefect-saturn`: register Prefect flows with Prefect Cloud and have them run on Saturn Cloud Dask clusters ([link](https://github.com/saturncloud/prefect-saturn)) ``` import json import numpy as np import os import pandas as pd import prefect import requests import uuid from datetime import datetime, timedelta from io import BytesIO from prefect import task, Parameter, Task, Flow from prefect.schedules import IntervalSchedule from sklearn.metrics import mean_absolute_error from sklearn.metrics import median_absolute_error from sklearn.metrics import mean_squared_error from sklearn.metrics import r2_score from zipfile import ZipFile from prefect_saturn import PrefectCloudIntegration PREFECT_CLOUD_PROJECT_NAME = os.environ["PREFECT_CLOUD_PROJECT_NAME"] SATURN_USERNAME = os.environ["SATURN_USERNAME"] ``` Authenticate with Prefect Cloud. ``` !prefect auth login -t ${PREFECT_USER_TOKEN} ``` ### Define Tasks `prefect` refers to a workload as a "flow", which comprises multiple individual things to do called "tasks". From [the Prefect docs](https://docs.prefect.io/core/concepts/tasks.html): > A task is like a function: it optionally takes inputs, performs an action, and produces an optional result.
The goal of this notebooks flow is to evaluate, on an ongoing basis, the performance of a model that predicts time-to-close for tickets in an IT support system. That can be broken down into the following tasks * `get_trial_id()`: assign a unique ID to each run * `get_ticket_data_batch()`: get a random set of newly-closed tickets * `get_target()`: given a batch of tickets, compute how long it took to close them * `predict()`: predict the time-to-close, using the heuristic "higher-priority tickets close faster" * `evaluate_model()`: compute evaluation metrics comparing predicted and actual time-to-close * `get_trial_summary()`: collect all evaluation metrics in one object * `write_trial_summary()`: write trial results somewhere ``` @task def get_trial_id() -> str: """ Generate a unique identifier for this trial. """ return str(uuid.uuid4()) @task def get_ticket_data_batch(batch_size: int) -> pd.DataFrame: """ Simulate the experience of getting a random sample of new tickets from an IT system, to test the performance of a model. """ url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00498/incident_event_log.zip" resp = requests.get(url) zipfile = ZipFile(BytesIO(resp.content)) data_file = "incident_event_log.csv" # _date_parser has to be a lambda or pandas won't convert dates correctly _date_parser = lambda x: pd.NaT if x == '?' else datetime.strptime(x, "%d/%m/%Y %H:%M") df = pd.read_csv( zipfile.open(data_file), parse_dates=[ "opened_at", "resolved_at", "closed_at", "sys_created_at", "sys_updated_at" ], infer_datetime_format=False, converters={ "opened_at": _date_parser, "resolved_at": _date_parser, "closed_at": _date_parser, "sys_created_at": _date_parser, "sys_updated_at": _date_parser }, na_values = ['?'] ) df["sys_updated_at"] = pd.to_datetime(df["sys_updated_at"]) rows_to_score = np.random.randint(0, df.shape[0], 100) return(df.iloc[rows_to_score]) @task def get_target(df): """ Compute time-til-close on a data frame of tickets """ time_til_close = (df['closed_at'] - df['sys_updated_at']) / np.timedelta64(1, 's') return time_til_close @task def predict(df): """ Given an input data frame, predict how long it will be until the ticket is closed. For simplicity, using a super simple model that just says "high-priority tickets get closed faster". """ seconds_in_an_hour = 60.0 * 60.0 preds = df["priority"].map({ "1 - Critical": 6.0 * seconds_in_an_hour, "2 - High": 24.0 * seconds_in_an_hour, "3 - Moderate": 120.0 * seconds_in_an_hour, "4 - Lower": 240.0 * seconds_in_an_hour, }) default_guess_for_no_priority = 180.0 * seconds_in_an_hour preds = preds.fillna(default_guess_for_no_priority) return(preds) @task def evaluate_model(y_true, y_pred, metric_name: str) -> float: metric_func_lookup = { "mae": mean_absolute_error, "medae": median_absolute_error, "mse": mean_squared_error, "r2": r2_score } metric_func = metric_func_lookup[metric_name] return metric_func(y_true, y_pred) @task def get_trial_summary(trial_id:str, actuals, input_df: pd.DataFrame, metrics: dict) -> dict: out = {"id": trial_id} out["data"] = { "num_obs": input_df.shape[0], "metrics": metrics, "target": { "mean": actuals.mean(), "median": actuals.median(), "min": actuals.min(), "max": actuals.max() } } return out @task(log_stdout=True) def write_trial_summary(trial_summary: str): """ Write out a summary of the file. 
Currently just logs back to the Prefect logger """ logger = prefect.context.get("logger") logger.info(json.dumps(trial_summary)) ``` ### Construct a Flow Now that all of the task logic has been defined, the next step is to compose those tasks into a "flow". From [the Prefect docs](https://docs.prefect.io/core/concepts/flows.html): > A Flow is a container for Tasks. It represents an entire workflow or application by describing the dependencies between tasks. > Flows are DAGs, or "directed acyclic graphs." This is a mathematical way of describing certain organizational principles: > * A graph is a data structure that uses "edges" to connect "nodes." Prefect models each Flow as a graph in which Task dependencies are modeled by Edges. > * A directed graph means that edges have a start and an end: when two tasks are connected, one of them unambiguously runs first and the other one runs second. > * An acyclic directed graph has no circular dependencies: if you walk through the graph, you will never revisit a task you've seen before. Because we want this job to run on a schedule, the code below provides one additional argument to `Flow()`, a special "schedule" object. In this case, the code below says "run this flow every 10 minutes". ``` schedule = IntervalSchedule( interval=timedelta(minutes=10) ) ``` *NOTE: `prefect` flows do not have to be run on a schedule. To test a single run, just omit `schedule` from the code block below.* ``` with Flow(f"{SATURN_USERNAME}-ticket-model-evaluation", schedule=schedule) as flow: batch_size = Parameter( 'batch-size', default=1000 ) trial_id = get_trial_id() # pull sample data sample_ticket_df = get_ticket_data_batch(batch_size) # compute target actuals = get_target(sample_ticket_df) # get prediction preds = predict(sample_ticket_df) # compute evaluation metrics mae = evaluate_model(actuals, preds, "mae") medae = evaluate_model(actuals, preds, "medae") mse = evaluate_model(actuals, preds, "mse") r2 = evaluate_model(actuals, preds, "r2") # get trial summary in a string trial_summary = get_trial_summary( trial_id=trial_id, input_df=sample_ticket_df, actuals=actuals, metrics={ "MAE": mae, "MedAE": medae, "MSE": mse, "R2": r2 } ) # store trial summary trial_complete = write_trial_summary(trial_summary) ``` At this point, we have all of the work defined in tasks and arranged within a flow, but none of the tasks have run yet. In the next section, we'll do that using `Dask`. ## Register with Prefect Cloud Now that the business logic of the flow is complete, we can add information that Saturn will need to know to run it. ``` integration = PrefectCloudIntegration( prefect_cloud_project_name=PREFECT_CLOUD_PROJECT_NAME ) flow = integration.register_flow_with_saturn(flow) ``` `register_flow_with_saturn()` does a few important things: * specifies how and where the flow's code is stored so it can be retrieved by a Prefect Cloud agent - see `flow.storage` * specifies the infrastructure needed to run the flow. In this case, it uses a `KubernetesJobEnvironment` with a Saturn `Dask` cluster` - see `flow.environment` The final step necessary is to "register" the flow with Prefect Cloud. If this is the first time you've registered this flow, it will create a new flow in Prefect Cloud under the project in `PREFECT_CLOUD_PROJECT_NAME`. If you already have a flow in this project with this name, it will create a new version of it in Prefect Cloud. 
``` flow.register( project_name=PREFECT_CLOUD_PROJECT_NAME, labels=["saturn-cloud"] ) ``` ### Run the flow You shouldn't have to do anything to run the flow. Now that Prefect Cloud has it, it will be run once every 10 minutes. You can confirm this by doing all of the following: * If you are an admin, go to the "Logs" page in Saturn Cloud and look at the Prefect Cloud Agent running the flow * Go to the "Dask" page in Saturn Cloud. You should see that a new Dask cluster has been created to run this flow * Go to Prefect Cloud. If you navigate to this flow and click "Runs", you should see task statuses and logs for this flow If you want to run the flow immediately, navigate to the flow in the Prefect Cloud UI and click "Quick Run".
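Besides the "Quick Run" button, a run can usually also be triggered programmatically. The sketch below assumes the Prefect Core (0.x) `Client.create_flow_run()` API and a flow ID captured from registration; verify against the Prefect documentation for your version before relying on it.

```
from prefect import Client

# flow.register() returns the flow's ID; capture it when registering, e.g.
#   flow_id = flow.register(project_name=PREFECT_CLOUD_PROJECT_NAME, labels=["saturn-cloud"])
client = Client()
flow_run_id = client.create_flow_run(flow_id=flow_id)  # triggers a run outside the schedule
print("Created flow run:", flow_run_id)
```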
github_jupyter
import json import numpy as np import os import pandas as pd import prefect import requests import uuid from datetime import datetime, timedelta from io import BytesIO from prefect import task, Parameter, Task, Flow from prefect.schedules import IntervalSchedule from sklearn.metrics import mean_absolute_error from sklearn.metrics import median_absolute_error from sklearn.metrics import mean_squared_error from sklearn.metrics import r2_score from zipfile import ZipFile from prefect_saturn import PrefectCloudIntegration PREFECT_CLOUD_PROJECT_NAME = os.environ["PREFECT_CLOUD_PROJECT_NAME"] SATURN_USERNAME = os.environ["SATURN_USERNAME"] !prefect auth login -t ${PREFECT_USER_TOKEN} @task def get_trial_id() -> str: """ Generate a unique identifier for this trial. """ return str(uuid.uuid4()) @task def get_ticket_data_batch(batch_size: int) -> pd.DataFrame: """ Simulate the experience of getting a random sample of new tickets from an IT system, to test the performance of a model. """ url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00498/incident_event_log.zip" resp = requests.get(url) zipfile = ZipFile(BytesIO(resp.content)) data_file = "incident_event_log.csv" # _date_parser has to be a lambda or pandas won't convert dates correctly _date_parser = lambda x: pd.NaT if x == '?' else datetime.strptime(x, "%d/%m/%Y %H:%M") df = pd.read_csv( zipfile.open(data_file), parse_dates=[ "opened_at", "resolved_at", "closed_at", "sys_created_at", "sys_updated_at" ], infer_datetime_format=False, converters={ "opened_at": _date_parser, "resolved_at": _date_parser, "closed_at": _date_parser, "sys_created_at": _date_parser, "sys_updated_at": _date_parser }, na_values = ['?'] ) df["sys_updated_at"] = pd.to_datetime(df["sys_updated_at"]) rows_to_score = np.random.randint(0, df.shape[0], 100) return(df.iloc[rows_to_score]) @task def get_target(df): """ Compute time-til-close on a data frame of tickets """ time_til_close = (df['closed_at'] - df['sys_updated_at']) / np.timedelta64(1, 's') return time_til_close @task def predict(df): """ Given an input data frame, predict how long it will be until the ticket is closed. For simplicity, using a super simple model that just says "high-priority tickets get closed faster". """ seconds_in_an_hour = 60.0 * 60.0 preds = df["priority"].map({ "1 - Critical": 6.0 * seconds_in_an_hour, "2 - High": 24.0 * seconds_in_an_hour, "3 - Moderate": 120.0 * seconds_in_an_hour, "4 - Lower": 240.0 * seconds_in_an_hour, }) default_guess_for_no_priority = 180.0 * seconds_in_an_hour preds = preds.fillna(default_guess_for_no_priority) return(preds) @task def evaluate_model(y_true, y_pred, metric_name: str) -> float: metric_func_lookup = { "mae": mean_absolute_error, "medae": median_absolute_error, "mse": mean_squared_error, "r2": r2_score } metric_func = metric_func_lookup[metric_name] return metric_func(y_true, y_pred) @task def get_trial_summary(trial_id:str, actuals, input_df: pd.DataFrame, metrics: dict) -> dict: out = {"id": trial_id} out["data"] = { "num_obs": input_df.shape[0], "metrics": metrics, "target": { "mean": actuals.mean(), "median": actuals.median(), "min": actuals.min(), "max": actuals.max() } } return out @task(log_stdout=True) def write_trial_summary(trial_summary: str): """ Write out a summary of the file. 
Currently just logs back to the Prefect logger """ logger = prefect.context.get("logger") logger.info(json.dumps(trial_summary)) schedule = IntervalSchedule( interval=timedelta(minutes=10) ) with Flow(f"{SATURN_USERNAME}-ticket-model-evaluation", schedule=schedule) as flow: batch_size = Parameter( 'batch-size', default=1000 ) trial_id = get_trial_id() # pull sample data sample_ticket_df = get_ticket_data_batch(batch_size) # compute target actuals = get_target(sample_ticket_df) # get prediction preds = predict(sample_ticket_df) # compute evaluation metrics mae = evaluate_model(actuals, preds, "mae") medae = evaluate_model(actuals, preds, "medae") mse = evaluate_model(actuals, preds, "mse") r2 = evaluate_model(actuals, preds, "r2") # get trial summary in a string trial_summary = get_trial_summary( trial_id=trial_id, input_df=sample_ticket_df, actuals=actuals, metrics={ "MAE": mae, "MedAE": medae, "MSE": mse, "R2": r2 } ) # store trial summary trial_complete = write_trial_summary(trial_summary) integration = PrefectCloudIntegration( prefect_cloud_project_name=PREFECT_CLOUD_PROJECT_NAME ) flow = integration.register_flow_with_saturn(flow) flow.register( project_name=PREFECT_CLOUD_PROJECT_NAME, labels=["saturn-cloud"] )
0.666822
0.983182
# Collision Avoidance - Train Model Welcome to this host side Jupyter Notebook! This should look familiar if you ran through the notebooks that run on the robot. In this notebook we'll train our image classifier to detect two classes ``free`` and ``blocked``, which we'll use for avoiding collisions. For this, we'll use a popular deep learning library *PyTorch* ``` import torch import torch.optim as optim import torch.nn.functional as F import torchvision import torchvision.datasets as datasets import torchvision.models as models import torchvision.transforms as transforms ``` ### Upload and extract dataset Before you start, you should upload the ``dataset.zip`` file that you created in the ``data_collection.ipynb`` notebook on the robot. You should then extract this dataset by calling the command below ``` !unzip -q dataset.zip ``` You should see a folder named ``dataset`` appear in the file browser. ### Create dataset instance Now we use the ``ImageFolder`` dataset class available with the ``torchvision.datasets`` package. We attach transforms from the ``torchvision.transforms`` package to prepare the data for training. ``` dataset = datasets.ImageFolder( 'dataset', transforms.Compose([ transforms.ColorJitter(0.1, 0.1, 0.1, 0.1), transforms.Resize((224, 224)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) ) ``` ### Split dataset into train and test sets Next, we split the dataset into *training* and *test* sets. The test set will be used to verify the accuracy of the model we train. ``` train_dataset, test_dataset = torch.utils.data.random_split(dataset, [len(dataset) - 50, 50]) ``` ### Create data loaders to load data in batches We'll create two ``DataLoader`` instances, which provide utilities for shuffling data, producing *batches* of images, and loading the samples in parallel with multiple workers. ``` train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=16, shuffle=True, num_workers=4 ) test_loader = torch.utils.data.DataLoader( test_dataset, batch_size=16, shuffle=True, num_workers=4 ) ``` ### Define the neural network Now, we define the neural network we'll be training. The *torchvision* package provides a collection of pre-trained models that we can use. In a process called *transfer learning*, we can repurpose a pre-trained model (trained on millions of images) for a new task that has possibly much less data available. Important features that were learned in the original training of the pre-trained model are re-usable for the new task. We'll use the ``alexnet`` model. ``` model = models.alexnet(pretrained=True) ``` The ``alexnet`` model was originally trained for a dataset that had 1000 class labels, but our dataset only has two class labels! We'll replace the final layer with a new, untrained layer that has only two outputs. ``` model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 2) ``` Finally, we transfer our model for execution on the GPU ``` device = torch.device('cuda') model = model.to(device) ``` ### Train the neural network Using the code below we will train the neural network for 30 epochs, saving the best performing model after each epoch. > An epoch is a full run through our data. 
``` NUM_EPOCHS = 30 BEST_MODEL_PATH = 'best_model.pth' best_accuracy = 0.0 optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9) for epoch in range(NUM_EPOCHS): for images, labels in iter(train_loader): images = images.to(device) labels = labels.to(device) optimizer.zero_grad() outputs = model(images) loss = F.cross_entropy(outputs, labels) loss.backward() optimizer.step() test_error_count = 0.0 for images, labels in iter(test_loader): images = images.to(device) labels = labels.to(device) outputs = model(images) test_error_count += float(torch.sum(torch.abs(labels - outputs.argmax(1)))) test_accuracy = 1.0 - float(test_error_count) / float(len(test_dataset)) print('%d: %f' % (epoch, test_accuracy)) if test_accuracy > best_accuracy: torch.save(model.state_dict(), BEST_MODEL_PATH) best_accuracy = test_accuracy ``` Once that is finished, you should see a file ``best_model.pth`` in the Jupyter Lab file browser. Select ``Right click`` -> ``Download`` to download the model to your workstation
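To sanity-check the downloaded weights, they can be loaded back into the same architecture on your workstation or on the robot. This is a minimal sketch (not part of the original notebook) that assumes `best_model.pth` is in the current directory.

```
import torch
import torchvision.models as models

# Rebuild the same architecture used for training, then load the saved weights
model = models.alexnet(pretrained=False)
model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 2)
model.load_state_dict(torch.load('best_model.pth', map_location='cpu'))
model.eval()  # switch to inference mode (disables dropout)

# A dummy 224x224 RGB image just to confirm the forward pass works
dummy = torch.zeros((1, 3, 224, 224))
print(model(dummy).shape)  # expected: torch.Size([1, 2])
```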
``` import numpy as np import tensorflow as tf ``` # OOP Structures ## Structure of AC node in tensorflow ``` class Node(): """ For each line of the *ac file, we generate a corresponding Node Three type of nodes: 1. variable: 'v' 2. constant: 'n' 3. operation: '+' or '*' """ #Initialize Node def __init__(self,id, op, para1, para2=None, node_dict=None): #node id is the line index self.id =id #create tensor based on node type self._whatOp(op, para1, para2, node_dict) #update graph node list node_dict[self.id] = self.getNode() def _variable(self, variable, value): """Create variable type node """ prefix = '' if value == '0': prefix = 'not_' init = tf.constant(1.0) self.tensor = tf.get_variable(prefix+'v_'+variable, initializer=init, dtype=tf.float32) def _constant(self, value): """Create constant type node """ self.tensor = tf.constant(float(value), name = 'n_'+str(self.id), dtype=tf.float32) def _operation(self, op, variable1, variable2, node_dict): """Create operation type node """ if op =='+': prefix = 'add_' else: prefix = 'mul_' #Call child nodes tensors child_1 = node_dict[int(variable1)] child_2 = node_dict[int(variable2)] #Get Tensorflow operator tf_op = self.getTfOp(op) self.tensor = tf_op(child_1,child_2,name=prefix+variable1+'_'+variable2) def _whatOp(self, op, para1, para2=None, node_dict=None): """Look up type of node and call corresponding construction function """ if op == 'v': # Check parameter completeness assert para2 != None, 'Variable {0} need a default value'.format(para1) self._variable(para1, para2) elif op == 'n': self._constant(para1) else: # Check parameter completeness assert para2 != None, 'Operation need two tensors' assert node_dict != None, 'No node dictionary detected' self._operation(op, para1, para2, node_dict) def getTfOp(self, op): """ Call tensorflow operations !For later extension """ if op == '+': return tf.add elif op == '*': return tf.multiply else: raise ValueError('Only support two operations: "+" and "*".') def getNode(self): return self.tensor ``` ## AC class Inner representation of *.ac file ``` class AC(): """ Main AC class 1. Read *.ac file 2. Create Tensorflow graph 2. Inference Function implemented: 3. Compute all marginals 4. Answer query(could with evidence) """ def __init__(self,path): """Initialize graph and session """ self.node_dict = {} self._readACFile(path) self.num_node = len(self._file_lines)-2 self._createGraph() self.sess = tf.InteractiveSession() self.sess.run(tf.global_variables_initializer()) def _readACFile(self, path): """Read *.ac file Private """ f = open(path,'r') self._file_lines = f.readlines() f.close() def _createGraph(self): """Create Tensorflow graph Private """ with tf.variable_scope("AC"): for id,line in enumerate(self._file_lines[1:self.num_node+1]): line = line.replace('\n','') op_para = line.split(' ') line_length = len(op_para) if line_length == 2: Node(id,op_para[0],op_para[1],node_dict=self.node_dict) else: Node(id,op_para[0],op_para[1],op_para[2],self.node_dict) print('Done') def _resetAC(self): """Clear Evidence Private """ self.sess.run(tf.global_variables_initializer()) def _setEvidence(self,evidence_list): """Set Evidence Private Note: Only be called by query function, no independent evidence setting allowed. """ evidences = [] for e in evidence_list: #Set every variable conflict with evidence to 0! 
prefix = 'not_' if e[2] =='0': prefix = '' variable = tf.get_variable(prefix+'v_'+e[1]) evidences.append(variable.assign(0.0)) self.sess.run(evidences) def query(self,target_list, evidence_list=None): """Query """ output = 0 with tf.variable_scope("AC", reuse=True): if evidence_list != None: self._setEvidence(evidence_list) gradient = self.node_dict[self.num_node-1] #Chain rule.. somehow... for target in target_list: #Find variable by name prefix = '' if target[2]=='0': prefix = 'not_' variable = tf.get_variable(prefix+'v_'+target[1]) gradient = tf.gradients(gradient, variable) output = self.sess.run(gradient) self._resetAC() return output def getMarginals(self): """Get Marginal Probabilities It seems the input is not probabilities though... """ output = {} variables = tf.trainable_variables() root = self.node_dict[self.num_node-1] for v in variables: output[v.name] = self.sess.run(tf.gradients(root,v)) return output def printVariables(self): """Print values of all variables Code checking function, not useful if proved correct """ variables = tf.trainable_variables() for v in variables: print('Variable {0} has value:{1}'.format(v.name, self.sess.run(v))) #Instantiate an AC graph ac =AC('example.ac') #Check Initialization Correctness ac.printVariables() #List all nodes in AC ac.node_dict #One Query Example ac.query([['v','1','0']]) #Query with Evidence 1: Single-evidence ac.query([['v','1','0']],[['v','0','0']]) #Query with Evidence 2: Multi-evidences ac.query([['v','1','0']],[['v','0','1'],['v','2','1']]) ac.getMarginals() ``` # Visualization To see the graph, please use ipython instead of Github preview. ``` #Tensorflow Graph Plots #Cite: https://www.tensorflow.org/get_started/graph_viz from IPython.display import clear_output, Image, display, HTML def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = "<stripped %d bytes>"%size return strip_def def show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:960px;height:600px;border:0" srcdoc="{}"></iframe> """.format(code.replace('"', '&quot;')) display(HTML(iframe)) show_graph(tf.get_default_graph().as_graph_def()) ```
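Going back to the file reader in the ``AC`` class above: it skips the first line of the file, parses ``len(lines) - 2`` node lines of the form ``op arg [arg]``, and treats the last parsed node as the circuit root. The actual ``example.ac`` file is not shown in the notebook, so the following is only a hypothetical file in that format (the file name, values and circuit are assumptions) to illustrate what the parser accepts:

```
# Hypothetical tiny circuit: (v_0 * v_1) + 0.5, written in the line format the
# parser above expects: a header line, one node per line, then a trailing line
# (num_node = len(lines) - 2 skips the first and last lines).
tiny_ac = """header line (skipped by _createGraph)
v 0 1
v 1 1
* 0 1
n 0.5
+ 2 3
trailer line (never parsed)
"""
with open('tiny_example.ac', 'w') as f:
    f.write(tiny_ac)

ac.sess.close()              # close the earlier session before resetting the graph
tf.reset_default_graph()     # avoid variable-name clashes with the example.ac graph
tiny = AC('tiny_example.ac') # node 4, the '+ 2 3' line, becomes the root
tiny.printVariables()
```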
## Scraping ``` # Dependencies & Set-up from bs4 import BeautifulSoup as bs import requests from splinter import Browser from webdriver_manager.chrome import ChromeDriverManager import pandas as pd ``` ### NASA Mars News ``` # Set-up splinter executable_path = {'executable_path': ChromeDriverManager().install()} browser = Browser('chrome', **executable_path, headless=False) # Set URL browser.visit('https://mars.nasa.gov/news/') html = browser.html soup = bs(html, 'html.parser') # Search for news titles and teaser paragraphs results = soup.find_all('ul', class_='item_list') # Loop through results for result in results: title = result.find_all('div', class_='content_title') paragraph = result.find_all('div', class_='article_teaser_body') # Extract the first title and paragraph, and assign to variables news_title = title[0].text news_paragraph = paragraph[0].text # Print results print(news_title) print(news_paragraph) ``` ### JPL Mars Space Images - Featured Image ``` # Open browser to JPL Featured Image # Set URL browser.visit('https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html') html = browser.html soup = bs(html, 'html.parser') # Find image relative path image = soup.find('a', class_='showimg')['href'] print(image) # Add relative path to full URL string featured_image_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/' + image print(featured_image_url) ``` ### Mars Facts ``` # Establish Mars facts url url = 'https://space-facts.com/mars/' # Use Pandas to parse the url facts = pd.read_html(url) # Set the data frame mars_facts = facts[0] # Assign the column headers mars_facts.columns = ['Description', 'Value'] # Set Index to Description column without row indexing mars_facts.set_index('Description', inplace=True) # Display mars_facts # Convert to html mars_facts_table = [mars_facts.to_html(classes='data table table-borderless', index=False, header=False, border=0)] mars_facts_table ``` ### Mars Hemispheres ``` # Open browser to USGS Astrogeology site browser.visit('https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars') # Set up html = browser.html soup = bs(html, 'html.parser') hemi_names = [] # Search for names of all 4 hemispheres results = soup.find_all('div', class_='collapsible results') hemispheres = results[0].find_all('h3') # Get text and store in list for name in hemispheres: hemi_names.append(name.text) hemi_names # Click through thumbnail links thumbnail_results = results[0].find_all('a') thumbnail_links = [] for thumbnail in thumbnail_results: if (thumbnail.img): thumbnail_url = 'https://astrogeology.usgs.gov/' + thumbnail['href'] thumbnail_links.append(thumbnail_url) thumbnail_links # Extract image source of full-sized images full_imgs = [] for url in thumbnail_links: browser.visit(url) html = browser.html soup = bs(html, 'html.parser') results = soup.find_all('img', class_='wide-image') relative_path = results[0]['src'] img_link = 'https://astrogeology.usgs.gov/' + relative_path full_imgs.append(img_link) full_imgs # Store as a list of dictionaries mars_hemi_zip = zip(hemi_names, full_imgs) hemisphere_image_urls = [] for title, img in mars_hemi_zip: mars_hemi_dict = {} mars_hemi_dict['title'] = title mars_hemi_dict['img_url'] = img hemisphere_image_urls.append(mars_hemi_dict) hemisphere_image_urls browser.quit() ```
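The notebook leaves the scraped pieces in separate variables. If they need to travel together to a later step (storage, templating, and so on), one purely illustrative option — the dictionary keys below are assumptions, not part of the original — is to bundle them into a single dictionary:

```
# Illustrative only: gather the results produced above into one dictionary.
# Assumes news_title, news_paragraph, featured_image_url, mars_facts_table
# and hemisphere_image_urls are still in scope from the cells above.
mars_data = {
    "news_title": news_title,
    "news_paragraph": news_paragraph,
    "featured_image_url": featured_image_url,
    "facts_table_html": mars_facts_table[0],   # the to_html() output was stored in a one-element list
    "hemisphere_image_urls": hemisphere_image_urls,
}
mars_data
```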
## Work
1. Compare the effect of using different numbers of layers and different dropout rates on training.
2. Change the optimizer to Adam, add an appropriate dropout rate, and examine the results.

```
import os
import keras
import itertools

# Disable GPU
os.environ["CUDA_VISIBLE_DEVICES"] = ""

train, test = keras.datasets.cifar10.load_data()

## Data preprocessing
def preproc_x(x, flatten=True):
    x = x / 255.
    if flatten:
        x = x.reshape((len(x), -1))
    return x

def preproc_y(y, num_classes=10):
    if y.shape[-1] == 1:
        y = keras.utils.to_categorical(y, num_classes)
    return y

x_train, y_train = train
x_test, y_test = test

# Preproc the inputs
x_train = preproc_x(x_train)
x_test = preproc_x(x_test)

# Preproc the outputs
y_train = preproc_y(y_train)
y_test = preproc_y(y_test)

from keras.layers import Dropout

"""
Build the neural network and add dropout layers
"""
def build_mlp(input_shape, output_units=10, num_neurons=[512, 256, 128], drp_ratio=0.2):
    input_layer = keras.layers.Input(input_shape)

    for i, n_units in enumerate(num_neurons):
        if i == 0:
            x = keras.layers.Dense(units=n_units,
                                   activation="relu",
                                   name="hidden_layer"+str(i+1))(input_layer)
            x = Dropout(drp_ratio)(x)
        else:
            x = keras.layers.Dense(units=n_units,
                                   activation="relu",
                                   name="hidden_layer"+str(i+1))(x)
            x = Dropout(drp_ratio)(x)

    out = keras.layers.Dense(units=output_units, activation="softmax", name="output")(x)

    model = keras.models.Model(inputs=[input_layer], outputs=[out])
    return model

"""Code Here
Set the hyperparameters
"""
## Hyperparameter settings
LEARNING_RATE = 1e-3
EPOCHS = 50
BATCH_SIZE = 256
MOMENTUM = 0.95
Dropout_EXP = [0.5, 0.25, 0.1]

results = {}
"""Code Here
Write the training loop and record the results in a dictionary
"""
for dropout in Dropout_EXP:
    model = build_mlp(input_shape=x_train.shape[1:], drp_ratio=dropout)
    model.summary()
    optimizer = keras.optimizers.Adam(lr=LEARNING_RATE)
    model.compile(loss="categorical_crossentropy", metrics=["accuracy"], optimizer=optimizer)

    model.fit(x_train, y_train,
              epochs=EPOCHS,
              batch_size=BATCH_SIZE,
              validation_data=(x_test, y_test),
              shuffle=True)

    # Collect results
    train_loss = model.history.history["loss"]
    valid_loss = model.history.history["val_loss"]
    train_acc = model.history.history["acc"]
    valid_acc = model.history.history["val_acc"]

    name_tag = 'drop_rate_%.2f' % dropout
    results[name_tag] = {'train-loss': train_loss,
                         'valid-loss': valid_loss,
                         'train-acc': train_acc,
                         'valid-acc': valid_acc}

import matplotlib.pyplot as plt
%matplotlib inline

"""Code Here
Plot the results
"""
color_bar = ["r", "g", "b"]

plt.figure(figsize=(8,6))
for i, cond in enumerate(results.keys()):
    plt.plot(range(len(results[cond]['train-loss'])), results[cond]['train-loss'], '-', label=cond, color=color_bar[i])
    plt.plot(range(len(results[cond]['valid-loss'])), results[cond]['valid-loss'], '--', label=cond, color=color_bar[i])
plt.title("Loss")
plt.legend()
plt.show()

plt.figure(figsize=(8,6))
for i, cond in enumerate(results.keys()):
    plt.plot(range(len(results[cond]['train-acc'])), results[cond]['train-acc'], '-', label=cond, color=color_bar[i])
    plt.plot(range(len(results[cond]['valid-acc'])), results[cond]['valid-acc'], '--', label=cond, color=color_bar[i])
plt.title("Accuracy")
plt.legend()
plt.show()
```
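Work item 1 also asks for a comparison across different numbers of layers, while the loop above only sweeps the dropout rate. The following sketch (the layer configurations are assumptions) shows how ``build_mlp`` could be reused for a depth sweep in the same style, relying on the variables defined in the cells above:

```
# Sketch (not part of the original notebook): compare network depth the same
# way the dropout rate is compared above.
LAYER_EXP = [[512, 256, 128], [512, 256], [256]]   # assumed configurations
depth_results = {}

for num_neurons in LAYER_EXP:
    model = build_mlp(input_shape=x_train.shape[1:],
                      num_neurons=num_neurons,
                      drp_ratio=0.25)
    model.compile(loss="categorical_crossentropy",
                  metrics=["accuracy"],
                  optimizer=keras.optimizers.Adam(lr=LEARNING_RATE))
    model.fit(x_train, y_train,
              epochs=EPOCHS,
              batch_size=BATCH_SIZE,
              validation_data=(x_test, y_test),
              shuffle=True)
    depth_results["layers_%d" % len(num_neurons)] = {
        "train-loss": model.history.history["loss"],
        "valid-loss": model.history.history["val_loss"],
    }
```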
# _Qubit Chain_

<img src="images/line_qubits_site1.png" alt="Qubit Chain">

### Contributor
Alexander Yu. Vlasov

***

Here we use a representation initially introduced for _perfect state transfer_. The chain with $n$ nodes is modeled by $n$ qubits. A more compact representation with $n$ qubits for a chain with $N=2^n$ nodes is discussed elsewhere, _e.g._, see arXiv:1710.03615 \[quant-ph\] (2017) together with the [tutorial about the topological quantum walk](../../../terra/qis_adv/topological_quantum_walk.ipynb).

### Chain model

The nodes of the chain are mapped into states with a single unit in the computational basis
$$|100\dots 0\rangle,|010\dots 0\rangle,\dots,|000\dots1\rangle.$$
Let us denote
$$|{\underline k}\rangle \equiv |{\underbrace{0\ldots 0}_{k-1}\,}1\underbrace{0\ldots 0}_{n-k}\rangle, \quad k=1,\ldots,n.$$
The method may also be extended to $m>1$ particles using states with $m$ units in the computational basis.

### Single link

Let us start with one particle on a chain with two nodes. By definition, the evolution may affect only the states $|{\underline 1}\rangle \equiv |10\rangle$ and $|{\underline 2}\rangle \equiv |01\rangle$. The two-qubit gate for such an evolution is represented by a matrix
$$ M_u \equiv \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & u_{11} & u_{12} & 0 \\ 0 & u_{21} & u_{22} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, $$
where the $u_{jk}$ are the entries of a unitary matrix $u$. The chain with two nodes corresponds to a single link of a qubit chain, and this representation extends directly to a sequence of two-qubit gates acting along the chain.

### Relation with matchgates

A _matchgate_ is defined by two unitary matrices $u$ and $u'$ with $\det{u'}=\det{u}$:
$$ M(u,u') \equiv \begin{pmatrix} u'_{11} & 0 & 0 & u'_{12} \\ 0 & u_{11} & u_{12} & 0 \\ 0 & u_{21} & u_{22} & 0 \\ u'_{21} & 0 & 0 & u'_{22} \end{pmatrix} $$
The gate $M_u$ is a particular case of a matchgate, $M(u,{\bf 1})$ with $u' = \begin{pmatrix}1&0\\0&1\end{pmatrix}.$

A quantum circuit built from _matchgates_ acting on neighboring qubits (_aka_ a matchcircuit) can be simulated efficiently on a classical computer, so such circuits may be used for testing quantum computers with many qubits.

The gate $M_u$ also preserves the number of units in the computational basis and so can be used for modeling the distribution of states or quantum walks along a chain, in agreement with the model defined earlier. In such a case the state distribution along a quantum network with $n$ qubits may be mapped onto the corresponding scalar chain with $n$ nodes.

### Adaptation to Qiskit

Because $\det(u)=1$, the gate $M_u$ may be described by three angles, similar to the $u3$ and $C_{u3}$ gates in Qiskit. Let us denote it as
$$ M_u \equiv M_{u3}(\theta, \phi, \lambda) \equiv \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & e^{-i(\phi+\lambda)/2}\cos(\theta/2) & -e^{-i(\phi-\lambda)/2}\sin(\theta/2) & 0\\ 0 & e^{i(\phi-\lambda)/2}\sin(\theta/2) & e^{i(\phi+\lambda)/2}\cos(\theta/2) & 0\\ 0 & 0 & 0 & 1\\ \end{pmatrix}. $$

The gate $M_{u3}$ may be written as a gate definition in OpenQASM 2.0:

    gate mu3(theta, phi, lambda) a, b
    {
      cx a, b;
      cu3(theta, phi, lambda) b, a;
      cx a, b;
    }

A similar method is used further on with Qiskit; however, it may not be optimal for real hardware.

### Staggered quantum walk

Let us consider, as an example, the staggered walk described earlier in the simpler model of the [scalar quantum chain](scalar_chain.ipynb).
The sequences of gates representing modifications of partitions $S'_1$ and $S_2$ now should use $4 \times 4$ analogues of $2 \times 2$ matrices applied earlier to neighboring nodes of chain $$ m_2 \equiv M_{i S_2} = \begin{pmatrix}\ 1 & 0 & 0 & 0\\ 0 & 0 & i & 0\\ 0 & i & 0 & 0\\ 0 & 0 & 0 & 1\\ \end{pmatrix} = M_{u3}(\pi/2,0,0). $$ $$ m_1 \equiv M_{S'_1} = \begin{pmatrix}\ 1 & 0 & 0 & 0\\ 0 & 1/\sqrt{2} & -1/\sqrt{2} & 0\\ 0 & 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 0 & 0 & 1\\ \end{pmatrix} = M_{u3}(\pi,\pi,0). $$ See notebook with [modeling of qubit chain](qubit_chain_mod.ipynb) using Qiskit.
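As a small bridge to that notebook, here is a minimal sketch of the cx–cu3–cx construction described above, written with Qiskit. It assumes an older Qiskit API where ``cu3`` is available, the phase conventions of ``cu3`` may differ from the SU(2) form of $M_{u3}$ given earlier, and the assignment of $m_1$/$m_2$ steps to particular links is an assumption taken from the scalar-chain example:

```
from math import pi
from qiskit import QuantumCircuit, QuantumRegister

def mu3(circuit, theta, phi, lam, a, b):
    """Apply the two-qubit gate M_u3(theta, phi, lambda) to qubits a, b."""
    circuit.cx(a, b)
    circuit.cu3(theta, phi, lam, b, a)
    circuit.cx(a, b)

q = QuantumRegister(4, 'q')
qc = QuantumCircuit(q)
qc.x(q[0])                           # place the walker on the first qubit (node 1)
mu3(qc, pi / 2, 0, 0, q[0], q[1])    # an m_2 = M_u3(pi/2, 0, 0) step on one link
mu3(qc, pi, pi, 0, q[1], q[2])       # an m_1 = M_u3(pi, pi, 0) step on the next link
print(qc)
```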
``` # implement some basic data structure and methods for a Linked List class Node: def __init__(self, value): self.value = value self.next = None class LinkedList: def __init__(self, value): self.head = Node(value) self.tail = self.head self.length = 1 ll = LinkedList(4) print(ll.head.value) # create a print list method class LinkedList: def __init__(self, value): self.head = Node(value) self.tail = self.head self.length = 1 def to_list(self): l = [] h = self.head while h: l.append(h.value) h = h.next return l ll = LinkedList(1) ll.tail.next = Node(2) ll.tail.next.next = Node(3) ll.to_list() # edge case ll = LinkedList(None) ll.to_list() ll = LinkedList(None) ll.tail.next = Node(1) ll.tail.next.next = Node(2) ll.to_list() # append method class LinkedList: def __init__(self, value): self.head = Node(value) self.tail = self.head self.length = 1 def to_list(self): l = [] h = self.head while h: l.append(h.value) h = h.next return l def append(self, value): if self.tail == None or self.head == None: self.head = Node(value) self.tail = self.head else: node = Node(value) self.tail.next = node self.tail = node self.length += 1 return True l = LinkedList(None) l.append(1) l.to_list() l.append(2) l.append(4) l.to_list() l.append(None) l.append(5) l.to_list() # pop method class LinkedList: def __init__(self, value): self.head = Node(value) self.tail = self.head self.length = 1 def to_list(self): l = [] h = self.head while h: l.append(h.value) h = h.next return l def append(self, value): if self.tail == None or self.head == None: self.head = Node(value) self.tail = self.head else: node = Node(value) self.tail.next = node self.tail = node self.length += 1 return True def pop(self): if self.tail == self.head: self.head = None self.tail = None else: # travel to the node before the tail node = self.head while node.next is not self.tail: node = node.next old_tail = self.tail node.next = None self.tail = node old_tail = None self.length -= 1 return True l = LinkedList(1) l.append(2) l.append(3) l.append(4) l.to_list() l.pop() l.to_list() # reverse method class LinkedList: def __init__(self, value): self.head = Node(value) self.tail = self.head self.length = 1 def to_list(self): l = [] h = self.head while h: l.append(h.value) h = h.next return l def append(self, value): if self.tail == None or self.head == None: self.head = Node(value) self.tail = self.head else: node = Node(value) self.tail.next = node self.tail = node self.length += 1 return True def pop(self): if self.tail == self.head: self.head = None self.tail = None else: # travel to the node before the tail node = self.head while node.next is not self.tail: node = node.next old_tail = self.tail node.next = None self.tail = node old_tail = None self.length -= 1 return True def reverse(self): # reverse head/tail self.head,self.tail = self.tail,self.head # use three pointers to move along the linked list to reverse before = None current = self.tail # old head after = current.next # loop to travese along the list untill the end while current is not None: # exact order to hold the correct pointer to reverse the direction that 'current' is holding after = current.next # hold pointer to the one after current current.next = before # reverse direction that current is holding before = current # move before forward one node current = after # now move current forward one node return True l = LinkedList(1) l.append(2) l.append(3) l.append(4) l.to_list() l.reverse() l.to_list() ```
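A few quick checks (not in the original notebook) that exercise the methods defined above and make the expected behaviour explicit:

```
# Sanity checks for append/pop/reverse using the to_list() representation.
ll = LinkedList(1)
for v in (2, 3, 4):
    ll.append(v)
assert ll.to_list() == [1, 2, 3, 4]
assert ll.length == 4

ll.reverse()
assert ll.to_list() == [4, 3, 2, 1]          # head and tail were swapped and links flipped

ll.pop()
assert ll.to_list() == [4, 3, 2]
assert ll.length == 3
assert ll.tail.value == 2 and ll.tail.next is None
```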
``` import numpy as np def subs(infile): dna_string = "" motif = "" with open(infile, "r") as fin: lines = fin.readlines() dna_string = lines[0].strip() motif = lines[1].strip() match_idx = [] for i in range(len(dna_string)): match_len = 0 for j in range(len(motif)): if dna_string[i+j] != motif[j]: break match_len += 1 if match_len == len(motif): match_idx.append(str(i + 1)) print (" ".join(match_idx)) def cons(infile): consensus = { "A": None, "T": None, "G": None, "C": None } seq_list = [] current_seq = "" with open(infile, "r") as fin: for line in fin.readlines(): if line.startswith(">"): if current_seq: seq_list.append(current_seq) current_seq = "" else: current_seq += line.strip() seq_list.append(current_seq) line_len = 0 for line in seq_list: if not line_len: line_len = len(line) for key in consensus.keys(): consensus[key] = np.zeros(line_len, dtype=int) for i in range(line_len): ch = line[i] consensus[ch][i] += 1 seq_consensus = "" for i in range(line_len): char_selected = None char_count = 0 for key in consensus.keys(): this_count = consensus[key][i] if (not char_selected) or (this_count > char_count): char_selected = key char_count = this_count seq_consensus += char_selected fout = open(infile + ".solution.txt", "w") print(seq_consensus, file=fout) for key in sorted(consensus.keys()): values = consensus[key] values = [str(v) for v in values] print(key + ": " + " ".join(values), file=fout) fout.close() def fibd(n,m): ages = [1] + [0]*(m-1) for i in range(n-1): ages = [sum(ages[1:])] + ages[:-1] return sum(ages) def grph(infile): seq_list = dict() current_key = "" with open(infile, "r") as fin: for line in fin.readlines(): if line.startswith(">"): current_key = line.strip()[1:] else: if current_key not in seq_list.keys(): seq_list[current_key] = "" seq_list[current_key] += line.strip() prefixes = dict() suffixes = dict() for key in seq_list.keys(): prefixes[key] = seq_list[key][:3] suffixes[key] = seq_list[key][-3:] nodes = [] for suff in suffixes.keys(): for pref in prefixes.keys(): if suff != pref: if suffixes[suff] == prefixes[pref]: nodes.append((suff,pref)) fout = open(infile + ".solution.txt", "w") for nod in nodes: print (nod[0] + " " + nod[1], file=fout) fout.close() def iev(string_vals): prob = [1.0, 1.0, 1.0, 0.75, 0.5, 0.0] occ = [float(s) for s in string_vals.split(" ")] expected = 0.0 for i in range(len(prob)): expected += (prob[i] * occ[i] * 2) return expected def lcsm(infile): seq_list = [] current_seq = "" with open(infile, "r") as fin: for line in fin.readlines(): if line.startswith(">"): if current_seq: seq_list.append(current_seq) current_seq = "" else: current_seq += line.strip() seq_list.append(current_seq) lcs_max = 0 for seq in seq_list: this_len = len(seq) if not lcs_max: lcs_max = this_len continue if this_len > lcs_max: lcs_max = this_len selected_kmer = None for k in range(lcs_max, 2, -1): kmers = set() for i in range(len(seq_list[0]) - k): kmers.add(seq_list[0][i:i+k]) found_kmer = False for kmer in kmers: kmer_is_selected = True for seq in seq_list[1:]: match_found = False for j in range(len(seq)-k): sub = seq[j:j+k] if sub == kmer: match_found = True break if not match_found: kmer_is_selected = False break if kmer_is_selected: selected_kmer = kmer found_kmer = True break if found_kmer: break print (selected_kmer) lcsm("download/rosalind_lcsm(1).txt") ```
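One detail in the code above: ``subs`` iterates ``i`` over the full length of ``dna_string``, so ``dna_string[i + j]`` can run past the end of the string when a partial match starts near the end, raising an ``IndexError``. A bounds-safe variant, offered as a sketch that keeps the same input format and 1-based output:

```
def subs_safe(infile):
    # Same behaviour as subs above, but the scan stops where a full-length
    # match can no longer fit, so no index ever goes out of range.
    with open(infile, "r") as fin:
        lines = fin.readlines()
    dna_string = lines[0].strip()
    motif = lines[1].strip()

    match_idx = []
    for i in range(len(dna_string) - len(motif) + 1):
        if dna_string[i:i + len(motif)] == motif:
            match_idx.append(str(i + 1))   # Rosalind expects 1-based positions
    print(" ".join(match_idx))
```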
<h2>Categorical Variables and One Hot Encoding</h2>

```
import pandas as pd
df = pd.read_csv("/Users/Asus/Documents/Atmel Studio/homeprices.csv")
df
```

<h2 style='color:purple'>Using pandas to create dummy variables</h2>

```
dummies = pd.get_dummies(df.town)
dummies

merged = pd.concat([df,dummies],axis='columns')
merged

final = merged.drop(['town'], axis='columns')
final
```

<h3 style='color:purple'>Dummy Variable Trap</h3>

When you can derive one variable from other variables, they are known to be multicollinear. Here, if you know the values of california and georgia, then you can easily infer the value of the new jersey column, i.e. if california=0 and georgia=0 then new jersey must be 1. Therefore these dummy variables are multicollinear. In this situation linear regression won't work as expected, hence you need to drop one column.

**NOTE: The sklearn library takes care of the dummy variable trap, so even if you don't drop one of the state columns it is going to work. However, we should make a habit of handling the dummy variable trap ourselves, just in case the library you are using does not handle it for you.**

```
final = final.drop(['west windsor'], axis='columns')
final

X = final.drop('price', axis='columns')
X

y = final.price

from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X,y)

model.predict(X) # 2600 sqr ft home in new jersey

model.score(X,y)

model.predict([[3400,0,0]]) # 3400 sqr ft home in west windsor

model.predict([[2800,0,1]]) # 2800 sqr ft home in robbinsville
```

<h2 style='color:purple'>Using sklearn OneHotEncoder</h2>

The first step is to use a label encoder to convert town names into numbers.

```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()

dfle = df
dfle.town = le.fit_transform(dfle.town)
dfle

X = dfle[['town','area']].values
X

y = dfle.price.values
y
```

Now use a one hot encoder to create dummy variables for each of the towns.

```
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer

ct = ColumnTransformer([('town', OneHotEncoder(), [0])], remainder = 'passthrough')
X = ct.fit_transform(X)
X

X = X[:,1:]
X

model.fit(X,y)

model.predict([[0,1,3400]]) # 3400 sqr ft home in west windsor

model.predict([[1,0,2800]]) # 2800 sqr ft home in robbinsville
```

<h2 style='color:green'>Exercise</h2>

At the same level as this notebook on github, there is an Exercise folder that contains carprices.csv. This file has car sell prices for 3 different models. First plot the data points on a scatter plot chart to see if a linear regression model can be applied. If yes, then build a model that can answer the following questions:

**1) Predict price of a mercedez benz that is 4 yr old with mileage 45000**

**2) Predict price of a BMW X5 that is 7 yr old with mileage 86000**

**3) Tell me the score (accuracy) of your model. (Hint: use LinearRegression().score())**
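As a side note before the exercise (not part of the original notebook), pandas can also drop one dummy column for you via ``drop_first=True``, which avoids the dummy variable trap in a single step. Re-reading the CSV keeps this sketch independent of the in-place label encoding performed above:

```
# Alternative route (sketch): let pandas drop the first dummy column itself.
df_fresh = pd.read_csv("/Users/Asus/Documents/Atmel Studio/homeprices.csv")
X_alt = pd.concat(
    [df_fresh.drop(['town', 'price'], axis='columns'),
     pd.get_dummies(df_fresh.town, drop_first=True)],   # k-1 dummy columns
    axis='columns')

from sklearn.linear_model import LinearRegression
model_alt = LinearRegression()
model_alt.fit(X_alt, df_fresh.price)
model_alt.score(X_alt, df_fresh.price)
```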
<a href="https://colab.research.google.com/github/Janani-harshu/Machine_Learning_Projects/blob/main/Insta_Reach_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Instagram is one of the most popular social media applications today. People using Instagram professionally are using it for promoting their business, building a portfolio, blogging, and creating various kinds of content. As Instagram is a popular application used by millions of people with different niches, Instagram keeps changing to make itself better for the content creators and the users. But as this keeps changing, it affects the reach of our posts that affects us in the long run. So if a content creator wants to do well on Instagram in the long run, they have to look at the data of their Instagram reach. That is where the use of Data Science in social media comes in. In this notebook, I will take you through Instagram Reach Analysis using Python, which will help content creators to understand how to adapt to the changes in Instagram in the long run. ## Insta Reach Analysis I have been researching Instagram reach for a long time now. Every time I post on my Instagram account, I collect data on how well the post reach after a week. That helps in understanding how Instagram’s algorithm is working. If you want to analyze the reach of your Instagram account, you have to collect your data manually as there are some APIs, but they don’t work well. So it’s better to collect your Instagram data manually. ``` # Importing the necessary libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import plotly.express as px from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator from sklearn.model_selection import train_test_split from sklearn.linear_model import PassiveAggressiveRegressor data = pd.read_csv("/content/Instagram.csv", encoding = 'latin1') print(data.head()) # Checking for null values in the dataset data.isnull().sum() # Drop all the null value columns data = data.dropna() data.info # Distribution of impresssions from home plt.figure(figsize=(10, 8)) plt.style.use('fivethirtyeight') plt.title("Distribution of Impressions From Home") sns.distplot(data['From Home']) plt.show() ``` The impressions I get from the home section on Instagram shows how much my posts reach my followers. Looking at the impressions from home, I can say it’s hard to reach all my followers daily. ``` # Distribution of Impressions From Hashtags plt.figure(figsize=(10, 8)) plt.title("Distribution of Impressions From Hashtags") sns.distplot(data['From Hashtags']) plt.show() ``` Hashtags are tools we use to categorize our posts on Instagram so that we can reach more people based on the kind of content we are creating. Looking at hashtag impressions shows that not all posts can be reached using hashtags, but many new users can be reached from hashtags. ``` # Distribution of Impressions From Explore plt.figure(figsize=(10, 8)) plt.title("Distribution of Impressions From Explore") sns.distplot(data['From Explore']) plt.show() ``` The explore section of Instagram is the recommendation system of Instagram. It recommends posts to the users based on their preferences and interests. By looking at the impressions I have received from the explore section, I can say that Instagram does not recommend our posts much to the users. 
Some posts have received a good reach from the explore section, but it’s still very low compared to the reach I receive from hashtags. ``` # Percentage of impressions I get from various sources on Instagram home = data["From Home"].sum() hashtags = data["From Hashtags"].sum() explore = data["From Explore"].sum() other = data["From Other"].sum() labels = ['From Home','From Hashtags','From Explore','Other'] values = [home, hashtags, explore, other] fig = px.pie(data, values=values, names=labels, title='Impressions on Instagram Posts From Various Sources', hole=0.5) fig.show() ``` So the above donut plot shows that almost 50 per cent of the reach is from my followers, 38.1 per cent is from hashtags, 9.14 per cent is from the explore section, and 3.01 per cent is from other sources. ## Analyzing Content Now let’s analyze the content of my Instagram posts. The dataset has two columns, namely caption and hashtags, which will help us understand the kind of content I post on Instagram. Let’s create a wordcloud of the caption column to look at the most used words in the caption of my Instagram posts: ``` text = " ".join(i for i in data.Caption) stopwords = set(STOPWORDS) wordcloud = WordCloud(stopwords=stopwords, background_color="black").generate(text) plt.style.use('classic') plt.figure( figsize=(12,10)) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.show() # wordcloud of the hashtags column to look at the most used hashtags in my Instagram posts text = " ".join(i for i in data.Hashtags) stopwords = set(STOPWORDS) wordcloud = WordCloud(stopwords=stopwords, background_color="white").generate(text) plt.figure( figsize=(12,10)) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.show() ``` ## Analyzing Relationships Now let’s analyze relationships to find the most important factors of our Instagram reach. It will also help us in understanding how the Instagram algorithm works. ``` # relationship between the number of likes and the number of impressions on my Instagram posts figure = px.scatter(data_frame = data, x="Impressions", y="Likes", size="Likes", trendline="ols", title = "Relationship Between Likes and Impressions") figure.show() ``` There is a linear relationship between the number of likes and the reach I got on Instagram ``` # relationship between the number of comments and the number of impressions on my Instagram posts figure = px.scatter(data_frame = data, x="Impressions", y="Comments", size="Comments", trendline="ols", title = "Relationship Between Comments and Total Impressions") figure.show() ``` It looks like the number of comments we get on a post doesn’t affect its reach ``` # relationship between the number of shares and the number of impressions figure = px.scatter(data_frame = data, x="Impressions", y="Shares", size="Shares", trendline="ols", title = "Relationship Between Shares and Total Impressions") figure.show() ``` A more number of shares will result in a higher reach, but shares don’t affect the reach of a post as much as likes do. ``` # relationship between the number of saves and the number of impressions figure = px.scatter(data_frame = data, x="Impressions", y="Saves", size="Saves", trendline="ols", title = "Relationship Between Post Saves and Total Impressions") figure.show() ``` There is a linear relationship between the number of times my post is saved and the reach of my Instagram post. 
```
# correlation of all the columns with the Impressions column
correlation = data.corr()
print(correlation["Impressions"].sort_values(ascending=False))
```

## Analyzing Conversion Rate

On Instagram, the conversion rate means how many followers you gain from the profile visits a post generates. The formula to calculate the conversion rate is (Follows / Profile Visits) * 100. Now let's have a look at the conversion rate of my Instagram account:

```
conversion_rate = (data["Follows"].sum() / data["Profile Visits"].sum()) * 100
print(conversion_rate)
```

So the conversion rate of my Instagram account is about 31%, which sounds like a very good conversion rate.

```
# relationship between the total profile visits and the number of followers gained from all profile visits
figure = px.scatter(data_frame = data, x="Profile Visits", y="Follows", size="Follows",
                    trendline="ols", title = "Relationship Between Profile Visits and Followers Gained")
figure.show()
```

## Instagram Reach Prediction Model

Now in this section, I will train a machine learning model to predict the reach of an Instagram post.

```
x = np.array(data[['Likes', 'Saves', 'Comments', 'Shares', 'Profile Visits', 'Follows']])
y = np.array(data["Impressions"])

# Split the data into training and test sets before training the model
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.2, random_state=42)

# Passive Aggressive Regressor
passive = PassiveAggressiveRegressor()
passive.fit(xtrain, ytrain)
passive.score(xtest, ytest)

# Linear Regressor
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(xtrain, ytrain)
lr.score(xtest, ytest)

# Lasso
from sklearn.linear_model import Lasso
lasso = Lasso(alpha = 998.9)
lasso.fit(xtrain, ytrain)
lasso.score(xtest, ytest)

# XGBoost
from xgboost import XGBRegressor
xg = XGBRegressor()
xg.fit(xtrain, ytrain)
xg.score(xtest, ytest)

# Random Forest Regressor
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
rf.fit(xtrain, ytrain)
rf.score(xtest, ytest)
```

Let's predict the reach of an Instagram post by giving inputs to the machine learning model (the Passive Aggressive Regressor has a good score of 0.937):

```
# Features = [['Likes', 'Saves', 'Comments', 'Shares', 'Profile Visits', 'Follows']]
features = np.array([[282.0, 233.0, 4.0, 9.0, 165.0, 54.0]])
passive.predict(features)
```
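As a small, optional follow-up (not part of the original notebook), the sketch below reports the test-set MAE and R² of the fitted Passive Aggressive model explicitly; it assumes `passive`, `xtest`, and `ytest` from the cells above are still defined:

```
# Hedged evaluation sketch -- assumes passive, xtest and ytest from the cells above.
from sklearn.metrics import mean_absolute_error, r2_score

preds = passive.predict(xtest)                     # predicted impressions for the test set
print("MAE:", mean_absolute_error(ytest, preds))   # average absolute error, in impression counts
print("R^2:", r2_score(ytest, preds))              # same quantity returned by passive.score(xtest, ytest)
```

Reporting MAE alongside R² makes the comparison between the regressors above easier to interpret, since MAE is expressed directly in impressions.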
```
import pandas as pd
import numpy as np
import time
from joblib import Parallel, delayed

train_data = pd.read_excel('C:/Users/Srujan/Documents/Datasets/flight_price/Data_Train.xlsx', parse_dates=['Date_of_Journey'])
train_data

train_data.info()
# no null values; all independent features are object type and the dependent feature is integer type

train_data['Additional_Info'].value_counts()
train_data['Airline'].value_counts()
train_data['Source'].value_counts()
train_data['Destination'].value_counts()

train_data = train_data[(train_data.Airline != 'Jet Airways') & (train_data.Airline != 'Jet Airways Business')]
train_data.reset_index(drop=True, inplace=True)
train_data

# Dropping the Route and Additional_Info columns
train_data.drop(['Route', 'Additional_Info'], axis=1, inplace=True)
```

Departure time and arrival time are strings, so we need to extract the hour and minute from both columns.

```
# Departure time is when a plane leaves the gate.
# Similar to Date_of_Journey we can extract values from Dep_Time

# Extracting Hours
train_data["Dep_hour"] = pd.to_datetime(train_data["Dep_Time"]).dt.hour
# Extracting Minutes
train_data["Dep_min"] = pd.to_datetime(train_data["Dep_Time"]).dt.minute
# Now we can drop Dep_Time as it is of no use
train_data.drop(["Dep_Time"], axis = 1, inplace = True)

train_data["Dep_hour"] = train_data["Dep_hour"].astype(np.int16)
train_data["Dep_min"] = train_data["Dep_min"].astype(np.int16)

# Arrival time is when the plane pulls up to the gate.
# Similar to Date_of_Journey we can extract values from Arrival_Time

# Extracting Hours
train_data["Arrival_hour"] = pd.to_datetime(train_data.Arrival_Time).dt.hour
# Extracting Minutes
train_data["Arrival_min"] = pd.to_datetime(train_data.Arrival_Time).dt.minute
# Now we can drop Arrival_Time as it is of no use
train_data.drop(["Arrival_Time"], axis = 1, inplace = True)

train_data["Arrival_hour"] = train_data["Arrival_hour"].astype(np.int16)
train_data["Arrival_min"] = train_data["Arrival_min"].astype(np.int16)
```

Extracting the journey date, journey month, and journey day (day of the week) from the Date_of_Journey column, then dropping that column after extraction.

```
train_data['Jounrey_date'] = train_data['Date_of_Journey'].dt.day
train_data['Jounrey_month'] = train_data['Date_of_Journey'].dt.month
train_data['Journey_day'] = train_data['Date_of_Journey'].dt.dayofweek
train_data.drop('Date_of_Journey', axis=1, inplace=True)
train_data

train_data.info()

train_data['Total_Stops'] = train_data['Total_Stops'].fillna(train_data['Total_Stops'].mode()[0])
```

Creating a new duration column. Even though a Duration column is available, it is a string, so we will derive the duration from the departure and arrival times instead. Running a plain for loop took around 5 seconds for this task, so we use joblib for multiprocessing, which completes the work in less than 1 second.
``` def create_duration_time(dep_hr,arr_hr,dep_min,arr_min): if dep_hr>arr_hr: hours = arr_hr + 24 - dep_hr else: hours = arr_hr - dep_hr minutes = arr_min - dep_min duration = hours * 60 + minutes return duration start = time.time() duration = Parallel(n_jobs=4)(delayed(create_duration_time)(dh,ah,dm,am) for dh,ah,dm,am in zip(train_data['Dep_hour'],\ train_data['Arrival_hour'],\ train_data['Dep_min'],\ train_data['Arrival_min'])) end=time.time() print(end-start) train_data.loc[:,'Duration'] = duration train_data['Total_Stops'].replace({'non-stop':0,'1 stop':1,'2 stops':2,'3 stops':3,'4 stops':4},inplace=True) train_data import seaborn as sns import matplotlib.pyplot as plt # From graph we can see that Jet Airways Business have the highest Price. # Apart from the first Airline almost all are having similar median # Airline vs Price sns.catplot(y = "Price", x = "Airline", data = train_data.sort_values("Price", ascending = False), kind="boxen", height = 6, aspect = 3) # Airline vs Price sns.catplot(y = "Price", x = "Total_Stops", data = train_data.sort_values("Price", ascending = False), kind="boxen", height = 6, aspect = 3) sns.heatmap(train_data.corr()) sns.scatterplot(x='Duration',y='Price',data=train_data,hue='Airline') ``` ## Feature Selection ``` X=train_data.copy() y=X['Price'] X = X.iloc[:,3:] X.drop(['Price'],axis=1,inplace=True) X.head() X.isnull().any() from sklearn.feature_selection import RFE from sklearn.tree import DecisionTreeRegressor model_dt = DecisionTreeRegressor() rfe = RFE(estimator=model_dt, n_features_to_select=1, step=1) rfe.fit(X,y) rfe.ranking_ X.columns for rank, col in zip(rfe.ranking_,X.columns): print(rank, col) ``` Seems like Journey_day is not that useful and its correlation with target is also not satisfying. So we will drop Journey_day ``` train_data.drop('Journey_day',axis=1,inplace=True) train_data train_data = pd.get_dummies(train_data,drop_first=True) train_data train_data.to_csv('modified_dataset.csv') ```
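The notebook stops after saving the encoded dataset. Purely as an illustrative next step (my own sketch, not part of the original notebook), the prepared features could feed a quick baseline model; this assumes the one-hot encoded `train_data` from the cells above, with `Price` as the target:

```
# Illustrative baseline only -- assumes the one-hot encoded train_data from the cells above.
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

X = train_data.drop('Price', axis=1)   # all engineered/encoded features
y = train_data['Price']                # target: ticket price

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

baseline = RandomForestRegressor(n_estimators=100, random_state=42)
baseline.fit(X_train, y_train)
print("Baseline R^2 on held-out data:", baseline.score(X_test, y_test))
```

A proper model for this dataset would still need hyperparameter tuning and error analysis; the point here is only that the preprocessing above leaves the data fully numeric and ready for scikit-learn.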
# **Project: LSTM Autoencoder for Network Anomaly Detection** ### **Instructions:** In this project, we will create and train an LSTM-based autoencoder to detect anomalies in the KDD99 network traffic dataset. Note that KDD99 does not include timestamps as a feature. The simplest approach to making these discrete datapoints into time-domain data is to assume that each datapoint occurs at the timestep immediately after the previous datapoint. However, more sophisticated approaches can also be adopted (e.g., grouping by TCP connections). The choice of serialization technique (i.e., conversion into time-domain) is up to you. This project must be implemented in a Jupyer Notebook, and must be compatible with Google Colab (i.e., if you are using a particular library that is not on Colab by default, your notebook must install it via !pip install ... ). Your notebook must also contain a section on performance analysis, where you report the performance of your IDS model via performance metrics and/or plots (similar to Section 5). **A very important note:** You should not expect very high detection rates from your model. ## Read in KDD99 Data Set ``` #importing required libraries import pandas as pd import numpy as np import tensorflow as tf from tensorflow import keras import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn import metrics from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # The CSV file has no column heads, so add them df.columns = ['duration','protocol_type','service','flag','src_bytes','dst_bytes','land','wrong_fragment','urgent','hot', 'num_failed_logins','logged_in','num_compromised','root_shell', 'su_attempted','num_root','num_file_creations','num_shells', 'num_access_files','num_outbound_cmds','is_host_login','is_guest_login','count','srv_count','serror_rate','srv_serror_rate', 'rerror_rate','srv_rerror_rate','same_srv_rate','diff_srv_rate','srv_diff_host_rate','dst_host_count','dst_host_srv_count', 'dst_host_same_srv_rate','dst_host_diff_srv_rate','dst_host_same_src_port_rate','dst_host_srv_diff_host_rate','dst_host_serror_rate', 'dst_host_srv_serror_rate','dst_host_rerror_rate','dst_host_srv_rerror_rate','outcome'] # displaying the contents df.head() ``` ## Data Preprocessing ``` # For now, just drop NA's (rows with missing values) df.dropna(inplace=True,axis=1) df.shape df.dtypes df.groupby('outcome')['outcome'].count() ``` ## Encoding numeric and text data: ``` # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. 
[1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) #encoding the feature vector text_col =['protocol_type', 'service', 'flag', 'land', 'logged_in', 'is_host_login', 'is_guest_login', ] for i in df.columns: if i not in text_col: if i != 'outcome': encode_numeric_zscore(df, i) for x in text_col: encode_text_dummy(df, x) df.dropna(inplace=True,axis=1) df[0:5] df['protocol_type-tcp'].unique() normal_mask = df['outcome']=='normal.' attack_mask = df['outcome']!='normal.' df.drop('outcome',axis=1,inplace=True) df_normal = df[normal_mask] df_attack = df[attack_mask] print(f"Normal count: {len(df_normal)}") print(f"Attack count: {len(df_attack)}") # This is the numeric feature vector, as it goes to the neural net x_normal = df_normal.values x_attack = df_attack.values from sklearn.model_selection import train_test_split x_normal_train, x_normal_test = train_test_split(x_normal, test_size=0.3, random_state=12) print(f"Normal train count: {len(x_normal_train)}") print(f"Normal test count: {len(x_normal_test)}") x_normal_train.shape, x_normal_test.shape #Scaling the dataset from sklearn.preprocessing import StandardScaler sc = StandardScaler() x_normal_train = sc.fit_transform(x_normal_train) x_normal_test = sc.fit_transform(x_normal_test) x_normal = sc.fit_transform(x_normal) x_attack = sc.fit_transform(x_attack) ``` We already know that the KDD99 dataset doesn't include timestamps as a feature. So, lets consider simplest approach to making these datapoints into time-domain data i.e., to assume that each datapoint occurs at the timestep immediately after the previous datapoint. ``` x_normal_train = pd.DataFrame(x_normal_train) x_normal_test = pd.DataFrame(x_normal_test) x_attack = pd.DataFrame(x_attack) x_normal = pd.DataFrame(x_normal) #function to convert to time domain dataset def create_dataset(X, time_steps): Xs = [] for i in range(len(X) - time_steps): v = X.iloc[i:(i + time_steps)].values Xs.append(v) return np.array(Xs) # reshape to [samples, time_steps, n_features] TIME_STEPS = 1 Xnormal_train = create_dataset(x_normal_train, TIME_STEPS) print(Xnormal_train.shape) Xnormal_test = create_dataset(x_normal_test, TIME_STEPS) print(Xnormal_test.shape) Xattack = create_dataset(x_attack, TIME_STEPS) print(Xattack.shape) Xnormal = create_dataset(x_normal,TIME_STEPS) print(Xattack.shape) ``` ## LSTM Autoencoder Model Architecture: ``` model = keras.Sequential() model.add(keras.layers.LSTM(units=64, input_shape=(Xnormal_train.shape[1], Xnormal_train.shape[2]))) model.add(keras.layers.Dropout(rate=0.5)) model.add(keras.layers.RepeatVector(n=Xnormal_train.shape[1])) model.add(keras.layers.LSTM(units=32, return_sequences=True)) model.add(keras.layers.Dropout(rate=0.2)) model.add(keras.layers.TimeDistributed(keras.layers.Dense(units=Xnormal_train.shape[2]))) model.compile(loss='mae', optimizer='adam', metrics=['accuracy']) model.summary() ``` ## LSTM Autoencoder Model Training: ``` history = model.fit( Xnormal_train, Xnormal_train, epochs=15, batch_size=128, validation_split=0.2, shuffle = False ) plt.plot(history.history['loss'], label='train') plt.plot(history.history['val_loss'], label='validation') plt.legend(); ``` ## Model Evaluation: ``` X_train_pred = model.predict(Xnormal_train) train_mae_loss = np.mean(np.abs(X_train_pred - Xnormal_train), axis=1) train_mae_loss.shape sns.distplot(train_mae_loss, bins=10, kde=True); 
X_test_pred = model.predict(Xnormal_test)
test_mae_loss = np.mean(np.abs(X_test_pred - Xnormal_test), axis=1)
test_mae_loss.shape
sns.distplot(test_mae_loss, bins=10, kde=True);

X_normal_pred = model.predict(Xnormal)
normal_mae_loss = np.mean(np.abs(X_normal_pred - Xnormal), axis=1)
normal_mae_loss.shape
sns.distplot(normal_mae_loss, bins=10, kde=True);

X_attack_pred = model.predict(Xattack)
attack_mae_loss = np.mean(np.abs(X_attack_pred - Xattack), axis=1)
attack_mae_loss.shape
sns.distplot(attack_mae_loss, bins=10, kde=True);

score1 = model.evaluate(Xnormal_train, Xnormal_train)
print("\nIn Sample Loss(MAE) & Accuracy Scores (Train):", score1[0], score1[1], "\n")

score2 = model.evaluate(Xnormal_test, Xnormal_test)
print("\nOut of Sample Loss(MAE) & Accuracy Scores (Test):", score2[0], score2[1], "\n")

score3 = model.evaluate(Xattack, Xattack)
print("\nAttack Underway Loss(MAE) & Accuracy Scores (Anomaly):", score3[0], score3[1], "\n")
```

**Conclusion:** The reconstruction accuracy of the model on the normal train and test sets is almost the same, around 0.48, whereas the accuracy when the model is evaluated on the anomaly (attack) data is 0.0047, which is nowhere near the accuracy on normal data. The same pattern holds for the loss, Mean Absolute Error (MAE): the normal train and test sets have an MAE of around 0.04, while the anomaly (attack) data has an MAE of about 0.16. This gap in reconstruction error is what lets us detect anomalies, if any, in the data.
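The conclusion above hinges on the gap between the reconstruction errors on normal and attack traffic. One common way to turn that gap into concrete anomaly decisions (not shown in the original notebook, so treat this as an assumed illustration) is to derive a threshold from the error distribution on normal training data and flag every window whose error exceeds it. A minimal sketch, assuming the `train_mae_loss` and `attack_mae_loss` arrays computed above:

```
# Minimal thresholding sketch -- assumes train_mae_loss and attack_mae_loss from the cells above.
import numpy as np

# Collapse the per-feature errors to a single reconstruction error per window.
train_err = np.asarray(train_mae_loss).reshape(len(train_mae_loss), -1).mean(axis=1)
attack_err = np.asarray(attack_mae_loss).reshape(len(attack_mae_loss), -1).mean(axis=1)

# Threshold: mean plus three standard deviations of the error on normal training data.
threshold = train_err.mean() + 3 * train_err.std()
print("Reconstruction error threshold:", threshold)

# Windows whose reconstruction error exceeds the threshold are flagged as anomalous.
attack_flags = attack_err > threshold
print("Fraction of attack windows flagged:", attack_flags.mean())
```

The three-standard-deviation cut-off is arbitrary here; in practice it would be tuned, for example against a validation set with known labels.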
## Co-moving stars in Kepler (done in a very hacky way) (but this is probably ok because we're assuming all the stars are far away) ``` import numpy as np from astropy.table import Table, unique from astropy import units as u import astropy.coordinates as coord from astropy.time import Time from astropy.io import fits import matplotlib.pyplot as plt from tqdm import tqdm import pandas as pd ``` Load up the table of all Gaia DR2 sources within the Kepler field: ``` gaia_table_file = '../data/gaia-kepler-dustin.fits' hdul = fits.open(gaia_table_file) gaia_src_tbl = Table(hdul[1].data) min_columns = ['source_id', 'ra', 'dec', 'parallax', 'pmra', 'pmdec', 'parallax_error', 'pmra_error', 'pmdec_error', 'parallax_pmra_corr', 'parallax_pmdec_corr', 'pmra_pmdec_corr'] min_table = gaia_src_tbl[min_columns] min_table = min_table.to_pandas() #full_table = gaia_src_tbl.to_pandas() ``` Now load up the Gaia-Kepler cross-match so we can add Kepler IDs to the best-match Gaia sources: ``` kepler_table_file = '../data/kepler_dr2_1arcsec.fits' hdul = fits.open(kepler_table_file) kepler_tbl = Table(hdul[1].data) gaia_kepler_matches = kepler_tbl['kepid', 'kepler_gaia_ang_dist', 'source_id', 'nconfp', 'nkoi', 'planet?'] gaia_kepler_matches = gaia_kepler_matches.to_pandas() print(len(gaia_kepler_matches)) ``` Trim off the less-good matches so that there's one unique Gaia source per Kepler target: ``` gaia_kepler_matches.sort_values(['kepid', 'kepler_gaia_ang_dist'], inplace=True) gaia_kepler_matches.drop_duplicates('kepid', inplace=True) print(len(gaia_kepler_matches)) ``` And join the tables: ``` full_table = full_table.merge(gaia_kepler_matches, on='source_id', how='left') len(full_table) test_id = 2105885485289168768 print(full_table[full_table['source_id'] == test_id]) ``` Now load up Dustin's pairs: ``` pairs_file = '../data/matched-pairs-dustin.fits' hdul = fits.open(pairs_file) pairs = hdul[0].data pairs[:10] pairs = pd.DataFrame(data=pairs) pairs.iloc[:10] ``` Define some useful functions: ``` def make_x(star): """ returns a vector of x = [parallax, pmra, pmdec] """ names = ['parallax', 'pmra', 'pmdec'] return star.loc[names].values.astype('f') def make_xerr(star): """ returns a vector of xerr = [parallax_error, pmra_error, pmdec_error] """ err_names = ['parallax_error', 'pmra_error', 'pmdec_error'] return star.loc[err_names].values.astype('f') def ppm_check(star1, star2, sigma=5.): """ Returns True if the differences between parallax, pmra, and pmdec are all below the sigma threshold. 
""" x1 = make_x(star1) x2 = make_x(star2) if np.any(np.isnan([x1,x2])): return False xerr1 = make_xerr(star1) xerr2 = make_xerr(star2) if np.any(np.isnan([xerr1, xerr2])): return False if np.any(np.abs(x1 - x2)/np.sqrt(xerr1**2 + xerr2**2) >= sigma): return False return True def make_cov(star): """ returns covariance matrix C corresponding to x """ names = ['parallax', 'pmra', 'pmdec'] C = np.diag(make_xerr(star)**2) for i, name1 in enumerate(names): for j, name2 in enumerate(names): if j >= i: continue corr = star.loc["{0}_{1}_corr".format(name2, name1)] C[i, j] = corr * np.sqrt(C[i, i] * C[j, j]) C[j, i] = C[i, j] return C def chisq(star1, star2): """ calculates chisquared for two stars based on their parallax and 2D proper motions """ deltax = make_x(star1) - make_x(star2) cplusc = make_cov(star1) + make_cov(star2) return np.dot(deltax, np.linalg.solve(cplusc, deltax)) def check_with_primary(m, primary): if ppm_check(primary, m): return chisq(primary, m) else: return -1 ``` Now calculate a goodness-of-fit metric for each pair, skipping over the ones that don't make an initial cut: ``` %%time chisqs = np.zeros_like(pairs) - 1. for i,row in tqdm(enumerate(pairs[:100])): primary = min_table.iloc[i] row_mask = (row > -1) & (row > i) # indices in row for matches to compute matches = min_table.iloc[row[row_mask]] # ignore non-matches and duplicates if np.sum(row_mask) > 0: row_of_chisqs = matches.apply(check_with_primary, args=(primary,), axis=1) chisqs[i,row_mask] = row_of_chisqs.values ``` Save the outputs and take a look at their distribution: ``` hdu = fits.PrimaryHDU(chisqs) hdulist = fits.HDUList([hdu]) hdulist.writeto('../data/chisqs_matched-pairs.fits') hdulist.close() # optional - load up already-saved outputs #hdul = fits.open('../data/chisqs_matched-pairs.fits') #chisqs = hdul[0].data plt.hist(chisqs[(chisqs > 0.) 
& (chisqs < 10.)], bins=100) plt.xlabel('$\chi^2$', fontsize=16) plt.ylabel('# Pairs', fontsize=16) plt.yscale('log') plt.savefig('chisq_keplerpairs.png') ``` OK, now let's select the best-fit pairs and save their indicies for easy access: ``` matches_mask = (chisqs > 0) & (chisqs < 2) np.sum(matches_mask) len_inds, len_matches = np.shape(pairs) pairs_inds = np.array([np.arange(len_inds),]*len_matches).transpose() pairs_ind1s = pairs_inds[matches_mask] pairs_ind2s = pairs[matches_mask] def read_match_attr(ind1, ind2, attr): return table.iloc[ind1][attr], table.iloc[ind2][attr] print("source_ids of a pair:") print(read_match_attr(pairs_ind1s[0], pairs_ind2s[0], 'source_id')) ``` Sanity check: plot the parallax and proper motions of an identified match ``` from plot_tools import error_ellipse fs = 12 def plot_xs(i, sigma=1): star1 = table.iloc[pairs_ind1s[i]] star2 = table.iloc[pairs_ind2s[i]] x1 = make_x(star1) cov1 = make_cov(star1) x2 = make_x(star2) cov2 = make_cov(star2) fig = plt.figure(figsize=(12,4)) ax1 = fig.add_subplot(131) error_ellipse(ax1, x1[0], x1[1], cov1[:2,:2], ec='red', sigma=sigma) error_ellipse(ax1, x2[0], x2[1], cov2[:2,:2], ec='blue', sigma=sigma) ax1.set_xlim([min([x1[0], x2[0]]) - 5., max([x1[0], x2[0]]) + 5.]) ax1.set_ylim([min([x1[1], x2[1]]) - 5., max([x1[1], x2[1]]) + 5.]) ax1.set_xlabel('Parallax (mas)', fontsize=fs) ax1.set_ylabel('PM RA (mas yr$^{-1}$)', fontsize=fs) ax2 = fig.add_subplot(133) error_ellipse(ax2, x1[1], x1[2], cov1[1:,1:], ec='red', sigma=sigma) error_ellipse(ax2, x2[1], x2[2], cov2[1:,1:], ec='blue', sigma=sigma) ax2.set_xlim([min([x1[1], x2[1]]) - 5., max([x1[1], x2[1]]) + 5.]) ax2.set_ylim([min([x1[2], x2[2]]) - 5., max([x1[2], x2[2]]) + 5.]) ax2.set_xlabel('PM RA (mas yr$^{-1}$)', fontsize=fs) ax2.set_ylabel('PM Dec (mas yr$^{-1}$)', fontsize=fs) ax3 = fig.add_subplot(132) c1 = np.delete(np.delete(cov1, 1, axis=0), 1, axis=1) c2 = np.delete(np.delete(cov2, 1, axis=0), 1, axis=1) error_ellipse(ax3, x1[0], x1[2], c1, ec='red', sigma=sigma) error_ellipse(ax3, x2[0], x2[2], c2, ec='blue', sigma=sigma) ax3.set_xlim([min([x1[0], x2[0]]) - 5., max([x1[0], x2[0]]) + 5.]) ax3.set_ylim([min([x1[2], x2[2]]) - 5., max([x1[2], x2[2]]) + 5.]) ax3.set_xlabel('Parallax (mas)', fontsize=fs) ax3.set_ylabel('PM Dec (mas yr$^{-1}$)', fontsize=fs) fig.subplots_adjust(wspace = 0.5) fig.text(0.5, 0.95, 'match #{0}'.format(i), horizontalalignment='center', transform=ax3.transAxes, fontsize=fs+2) i = np.random.randint(0, len(pairs_ind1s)) print("match {0}: source_ids {1}".format(i, read_match_attr(pairs_ind1s[i], pairs_ind2s[i], 'source_id'))) plot_xs(i, sigma=3) pd.options.display.max_columns = None src1, src2 = read_match_attr(pairs_ind1s[i], pairs_ind2s[i], 'source_id') table[table['source_id'].isin([src1, src2])] print("saved chisquared = {0:.5f}".format(chisqs[pairs_ind1s[i]][np.where(pairs[pairs_ind1s[i]] == pairs_ind2s[i])[0][0]])) star1 = table.iloc[pairs_ind1s[i]] star2 = table.iloc[pairs_ind2s[i]] chisq(star1, star2) ``` Let's look at the relative luminosities of each match: ``` (gmag1, gmag2) = read_match_attr(pairs_ind1s, pairs_ind2s, 'phot_g_mean_mag') (plx1, plx2) = read_match_attr(pairs_ind1s, pairs_ind2s, 'parallax') dist1 = 1.e3/plx1 absg1 = gmag1 - 5.*(np.log10(dist1) - 1.) dist2 = 1.e3/plx2 absg2 = gmag2 - 5.*(np.log10(dist2) - 1.) 
``` Select only the ones with measured G: ``` mask = np.all(np.vstack([np.isfinite(absg1), np.isfinite(absg2)]), axis=0) good_pairs_2d = np.vstack([absg1[mask], absg2[mask]]) #good_pairs_2d = np.sort(good_pairs_2d, axis=0) # we could sort by brightness here absg1, absg2 = good_pairs_2d[0], good_pairs_2d[1] absg = np.append(absg1, absg2) hist = plt.hist(absg, bins=500) plt.xlim([-5,15]) plt.xlabel('G') plt.ylabel('# of stars') plt.savefig('absmag_hist.png') from matplotlib.colors import LogNorm plt.hist2d(absg1, absg2, bins=(1000,1000), norm=LogNorm()) cbar = plt.colorbar() cbar.ax.set_ylabel('# of stars', rotation=270) plt.xlabel('G$_{1}$') plt.ylabel('G$_{2}$') plt.xlim([-5, 15]) plt.ylim([-5, 15]) plt.savefig('absmag_pairs.pdf') ``` Now let's see how many of the matches are in the Kepler catalog, and whether any of them have planets! ``` ind1_is_kic = np.isfinite(table.iloc[pairs_ind1s]['kepid']) ind2_is_kic = np.isfinite(table.iloc[pairs_ind2s]['kepid']) one_is_kic = np.any(np.vstack([ind1_is_kic, ind2_is_kic]), axis=0) both_are_kic = np.all(np.vstack([ind1_is_kic, ind2_is_kic]), axis=0) np.sum(both_are_kic) for i1, i2 in zip(pairs_ind1s[both_are_kic], pairs_ind2s[both_are_kic]): print(read_match_attr(i1,i2,'planet?')) ```
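To make the chi-squared matching criterion above more concrete, here is a tiny demonstration with two invented stars (the numbers are illustrative only, not taken from the Gaia tables); it reuses the `make_x`, `make_cov`, and `chisq` helpers defined earlier in this notebook:

```
import numpy as np
import pandas as pd

# Two made-up stars with nearly identical parallax and proper motion (illustrative values only).
cols = ['parallax', 'pmra', 'pmdec',
        'parallax_error', 'pmra_error', 'pmdec_error',
        'parallax_pmra_corr', 'parallax_pmdec_corr', 'pmra_pmdec_corr']
star_a = pd.Series([5.00, -3.10, 7.20, 0.05, 0.08, 0.08, 0.1, 0.0, -0.2], index=cols)
star_b = pd.Series([5.02, -3.05, 7.15, 0.06, 0.09, 0.07, 0.0, 0.1, -0.1], index=cols)

# chisq combines the two covariance matrices and measures how consistent the astrometry is.
print("x_a   =", make_x(star_a))
print("x_b   =", make_x(star_b))
print("chi^2 =", chisq(star_a, star_b))   # a small value is consistent with a co-moving pair
```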
<a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/main/C4/W2/ungraded_labs/C4_W2_Lab_1_features_and_labels.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Ungraded Lab: Preparing Time Series Features and Labels In this lab, you will prepare time series data into features and labels that you can use to train a model. This is mainly achieved by a *windowing* technique where in you group consecutive measurement values into one feature and the next measurement will be the label. For example, in hourly measurements, you can use values taken at hours 1 to 11 to predict the value at hour 12. The next sections will show how you can implement this in Tensorflow. Let's begin! ## Imports Tensorflow will be your lone import in this module and you'll be using methods mainly from the [tf.data API](https://www.tensorflow.org/guide/data), particularly the [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) class. This contains many useful methods to arrange sequences of data and you'll see that shortly. ``` import tensorflow as tf ``` ## Create a Simple Dataset For this exercise, you will just use a sequence of numbers as your dataset so you can clearly see the effect of each command. For example, the cell below uses the [range()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#range) method to generate a dataset containing numbers 0 to 9. ``` # Generate a tf dataset with 10 elements (i.e. numbers 0 to 9) dataset = tf.data.Dataset.range(10) # Preview the result for val in dataset: print(val.numpy()) ``` You will see this command several times in the next sections. ## Windowing the data As mentioned earlier, you want to group consecutive elements of your data and use that to predict a future value. This is called windowing and you can use that with the [window()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#window) method as shown below. Here, you will take 5 elements per window (i.e. `size` parameter) and you will move this window 1 element at a time (i.e. `shift` parameter). One caveat to using this method is that each window returned is a [Dataset](https://www.tensorflow.org/guide/data#dataset_structure) in itself. This is a Python iterable and, as of the current version (TF 2.8), it won't show the elements if you use the `print()` method on it. It will just show a description of the data structure (e.g. `<_VariantDataset shapes: (), types: tf.int64>`). ``` # Generate a tf dataset with 10 elements (i.e. numbers 0 to 9) dataset = tf.data.Dataset.range(10) # Window the data dataset = dataset.window(size=5, shift=1) # Print the result for window_dataset in dataset: print(window_dataset) ``` If you want to see the elements, you will have to iterate over each iterable. This can be done by modifying the print statement above with a nested for-loop or list comprehension. The code below shows the list comprehension while in the lecture video, you saw the for-loop. ``` # Print the result for window_dataset in dataset: print([item.numpy() for item in window_dataset]) ``` Now that you can see the elements of each window, you'll notice that the resulting sets are not sized evenly because there are no more elements after the number `9`. You can use the `drop_remainder` flag to make sure that only 5-element windows are retained. ``` # Generate a tf dataset with 10 elements (i.e. 
numbers 0 to 9) dataset = tf.data.Dataset.range(10) # Window the data but only take those with the specified size dataset = dataset.window(size=5, shift=1, drop_remainder=True) # Print the result for window_dataset in dataset: print([item.numpy() for item in window_dataset]) ``` ## Flatten the Windows In training the model later, you will want to prepare the windows to be [tensors](https://www.tensorflow.org/guide/tensor) instead of the `Dataset` structure. You can do that by feeding a mapping function to the [flat_map()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flat_map) method. This function will be applied to each window and the results will be [flattened into a single dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#flatten_a_dataset_of_windows_2). To illustrate, the code below will put all elements of a window into a single batch then flatten the result. ``` # Generate a tf dataset with 10 elements (i.e. numbers 0 to 9) dataset = tf.data.Dataset.range(10) # Window the data but only take those with the specified size dataset = dataset.window(5, shift=1, drop_remainder=True) # Flatten the windows by putting its elements in a single batch dataset = dataset.flat_map(lambda window: window.batch(5)) # Print the results for window in dataset: print(window.numpy()) ``` ## Group into features and labels Next, you will want to mark the labels in each window. For this exercise, you will do that by splitting the last element of each window from the first four. This is done with the [map()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map) method containing a lambda function that defines the window slicing. ``` # Generate a tf dataset with 10 elements (i.e. numbers 0 to 9) dataset = tf.data.Dataset.range(10) # Window the data but only take those with the specified size dataset = dataset.window(5, shift=1, drop_remainder=True) # Flatten the windows by putting its elements in a single batch dataset = dataset.flat_map(lambda window: window.batch(5)) # Create tuples with features (first four elements of the window) and labels (last element) dataset = dataset.map(lambda window: (window[:-1], window[-1])) # Print the results for x,y in dataset: print("x = ", x.numpy()) print("y = ", y.numpy()) print() ``` ## Shuffle the data It is good practice to shuffle your dataset to reduce *sequence bias* while training your model. This refers to the neural network overfitting to the order of inputs and consequently, it will not perform well when it does not see that particular order when testing. You don't want the sequence of training inputs to impact the network this way so it's good to shuffle them up. You can simply use the [shuffle()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) method to do this. The `buffer_size` parameter is required for that and as mentioned in the doc, you should put a number equal or greater than the total number of elements for better shuffling. We can see from the previous cells that the total number of windows in the dataset is `6` so we can choose this number or higher. ``` # Generate a tf dataset with 10 elements (i.e. 
numbers 0 to 9) dataset = tf.data.Dataset.range(10) # Window the data but only take those with the specified size dataset = dataset.window(5, shift=1, drop_remainder=True) # Flatten the windows by putting its elements in a single batch dataset = dataset.flat_map(lambda window: window.batch(5)) # Create tuples with features (first four elements of the window) and labels (last element) dataset = dataset.map(lambda window: (window[:-1], window[-1])) # Shuffle the windows dataset = dataset.shuffle(buffer_size=10) # Print the results for x,y in dataset: print("x = ", x.numpy()) print("y = ", y.numpy()) print() ``` ## Create batches for training Lastly, you will want to group your windows into batches. You can do that with the [batch()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch) method as shown below. Simply specify the batch size and it will return a batched dataset with that number of windows. As a rule of thumb, it is also good to specify a [prefetch()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch) step. This optimizes the execution time when the model is already training. By specifying a prefetch `buffer_size` of `1` as shown below, Tensorflow will prepare the next one batch in advance (i.e. putting it in a buffer) while the current batch is being consumed by the model. You can read more about it [here](https://towardsdatascience.com/optimising-your-input-pipeline-performance-with-tf-data-part-1-32e52a30cac4#Prefetching). ``` # Generate a tf dataset with 10 elements (i.e. numbers 0 to 9) dataset = tf.data.Dataset.range(10) # Window the data but only take those with the specified size dataset = dataset.window(5, shift=1, drop_remainder=True) # Flatten the windows by putting its elements in a single batch dataset = dataset.flat_map(lambda window: window.batch(5)) # Create tuples with features (first four elements of the window) and labels (last element) dataset = dataset.map(lambda window: (window[:-1], window[-1])) # Shuffle the windows dataset = dataset.shuffle(buffer_size=10) # Create batches of windows dataset = dataset.batch(2).prefetch(1) # Print the results for x,y in dataset: print("x = ", x.numpy()) print("y = ", y.numpy()) print() ``` ## Wrap Up This short exercise showed you how to chain different methods of the `tf.data.Dataset` class to prepare a sequence into shuffled and batched window datasets. You will be using this same concept in the next exercises when you apply it to synthetic data and use the result to train a neural network. On to the next!
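As a wrap-up, the chained steps above are often collected into one helper function so they can be reused on real series later. The sketch below is one possible consolidation (the function and parameter names are my own, not from this lab), using only the `tf.data` methods demonstrated above:

```
import tensorflow as tf

def windowed_dataset(series, window_size=4, batch_size=2, shuffle_buffer=10):
    """Bundle the windowing, flattening, feature/label split, shuffling and batching steps."""
    dataset = tf.data.Dataset.from_tensor_slices(series)
    # Window the data, keeping only full windows of window_size + 1 elements
    dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
    # Flatten each window into a single batch (a tensor of window_size + 1 elements)
    dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
    # Split each window into features (first window_size elements) and label (last element)
    dataset = dataset.map(lambda window: (window[:-1], window[-1]))
    # Shuffle to reduce sequence bias, then batch and prefetch
    dataset = dataset.shuffle(shuffle_buffer)
    return dataset.batch(batch_size).prefetch(1)

# Same toy sequence as above: the numbers 0 to 9
for x, y in windowed_dataset(tf.range(10)):
    print("x =", x.numpy())
    print("y =", y.numpy())
```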
# 自定义调试体验文档 [![查看源文件](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/master/docs/notebook/mindspore_custom_debugging_info.ipynb) ## 概述 本文将使用[快速入门](https://gitee.com/mindspore/docs/blob/master/docs/sample_code/lenet/lenet.py)作为样例,并通过构建自定义调试函数:`Callback`、`metrics`、Print算子、日志打印、数据Dump功能等,同时将构建的自定义调试函数添加进代码中,通过运行效果来展示具体如何使用MindSpore提供给我们的自定义调试能力,帮助快速调试训练网络。 体验过程如下: 1. 数据准备。 2. 定义深度神经网络LeNet5。 3. 使用Callback回调函数构建StopAtTime类来控制训练停止时间。 4. 设置日志环境变量。 5. 启动同步Dump功能。 5. 定义训练网络并执行训练。 6. 执行测试。 7. 算子输出数据的读取与展示。 > 本次体验适用于GPU环境。 ## 数据准备 ### 数据集的下载 这里我们需要将MNIST数据集中随机取出一张图片,并增强成适合LeNet网络的数据格式(如何处理请参考[初学入门](https://www.mindspore.cn/tutorials/zh-CN/master/quick_start.html)) 以下示例代码将数据集下载并解压到指定位置。 ``` import os import requests requests.packages.urllib3.disable_warnings() def download_dataset(dataset_url, path): filename = dataset_url.split("/")[-1] save_path = os.path.join(path, filename) if os.path.exists(save_path): return if not os.path.exists(path): os.makedirs(path) res = requests.get(dataset_url, stream=True, verify=False) with open(save_path, "wb") as f: for chunk in res.iter_content(chunk_size=512): if chunk: f.write(chunk) print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(dataset_url), path)) train_path = "datasets/MNIST_Data/train" test_path = "datasets/MNIST_Data/test" download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte", train_path) download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte", train_path) download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte", test_path) download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte", test_path) ``` 下载的数据集文件的目录结构如下: ```text ./datasets/MNIST_Data ├── test │ ├── t10k-images-idx3-ubyte │ └── t10k-labels-idx1-ubyte └── train ├── train-images-idx3-ubyte └── train-labels-idx1-ubyte ``` `custom_debugging_info.ipynb`为本文文档。 ### 数据集的增强操作 下载的数据集,需要通过`mindspore.dataset`处理成适用于MindSpore框架的数据,再使用一系列框架中提供的工具进行数据增强操作来适应LeNet网络的数据处理需求。 ``` import mindspore.dataset as ds import mindspore.dataset.vision.c_transforms as CV import mindspore.dataset.transforms.c_transforms as C from mindspore.dataset.vision import Inter from mindspore import dtype as mstype def create_dataset(data_path, batch_size=32, repeat_size=1, num_parallel_workers=1): """ create dataset for train or test Args: data_path (str): Data path batch_size (int): The number of data records in each group repeat_size (int): The number of replicated data records num_parallel_workers (int): The number of parallel workers """ # define dataset mnist_ds = ds.MnistDataset(data_path) # define operation parameters resize_height, resize_width = 32, 32 rescale = 1.0 / 255.0 shift = 0.0 rescale_nml = 1 / 0.3081 shift_nml = -1 * 0.1307 / 0.3081 # define map operations trans_image_op = [ CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR), CV.Rescale(rescale_nml, shift_nml), CV.Rescale(rescale, shift), CV.HWC2CHW() ] type_cast_op = C.TypeCast(mstype.int32) # apply map operations on images mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers) mnist_ds = mnist_ds.map(operations=trans_image_op, input_columns="image", num_parallel_workers=num_parallel_workers) # apply DatasetOps buffer_size = 10000 mnist_ds = 
mnist_ds.shuffle(buffer_size=buffer_size) mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True) mnist_ds = mnist_ds.repeat(repeat_size) return mnist_ds ``` ## 定义深度神经网络LeNet5 针对MNIST数据集我们采用的是LeNet5网络,先对卷积函数和全连接函数初始化,然后`construct`构建神经网络。 ``` from mindspore.common.initializer import Normal import mindspore.nn as nn class LeNet5(nn.Cell): """Lenet network structure.""" def __init__(self): super(LeNet5, self).__init__() self.conv1 = nn.Conv2d(1, 6, 5, pad_mode="valid") self.conv2 = nn.Conv2d(6, 16, 5, pad_mode="valid") self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02)) self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02)) self.fc3 = nn.Dense(84, 10) self.relu = nn.ReLU() self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2) self.flatten = nn.Flatten() def construct(self, x): x = self.max_pool2d(self.relu(self.conv1(x))) x = self.max_pool2d(self.relu(self.conv2(x))) x = self.flatten(x) x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.fc3(x) return x ``` ## 构建自定义回调函数StopAtTime 使用回调函数的基类Callback,构建训练定时器`StopAtTime`,其基类(可在源码中找到位置在`/mindspore/nn/callback`)为: ```python class Callback(): def begin(self, run_context): pass def epoch_begin(self, run_context): pass def epoch_end(self, run_context): pass def step_begin(self, run_context): pass def step_end(self, run_context): pass def end(self, run_context): pass ``` - `begin`:表示训练开始时执行。 - `epoch_begin`:表示每个epoch开始时执行。 - `epoch_end`:表示每个epoch结束时执行。 - `step_begin`:表示每个step刚开始时执行。 - `step_end`:表示每个step结束时执行。 - `end`:表示训练结束时执行。 了解上述基类的用法后,还有一个参数`run_context`,这是一个类,存储了模型训练中的各种参数,我们在这里使用`print(cb_params.list_callback)`将其放在`end`中打印(当然也可以使用`print(cb_param)`打印所有参数信息,由于参数信息太多,我们这里只选了一个参数举例),后续在执行完训练后,根据打印信息,会简单介绍`run_context`类中各参数的意义,我们开始构建训练定时器,如下: ``` from mindspore.train.callback import Callback import time class StopAtTime(Callback): def __init__(self, run_time): super(StopAtTime, self).__init__() self.run_time = run_time*60 def begin(self, run_context): cb_params = run_context.original_args() cb_params.init_time = time.time() def step_end(self, run_context): cb_params = run_context.original_args() epoch_num = cb_params.cur_epoch_num step_num = cb_params.cur_step_num loss = cb_params.net_outputs cur_time = time.time() if (cur_time - cb_params.init_time) > self.run_time: print("epoch: ", epoch_num, " step: ", step_num, " loss: ", loss) run_context.request_stop() def end(self, run_context): cb_params = run_context.original_args() print(cb_params.list_callback) ``` ## 启动同步Dump功能 本例中使用同步Dump功能,导出每次迭代中前向传播和反向传播算子的输出数据,导出的数据方便用户在进行优化训练策略时进行分析使用,如需导出更多数据可参考[官方教程](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#dump)。 ``` import os import json abspath = os.getcwd() data_dump = { "common_dump_settings": { "dump_mode": 0, "path": abspath + "/data_dump", "net_name": "LeNet5", "iteration": "0|5-8|100-120", "input_output": 2, "kernels": ["Default/network-WithLossCell/_backbone-LeNet5/flatten-Flatten/Reshape-op118"], "support_device": [0, 1, 2, 3, 4, 5, 6, 7] }, "e2e_dump_settings": { "enable": True, "trans_flag": False } } with open("./data_dump.json", "w", encoding="GBK") as f: json.dump(data_dump, f) os.environ['MINDSPORE_DUMP_CONFIG'] = abspath + "/data_dump.json" ``` 执行完上述命令后会在工作目录上生成`data_dump.json`文件,目录结构如下: ```text . 
└── data_dump.json ``` 启动同步Dump功能需要注意: - `path`需要设置成绝对路径。例如`/usr/data_dump`可以,`./data_dump`则不行。 - `e2e_dump_settings`中的`enable`需要设置成`True`。 - 需要将生成的`data_dump.json`文件添加至系统环境变量中。 ## 设置日志环境变量 MindSpore采用`glog`来输出日志,我们这里将日志输出到屏幕: `GlOG_v`:控制日志的级别,默认值为2,即WARNING级别,对应关系如下:0-DEBUG、1-INFO、2-WARNING、3-ERROR、4-CRITICAL。本次设置为1。 `GLOG_logtostderr`:控制日志输出方式,设置为`1`时,日志输出到屏幕;值设置为`0`时,日志输出到文件。设置输出屏幕时,日志部分的信息会显示成红色,设置成输出到文件时,会在`GLOG_log_dir`路径下生成`mindspore.log`文件。 > 更多设置请参考官网:<https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html> ``` import os from mindspore import log as logger os.environ['GLOG_v'] = '1' os.environ['GLOG_logtostderr'] = '1' os.environ['GLOG_log_dir'] = 'D:/' if os.name == "nt" else '/var/log/mindspore' os.environ['logger_maxBytes'] = '5242880' os.environ['logger_backupCount'] = '10' print(logger.get_log_config()) ``` 打印信息为`GLOG_v`的等级:`INFO`级别。 输出方式`GLOG_logtostderr`:`1`表示屏幕输出。 ## 定义训练网络并执行训练 ### 定义训练网络 此过程中先将之前生成的模型文件`.ckpt`和`.meta`的数据删除,并将模型需要用到的参数配置到`Model`。 ``` from mindspore import context, Model from mindspore.nn import SoftmaxCrossEntropyWithLogits from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor # clean files if os.name == "nt": os.system('del/f/s/q *.ckpt *.meta') else: os.system('rm -f *.ckpt *.meta *.pb') context.set_context(mode=context.GRAPH_MODE, device_target="GPU") lr = 0.01 momentum = 0.9 epoch_size = 3 train_data_path = "./datasets/MNIST_Data/train" eval_data_path = "./datasets/MNIST_Data/test" model_path = "./models/ckpt/custom_debugging_info/" net_loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") repeat_size = 1 network = LeNet5() metrics = { 'accuracy': nn.Accuracy(), 'loss': nn.Loss(), 'precision': nn.Precision(), 'recall': nn.Recall(), 'f1_score': nn.F1() } net_opt = nn.Momentum(network.trainable_params(), lr, momentum) config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10) ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet", directory=model_path, config=config_ck) model = Model(network, net_loss, net_opt, metrics=metrics) ``` ### 执行训练 在构建训练网络中,给`model.train`传入了三个回调函数,分别是`ckpoint_cb`,`LossMonitor`,`stop_cb`;其分别代表如下: `ckpoint_cb`:即是`ModelCheckpoint`,设置模型保存的回调函数。 `LossMonitor`:loss值监视器,打印训练过程每步的loss值。 `stop_cb`:即是`StopAtTime`,上面刚构建的训练定时器。 我们将训练定时器`StopAtTime`设置成36秒,即`run_time=0.6`。 ``` print("============== Starting Training ==============") ds_train = create_dataset(train_data_path, repeat_size=repeat_size) stop_cb = StopAtTime(run_time=0.6) model.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(375), stop_cb], dataset_sink_mode=False) ``` 以上打印信息中,主要分为两部分: - 日志信息部分: - `[INFO]`部分信息即为日志输出的信息,由于没有Warning信息,目前主要记录的是训练的几个重要步骤。 - 回调函数信息部分: - `LossMonitor`:每步的loss值。 - `StopAtTime`:在每个epoch结束及训练时间结束时,打印当前epoch的训练总时间(单位为毫秒),每步训练花费的时间以及平均loss值,另外在训练结束时还打印了`run_context.list_callback`的信息,这条信息表示本次训练过程中使用的回调函数;另外`run_conext.original_args`中还包含以下参数: - `train_network`:网络的各类参数。 - `epoch_num`:训练的epoch数。 - `batch_num`:一个epoch的step数。 - `mode`:MODEL的模式。 - `loss_fn`:使用的损失函数。 - `optimizer`:使用的优化器。 - `parallel_mode`:并行模式。 - `device_number`:训练卡的数量。 - `train_dataset`:训练的数据集。 - `list_callback`:使用的回调函数。 - `train_dataset_element`:打印当前batch的数据集。 - `cur_step_num`:当前训练的step数。 - `cur_epoch_num`:当前的epoch。 - `net_outputs`:网络返回值。 几乎在训练中的所有重要数据,都可以从Callback中取得,所以Callback也是在自定义调试中比较常用的功能。 ## 执行测试 测试网络中我们的自定义函数`metrics`将在`model.eval`中被调用,除了模型的预测正确率外`recall`,`F1`等不同的检验标准下的预测正确率也会打印出来: ``` print("============== Starting Testing ==============") ds_eval = 
create_dataset(eval_data_path, repeat_size=repeat_size) acc = model.eval(ds_eval, dataset_sink_mode=False) print("============== Accuracy:{} ==============".format(acc)) ``` `Accuracy`部分的信息即为`metric`控制输出的信息,模型的预测值正确率和其他标准下验证(0-9)的正确率值,至于不同的验证标准计算方法,大家可以去官网搜索`mindspore.nn`查找,这里就不多介绍了。 ## 算子输出数据的读取展示 执行完成上述训练后,可以在`data_dump`文件夹中找到导出的训练数据,按照本例`data_dump.json`文件的设置,在目录`data_dump/LeNet5/device_0/`中找到每次迭代的数据,保存每次迭代的数据文件夹名称为`iteration_{迭代次数}`,每个算子输出数据的文件后缀为`.bin`,可以使用`numpy.fromfile`读取其中的数据。 本例子,在第400次迭代数据中,随机读取其中一个算子的输出文件并进行展示: ``` import numpy as np import random dump_data_path = "./data_dump/LeNet5/device_0/iteration_400/" ops_output_file = random.choice(os.listdir(dump_data_path)) print("ops name:", ops_output_file, "\n") ops_dir = dump_data_path + ops_output_file ops_output = np.fromfile(ops_dir) print("ops output value:", ops_output, "\n") print("the shape of ops output:", ops_output.shape) ``` ## 总结 本例使用了MNIST数据集,通过LeNet5神经网络进行训练,将自定义调试函数结合到代码中进行调试,同时展示了使用方法和部分功能,并使用调试函数导出需要的输出数据,来更好的认识自定义调试函数的方便性,以上就是本次的体验内容。
``` import pandas as pd import numpy as np import string ``` # Load Data ``` # load example mtcars data df = pd.read_csv('mtcars.csv') print(df.shape) df.head() ``` # Create Simple Barplot ``` from rapid_plotly import barplot ``` A simple barplot can be created by passing three dataframes to `barplot.create_graph`: * `in_data` - the height of the bars * `names` - a dataframe containing the hover text for the bars, otherwise identical to `in_data` * `errors` - a dataframe containing the half-height of the error bars, otherwise identical to `in_data` ``` # create graph data in_data = pd.DataFrame(df.groupby('cyl').mean()['mpg']) in_data.index = in_data.index.astype(int).astype(str) + ' Cylinders' print('main data:') display(in_data.head()) # generate names l = string.ascii_lowercase names = in_data.copy() f = lambda: l[np.random.randint(0,len(l))] for x in names.index: names.loc[x, 'mpg'] = f()+f() print('names:') display(names.head()) # generate error bars data errors = in_data.copy() errors['mpg'] = 2.5 print('errors:') display(errors.head()) ``` A simple graph can be quickly created to verify that the data is as expected: ``` # create input data for graph args = dict( in_data=in_data ) # view plot inline fig = barplot.create_graph(**args) ``` Now that the graph appears to be as expected, more characteristics can be added by adding them to `args`: ``` # add additional characteristics to graph title = '<b>Fuel Mileage by Number of Cylinders</b>' title += '<br><i>for mtcars data</i>' args['title'] = title args['names'] = names args['errors'] = errors args['xlab'] = 'Number of Cylinders' args['ylab'] = 'Miles Per Gallon' args['annotations'] = [{'text':'More cylinders correlates to better<br> fuel mileage', 'x':1.5, 'y':24.5, 'showarrow':False}] ``` Preview the results again: ``` # view plot inline fig = barplot.create_graph(**args) ``` After creating a graph, it can be written to an html file by passing `fig` to `barplot.output_graph`: ``` # write graph to html file fp = 'barplot-example.html' barplot.output_graph(fig, fp) # write graph to png file fp = 'barplot-example.png' barplot.output_graph(fig, fp) ``` For any of the `create_graph` functions in `rapid_plotly`, a detailed docstring is included: ``` from IPython.display import Markdown display(Markdown(barplot.create_graph.__doc__)) ``` # Create Grouped Barplot A grouped barplot compares the effect of the same treatment across multiple categories. The next graph will show the relationship between fuel mileage, the number of cylinders and the number of gears for cars. For grouped barplots, dataframes can be passed where the rows represent the x-axis categories and the columns represent each bar in each category. 
``` # create data for grouped barplot in_data = df.groupby(['cyl', 'gear']).mean()[['mpg']].reset_index() in_data = pd.pivot_table( data=in_data, columns=['gear'], index=['cyl'] ) in_data.columns = ['3 gears', '4 gears', '5 gears'] in_data = in_data.fillna(in_data.loc[8].mean()) in_data.index = in_data.index.astype(str) + ' Cylinders' print('main data:') display(in_data) # create names names = in_data.copy() for row in names.index: for col in names.columns: names.loc[row, col] = f()+f() print('names:') display(names) # create error bars errors = in_data.copy() for col in errors.columns: errors[col] = 0.75 print('errors:') display(errors) ``` Get a quick visual of the data: ``` # create args args = {'in_data':in_data} fig = barplot.create_graph(**args) ``` Now add more detail by adding elements to `args`: ``` # add additional characteristics to graph title = '<b>Fuel Mileage by Number of Cylinders and Number of Gears</b>' title += '<br><i>for mtcars data</i>' args['title'] = title args['names'] = names args['errors'] = errors args['xlab'] = 'Number of Cylinders' args['ylab'] = 'Miles Per Gallon' args['annotations'] = [{'text':'More gears correlate to better fuel<br> mileage for cars with 4 cylinder engines', 'x':0.45, 'y':28, 'ax':150, 'ay':25, 'showarrow':True}] fig = barplot.create_graph(**args) ``` This looks okay with the default colors, but the main point of the graph would be more immediately visible if the "4 Cylinder" bargroup was a different shade of color than the other bargroups. New colors were generated using [coolors.co](https://coolors.co) and tints of the new colors were created on [color-hex.com](www.color-hex.com). A new dataframe `colors` can be created in a similar fashion to `in_data`, `names` and `errors`: ``` # create new colors colors = pd.DataFrame({ '3 gears':['#9195b2']*3, '4 gears':['#969694']*3, '5 gears':['#c1c991']*3 }, index=in_data.index) colors.loc['4 Cylinders'] = ['#232C65', '#2D2D2A', '#849324'] args['colors'] = colors print('colors:') colors fig = barplot.create_graph(**args) # write graph to html file fp = 'grouped-barplot-example.html' barplot.output_graph(fig, fp) # write graph to png file fp = 'grouped-barplot-example.png' barplot.output_graph(fig, fp) ``` # Create Scatterplot ``` from rapid_plotly import scatterplot ``` First, set up some data which can be used to create an example scatterplot: ``` # create main data sl = df[['hp', 'mpg']].copy() x_data = sl[['hp']].copy() y_data = sl[['mpg']].copy() print('x values:') display(x_data.head()) print('y values:') display(y_data.head()) # create names n = (df[['cyl', 'carb', 'gear', 'wt']].apply( lambda x: '# Cylinders: %s<br># Carbs: %s<br># Gears: %s<br>Weight: %s' % (x['cyl'], x['carb'], x['gear'], x['wt']), axis=1 ) ).copy() n = n.rename('mpg') names = sl.copy() names['hp'] = n del names['mpg'] print('names: ') display(names.head()) # create colors colors = sl.copy() colors.loc[:, :] = '#C14953' del colors['mpg'] print('colors: ') display(colors.head()) ``` The `scatterplot` module takes a separate dataframe for the x values and for the y values: ``` args = {'x_data':x_data, 'y_data':y_data} fig = scatterplot.create_graph(**args) ``` Adding names, labels and colors: ``` # build graph args args['names'] = names args['colors'] = colors args['title'] = '<b>Fuel Mileage as a Function of Horsepower</b><br><i>for mtcars data</i>' args['xlab'] = 'Horsepower' args['ylab'] = 'Fuel Mileage (mpg)' # display plot fig = scatterplot.create_graph(**args) ``` The `scatterplot` module allows for passing lists of x 
and y values to plot multiple series of data on the sample plot. Generate example data which compares `hp` and `mpg` before and after a made-up fuel mileage enhancement: ``` # create main data sl = df[['hp', 'mpg']].copy() x_data = sl[['hp']].copy() x_data_treat = x_data.copy() x_data_treat['hp'] = x_data['hp'] - (np.random.normal(loc=5, scale=2, size=len(x_data))) x_data_treat.columns = ['hp_alt'] y_data = sl[['mpg']].copy() y_data_treat = y_data.copy() y_data_treat['mpg'] = y_data['mpg'] + (np.random.normal(loc=5, scale=2, size=len(x_data))) y_data_treat.columns = ['mpg_alt'] print('x1 values:') display(x_data.head()) print('x2 values:') display(x_data_treat.head()) print('y1 values:') display(y_data.head()) print('y2 values:') display(y_data_treat.head()) # create names n = (df.reset_index()[['index', 'cyl', 'carb', 'gear', 'wt']].apply( lambda x: 'Car ID %s<br># Cylinders: %s<br># Carbs: %s<br># Gears: %s<br>Weight: %s' % (x['index'], x['cyl'], x['carb'], x['gear'], x['wt']), axis=1 ) ).copy() names = sl.copy() names['hp'] = 'Before Treatment<br>' + n del names['mpg'] names['hp_alt'] = 'After Treatment<br>' + n print('names: ') display(names.head()) # create colors colors = sl.copy() colors.loc[:, :] = '#232C65' del colors['mpg'] colors['hp_alt'] = '#2D2D2A' print('colors: ') display(colors.head()) ``` Now a list of x data and a list of y data can be used to plot both cases on the same graph: ``` # build graph args args['x_data'] = [x_data, x_data_treat] args['y_data'] = [y_data, y_data_treat] args['names'] = names args['colors'] = colors args['title'] = '<b>Fuel Mileage as a Function of Horsepower</b><br><i>for mtcars data</i>' args['xlab'] = 'Horsepower' args['ylab'] = 'Fuel Mileage (mpg)' # set up callout text sl = y_data.join(y_data_treat) sl['diff'] = sl.mpg_alt - sl.mpg cid = (sl[(sl.mpg < sl.mpg_alt)] .sort_values(by=['diff'], ascending=False).index[0]) x1_loc = x_data.iloc[cid].values[0] x2_loc = x_data_treat.iloc[cid].values[0] y1_loc = y_data.iloc[cid].values[0] y2_loc = y_data_treat.iloc[cid].values[0] c1 = {'text':'Car %s before upgrade' % cid, 'x':x1_loc, 'y':y1_loc, 'showarrow':True, 'ax':150, 'ay':-25} c2 = {'text':'Car %s after upgrade' % cid, 'x':x2_loc, 'y':y2_loc, 'showarrow':True, 'ax':150, 'ay':0} text = 'Fuel mileage upgrade works for most cars' args['annotations'] = [{'text':text, 'x':200, 'y':37, 'showarrow':False}, c1, c2] # display plot fig = scatterplot.create_graph(**args) # write graph to html file fp = 'scatterplot-example.html' barplot.output_graph(fig, fp) # write graph to png file fp = 'scatterplot-example.png' barplot.output_graph(fig, fp) ``` # Create Lineplot ``` from rapid_plotly import lineplot ``` First, set up some data which can be used to create an example lineplot: ``` # create some data sdate = pd.to_datetime('2019-01-01') edate = sdate + pd.Timedelta(days=100) df = pd.DataFrame(pd.date_range(sdate, edate), columns=['date']) df['smalldata'] = np.random.normal(100, 25, size=len(df)) df['largedata'] = np.random.normal(1000, 250, size=len(df)) # create descriptive date string for hover text f = lambda row: '%s, %s %s (Q%s)' % ( # weekday row['date'].strftime('%a'), # month row['date'].strftime('%b'), # day row['date'].strftime('%d'), # quarter row['date'].quarter, ) df['date_description'] = df.apply(f, axis=1) df = (df.set_index('date')).copy() df.head() # create hovertext labels names = df.copy() f = lambda row: 'Small Data<br>%s<br>%s' % (round(row['smalldata'], 3), row['date_description']) names['smalldata'] = names.apply(f, axis=1) f = 
lambda row: 'Large Data<br>%s<br>%s' % (round(row['largedata'], 3), row['date_description']) names['largedata'] = names.apply(f, axis=1) names.head() ``` Now we can build the graph `dict` and create a graph: ``` # create graph args = dict( in_data=df[['smalldata']], names=names, title='<b>Random Data</b>', xlab='', ylab='Random Values', # By default plotly shows the value of the data # on the hover popup, but since we built descriptive # labels on the hovertext, we can disable the default # hovertext with the `hoverinfo` arg. hoverinfo='text', ) fig = lineplot.create_graph(**args) ``` ## Multiple Axis Lineplot Often we want to compare data which has the same x axis, but y axes which vary substantially in range. We can view multiple lines on the graph by simply passing multiple columns to `in_data`: ``` # create graph args['in_data'] = in_data=df[['smalldata', 'largedata']] fig = lineplot.create_graph(**args) ``` ...but this makes it hard to see the variation in the `smalldata` series, because of the range in `largedata`. We can use `alt_trace_cols` to specify traces to go on a secondary y axis, on the right: ``` args['alt_trace_cols'] = ['largedata'] args['ylab'] = 'Smaller Random Values' args['y2lab'] = 'Larger Random Values' fig = lineplot.create_graph(**args) ``` Now `smalldata` and `largedata` are more easily comparable. # Create Barplot with Line Overlay Sometimes it is desirable to have multiple graph types on the same graph, for example a barplot with a line graph. Let's first build a bar graph from the first example: ``` from rapid_plotly import barplot # load example mtcars data df = pd.read_csv('mtcars.csv') print(df.shape) df.head() # create graph data in_data = pd.DataFrame(df.groupby('cyl').mean()['mpg']) in_data.index = in_data.index.astype(int).astype(str) + ' Cylinders' # generate names l = string.ascii_lowercase names = in_data.copy() f = lambda: l[np.random.randint(0,len(l))] for x in names.index: names.loc[x, 'mpg'] = f()+f() # generate error bars data errors = in_data.copy() errors['mpg'] = 2.5 # build graph args args = dict( in_data=in_data, names=names, errors=errors, title='<b>Fuel Mileage by Number of Cylinders</b>', ylab='Miles per Gallon', xlab='', ) fig = barplot.create_graph(**args) ``` Now let's fake up some data to show the lineplot example: ``` # copy in_data and add random numbers in_data_alt = in_data.rename(columns={'mpg':'altdata'}).copy() in_data_alt['altdata'] = [100, 125, 75] in_data_alt ``` ... and build a line trace using `create_trace` form helpers: ``` from rapid_plotly import helpers aux_traces = [helpers.simple_line_trace(in_data_alt, yaxis='y2')] args['alt_y'] = True args['aux_traces'] = aux_traces fig = barplot.create_graph(**args) ```
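As with the earlier examples, the combined barplot-and-line figure can be written out with the same `output_graph` helper used above; the file names below are arbitrary placeholders.

```
# write graph to html file (file name is an arbitrary placeholder)
fp = 'barplot-line-overlay-example.html'
barplot.output_graph(fig, fp)

# write graph to png file
fp = 'barplot-line-overlay-example.png'
barplot.output_graph(fig, fp)
```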
<a href="https://colab.research.google.com/github/jasonyang429/CNN-for-CIFAR100/blob/main/Cifar100_CNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

### CIFAR100 Image Classification

This is an example of the CIFAR-100 image classification task using the fine class labels. I have used 2 different models: the first is a CNN that I designed myself, and the second uses transfer learning with the ResNet50 architecture.

My own model reached ~70% training accuracy and ~55% validation accuracy after 120 epochs, with each epoch taking about 38 seconds on a GPU. The transfer learning model with ResNet50, for which I got insights from another notebook [(link here)](https://github.com/balajikulkarni/The-one-with-Deep-Learning/blob/master/TransferLearning/TransferLearning.ipynb), reached ~65% training accuracy and ~45% validation accuracy. It took 1.5 hours for just 15 epochs, i.e. about 420 seconds per epoch. I only carried out the final evaluation on the transfer learning model, where it reached about 70% accuracy.

Firstly, import all necessary modules.

```
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import preprocess_input
import numpy as np
import matplotlib.pyplot as plt
from keras_preprocessing.image import ImageDataGenerator
import os

from tensorflow.keras.datasets import cifar100

print(tf.__version__)
```

Here, the train and test sets are loaded, part of the test set is held out as a validation set, and the labels are converted into one-hot encoded vectors.

```
(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode='fine')

y_train_oh = tf.keras.utils.to_categorical(y_train)
y_test_oh = tf.keras.utils.to_categorical(y_test)

split = int(0.8 * len(x_test))
x_val = x_test[split:]
y_val_oh = y_test_oh[split:]
x_test = x_test[:split]
y_test_oh = y_test_oh[:split]

print("Training set size:", len(x_train))
print("Validation set size:", len(x_val))
print("Test set size:", len(x_test))
print("Number of classes:", len(np.unique(y_train)))
print("Input shape:", x_train.shape)
```

Here, I visualized random images from the training set.
``` def label_name(label): labels = ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 'bed', 'bee', 'beetle', 'bicycle', 'bottle', 'bowl', 'boy', 'bridge', 'bus', 'butterfly', 'camel', 'can', 'castle', 'caterpillar', 'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 'cockroach', 'couch', 'crab', 'crocodile', 'cup', 'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 'fox', 'girl', 'hamster', 'house', 'kangaroo', 'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', 'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', 'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 'possum', 'rabbit', 'raccoon', 'ray', 'road', 'rocket', 'rose', 'sea', 'seal', 'shark', 'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 'table', 'tank', 'telephone', 'television', 'tiger', 'tractor', 'train', 'trout', 'tulip', 'turtle', 'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm'] return labels[label] fig, axs = plt.subplots(nrows=3, ncols=3, figsize=(10,10)) for i, ax in enumerate(axs.flatten()): random_img = np.random.random_integers(0, split-1) plt.sca(ax) plt.imshow(x_train[random_img], cmap=plt.get_cmap()) plt.title('The label is: {}'.format(label_name(int(y_train[random_img])))) plt.suptitle('9 Samples from training sets') plt.show() ``` Here, I have used a ResNet50 architecture for transfer learning. I have also included my own architecture in the comments below, with the optimizers, loss function and learning rate respectively. Also, I have did some image augmentation for the training sets, for better generalization purpose. From [here](https://github.com/balajikulkarni/The-one-with-Deep-Learning/blob/master/TransferLearning/TransferLearning.ipynb), the reason for UpSampling2D is to convert the CIFAR100 images' shapes from (32, 32, 3) into (224, 224, 3) which is same as the ImageNet images' shapes. 
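Since the upsampling factor determines what the pretrained base actually receives, a quick shape check can be useful. The sketch below only verifies the spatial size produced by three `UpSampling2D` layers with their default `size=(2, 2)` (as used in the model below); note that this scales 32×32 inputs to 256×256 rather than exactly the 224×224 ImageNet size mentioned above, which is worth keeping in mind when comparing shapes.

```
import tensorflow as tf

# Sanity check of the upsampling stack only (assumes the default size=(2, 2)):
# 32 -> 64 -> 128 -> 256
upsample_stack = tf.keras.Sequential([
    tf.keras.layers.UpSampling2D(),
    tf.keras.layers.UpSampling2D(),
    tf.keras.layers.UpSampling2D(),
])

dummy = tf.zeros((1, 32, 32, 3))
print(upsample_stack(dummy).shape)  # expected: (1, 256, 256, 3)
```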
The reason for setting BatchNormalization layer to trainable could be referred [here](https://github.com/keras-team/keras/pull/9965) ``` x_train = preprocess_input(x_train) x_test = preprocess_input(x_test) resnet_model = tf.keras.applications.resnet50.ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3)) for layer in resnet_model.layers: if isinstance(layer, tf.keras.layers.BatchNormalization): layer.trainable = True else: layer.trainable = False model = tf.keras.Sequential([ tf.keras.layers.UpSampling2D(), tf.keras.layers.UpSampling2D(), tf.keras.layers.UpSampling2D(), resnet_model, tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(256, activation='relu', kernel_regularizer='l2'), tf.keras.layers.Dropout(0.5), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(100, activation='softmax') ]) datagen = ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True) datagen.fit(x_train) # MODEL - 1 # ~70% train accuracy and ~55% val accuracy after 120 epochs, each epochs 38 seconds # model.compile(optimizer=tf.keras.optimizers.SGD(0.003, momentum=0.9, nesterov=True), loss='categorical_crossentropy', # metrics=['accuracy']) # model = tf.keras.Sequential([ # tf.keras.layers.Conv2D(256, (3,3), 1, input_shape=(32,32,3)), # tf.keras.layers.MaxPooling2D(2,2), # tf.keras.layers.Dropout(0.1), # tf.keras.layers.Conv2D(512, (3,3), 1, activation='relu'), # tf.keras.layers.MaxPooling2D(2,2), # tf.keras.layers.Dropout(0.2), # tf.keras.layers.Conv2D(1024, (3,3), 1, activation='relu'), # tf.keras.layers.MaxPooling2D(2,2), # tf.keras.layers.GlobalAveragePooling2D(), # tf.keras.layers.Flatten(), # tf.keras.layers.Dense(1024, activation='relu'), # tf.keras.layers.Dropout(0.2), # tf.keras.layers.Dense(512, activation='relu'), # tf.keras.layers.Dropout(0.15), # tf.keras.layers.Dense(256, activation='relu'), # tf.keras.layers.Dropout(0.1), # tf.keras.layers.Dense(128, activation='relu'), # tf.keras.layers.Dropout(0.05), # tf.keras.layers.Dense(64, activation='relu'), # tf.keras.layers.Dense(100, activation='softmax') # ]) ``` Compiling the model, and the time taken for each epochs is about 420 seconds, even on GPU. ``` BATCH_SIZE = 64 STEPS_PER_EPOCH = x_train.shape[0]//BATCH_SIZE VALID_STEPS = x_val.shape[0]//BATCH_SIZE model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(datagen.flow(x_train, y_train_oh, batch_size=BATCH_SIZE), epochs=15, steps_per_epoch=STEPS_PER_EPOCH, verbose=2, validation_data=(x_val, y_val_oh)) model.summary() ``` Visualizing the training and validation accuracy here, where the validation accuracy seems to be fluctuating. This could be further resolved by * Adding more regularization * Decreasing the model complexity, which is not ideal for using transfer learning ``` acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'b', label='Training accuracy') plt.plot(epochs, val_acc, 'r', label='Validation accuracy') plt.title('Training and validation accuracy') plt.figure() plt.plot(epochs, loss, 'b', label='Training Loss') plt.plot(epochs, val_loss, 'r', label='Validation Loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` Here, the final prediction result, which are 71% accuracy. ``` model.evaluate(x_test, y_test_oh) ```
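Beyond the aggregate accuracy, it can be helpful to inspect a few individual predictions. Below is a minimal sketch that reuses the trained `model`, the preprocessed `x_test` and the `label_name` helper from above; the number of samples shown is an arbitrary choice.

```
import numpy as np

# Predict class probabilities for a few test images and compare with the true labels
n_samples = 5
probs = model.predict(x_test[:n_samples])
pred_labels = np.argmax(probs, axis=1)
true_labels = np.argmax(y_test_oh[:n_samples], axis=1)

for pred, true in zip(pred_labels, true_labels):
    print("predicted: {:20s} true: {}".format(label_name(int(pred)), label_name(int(true))))
```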
# Introduction to Data Science # Lecture 4: Introduction to Descriptive Statistics *COMP 5360 / MATH 4100, University of Utah, http://datasciencecourse.net/* In this lecture, we'll cover - variable types - descriptive statistics in python (min, max, mean, median, std, var, histograms, quantiles) - simple plotting functions - correlation vs causation - confounding variables - descriptive vs. inferential statistics - discrete and continuous random variables (e.g.: Bernoulli, Binomial, Normal) ## Two types of variables - **categorical**: records a category (e.g., gender, color, T/F, educational level, Likert scales) - **quantitative variables**: records a numerical measurement Categorical variables might or might not have an order associated with the categories. In this lecture we'll focus on **quantitative** variables, which can be either **discrete** or **continuous**: - **discrete variables**: values are discrete (e.g., year born, counts) - **continuous variables**: values are real numbers (e.g., length, temperature, time) (Note categorical variables are always discrete.) ## Quiz! For each of the following variables, is the variable type categorical, quantitative discrete, or quantitative continuous? 1. Latitude 2. Olympic 50 meter race times 3. Olympic floor gymnastics score 4. College major 6. Number of offspring of a rat <img src="purity.png" width="90%" alt="https://xkcd.com/435/"/> ## Descriptive statistics (quantitative variables) The goal is to describe a dataset with a small number of statistics or figures Suppose we are given a sample, $x_1, x_2, \ldots, x_n$, of numerical values Some *descriptive statistics* for quantitative data are the min, max, median, and mean, $\frac{1}{n} \sum_{i=1}^n x_i$ **Goal**: Use python to compute descriptive statistics. We'll use the python package [numpy](http://www.numpy.org/) for now. ``` # First import python packages import numpy as np import pandas as pd import matplotlib.pyplot as plt #So that graphs are included in the notebook %matplotlib inline plt.rcParams['figure.figsize'] = (10, 6) plt.style.use('ggplot') ``` ## Alta monthly average snowfall, October - April Let's compute descriptive statistics for the monthly average snowfall at Alta. The snowfall data was collected from 1980 to 2014 and is available [here](https://www.alta.com/conditions/weather-observations/snowfall-history). ``` Alta_avg_month_snow = [28.50,76.77,92.00,95.40,90.85,99.66,80.00] months = ['Oct','Nov','Dec','Jan','Feb','March','Apr'] # Alta_avg_month_snow is a list of floats print(type(Alta_avg_month_snow)) print(type(Alta_avg_month_snow[0])) # months is a list of strings print(type(months)) print(type(months[0])) # how many months of data do we have? len(Alta_avg_month_snow) # compute the min and max snowfall print(np.min(Alta_avg_month_snow)) print(np.max(Alta_avg_month_snow)) # what month do these correspond to? imin = np.argmin(Alta_avg_month_snow) print(imin) print(months[imin]) print(Alta_avg_month_snow[imin]) imax = np.argmax(Alta_avg_month_snow) print(months[imax]) print(Alta_avg_month_snow[imax]) # compute the mean mean_snow = np.mean(Alta_avg_month_snow) print(mean_snow) # compute the median median_snow = np.median(Alta_avg_month_snow) print(median_snow) ``` ## Plotting quantitative data We can use the python library [matplotlib](https://matplotlib.org/) to make a simple plot of the average monthly snowfall. After all, a picture is worth a thousand words. 
``` plt.plot(np.arange(7), Alta_avg_month_snow) #Note: plot(y) uses x as 0..N-1; plot(x,y) plots x versus y #print(np.arange(7)) #plt.xticks(np.arange(7),months) #plt.plot([0,6],[mean_snow,mean_snow], label="mean avg. monthly snowfall") #plt.plot([0,6],[median_snow,median_snow], label="median avg. monthly snowfall") #plt.title("Alta average monthly snowfall") #plt.xlabel("month") #plt.ylabel("snowfall (inches)") #plt.legend(loc='lower right') plt.show() #Display all previous plots in one figure plt.plot(np.arange(7), Alta_avg_month_snow,'o') plt.show() ``` ## Population data from the 1994 census Let's compute some descriptive statistics for age in the 1994 census. We'll use the 'Census Income' dataset available [here](https://archive.ics.uci.edu/ml/datasets/Adult). ``` # use pandas to import a table of data from a website data = pd.read_table("http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", sep=",", names=("age", "type_employer", "fnlwgt", "education", "education_num", "marital", "occupation", "relationship", "race","sex","capital_gain", "capital_loss", "hr_per_week","country", "income")) print(type(data)) print(data) # export a list containing ages of people in 1994 Census ages = data["age"].tolist() print(ages) # now use numpy to compute descriptive statistics for ages print(len(ages)) print(np.min(ages)) print(np.max(ages)) print(np.mean(ages)) print(np.median(ages)) ``` ## Histograms We can also make a histogram using the python library [matplotlib](https://matplotlib.org/) to show the distribution of ages in the dataset. ``` plt.hist(ages,np.arange(0,100,4)) # Use bins defined by np.arange(0,100,4) #plt.hist(ages) # Use 20 bins plt.title("1994 Census Histogram") plt.xlabel("Age") plt.ylabel("Frequency") plt.show() ``` # Quantiles Quantiles describe what percentage of the observations in a sample have smaller value ``` print(np.percentile(ages,25)) print(np.percentile(ages,75)) ``` For this data, 25% of the people are under 28 years old The middle 50% of the data (the data between the 25% and 75% quantiles) is between 28 and 48 years old **Question**: how do I read off quantiles from a histogram? # Variance and Standard Deviation Variance and standard deviation quantify the amount of variation or dispersion of a set of data values. Mean, $\mu = \frac{1}{n} \sum_{i = 1}^n x_i$ <br> Variance $= \sigma^2 = \frac{1}{n} \sum_{i = 1}^n (x_i - \mu)^2$ <br> Std. dev. $= \sigma$ ``` print(np.var(ages)) print(np.std(ages)) ``` In terms of the histogram,... <img src="SmallLargeStandDev.png" width="50%" alt="https://en.wikipedia.org/wiki/Correlation_and_dependence#/media/File:Correlation_examples2.svg"> ## Covariance and Correlation Covariance and correlation measure of how much two variables change together. The *covariance* of two variables $x$ and $y$ is given by $$ cov(x,y) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu_x)(y_i - \mu_y), $$ where + $\mu_x$ is mean of $x_1,x_2,\ldots,x_n$ and + $\mu_y$ is mean of $y_1,y_2,\ldots,y_n$. The *correlation coefficient* of two variables $x$ and $y$ is given by $$ corr(x,y) = \frac{cov(x,y)}{\sigma_x \sigma_y}, $$ where + $\sigma_x$ is std. dev. of $x_1,x_2,\ldots,x_n$ and + $\sigma_y$ is std. dev. of $y_1,y_2,\ldots,y_n$. 
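To connect these formulas to code, the covariance and correlation can be computed directly from the definitions and checked against numpy's built-in functions. A minimal sketch, using `ages` together with the census `education_num` column as an example pairing (any two quantitative columns would do):

```
import numpy as np

def covariance(x, y):
    # mean of (x_i - mu_x) * (y_i - mu_y)
    x, y = np.asarray(x), np.asarray(y)
    return np.mean((x - x.mean()) * (y - y.mean()))

def correlation(x, y):
    # covariance divided by the product of the standard deviations
    return covariance(x, y) / (np.std(x) * np.std(y))

edu = data["education_num"].tolist()
print(covariance(ages, edu))
print(correlation(ages, edu))

# compare with numpy's built-ins; np.cov defaults to the n-1 normalization,
# so bias=True is used here to match the formula above
print(np.cov(ages, edu, bias=True)[0, 1])
print(np.corrcoef(ages, edu)[0, 1])
```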
<br>
<br>
Here is a plot of several pairs of variables, together with the correlation coefficient:

<img src="Correlation_examples2.svg" width="90%" alt="https://en.wikipedia.org/wiki/Correlation_and_dependence#/media/File:Correlation_examples2.svg">

In the 1994 census data, let's use numpy to find the correlation between age and hr_per_week.

```
hr = data["hr_per_week"].tolist()

plt.hist2d(ages,hr,bins=25)
plt.title("Age vs. Hours worked per week")
plt.xlabel("Age")
plt.ylabel("Hours worked per week")
plt.show()

plt.plot(ages,hr,'o')
plt.show()

np.corrcoef(ages,hr)
```

## Correlation vs Causation

<img src="correlation.png" width="100%" alt="https://xkcd.com/552/">

## Spurious Correlations I (www.tylervigen.com)

<img src="Conf1.png" width="100%" alt="www.tylervigen.com">

## Spurious Correlations II (www.tylervigen.com)

<img src="Conf2.png" width="100%" alt="www.tylervigen.com">

## Confounding: example

Suppose we are given city statistics covering a four-month summer period. We observe that swimming pool deaths tend to increase on days when more ice cream is sold. Should we conclude that ice cream is the killer?

## Confounding: example cont.

No! As astute analysts, we identify average daily temperature as a confounding variable: on hotter days, people are more likely to both buy ice cream and visit swimming pools. Regression methods can be used to statistically control for this confounding variable, eliminating the direct relationship between ice cream sales and swimming pool deaths.
<br>
<br>
**source**: Jacob Westfall and Tal Yarkoni, Statistically Controlling for Confounding Constructs Is Harder than You Think, PLOS ONE (2016). [link](https://doi.org/10.1371/journal.pone.0152719)

## In Class Activity in Breakout Rooms

Open jupyter notebook 04-DescriptiveStatistics_Activity.ipynb.

## Descriptive vs. Inferential Statistics

Descriptive statistics quantitatively describe or summarize features of a dataset. Inferential statistics attempts to learn about the population from which the data was sampled.

**Example**: The week before a US presidential election, it is not possible to ask every voter who they intend to vote for. Instead, a relatively small number of individuals are surveyed. The *hope* is that we can determine the population's preferred candidate from the surveyed results.

Often, we will model a population characteristic as a *probability distribution*. *Inferential statistics* is deducing properties of an underlying probability distribution from sampled data.

## Bernoulli Distribution

The Bernoulli distribution, named after Jacob Bernoulli, is the probability distribution of a random variable which takes the value 1 (success) with probability $p$ and the value 0 (failure) with probability $q=1-p$.

The Bernoulli distribution with $p=0.5$ (implying $q=0.5$) describes a 'fair' coin toss where 1 and 0 represent "heads" and "tails", respectively. If the coin is unfair, then we would have that $p\neq 0.5$.

We can use python to sample from the Bernoulli probability distribution.

```
import scipy as sc
from scipy.stats import bernoulli, binom, norm

n = 1000;
coin_flips = bernoulli.rvs(p=0.5, size=n)
print(coin_flips)
```

How many heads did we get? We just count the number of 1's.

```
print(sum(coin_flips))
print(sum(coin_flips)/n)
```

What if we flip the coin more times?
``` n = 1000000 coin_flips = bernoulli.rvs(p=0.5, size=n) print(sum(coin_flips)/n) ``` Some facts about Bernoulli variables: * mean is p * variance is p(1-p) ## Binomial distribution The binomial distribution, with parameters $n$ and $p$, is a discrete probability distribution describing the total number of "successes" in $n$ Bernoulli random variables. For simplicity, take $p=0.5$ so that the Bernoulli distribution describes the outcome of a coin. For each flip, the probability of heads is $p$ (so the probability of tails is $q=1-p$). But we don't keep track of the individual flips. We only keep track of how many heads/tails there were in total. So, the binomial distribution can be thought of as summarizing a bunch of (independent) Bernoulli random variables. The following code is equivalent to flipping a fair (p=0.5) coin n=10 times and counting the number of heads and then repeating this process 1,000,000 times. ``` p = 0.5 n = 10 bin_vars = binom.rvs(n=n,p=p,size=1000000) print(bin_vars[:100]) bins=np.arange(12)-.5 print(bins) plt.hist(bin_vars, bins=bins,density=True) plt.title("A histogram of binomial random variables") plt.xlim([-.5,10.5]) plt.show() ``` Some facts about the binomial distribution: * The mean is $np$ * The variance is $np(1-p)$ ## Discrete random variables and probability mass functions The Binomial and Bernoulli random variables are examples of *discrete random variables* since they can take only discrete values. A Bernoulli random variable can take values $0$ or $1$. A binomial random variable can only take values $$ 0,1,\ldots, n. $$ One can compute the probability that the variable takes each value. This is called the *probability mass function*. For a Bernoulli random variable, the probability mass function is given by $$ f(k) = \begin{cases} p & k=1 \\ 1-p & k = 0 \end{cases} $$ For a binomial random variable, the probability mass function is given by $$ f(k) = \binom{n}{k} p^k (1-p)^{n-k}. $$ Here, $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ is the number of ways to arrange the $k$ heads among the $n$ flips. For a fair coin, we have $p=0.5$ and $f(k) = \binom{n}{k} \frac{1}{2^n}$. This is the number of ways to arrange $k$ heads among $n$ outcomes divided by the total number of outcomes. The probability mass function can be plotted using the scipy library as follows. ``` f = lambda k: binom.pmf(k, n=n,p=p) x = np.arange(n+1); plt.plot(x, f(x),'*-') plt.title("Probability mass function for a Binomial random variable") plt.xlim([0,n]) plt.show() ``` Observe that the probability mass function looks very much like the histogram plot! (not a coincidence) ## Concept check **Question**: what is a discrete random variable? A *discrete random variable (r.v.)* is an abstraction of a coin. It can take on a *discrete* set of possible different values, each with a preassigned probability. We saw two examples of discrete random variables: Bernoulli and binomial. A Bernoulli r.v. takes value $1$ with probability $p$ and $0$ with probability $1-p$. A binomial r.v. takes values $0,1,\ldots,n$, with a given probability. The probabilities are given by the probability mass function. This function looks just like the histogram for a sample of a large number of random variables. You can use the same descriptive statistics to describe a discrete random value (min, max, mean, variance, etc..). **Question**: what is the random variable that describes a fair dice? the sum of two fair dice? 
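As a sketch of the closing question (not part of the original lecture), the following simulates a fair die and the sum of two fair dice with numpy and plots their empirical probability mass functions; a single die is uniform on 1–6, while the sum of two dice takes values 2–12 and peaks at 7.

```
import numpy as np
import matplotlib.pyplot as plt

rolls = np.random.randint(1, 7, size=100000)          # one fair die: uniform on {1,...,6}
sums = rolls + np.random.randint(1, 7, size=100000)   # sum of two fair dice: values 2,...,12

fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(10, 4))
ax0.hist(rolls, bins=np.arange(0.5, 7.5), density=True)
ax0.set_title("One fair die")
ax1.hist(sums, bins=np.arange(1.5, 13.5), density=True)
ax1.set_title("Sum of two fair dice")
plt.show()
```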
## Normal (Gaussian) distribution

Roughly speaking, normal random variables are described by a "bell curve". The curve is centered at the mean, $\mu$, and has width given by the standard deviation, $\sigma$.

```
mu = 0     # mean
sigma = 1  # standard deviation

x = np.arange(mu-4*sigma,mu+4*sigma,0.001);
pdf = norm.pdf(x,loc=mu, scale=sigma)
# Here, I could have also written
# pdf = 1/(sigma * sc.sqrt(2 * sc.pi)) * sc.exp( - (x - mu)**2 / (2 * sigma**2))

plt.plot(x, pdf, linewidth=2, color='k')
plt.show()
```

## Continuous random variables and probability density functions

A normal random variable is an example of a *continuous* random variable. A normal random variable can take any real value, but some numbers are more likely than others.

More formally, we say that the *probability density function (PDF)* for the normal (Gaussian) distribution is
$$
f(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
$$
where $\mu$ is the mean and $\sigma^2$ is the variance.

What this means is that the probability that a normal random variable will take values in the interval $[a,b]$ is given by
$$
\int_a^b f(x) dx.
$$
This is just the area under the curve for this interval. For $a=\mu-\sigma$ and $b = \mu+\sigma$, we plot this below.

```
plt.plot(x, pdf, linewidth=2, color='k')
x2 = np.arange(mu-sigma,mu+sigma,0.001)
plt.fill_between(x2, y1= norm.pdf(x2,loc=mu, scale=sigma), facecolor='red', alpha=0.5)
plt.show()
```

One can check that
$$
\int_{-\infty}^\infty f(x) dx = 1
$$
which just means that the probability that the random variable takes a value between $-\infty$ and $\infty$ is one.

This integral can be computed using the *cumulative distribution function* (CDF)
$$
F(x) = \int_{-\infty}^x f(t) dt = \text{Prob. random variable }\leq x .
$$
We have that
$$
\int_a^b f(x) dx = F(b) - F(a)
$$

```
norm.cdf(mu+sigma, loc=mu, scale=sigma) - norm.cdf(mu-sigma, loc=mu, scale=sigma)
```

This means that 68% of the time, this normal random variable will have values between $\mu-\sigma$ and $\mu+\sigma$. You used to have to look these values up in a table!

Let's see what it looks like if we sample 1,000,000 normal random variables and then plot a histogram.

```
norm_vars = norm.rvs(loc=mu,scale=sigma,size=1000000)
print(norm_vars[:100])

plt.hist(norm_vars, bins=100,density=True)
plt.plot(x, pdf, linewidth=2, color='k')
plt.title("A histogram of normal random variables")
plt.show()
```

When $n$ is large, the histogram of the sampled variables looks just like the probability distribution function!

# Time permitting: explore categorical variables

Note: the descriptive statistics discussed in this lecture can only be computed for quantitative variables. Similarly, histograms, pdf's, and cdf's only apply to quantitative variables.

Recall the data frame we previously made from the 1994 census data:

```
#print(data)
print(data["marital"].value_counts(),"\n")
print(data["marital"].value_counts(normalize=True),"\n")
print(data["sex"].value_counts(normalize=True),"\n")
print(data["income"].value_counts(normalize=True),"\n")
data.groupby(['sex'])['income'].value_counts(normalize=True)
```

# Concept recap

- variable types
- descriptive statistics in python (min, max, mean, median, std, var, histograms, quantiles)
- correlation vs causation
- confounding variables
- descriptive vs. inferential statistics
- discrete and continuous random variables (e.g.: Bernoulli, Binomial, Normal)

## Looking ahead: Hypothesis testing
``` %load_ext autoreload %autoreload 2 import warnings import pandas as pd import numpy as np import os import sys # error msg, add the modules import operator # sorting from math import * import matplotlib.pyplot as plt sys.path.append('../') warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning) import prep from warp import * trace_file = 'trace.csv' metrics_file = 'metrics.csv' # read trace and metrics into dataframe, from generated trace file df_kernel_trace, df_metrics = prep.Prep_trace_metrics(trace_file, metrics_file) unique_kerns = [] for index, kenname in enumerate(df_kernel_trace.Name.unique()): if index > 0 and 'memcpy' not in kenname: head = kenname.split('(')[0] if '<' in head: head = head.split('<')[0] #print head if 'void' in head: head = head.split(' ') #print head[1] head = head[1] if head not in unique_kerns: unique_kerns.append(head) print head target_kern_name = 'shfl_scan_test' sass_result = target_kern_name + '.sm_52.sass.result' kernel_stat_out ='kernelStat_' + target_kern_name + '.csv' out_columns = prep.init_df_columns(df_metrics) df_kern = pd.DataFrame(columns=out_columns) # current kernel trace df_current = prep.GenCurKernInfo(df_kernel_trace, df_metrics,target_kern_name, out_columns) # add current trace df_kern = df_kern.append(df_current, ignore_index=True) warp_inst = WarpInst() warp_inst.run(df_kern, sass_result) df_kern.insert(1, 'm2c_ratio', warp_inst.mem_clks / warp_inst.cmp_clks) df_kern.to_csv(kernel_stat_out, index=False, encoding='utf-8') df_kern target_kern_name = 'uniform_add' sass_result = target_kern_name + '.sm_52.sass.result' kernel_stat_out ='kernelStat_' + target_kern_name + '.csv' out_columns = prep.init_df_columns(df_metrics) df_kern = pd.DataFrame(columns=out_columns) # current kernel trace df_current = prep.GenCurKernInfo(df_kernel_trace, df_metrics,target_kern_name, out_columns) # add current trace df_kern = df_kern.append(df_current, ignore_index=True) warp_inst = WarpInst() warp_inst.run(df_kern, sass_result) df_kern.insert(1, 'm2c_ratio', warp_inst.mem_clks / warp_inst.cmp_clks) df_kern.to_csv(kernel_stat_out, index=False, encoding='utf-8') df_kern target_kern_name = 'shfl_intimage_rows' sass_result = target_kern_name + '.sm_52.sass.result' kernel_stat_out ='kernelStat_' + target_kern_name + '.csv' out_columns = prep.init_df_columns(df_metrics) df_kern = pd.DataFrame(columns=out_columns) # current kernel trace df_current = prep.GenCurKernInfo(df_kernel_trace, df_metrics,target_kern_name, out_columns) # add current trace df_kern = df_kern.append(df_current, ignore_index=True) warp_inst = WarpInst() warp_inst.run(df_kern, sass_result) df_kern.insert(1, 'm2c_ratio', warp_inst.mem_clks / warp_inst.cmp_clks) df_kern.to_csv(kernel_stat_out, index=False, encoding='utf-8') df_kern target_kern_name = 'shfl_vertical_shfl' sass_result = target_kern_name + '.sm_52.sass.result' kernel_stat_out ='kernelStat_' + target_kern_name + '.csv' out_columns = prep.init_df_columns(df_metrics) df_kern = pd.DataFrame(columns=out_columns) # current kernel trace df_current = prep.GenCurKernInfo(df_kernel_trace, df_metrics,target_kern_name, out_columns) # add current trace df_kern = df_kern.append(df_current, ignore_index=True) warp_inst = WarpInst() warp_inst.run(df_kern, sass_result) df_kern.insert(1, 'm2c_ratio', warp_inst.mem_clks / warp_inst.cmp_clks) df_kern.to_csv(kernel_stat_out, index=False, encoding='utf-8') df_kern ```
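The four per-kernel cells above repeat the same prepare/run/export steps; as a sketch (not in the original notebook), they could be wrapped in a single helper that loops over the kernel names, assuming the same `prep` and `WarpInst` API already imported above.

```
# Sketch: collapse the repeated per-kernel cells into one loop (same calls as above).
def export_kernel_stats(kernel_names, df_kernel_trace, df_metrics):
    for name in kernel_names:
        sass_result = name + '.sm_52.sass.result'
        out_columns = prep.init_df_columns(df_metrics)
        df_kern = pd.DataFrame(columns=out_columns)
        # current kernel trace
        df_current = prep.GenCurKernInfo(df_kernel_trace, df_metrics, name, out_columns)
        df_kern = df_kern.append(df_current, ignore_index=True)
        warp_inst = WarpInst()
        warp_inst.run(df_kern, sass_result)
        df_kern.insert(1, 'm2c_ratio', warp_inst.mem_clks / warp_inst.cmp_clks)
        df_kern.to_csv('kernelStat_' + name + '.csv', index=False, encoding='utf-8')

export_kernel_stats(['shfl_scan_test', 'uniform_add', 'shfl_intimage_rows', 'shfl_vertical_shfl'],
                    df_kernel_trace, df_metrics)
```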
``` import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import lightgbm as lgb import matplotlib.pyplot as plt %matplotlib inline from sklearn.decomposition import PCA from sklearn.pipeline import Pipeline from sklearn.model_selection import StratifiedKFold from sklearn.metrics import roc_auc_score from time import gmtime, strftime import optuna def cv(param, X, y, X_test=None): print(param) if X_test is not None: n_splits = 10 n_estimators = 1000000 decom = PCA(n_components=param['n_components'], random_state=7485) predictions = np.zeros(len(X_test)) else: n_splits = 5 n_estimators = 300 predictions = None folds = StratifiedKFold(n_splits=n_splits, random_state = 7485, shuffle=True) oof = np.zeros(len(X)) pca_pipeline = Pipeline([ ('decomposition', PCA(n_components=param['n_components'], random_state=7485)), ('model', lgb.LGBMModel(n_estimators=n_estimators, **param)) ]) for fold_, (trn_idx, val_idx) in enumerate(folds.split(X.values, y.values)): print("Fold {}".format(fold_)) if X_test is not None: decom.fit(X=X.iloc[trn_idx].values, y=y.iloc[trn_idx].values) eval_set = [ (decom.transform(X.iloc[trn_idx]), y.iloc[trn_idx]), (decom.transform(X.iloc[val_idx]), y.iloc[val_idx]) ] clf = pca_pipeline.fit( X=X.iloc[trn_idx].values, y=y.iloc[trn_idx].values, model__eval_set=eval_set, model__eval_metric=param['eval_metric'], model__early_stopping_rounds = 1000, model__verbose=1000) predictions += clf.predict(X_test, num_iteration=clf.named_steps['model'].best_iteration_) / folds.n_splits else: clf = pca_pipeline.fit(X=X.iloc[trn_idx].values, y=y.iloc[trn_idx].values) oof[val_idx] = clf.predict(X.iloc[val_idx], num_iteration=clf.named_steps['model'].best_iteration_) score = roc_auc_score(y, oof) print("CV score: {:<8.5f}".format(score)) if X_test is not None: return predictions else: return score train_df = pd.read_csv('train.csv') test_df = pd.read_csv('test.csv') param = { #'n_components': 75, 'objective': 'binary', 'boosting_type': 'gbdt', 'eval_metric': 'auc', 'n_jobs': -1, 'verbosity': 0, #'num_leaves': 10, #'min_child_samples': 80, #'colsample_bytree': 0.05, 'subsample_freq': 5, #'subsample': 0.4, #'min_child_weight': 10.0, 'learning_rate': 0.01, 'max_depth': -1, } num_data, num_feature = train_df.shape MAX_NUM_LEAVES = min(100, num_feature//10) MAX_MIN_DATA_IN_LEAF = 100 print(num_data, num_feature) def objective(trial): param['n_components'] = trial.suggest_int('n_components', 1, X.shape[1]) param['num_leaves'] = trial.suggest_int('num_leaves', 2, MAX_NUM_LEAVES) param['min_child_samples'] = trial.suggest_int('min_child_samples', 0, MAX_MIN_DATA_IN_LEAF) param['min_child_weight'] = trial.suggest_loguniform('min_child_weight', 1e-5, 20) param['colsample_bytree'] = trial.suggest_uniform('colsample_bytree', 0.01, 1.0) #param['subsample_freq'] = trial.suggest_int('subsample_freq', 0, 5) #param['learning_rate'] = trial.suggest_loguniform('learning_rate', 0.001, 0.3) if param['subsample_freq'] > 0: param['subsample'] = trial.suggest_uniform('subsample', 0.01, 1.0) if param['boosting_type'] == 'dart': param['drop_rate'] = trial.suggest_loguniform('drop_rate', 1e-8, 1.0) param['skip_drop'] = trial.suggest_loguniform('skip_drop', 1e-8, 1.0) if param['boosting_type'] == 'goss': param['top_rate'] = trial.suggest_uniform('top_rate', 0.0, 1.0) param['other_rate'] = trial.suggest_uniform('other_rate', 0.0, 1.0 - param['top_rate']) score = cv(param, X, y) return 1-score train_df.head() train_df.target.value_counts(normalize=True) test_df.head() train_df.dtypes 
train_df.isnull().sum().sort_values(ascending=False)[:10] # Drop Different Columns from train and test print('\nTrain and Test Datasets have the same columns?:', train_df.drop('target',axis=1).columns.tolist()==test_df.columns.tolist()) print("\nVariables not in test but in train : ", set(train_df.drop('target',axis=1).columns).difference(set(test_df.columns))) dif = list(set(train_df.drop('target',axis=1).columns).difference(set(test_df.columns))) # Prepare data X = train_df.drop(['ID_code', 'target'], axis=1) X_test = test_df.drop(['ID_code'], axis=1) y = train_df.target print(len(X), len(X_test)) #pca = PCA(n_components=200) #pca.fit(X.values) #plt.bar([n for n in range(1, len(pca.explained_variance_ratio_)+1)], pca.explained_variance_ratio_) #ev_ratio = pca.explained_variance_ratio_ #ev_ratio = np.hstack([0,ev_ratio.cumsum()]) #plt.plot(ev_ratio) #plt.show() # Parameter Tuning study = optuna.create_study() study.optimize(objective, n_trials=100) print('Number of finished trials: {}'.format(len(study.trials))) print('Best trial:') trial = study.best_trial best_params = study.best_params print(' Value: {}'.format(trial.value)) print(' Params: ') for key, value in trial.params.items(): print(' {}: {}'.format(key, value)) # Building model using BEST parameters, then predict test data print("pca_lgb_model ...") param.update(best_params) param['verbosity'] = 1 prediction = cv(param, X, y, X_test) print("...Done") # Save test_df['target'] = prediction submission_string = 'pca_gbm_' + strftime("%Y-%m-%d %H:%M:%S", gmtime()) + '.csv' test_df.loc[:, ['ID_code', 'target']].to_csv(submission_string, index=False) ```
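As an optional follow-up sketch (not in the original notebook), Optuna can summarize all finished trials as a DataFrame, which makes it easy to review the best-performing parameter sets beyond the single best trial; `trials_dataframe()` is part of the Optuna API, though its column layout varies between versions.

```
# Inspect the tuning history; assumes `study` from the optimization cell above.
trials_df = study.trials_dataframe()
# The objective returned 1 - AUC, so smaller values are better.
print(trials_df.sort_values('value').head(10))
```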
### Predict flower species from measurements ``` # Load libraries import pandas from pandas.tools.plotting import scatter_matrix import matplotlib.pyplot as plt from sklearn import model_selection from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC from pickle import dump from pickle import load import seaborn as sns sns.set(); # Load dataset url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data" names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class'] dataset = pandas.read_csv(url, names=names) # Peek at the Data dataset.head(20) # Statistical Summary # This includes the count, mean, the min and max values as well as some percentiles. print(dataset.describe()) ``` #### Data Visualization ``` # Univariate [box and whisker plots] # Given that the input variables are numeric, we can create box and whisker plots of each. # This gives a clearer idea of the distribution of the input attributes dataset.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False) plt.show() # histograms dataset.hist() plt.show() # It looks like perhaps two of the input variables have a Gaussian distribution. # This is useful to note as we can use algorithms that can exploit this assumption ## Multivariate Plots #Now we can look at the interactions between the variables. #First, let’s look at scatterplots of all pairs of attributes. #This can be helpful to spot structured relationships between input variables. # scatter plot matrix scatter_matrix(dataset) plt.show() # Note the diagonal grouping of some pairs of attributes. This suggests a high correlation and a predictable relationship. ``` ## Evaluate Some Algorithms ### Predict species from flower measurements ``` # Split-out validation dataset array = dataset.values X = array[:,0:4] Y = array[:,4] validation_size = 0.20 seed = 7 X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed) # use 10-fold cross validation to estimate accuracy. 
# Test options and evaluation metric
seed = 7
scoring = 'accuracy'

# Spot Check Algorithms
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))

# evaluate each model in turn
results = []
names = []
for name, model in models:
    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

# Compare Algorithm Accuracy
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()

# Make predictions on validation dataset
knn = KNeighborsClassifier()
knn.fit(X_train, Y_train)
predictions = knn.predict(X_validation)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))

# save the fitted KNN model (the one evaluated above) to disk;
# the original `dump(model, ...)` would have saved the unfitted loop variable instead
filename = 'finalized_model.sav'
dump(knn, open(filename, 'wb'))
```
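The notebook imports `load` from `pickle` but never uses it; as a short sketch, the saved model can be read back and re-scored on the validation set like this (assuming the file written above exists).

```
# Load the persisted model from disk and confirm it still scores the validation data
loaded_model = load(open(filename, 'rb'))
result = loaded_model.score(X_validation, Y_validation)
print(result)
```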
# Computing a 2D histogram using Histogram object Here we use a 2d histogram to compute a series of 1d histograms in parallel from different channels. The image generator simulates the counts measured by a series of Nc counting detectors. The essential pieces for the call are: ```h = Histogram([Nh,0,mu0*3],[Nc,0,Nc+1])``` and ```h.fill(channelvals,channelinds)``` ``` %%latex The counts per channel should follow a Poisson probability distribution: \begin{eqnarray*} P(k) & = & e^{-\lambda} \frac{\lambda^k}{k!} \end{eqnarray*} The $\lambda$ parameter is also allowed to fluctuate on some uniform distribution: \begin{eqnarray*} P_\lambda(\lambda) & = & \frac{1}{2 \Delta} \Theta(|1- \frac{\mu-\mu_0}{\Delta}|)\\ \Theta(x) & = & \left \{ \begin{array}{c c} 1 & |X| < 0\\ 0 & |X|>0 \end{array}\right. \end{eqnarray*} # Import libraries import time as ttime import numpy as np from matplotlib import pyplot as plt %matplotlib notebook #The Histogram object from skbeam.core.accumulators.histogram import Histogram # Define intial parameters and image generator # numer of repeats N = 1000 # number of channelvals Nc = 100 # avg intensity mu0 = 1e3 # percentage deviation (from uniform distribution of counts) dev = 0 # Number of bins for histogram Nh = 1000#3*mu0 # the channel indices to create a one to one correspondence between # channels and bins (i.e. no binning desired in this dimension) channelinds = np.arange(Nc).astype(float) # line intensity generator def gen(N, Nc, mu0, dev): ''' Generate a sequence of N images with Nc channels. The values are returned as counts with Poisson probability distribution that depends on mu0, which is a stochastic parameter that varies uniformly per channel from mu0*(1-dev) to mu0*(1+dev) ''' channelvals = np.ones(Nc) # flatfield flatfield = 1+(np.random.random(Nc)-.5)*dev for i in range(N): yield np.random.poisson(mu0*channelvals*flatfield).astype(float) # Initialize histogram, give tuples of [Nbins, Low, High] for each dimension h = Histogram([Nh,0,mu0*3],[Nc,0,Nc+1]) # This is meant to store the waterfall plot for demonstration purposes img = np.zeros((N, Nc)) # the extent of the histogram image extent=[h.centers[1][0],h.centers[1][-1],h.centers[0][-1],h.centers[0][0]] # set up the figure fig,((ax0,ax1),(ax2,ax3)) = plt.subplots(nrows=2,ncols=2) # loop over images from generator and accumulate into the histogram for i, channelvals in enumerate(gen(N, Nc, mu0, dev)): # fill histogram h.fill(channelvals,channelinds) # store data in a waterfall plot img[i] = channelvals # plot results, but only intermittently, so the updating does not # take a long time if i % 10 == 0: # the waterfall plot ax0.cla() ax0.imshow(img) ax0.set_aspect('auto') ax0.set_xlabel("channelvals") ax0.set_ylabel("time") # The instantaneous counts ax1.cla() ax1.plot(channelvals,'r') ax1.set_xlabel("channel index") ax1.set_ylabel("counts") # the histogram ax2.cla() ax2.imshow(h.values,extent=extent) ax2.set_aspect('auto') ax2.set_ylabel("counts") ax2.set_xlabel("channelvals") # A cross section of the 2D histogram array ax3.cla() ax3.plot(h.centers[0],h.values[:,10]) ax3.set_xlabel("counts") ax3.set_ylabel("frequency") fig.canvas.draw() # needed for plotting to update properly ttime.sleep(.01) ```
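As a quick consistency check (a sketch, not part of the original example), the per-channel mean count can be recovered from the accumulated 2D histogram using the same `h.values` and `h.centers` attributes used in the plotting code above, and it should come out close to `mu0`.

```
# Estimate the mean counts for one channel from the accumulated histogram.
channel = 10
weights = h.values[:, channel]   # frequency of each count bin for this channel
counts = h.centers[0]            # bin centers along the counts axis
mean_estimate = (weights * counts).sum() / weights.sum()
print("estimated mean for channel %d: %.1f (expected about %.1f)" % (channel, mean_estimate, mu0))
```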
# Master Pytorch Chapter 1 : Tensor

https://9bow.github.io/PyTorch-tutorials-kr-0.3.1/index.html

## Tensor
- A tensor is a multi-dimensional matrix, a data format that can contain elements of various types.

```
import torch
print(torch.__version__)

x = torch.Tensor(5, 3)
y = torch.empty(5, 3)
print(x, '\n')
print(y, '\n')
print(type(x), x.dtype, type(y), y.dtype)

# Create a randomly initialized matrix
x = torch.rand(5, 3)
print(x)
print(type(x), x.dtype)
print(x.size())

# Create a matrix filled with zeros
x = torch.zeros(5, 3)
print(x)
print(x.dtype)

# Create a matrix of zeros with data type long
x = torch.zeros(5, 3, dtype = torch.long)
print(x)
print(x.dtype)

# Create matrices with various data types
x = torch.zeros(1, 5, dtype = torch.int8)
y = torch.zeros(1, 5, dtype = torch.uint8)
z = torch.zeros(1, 5, dtype = torch.float)
a = torch.zeros(1, 5, dtype = torch.double)
b = torch.zeros(1, 5, dtype = torch.int)
print(x)
print(y)
print(z)
print(a)
print(b)

# Create a matrix directly from given values
x = torch.Tensor([1, 5])
print(x)
print(type(x), x.dtype)

# Create a new matrix from an already defined tensor
# Unless the user supplies new values, the properties of the input tensor are reused
# e.g. dtype, size, etc.
x = torch.zeros(5, 3, dtype = torch.uint8)
print(x, x.dtype, '\n')
y = torch.randn(x.shape)
print(y, y.dtype)
```

## Operations

```
x = torch.rand(5, 3)
y = torch.rand(5, 3)
print(x, '\n')
print(y, '\n')
print(x + y)

torch.add(x, y)

result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
result

y.add(x)
```

## Indexing

```
x = torch.Tensor(5, 3)
print(x, '\n')
print(x[:, 1])  # column 1, every row
print(x[1, :])  # row 1, every column
print(type(x))

x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # 8 columns, rows inferred
r = x.view(8, -1)  # 8 rows, columns inferred
print(x.size(), y.size(), z.size(), r.size())

z = x.view(16, -1)  # 16 rows, columns inferred
z
```

## Convert Tensor&Numpy

```
import numpy as np

a = np.ones(5)
print(a, type(a), '\n')

b = torch.from_numpy(a)  # this shares memory with a, so b changes whenever a changes
print(a, type(a), '\n')

np.add(a, 1, out=a)  # using np.add also changes b <<< why... (deep copy vs. shallow copy)
print(a, type(a), '\n')
print(b, type(b))
```

# CUDA Tensors
- The `.to` method can move a tensor onto any device.
- This only runs in an environment where CUDA is available.
- A tensor can be moved to the GPU using `torch.device`.

```
print(torch.cuda.device_count())  # check the number of GPUs
# print(torch.cuda.current_device())  # check the current GPU
# print(torch.cuda.device(0))  # check the GPU device handle
# print(torch.cuda.get_device_name(0))  # check the GPU name

if torch.cuda.is_available():  # check whether CUDA can be used (i.e. a GPU is present)
    device = torch.device('cuda')  # assign the CUDA device to the object `device`
    x = torch.ones(1, 5)
    y = torch.ones_like(x, device = device)
    x = x.to(device)
    z = x + y
    print(x)
    print(y)
    print(z)
    print(z.to('cpu'))
    print(z.to('cpu', dtype = torch.float64))
else:
    print('No GPU available.')
```

# New things I learned

### 1. from __future__ import print_function
- `__future__` makes functions from newer Python versions available.
- Reference: https://ddanggle.gitbooks.io/interpy-kr/ch21-Targeting-python.html

### 2. Reshaping tensor with view(-1, n)
- When -1 is passed, the missing dimension is inferred from n.

### 3. Jupyter notebook extension toolkits
- Extra features for the notebook.
- Reference: https://www.slideshare.net/zzsza/10-jupyter-notebook

### 4. The _like functions (e.g. the difference between zeros and zeros_like)
- Plain zeros(x) takes x as the size.
- zeros_like(input) uses the size of input.

### 5. Operations between an ordinary (CPU) tensor and a CUDA tensor are not possible

### 6. The difference between signed and unsigned integer dtypes
- Signed can hold both positive (+) and negative (-) values.
- Unsigned can only hold positive (+) values.
- Reference: https://stackoverflow.com/questions/5739888/what-is-the-difference-between-signed-and-unsigned-int
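A small sketch (not part of the original notes) illustrating points 2 and 4 above: `view(-1, n)` infers the missing dimension from the total number of elements, and `zeros_like` copies the size and dtype of its input.

```
x = torch.arange(12)          # 12 elements
print(x.view(-1, 4).shape)    # torch.Size([3, 4]); the -1 is inferred as 12 / 4 = 3
print(x.view(3, -1).shape)    # torch.Size([3, 4]) as well

y = torch.rand(2, 5)
print(torch.zeros_like(y).shape, torch.zeros_like(y).dtype)  # same size and dtype as y
```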
# Quantile Regression The NAG function [`correg.quantile_linreg_easy`](https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.correg.quantile_linreg_easy.html) can be used to model the conditional $\tau$-th quantile of a dependent variable against one or more independent or explanatory variables. Whereas the method of least squares results in estimates of the conditional <em>mean</em> of the response (dependent) variable, quantile regression gives estimates of the conditional <em>median</em> (or any other quantile) of the response variable. In our example below the dependent variable is household food expenditure, which is regressed against household income. The data is from a study of 1857 by Engels. ``` income = [ 420.1577, 541.4117, 901.1575, 639.0802, 750.8756, 945.7989, 829.3979, 979.1648, 1309.8789, 1492.3987, 502.8390, 616.7168, 790.9225, 555.8786, 713.4412, 838.7561, 535.0766, 596.4408, 924.5619, 487.7583, 692.6397, 997.8770, 506.9995, 654.1587, 933.9193, 433.6813, 587.5962, 896.4746, 454.4782, 584.9989, 800.7990, 502.4369, 713.5197, 906.0006, 880.5969, 796.8289, 854.8791, 1167.3716, 523.8000, 670.7792, 377.0584, 851.5430, 1121.0937, 625.5179, 805.5377, 558.5812, 884.4005, 1257.4989, 2051.1789, 1466.3330, 730.0989, 2432.3910, 940.9218, 1177.8547, 1222.5939, 1519.5811, 687.6638, 953.1192, 953.1192, 953.1192, 939.0418, 1283.4025, 1511.5789, 1342.5821, 511.7980, 689.7988, 1532.3074, 1056.0808, 387.3195, 387.3195, 410.9987, 499.7510, 832.7554, 614.9986, 887.4658, 1595.1611, 1807.9520, 541.2006, 1057.6767, 800.7990, 1245.6964, 1201.0002, 634.4002, 956.2315, 1148.6010, 1768.8236, 2822.5330, 922.3548, 2293.1920, 627.4726, 889.9809, 1162.2000, 1197.0794, 530.7972, 1142.1526, 1088.0039, 484.6612, 1536.0201, 678.8974, 671.8802, 690.4683, 860.6948, 873.3095, 894.4598, 1148.6470, 926.8762, 839.0414, 829.4974, 1264.0043, 1937.9771, 698.8317, 920.4199, 1897.5711, 891.6824, 889.6784, 1221.4818, 544.5991, 1031.4491, 1462.9497, 830.4353, 975.0415, 1337.9983, 867.6427, 725.7459, 989.0056, 1525.0005, 672.1960, 923.3977, 472.3215, 590.7601, 831.7983, 1139.4945, 507.5169, 576.1972, 696.5991, 650.8180, 949.5802, 497.1193, 570.1674, 724.7306, 408.3399, 638.6713, 1225.7890, 715.3701, 800.4708, 975.5974, 1613.7565, 608.5019, 958.6634, 835.9426, 1024.8177, 1006.4353, 726.0000, 494.4174, 776.5958, 415.4407, 581.3599, 643.3571, 2551.6615, 1795.3226, 1165.7734, 815.6212, 1264.2066, 1095.4056, 447.4479, 1178.9742, 975.8023, 1017.8522, 423.8798, 558.7767, 943.2487, 1348.3002, 2340.6174, 587.1792, 1540.9741, 1115.8481, 1044.6843, 1389.7929, 2497.7860, 1585.3809, 1862.0438, 2008.8546, 697.3099, 571.2517, 598.3465, 461.0977, 977.1107, 883.9849, 718.3594, 543.8971, 1587.3480, 4957.8130, 969.6838, 419.9980, 561.9990, 689.5988, 1398.5203, 820.8168, 875.1716, 1392.4499, 1256.3174, 1362.8590, 1999.2552, 1209.4730, 1125.0356, 1827.4010, 1014.1540, 880.3944, 873.7375, 951.4432, 473.0022, 601.0030, 713.9979, 829.2984, 959.7953, 1212.9613, 958.8743, 1129.4431, 1943.0419, 539.6388, 463.5990, 562.6400, 736.7584, 1415.4461, 2208.7897, 636.0009, 759.4010, 1078.8382, 748.6413, 987.6417, 788.0961, 1020.0225, 1230.9235, 440.5174, 743.0772, ] expenditure = [ 255.8394, 310.9587, 485.6800, 402.9974, 495.5608, 633.7978, 630.7566, 700.4409, 830.9586, 815.3602, 338.0014, 412.3613, 520.0006, 452.4015, 512.7201, 658.8395, 392.5995, 443.5586, 640.1164, 333.8394, 466.9583, 543.3969, 317.7198, 424.3209, 518.9617, 338.0014, 419.6412, 476.3200, 386.3602, 423.2783, 503.3572, 354.6389, 497.3182, 588.5195, 654.5971, 550.7274, 
528.3770, 640.4813, 401.3204, 435.9990, 276.5606, 588.3488, 664.1978, 444.8602, 462.8995, 377.7792, 553.1504, 810.8962, 1067.9541, 1049.8788, 522.7012, 1424.8047, 517.9196, 830.9586, 925.5795, 1162.0024, 383.4580, 621.1173, 621.1173, 621.1173, 548.6002, 745.2353, 837.8005, 795.3402, 418.5976, 508.7974, 883.2780, 742.5276, 242.3202, 242.3202, 266.0010, 408.4992, 614.7588, 385.3184, 515.6200, 1138.1620, 993.9630, 299.1993, 750.3202, 572.0807, 907.3969, 811.5776, 427.7975, 649.9985, 860.6002, 1143.4211, 2032.6792, 590.6183, 1570.3911, 483.4800, 600.4804, 696.2021, 774.7962, 390.5984, 612.5619, 708.7622, 296.9192, 1071.4627, 496.5976, 503.3974, 357.6411, 430.3376, 624.6990, 582.5413, 580.2215, 543.8807, 588.6372, 627.9999, 712.1012, 968.3949, 482.5816, 593.1694, 1033.5658, 693.6795, 693.6795, 761.2791, 361.3981, 628.4522, 771.4486, 757.1187, 821.5970, 1022.3202, 679.4407, 538.7491, 679.9981, 977.0033, 561.2015, 728.3997, 372.3186, 361.5210, 620.8006, 819.9964, 360.8780, 395.7608, 442.0001, 404.0384, 670.7993, 297.5702, 353.4882, 383.9376, 284.8008, 431.1000, 801.3518, 448.4513, 577.9111, 570.5210, 865.3205, 444.5578, 680.4198, 576.2779, 708.4787, 734.2356, 433.0010, 327.4188, 485.5198, 305.4390, 468.0008, 459.8177, 863.9199, 831.4407, 534.7610, 392.0502, 934.9752, 813.3081, 263.7100, 769.0838, 630.5863, 645.9874, 319.5584, 348.4518, 614.5068, 662.0096, 1504.3708, 406.2180, 692.1689, 588.1371, 511.2609, 700.5600, 1301.1451, 879.0660, 912.8851, 1509.7812, 484.0605, 399.6703, 444.1001, 248.8101, 527.8014, 500.6313, 436.8107, 374.7990, 726.3921, 1827.2000, 523.4911, 334.9998, 473.2009, 581.2029, 929.7540, 591.1974, 637.5483, 674.9509, 776.7589, 959.5170, 1250.9643, 737.8201, 810.6772, 983.0009, 708.8968, 633.1200, 631.7982, 608.6419, 300.9999, 377.9984, 397.0015, 588.5195, 681.7616, 807.3603, 696.8011, 811.1962, 1305.7201, 442.0001, 353.6013, 468.0008, 526.7573, 890.2390, 1318.8033, 331.0005, 416.4015, 596.8406, 429.0399, 619.6408, 400.7990, 775.0209, 772.7611, 306.5191, 522.6019, ] ``` In the design matrix for the regression we include an intercept term by augmenting the income data set with a column of ones ``` income_X = [[1., incomei] for incomei in income] ``` Our quantiles of interest ``` tau = [0.1, 0.5, 0.9] ``` Compute the regression ``` from naginterfaces.library import correg regn = correg.quantile_linreg_easy(income_X, expenditure, tau) ``` The regression coefficients are returned in attribute `b` of the function's return tuple. For the plot, compute the regression lines ``` import numpy as np plot_x = np.linspace(0, max(income)) plot_ys = [regn.b[0, i] + regn.b[1, i]*plot_x for i in range(len(tau))] # Select the display backend for Jupyter: %matplotlib nbagg ``` Make a scatter plot of the original income data (without the intercept) and add in the regression lines ``` import matplotlib.pyplot as plt plt.scatter(income, expenditure, c='red', marker='+', linewidth=0.5) for tau_i, tau_val in enumerate(tau): plt.plot( plot_x, plot_ys[tau_i], label=r'$\tau$ = {:.2f}'.format(tau_val), linewidth=(4 if tau_val == 0.5 else 2), alpha=0.25, ) plt.ylim((0., max(expenditure))) plt.xlabel('Household Income') plt.ylabel('Household Food Expenditure') plt.legend(loc='lower right') plt.title( 'Quantile Regression\n' 'Engels\' 1857 Study of Household Expenditure on Food' ) plt.show() ```
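As a small addition (a sketch, not from the original example), the fitted intercept and slope for each quantile can be printed directly from `regn.b`, the same array used to build the regression lines above.

```
for i, t in enumerate(tau):
    print('tau = {:.1f}: intercept = {:8.3f}, slope = {:.4f}'.format(t, regn.b[0, i], regn.b[1, i]))
```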
<div class="alert alert-block alert-success"> <b><center>CNN Basic Examples</center></b> <b><center>Basic CNN</center></b> </div> # Configure Learning Environment ``` # !pip install git+https://github.com/nockchun/rspy --force # !pip install mybatis_mapper2sql import rspy as rsp rsp.setSystemWarning(off=True) import tensorflow as tf from tensorflow.keras import layers, models, losses, optimizers, datasets, preprocessing, utils import numpy as np import pandas as pd from sklearn.model_selection import train_test_split np.set_printoptions(linewidth=200, precision=2) tf.__version__ ``` # Prepare Data SMS Spam Dataset(two columns) : v1 contains the label (ham or spam) and v2 contains the raw text. ``` # Naver Sentiment Movie Corpus v1.0 다운로드 data_path = utils.get_file( 'spam.csv', 'https://raw.githubusercontent.com/mohitgupta-omg/Kaggle-SMS-Spam-Collection-Dataset-/master/spam.csv' ) df_ori = pd.read_csv(data_path, encoding="latin1") df_ori.head() df_data = df_ori.copy() del df_data["Unnamed: 2"] del df_data["Unnamed: 3"] del df_data["Unnamed: 4"] df_data.v1 = df_ori.v1.replace(["ham", "spam"], [0, 1]) df_data.head() len(df_data), len(df_data.v1.unique()), len(df_data.v2.unique()) df_data.drop_duplicates(inplace=True) len(df_data), len(df_data.v1.unique()), len(df_data.v2.unique()) train_ori = df_data.v2.values label_ori = df_data.v1.values train_ori.shape, label_ori.shape ``` # Preprocessing Data ``` tokenizer = preprocessing.text.Tokenizer() tokenizer.fit_on_texts(train_ori) SIZE_VOCAB = len(tokenizer.word_index) + 1 train_num = tokenizer.texts_to_sequences(train_ori) print(train_num[:2]) LEN_SENTENCE_MAX = max([len(item) for item in train_num]) LEN_SENTENCE_MAX train_num = preprocessing.sequence.pad_sequences(train_num, LEN_SENTENCE_MAX, padding="post") train_num.shape print(train_num[:1]) train_data, test_data, train_label, test_label = train_test_split(train_num, label_ori, test_size= 0.2, random_state=88) train_data.shape, train_label.shape, test_data.shape, test_label.shape ``` # Create Model ``` model = models.Sequential([ layers.Input((172,)), layers.Embedding(SIZE_VOCAB, 32), layers.SimpleRNN(32), layers.Dense(1, activation="sigmoid") ]) model.summary() utils.plot_model(model, "intermediate/model.png", True) ``` # Compile & Learning ``` model.compile( optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"] ) history = model.fit( train_data, train_label, epochs=10, batch_size=64, validation_split=0.2 ) df_history = pd.DataFrame(history.history) df_history[["loss", "val_loss"]].plot() df_history[["accuracy", "val_accuracy"]].plot() ``` # Evaluate ``` test_scores = model.evaluate(test_data, test_label, verbose=2) ``` # Save And Serialize ``` model.save("intermediate/spam.h5") ``` # Prediction ``` # Recreate the exact same model purely from the file: model_loaded = models.load_model("intermediate/spam.h5") test_data[:1] model_loaded.predict(test_data[:20]).tolist() test_label[:20] ```
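The Prediction cells above only score rows of `test_data` that were already tokenized and padded. A small helper for classifying raw SMS strings end-to-end might look like the sketch below; it reuses the fitted `tokenizer`, `LEN_SENTENCE_MAX`, and `model_loaded` from the cells above, and the `classify_sms` name and the 0.5 cut-off are illustrative choices rather than part of the original notebook.

```
# Sketch: score raw SMS strings with the loaded model.
# Assumes `tokenizer`, `LEN_SENTENCE_MAX`, and `model_loaded` exist from the
# preprocessing and prediction cells above; the 0.5 threshold is arbitrary.
def classify_sms(messages, threshold=0.5):
    seqs = tokenizer.texts_to_sequences(messages)
    padded = preprocessing.sequence.pad_sequences(seqs, LEN_SENTENCE_MAX, padding="post")
    probs = model_loaded.predict(padded).flatten()
    # Label 1 was mapped to "spam" when the data was prepared.
    return [("spam" if p > threshold else "ham", float(p)) for p in probs]

classify_sms([
    "WINNER!! Claim your free prize now, text YES to 80082",
    "Are we still meeting for lunch today?",
])
```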
github_jupyter
# !pip install git+https://github.com/nockchun/rspy --force # !pip install mybatis_mapper2sql import rspy as rsp rsp.setSystemWarning(off=True) import tensorflow as tf from tensorflow.keras import layers, models, losses, optimizers, datasets, preprocessing, utils import numpy as np import pandas as pd from sklearn.model_selection import train_test_split np.set_printoptions(linewidth=200, precision=2) tf.__version__ # Naver Sentiment Movie Corpus v1.0 다운로드 data_path = utils.get_file( 'spam.csv', 'https://raw.githubusercontent.com/mohitgupta-omg/Kaggle-SMS-Spam-Collection-Dataset-/master/spam.csv' ) df_ori = pd.read_csv(data_path, encoding="latin1") df_ori.head() df_data = df_ori.copy() del df_data["Unnamed: 2"] del df_data["Unnamed: 3"] del df_data["Unnamed: 4"] df_data.v1 = df_ori.v1.replace(["ham", "spam"], [0, 1]) df_data.head() len(df_data), len(df_data.v1.unique()), len(df_data.v2.unique()) df_data.drop_duplicates(inplace=True) len(df_data), len(df_data.v1.unique()), len(df_data.v2.unique()) train_ori = df_data.v2.values label_ori = df_data.v1.values train_ori.shape, label_ori.shape tokenizer = preprocessing.text.Tokenizer() tokenizer.fit_on_texts(train_ori) SIZE_VOCAB = len(tokenizer.word_index) + 1 train_num = tokenizer.texts_to_sequences(train_ori) print(train_num[:2]) LEN_SENTENCE_MAX = max([len(item) for item in train_num]) LEN_SENTENCE_MAX train_num = preprocessing.sequence.pad_sequences(train_num, LEN_SENTENCE_MAX, padding="post") train_num.shape print(train_num[:1]) train_data, test_data, train_label, test_label = train_test_split(train_num, label_ori, test_size= 0.2, random_state=88) train_data.shape, train_label.shape, test_data.shape, test_label.shape model = models.Sequential([ layers.Input((172,)), layers.Embedding(SIZE_VOCAB, 32), layers.SimpleRNN(32), layers.Dense(1, activation="sigmoid") ]) model.summary() utils.plot_model(model, "intermediate/model.png", True) model.compile( optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"] ) history = model.fit( train_data, train_label, epochs=10, batch_size=64, validation_split=0.2 ) df_history = pd.DataFrame(history.history) df_history[["loss", "val_loss"]].plot() df_history[["accuracy", "val_accuracy"]].plot() test_scores = model.evaluate(test_data, test_label, verbose=2) model.save("intermediate/spam.h5") # Recreate the exact same model purely from the file: model_loaded = models.load_model("intermediate/spam.h5") test_data[:1] model_loaded.predict(test_data[:20]).tolist() test_label[:20]
0.713531
0.899828
``` from io import BytesIO import time import boto3 import PIL.Image import sagemaker import gym from gym import wrappers import numpy as np import mxnet as mx import matplotlib.pyplot as plt %matplotlib inline from importlib import reload from IPython import display import ipywidgets as widgets from IPython.display import display as i_display from battlesnake_gym.snake_gym import BattlesnakeGym from heuristics_utils import simulate ``` # Define the openAI gym Optionally, you can define the initial game state (the situation simulator) of the snakes and food. To use the initial state, set `USE_INITIAL_STATE = True` and enter the desired coordinates of the snake and food using the initial_state dictionary. The dictionary follows the same format as the battlesnake API. ``` USE_INITIAL_STATE = False # Sample initial state for the situation simulator initial_state = { "turn": 4, "board": { "height": 11, "width": 11, "food": [ { "x": 1, "y": 3 } ], "snakes": [{ "health": 90, "body": [{"x": 8, "y": 5}], }, { "health": 90, "body": [{"x": 1, "y": 6}], }, { "health": 90, "body": [{"x": 3, "y": 3}], }, { "health": 90, "body": [{"x": 6, "y": 4}], }, ] } } if USE_INITIAL_STATE == False: initial_state = None ``` The parameters here must match the ones provided during training (except initial_state) ``` map_size = (11, 11) number_of_snakes = 4 env = BattlesnakeGym(map_size=map_size, number_of_snakes=number_of_snakes, observation_type="bordered-51s", initial_game_state=initial_state) ``` # Load the trained model The pretrained_models are loaded into an MXNet net. _You can safely ignore the __WARNING about the type for data0__._ ``` params_name = "inference/pretrained_models/Model-{}x{}/local-0000.params".format(map_size[0], map_size[1]) symbol_name = "inference/pretrained_models/Model-{}x{}/local-symbol.json".format(map_size[0], map_size[1]) ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu() net = mx.gluon.SymbolBlock.imports(symbol_name, ['data0', 'data1', 'data2', 'data3'], params_name, ctx=ctx) net.hybridize(static_alloc=True, static_shape=True) ``` # Simulation loop Run a simulation with the environment with the heuritics that you wrote. To edit the heuristics, edit the file `MXNetEnv/inference/inference_src/battlesnake_heuristics`. Note that you can track the progress of your work with git. ``` import inference.inference_src.battlesnake_heuristics reload(inference.inference_src.battlesnake_heuristics) from inference.inference_src.battlesnake_heuristics import MyBattlesnakeHeuristics heuristics = MyBattlesnakeHeuristics() infos, rgb_arrays, actions, heuristics_remarks, json_array = simulate(env, net, heuristics, number_of_snakes) ``` # Playback the simulation Defines the user interface of the simulator. 
``` def get_env_json(): if slider.value < len(json_array): return json_array[slider.value] else: return "" def play_simulation(_): for i in range(slider.value, len(rgb_arrays) - slider.value - 1): slider.value = slider.value + 1 display_image(slider.value) time.sleep(0.2) def on_left_button_pressed(_): if slider.value > 0: slider.value = slider.value - 1 display_image(slider.value) def on_right_button_pressed(_): if slider.value < len(rgb_arrays): slider.value = slider.value + 1 display_image(slider.value) def display_image(index): if index >= len(rgb_arrays): return info = infos[index] action = actions[index] heuristics = heuristics_remarks[index] snake_colours = env.snakes.get_snake_colours() line_0 = [widgets.Label("Turn count".format(info["current_turn"])), widgets.Label("Snake")] line_1 = [widgets.Label(""), widgets.Label("Health")] line_2 = [widgets.Label("{}".format(info["current_turn"])), widgets.Label("Action")] line_3 = [widgets.Label(""), widgets.Label("Gym remarks")] line_4 = [widgets.Label(""), widgets.Label("Heur. remarks")] action_convertion_dict = {0: "Up", 1: "Down", 2: "Left", 3: "Right", 4: "None"} for snake_id in range(len(action)): snake_health = "{}".format(info["snake_health"][snake_id]) snake_health_widget = widgets.Label(snake_health) snake_action = "{}".format(action_convertion_dict[action[snake_id]]) snake_action_widget = widgets.Label(snake_action) snake_colour = snake_colours[snake_id] hex_colour = '#%02x%02x%02x' % (snake_colour[0], snake_colour[1], snake_colour[2]) snake_colour_widget = widgets.HTML(value = f"<b><font color="+hex_colour+">⬤</b>") gym_remarks = "" if snake_id in info["snake_info"]: if info["snake_info"][snake_id] != "Did not colide": gym_remarks = "{}".format(info["snake_info"][snake_id]) gym_remarks_widget = widgets.Label(gym_remarks) heuris_remarks = "{}".format(heuristics[snake_id]) heuris_remarks_widget = widgets.Label(heuris_remarks) line_0.append(snake_colour_widget) line_1.append(snake_health_widget) line_2.append(snake_action_widget) line_3.append(gym_remarks_widget) line_4.append(heuris_remarks_widget) line_0_widget = widgets.VBox(line_0) line_1_widget= widgets.VBox(line_1) line_2_widget = widgets.VBox(line_2) line_3_widget = widgets.VBox(line_3) line_4_widget = widgets.VBox(line_4) info_widget = widgets.HBox([line_0_widget, line_1_widget, line_2_widget, line_3_widget, line_4_widget]) image = PIL.Image.fromarray(rgb_arrays[index]) f = BytesIO() image.save(f, "png") states_widget = widgets.Image(value=f.getvalue(), width=500) main_widgets_list = [states_widget, info_widget] main_widget = widgets.HBox(main_widgets_list) display.clear_output(wait=True) i_display(navigator) i_display(main_widget) left_button = widgets.Button(description='◄') left_button.on_click(on_left_button_pressed) right_button = widgets.Button(description='►') right_button.on_click(on_right_button_pressed) slider = widgets.IntSlider(max=len(rgb_arrays) - 1) play_button = widgets.Button(description='Play') play_button.on_click(play_simulation) navigator = widgets.HBox([left_button, right_button, slider, play_button]) display_image(index=0) ``` To get a JSON representation of the gym (environment), run the following function. You can also use output of the following function as an initial_state of the gym. 
*Please provide this json array if you are reporting bugs in the gym* ``` get_env_json() ``` # Deploy the SageMaker endpoint This section will deploy your new heuristics into the SageMaker endpoint ``` sage_session = sagemaker.session.Session() s3_bucket = sage_session.default_bucket() role = sagemaker.get_execution_role() print("Your sagemaker s3_bucket is s3://{}".format(s3_bucket)) ``` ## (Optional) Run if you retrained the model If you retrained your model in PolicyTrainining.ipynb but you did not create a new endpoint, please run the following cell to update the models. ``` %cd inference !mv pretrained_models Models !tar -czf Models.tar.gz Models !mv Models pretrained_models s3_client = boto3.client('s3') s3_client.upload_file("Models.tar.gz", s3_bucket, "battlesnake-aws/pretrainedmodels/Models.tar.gz") !rm Models.tar.gz %cd .. ``` ## Deploy your new heuristics Using the new heuristics you developed, a new SageMaker endpoint will be created. Firstly, delete the old endpoint, model and endpoint config. ``` sm_client = boto3.client(service_name='sagemaker') sm_client.delete_endpoint(EndpointName='battlesnake-endpoint') sm_client.delete_endpoint_config(EndpointConfigName='battlesnake-endpoint') sm_client.delete_model(ModelName="battlesnake-mxnet") ``` Run the following cells to create a new model and endpoint with the new heuristics ``` target_key = "battlesnake-aws/pretrainedmodels/Models.tar.gz" model_data = "s3://{}/{}".format(s3_bucket, target_key) endpoint_instance_type = "SAGEMAKER_INFERENCE_INSTANCE_TYPE" from sagemaker.mxnet import MXNetModel mxnet_model = MXNetModel(model_data=model_data, entry_point='predict.py', role=role, framework_version='1.6.0', source_dir='inference/inference_src', name="battlesnake-mxnet", py_version='py3') predictor = mxnet_model.deploy(initial_instance_count=1, instance_type=endpoint_instance_type, endpoint_name='battlesnake-endpoint') ``` ## Testing the new endpoint You should see `Action to take is X` ``` data1 = np.zeros(shape=(1, 2, 3, map_size[0]+2, map_size[1]+2)) data2 = np.zeros(shape=(1, 2)) data3 = np.zeros(shape=(1, 2)) data4 = np.zeros(shape=(1, 2)) health_dict = {0: 50, 1: 50} json = {"board": { "height": 15, "width": 15, "food": [], "snakes": [] }, "you": { "id": "snake-id-string", "name": "Sneky Snek", "health": 90, "body": [{"x": 1, "y": 3}] } } action = predictor.predict({"state": data1, "snake_id": data2, "turn_count": data3, "health": data4, "all_health": health_dict, "map_width": map_size[0], "json": json}) print("Action to take is {}".format(action)) ```
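The endpoint deployed above continues to incur charges while it is running. Once the test prints an action and the endpoint is no longer needed for the Battlesnake engine, it can be torn down with the same `boto3` calls used earlier in this notebook to remove the previous endpoint. A minimal clean-up sketch, assuming the endpoint, endpoint config, and model names created above:

```
# Optional clean-up: delete the endpoint, endpoint config, and model created
# above once testing is done (skip this if the endpoint is still needed).
import boto3

sm_client = boto3.client(service_name='sagemaker')
sm_client.delete_endpoint(EndpointName='battlesnake-endpoint')
sm_client.delete_endpoint_config(EndpointConfigName='battlesnake-endpoint')
sm_client.delete_model(ModelName="battlesnake-mxnet")
```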
github_jupyter
from io import BytesIO import time import boto3 import PIL.Image import sagemaker import gym from gym import wrappers import numpy as np import mxnet as mx import matplotlib.pyplot as plt %matplotlib inline from importlib import reload from IPython import display import ipywidgets as widgets from IPython.display import display as i_display from battlesnake_gym.snake_gym import BattlesnakeGym from heuristics_utils import simulate USE_INITIAL_STATE = False # Sample initial state for the situation simulator initial_state = { "turn": 4, "board": { "height": 11, "width": 11, "food": [ { "x": 1, "y": 3 } ], "snakes": [{ "health": 90, "body": [{"x": 8, "y": 5}], }, { "health": 90, "body": [{"x": 1, "y": 6}], }, { "health": 90, "body": [{"x": 3, "y": 3}], }, { "health": 90, "body": [{"x": 6, "y": 4}], }, ] } } if USE_INITIAL_STATE == False: initial_state = None map_size = (11, 11) number_of_snakes = 4 env = BattlesnakeGym(map_size=map_size, number_of_snakes=number_of_snakes, observation_type="bordered-51s", initial_game_state=initial_state) params_name = "inference/pretrained_models/Model-{}x{}/local-0000.params".format(map_size[0], map_size[1]) symbol_name = "inference/pretrained_models/Model-{}x{}/local-symbol.json".format(map_size[0], map_size[1]) ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu() net = mx.gluon.SymbolBlock.imports(symbol_name, ['data0', 'data1', 'data2', 'data3'], params_name, ctx=ctx) net.hybridize(static_alloc=True, static_shape=True) import inference.inference_src.battlesnake_heuristics reload(inference.inference_src.battlesnake_heuristics) from inference.inference_src.battlesnake_heuristics import MyBattlesnakeHeuristics heuristics = MyBattlesnakeHeuristics() infos, rgb_arrays, actions, heuristics_remarks, json_array = simulate(env, net, heuristics, number_of_snakes) def get_env_json(): if slider.value < len(json_array): return json_array[slider.value] else: return "" def play_simulation(_): for i in range(slider.value, len(rgb_arrays) - slider.value - 1): slider.value = slider.value + 1 display_image(slider.value) time.sleep(0.2) def on_left_button_pressed(_): if slider.value > 0: slider.value = slider.value - 1 display_image(slider.value) def on_right_button_pressed(_): if slider.value < len(rgb_arrays): slider.value = slider.value + 1 display_image(slider.value) def display_image(index): if index >= len(rgb_arrays): return info = infos[index] action = actions[index] heuristics = heuristics_remarks[index] snake_colours = env.snakes.get_snake_colours() line_0 = [widgets.Label("Turn count".format(info["current_turn"])), widgets.Label("Snake")] line_1 = [widgets.Label(""), widgets.Label("Health")] line_2 = [widgets.Label("{}".format(info["current_turn"])), widgets.Label("Action")] line_3 = [widgets.Label(""), widgets.Label("Gym remarks")] line_4 = [widgets.Label(""), widgets.Label("Heur. 
remarks")] action_convertion_dict = {0: "Up", 1: "Down", 2: "Left", 3: "Right", 4: "None"} for snake_id in range(len(action)): snake_health = "{}".format(info["snake_health"][snake_id]) snake_health_widget = widgets.Label(snake_health) snake_action = "{}".format(action_convertion_dict[action[snake_id]]) snake_action_widget = widgets.Label(snake_action) snake_colour = snake_colours[snake_id] hex_colour = '#%02x%02x%02x' % (snake_colour[0], snake_colour[1], snake_colour[2]) snake_colour_widget = widgets.HTML(value = f"<b><font color="+hex_colour+">⬤</b>") gym_remarks = "" if snake_id in info["snake_info"]: if info["snake_info"][snake_id] != "Did not colide": gym_remarks = "{}".format(info["snake_info"][snake_id]) gym_remarks_widget = widgets.Label(gym_remarks) heuris_remarks = "{}".format(heuristics[snake_id]) heuris_remarks_widget = widgets.Label(heuris_remarks) line_0.append(snake_colour_widget) line_1.append(snake_health_widget) line_2.append(snake_action_widget) line_3.append(gym_remarks_widget) line_4.append(heuris_remarks_widget) line_0_widget = widgets.VBox(line_0) line_1_widget= widgets.VBox(line_1) line_2_widget = widgets.VBox(line_2) line_3_widget = widgets.VBox(line_3) line_4_widget = widgets.VBox(line_4) info_widget = widgets.HBox([line_0_widget, line_1_widget, line_2_widget, line_3_widget, line_4_widget]) image = PIL.Image.fromarray(rgb_arrays[index]) f = BytesIO() image.save(f, "png") states_widget = widgets.Image(value=f.getvalue(), width=500) main_widgets_list = [states_widget, info_widget] main_widget = widgets.HBox(main_widgets_list) display.clear_output(wait=True) i_display(navigator) i_display(main_widget) left_button = widgets.Button(description='◄') left_button.on_click(on_left_button_pressed) right_button = widgets.Button(description='►') right_button.on_click(on_right_button_pressed) slider = widgets.IntSlider(max=len(rgb_arrays) - 1) play_button = widgets.Button(description='Play') play_button.on_click(play_simulation) navigator = widgets.HBox([left_button, right_button, slider, play_button]) display_image(index=0) get_env_json() sage_session = sagemaker.session.Session() s3_bucket = sage_session.default_bucket() role = sagemaker.get_execution_role() print("Your sagemaker s3_bucket is s3://{}".format(s3_bucket)) %cd inference !mv pretrained_models Models !tar -czf Models.tar.gz Models !mv Models pretrained_models s3_client = boto3.client('s3') s3_client.upload_file("Models.tar.gz", s3_bucket, "battlesnake-aws/pretrainedmodels/Models.tar.gz") !rm Models.tar.gz %cd .. 
sm_client = boto3.client(service_name='sagemaker') sm_client.delete_endpoint(EndpointName='battlesnake-endpoint') sm_client.delete_endpoint_config(EndpointConfigName='battlesnake-endpoint') sm_client.delete_model(ModelName="battlesnake-mxnet") target_key = "battlesnake-aws/pretrainedmodels/Models.tar.gz" model_data = "s3://{}/{}".format(s3_bucket, target_key) endpoint_instance_type = "SAGEMAKER_INFERENCE_INSTANCE_TYPE" from sagemaker.mxnet import MXNetModel mxnet_model = MXNetModel(model_data=model_data, entry_point='predict.py', role=role, framework_version='1.6.0', source_dir='inference/inference_src', name="battlesnake-mxnet", py_version='py3') predictor = mxnet_model.deploy(initial_instance_count=1, instance_type=endpoint_instance_type, endpoint_name='battlesnake-endpoint') data1 = np.zeros(shape=(1, 2, 3, map_size[0]+2, map_size[1]+2)) data2 = np.zeros(shape=(1, 2)) data3 = np.zeros(shape=(1, 2)) data4 = np.zeros(shape=(1, 2)) health_dict = {0: 50, 1: 50} json = {"board": { "height": 15, "width": 15, "food": [], "snakes": [] }, "you": { "id": "snake-id-string", "name": "Sneky Snek", "health": 90, "body": [{"x": 1, "y": 3}] } } action = predictor.predict({"state": data1, "snake_id": data2, "turn_count": data3, "health": data4, "all_health": health_dict, "map_width": map_size[0], "json": json}) print("Action to take is {}".format(action))
0.398524
0.80454
# Lesson 2 Exercise 2: Creating Denormalized Tables <img src="images/postgresSQLlogo.png" width="250" height="250"> ## Walk through the basics of modeling data from normalized from to denormalized form. We will create tables in PostgreSQL, insert rows of data, and do simple JOIN SQL queries to show how these multiple tables can work together. #### Where you see ##### you will need to fill in code. This exercise will be more challenging than the last. Use the information provided to create the tables and write the insert statements. #### Remember the examples shown are simple, but imagine these situations at scale with large datasets, many users, and the need for quick response time. Note: __Do not__ click the blue Preview button in the lower task bar ### Import the library Note: An error might popup after this command has exectuted. If it does read it careful before ignoring. ``` import psycopg2 ``` ### Create a connection to the database, get a cursor, and set autocommit to true ``` try: conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student") except psycopg2.Error as e: print("Error: Could not make connection to the Postgres database") print(e) try: cur = conn.cursor() except psycopg2.Error as e: print("Error: Could not get cursor to the Database") print(e) conn.set_session(autocommit=True) ``` #### Let's start with our normalized (3NF) database set of tables we had in the last exercise, but we have added a new table `sales`. `Table Name: transactions2 column 0: transaction Id column 1: Customer Name column 2: Cashier Id column 3: Year ` `Table Name: albums_sold column 0: Album Id column 1: Transaction Id column 3: Album Name` `Table Name: employees column 0: Employee Id column 1: Employee Name ` `Table Name: sales column 0: Transaction Id column 1: Amount Spent ` <img src="images/table16.png" width="450" height="450"> <img src="images/table15.png" width="450" height="450"> <img src="images/table17.png" width="350" height="350"> <img src="images/table18.png" width="350" height="350"> ### TO-DO: Add all Create statements for all Tables and Insert data into the tables ``` # TO-DO: Add all Create statements for all tables try: cur.execute('DROP TABLE IF EXISTS transactions2;') cur.execute('DROP TABLE IF EXISTS employees;') cur.execute('DROP TABLE IF EXISTS albums_sold;') cur.execute('DROP TABLE IF EXISTS sales;') except psycopg2.Error as e: print('Error: Issue dropping tables') print(e) try: cur.execute("CREATE TABLE IF NOT EXISTS transactions2 (transaction_id int,Customer_Name varchar, Cashier_id int, Year int)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS albums_sold (Album_id int, Transaction_id int, Album_Name varchar)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS employees (Employee_id int, Employee_Name varchar)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS sales (Transaction_id int, Amount_Spent int)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) # TO-DO: Insert data into the tables try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (1, "Amanda", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES 
(%s, %s, %s, %s)", \ (2, "Toby", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (3, "Max", 2, 2018)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (1, 1, "Rubber Soul")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (2, 1, "Let It Be")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (3, 2, "My Generation")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (4, 3, "Meet the Beatles")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (5, 3, "Help!")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO employees (employee_id, employee_name) \ VALUES (%s, %s)", \ (1, "Sam")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO employees (employee_id, employee_name) \ VALUES (%s, %s)", \ (2, "Bob")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (1, 40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (2, 19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (3, 45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) ``` #### TO-DO: Confirm using the Select statement the data were added correctly ``` print("Table: transactions2\n") try: cur.execute("SELECT * FROM transactions2;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: albums_sold\n") try: cur.execute("SELECT * FROM albums_sold;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: employees\n") try: cur.execute("SELECT * FROM employees;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: sales\n") try: cur.execute("SELECT * FROM sales;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` ### Let's say you need to do a query that gives: `transaction_id customer_name cashier name year albums sold amount sold` ### TO-DO: Complete the statement below to perform a 3 way `JOIN` on the 4 tables you have created. 
``` try: cur.execute("""SELECT t2.transaction_id, t2.Customer_Name, e.Employee_Name, t2.Year, abs.album_name, s.Amount_spent FROM (((transactions2 as t2 join employees as e on t2.Cashier_id = e.Employee_id) join albums_sold as abs on t2.transaction_id = abs.transaction_id) join sales as s on t2.transaction_id = s.transaction_id) """) except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` #### Great we were able to get the data we wanted. ### But, we had to perform a 3 way `JOIN` to get there. While it's great we had that flexibility, we need to remember that `JOINS` are slow and if we have a read heavy workload that required low latency queries we want to reduce the number of `JOINS`. Let's think about denormalizing our normalized tables. ### With denormalization you want to think about the queries you are running and how to reduce the number of JOINS even if that means duplicating data. The following are the queries you need to run. #### Query 1 : `select transaction_id, customer_name, amount_spent FROM <min number of tables>` It should generate the amount spent on each transaction #### Query 2: `select cashier_name, SUM(amount_spent) FROM <min number of tables> GROUP BY cashier_name` It should generate the total sales by cashier ### Query 1: `select transaction_id, customer_name, amount_spent FROM <min number of tables>` One way to do this would be to do a JOIN on the `sales` and `transactions2` table but we want to minimize the use of `JOINS`. To reduce the number of tables, first add `amount_spent` to the `transactions` table so that you will not need to do a JOIN at all. `Table Name: transactions column 0: transaction Id column 1: Customer Name column 2: Cashier Id column 3: Year column 4: amount_spent` <img src="images/table19.png" width="450" height="450"> ### TO-DO: Add the tables as part of the denormalization process ``` # TO-DO: Create all tables try: cur.execute("CREATE TABLE IF NOT EXISTS transactions (transaction_id int, Customer_Name varchar, Cashier_id int, Year int, Amount_spent int)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert data into all tables try: cur.execute("INSERT INTO transactions (transaction_id, Customer_Name, Cashier_id,Year, Amount_Spent) \ VALUES (%s, %s, %s, %s, %s)", \ (1,'Amanda',1,2000,40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (transaction_id, Customer_Name, Cashier_id,Year, Amount_Spent) \ VALUES (%s, %s, %s, %s, %s)", \ (2,"Max",1,2000,19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (transaction_id, Customer_Name, Cashier_id,Year, Amount_Spent) \ VALUES (%s, %s, %s, %s, %s)", \ (3,'Max',2,2018,45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) ``` ### Now you should be able to do a simplifed query to get the information you need. No `JOIN` is needed. ``` try: cur.execute("SELECT transaction_id,Customer_name, Amount_spent FROM transactions;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` #### Your output for the above cell should be the following: (1, 'Amanda', 40)<br> (2, 'Toby', 19)<br> (3, 'Max', 45) ### Query 2: `select cashier_name, SUM(amount_spent) FROM <min number of tables> GROUP BY cashier_name` To avoid using any `JOINS`, first create a new table with just the information we need. 
`Table Name: cashier_sales column 0: Transaction Id column 1: Cashier Name column 2: Cashier Id column 3: Amount_Spent ` <img src="images/table20.png" width="350" height="350"> ### TO-DO: Create a new table with just the information you need. ``` # Create the tables try: cur.execute("CREATE TABLE IF NOT EXISTS cashier_sales (transaction_id int, Cashier_name varchar, Cashier_id int, Amount_spent int);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert into all tables try: cur.execute("INSERT INTO cashier_sales (transaction_id, Cashier_name, Cashier_id,Amount_spent) \ VALUES (%s, %s, %s, %s)", \ (1,'Sam',1,40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO cashier_sales (transaction_id, Cashier_name, Cashier_id,Amount_spent) \ VALUES (%s, %s, %s, %s)", \ (2,'Sam',1,19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO cashier_sales (transaction_id, Cashier_name, Cashier_id,Amount_spent) \ VALUES (%s, %s, %s, %s)", \ (3,'Bob',2,45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) ``` ### Run the query ``` try: cur.execute("SELECT Cashier_Name, sum(Amount_Spent) from cashier_sales group by cashier_name;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` #### Your output for the above cell should be the following: ('Sam', 59)<br> ('Bob', 45) #### We have successfully taken normalized tables and denormalized them in order to speed up performance and allow for simpler queries to be executed. ### Drop the tables ``` try: cur.execute("DROP table transactions2") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table transactions") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table employees") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table albums_sold") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table sales") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table cashier_sales") except psycopg2.Error as e: print("Error: Dropping table") print (e) ``` ### And finally, close your cursor and connection. ``` cur.close() conn.close() ```
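As an aside on the row-by-row INSERT cells above: psycopg2 can send a batch of parameterized inserts with `cursor.executemany`, which keeps table-loading code much shorter than one `try`/`except` block per row. A minimal sketch that reloads the `cashier_sales` rows on a fresh connection (the connection above has already been closed), using the same local studentdb credentials:

```
# Sketch: batch-load the cashier_sales rows with executemany instead of one
# try/except block per row. Opens a new connection because the previous one
# was closed at the end of the exercise.
import psycopg2

conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
conn.set_session(autocommit=True)
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS cashier_sales \
             (transaction_id int, cashier_name varchar, cashier_id int, amount_spent int);")

rows = [(1, 'Sam', 1, 40), (2, 'Sam', 1, 19), (3, 'Bob', 2, 45)]
cur.executemany("INSERT INTO cashier_sales (transaction_id, cashier_name, cashier_id, amount_spent) \
                 VALUES (%s, %s, %s, %s)", rows)

cur.close()
conn.close()
```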
github_jupyter
import psycopg2 try: conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student") except psycopg2.Error as e: print("Error: Could not make connection to the Postgres database") print(e) try: cur = conn.cursor() except psycopg2.Error as e: print("Error: Could not get cursor to the Database") print(e) conn.set_session(autocommit=True) # TO-DO: Add all Create statements for all tables try: cur.execute('DROP TABLE IF EXISTS transactions2;') cur.execute('DROP TABLE IF EXISTS employees;') cur.execute('DROP TABLE IF EXISTS albums_sold;') cur.execute('DROP TABLE IF EXISTS sales;') except psycopg2.Error as e: print('Error: Issue dropping tables') print(e) try: cur.execute("CREATE TABLE IF NOT EXISTS transactions2 (transaction_id int,Customer_Name varchar, Cashier_id int, Year int)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS albums_sold (Album_id int, Transaction_id int, Album_Name varchar)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS employees (Employee_id int, Employee_Name varchar)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS sales (Transaction_id int, Amount_Spent int)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) # TO-DO: Insert data into the tables try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (1, "Amanda", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (2, "Toby", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (3, "Max", 2, 2018)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (1, 1, "Rubber Soul")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (2, 1, "Let It Be")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (3, 2, "My Generation")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (4, 3, "Meet the Beatles")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (5, 3, "Help!")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO employees (employee_id, employee_name) \ VALUES (%s, %s)", \ (1, "Sam")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO employees (employee_id, employee_name) \ VALUES (%s, %s)", \ (2, "Bob")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (1, 40)) except 
psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (2, 19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (3, 45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) print("Table: transactions2\n") try: cur.execute("SELECT * FROM transactions2;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: albums_sold\n") try: cur.execute("SELECT * FROM albums_sold;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: employees\n") try: cur.execute("SELECT * FROM employees;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: sales\n") try: cur.execute("SELECT * FROM sales;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() try: cur.execute("""SELECT t2.transaction_id, t2.Customer_Name, e.Employee_Name, t2.Year, abs.album_name, s.Amount_spent FROM (((transactions2 as t2 join employees as e on t2.Cashier_id = e.Employee_id) join albums_sold as abs on t2.transaction_id = abs.transaction_id) join sales as s on t2.transaction_id = s.transaction_id) """) except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() # TO-DO: Create all tables try: cur.execute("CREATE TABLE IF NOT EXISTS transactions (transaction_id int, Customer_Name varchar, Cashier_id int, Year int, Amount_spent int)") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert data into all tables try: cur.execute("INSERT INTO transactions (transaction_id, Customer_Name, Cashier_id,Year, Amount_Spent) \ VALUES (%s, %s, %s, %s, %s)", \ (1,'Amanda',1,2000,40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (transaction_id, Customer_Name, Cashier_id,Year, Amount_Spent) \ VALUES (%s, %s, %s, %s, %s)", \ (2,"Max",1,2000,19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (transaction_id, Customer_Name, Cashier_id,Year, Amount_Spent) \ VALUES (%s, %s, %s, %s, %s)", \ (3,'Max',2,2018,45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("SELECT transaction_id,Customer_name, Amount_spent FROM transactions;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() # Create the tables try: cur.execute("CREATE TABLE IF NOT EXISTS cashier_sales (transaction_id int, Cashier_name varchar, Cashier_id int, Amount_spent int);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert into all tables try: cur.execute("INSERT INTO cashier_sales (transaction_id, Cashier_name, Cashier_id,Amount_spent) \ VALUES (%s, %s, %s, %s)", \ (1,'Sam',1,40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO cashier_sales (transaction_id, Cashier_name, Cashier_id,Amount_spent) \ VALUES (%s, %s, %s, %s)", \ (2,'Sam',1,19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: 
cur.execute("INSERT INTO cashier_sales (transaction_id, Cashier_name, Cashier_id,Amount_spent) \ VALUES (%s, %s, %s, %s)", \ (3,'Bob',2,45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("SELECT Cashier_Name, sum(Amount_Spent) from cashier_sales group by cashier_name;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() try: cur.execute("DROP table transactions2") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table transactions") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table employees") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table albums_sold") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table sales") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table cashier_sales") except psycopg2.Error as e: print("Error: Dropping table") print (e) cur.close() conn.close()
0.1273
0.879716
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import sklearn as sk import tensorflow as tf from sklearn import datasets from sklearn.model_selection import train_test_split from tensorflow.contrib import learn from sklearn.decomposition import PCA # Read from CSV where every column = [FFT freq bin] # every row = [epoch 1 (2s): electrode 1, 2, 3, 4] + [epoch 2: electrode 1, 2, 3, 4] + ... relax = pd.read_csv("../Muse Data/josh_relax_apr03_night.csv", header=0, index_col=False) focus = pd.read_csv("../Muse Data/josh_corvo_task_apr03_night.csv", header=0, index_col=False) # Chop off irrelevant frequencies relax = relax.iloc[:,0:54] focus = focus.iloc[:,0:54] # Add labels relax['label'] = 0 focus['label'] = 1 # Combine focus and relax dataframes into a numeric values and complementary labels dataframe # rows = [relax data] + [focus data] values = pd.concat([relax.iloc[:,1:3], relax.iloc[:,4:54], focus.iloc[:,1:3], focus.iloc[:,2:54]]).reset_index(drop=True) labels = pd.concat([pd.DataFrame(relax['label']), pd.DataFrame(focus['label'])]).reset_index(drop=True) # Convert labels from a dataframe to a 1D matrix #c, r = labels.shape #labels = labels.as_matrix().reshape(c,) # Split values and labels arrays into random train and test subsets (20% set aside for testing) X_train, X_test, y_train, y_test = train_test_split(values,labels,test_size=0.2) X_train = X_train.as_matrix() # Convert labels from a dataframe to a 1D matrix c, r = y_train.shape y_train = y_train.as_matrix().reshape(c,) d, s = y_test.shape y_test = y_test.as_matrix().reshape(d,) # Create graph sess = tf.Session() # Declare batch size, get some sizes to use batch_size = len(X_train) x_length, x_width = X_train.shape y_length = len(y_train) # Initialize placeholders x_data = tf.placeholder(shape=[None, x_width], dtype=tf.float32) y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32) # Create variables for linear regression A = tf.Variable(tf.random_normal(shape=[x_width, x_length])) b = tf.Variable(tf.random_normal(shape=[1, y_length])) # Declare model operations model_output = tf.subtract(tf.matmul(x_data, A), b) # Declare vector L2 'norm' function squared l2_norm = tf.reduce_sum(tf.square(A)) # Declare loss function # Loss = max(0, 1-pred*actual) + alpha * L2_norm(A)^2 # L2 regularization parameter, alpha alpha = tf.constant([0.01]) # Margin term in loss classification_term = tf.reduce_mean(tf.maximum(0., tf.subtract(1., tf.multiply(model_output, y_target)))) # Put terms together loss = tf.add(classification_term, tf.multiply(alpha, l2_norm)) # Declare prediction function prediction = tf.sign(model_output) accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, y_target), tf.float32)) # Declare optimizer my_opt = tf.contrib.linear_optimizer.SDCAOptimizer train_step = my_opt() # Initialize variables init = tf.global_variables_initializer() sess.run(init) # Training loop loss_vec = [] train_accuracy = [] test_accuracy = [] for i in range(500): rand_index = np.random.choice(len(X_train), size=batch_size) rand_x = X_train[rand_index] rand_y = np.transpose([y_train[rand_index]]) sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y}) temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y}) loss_vec.append(temp_loss) train_acc_temp = sess.run(accuracy, feed_dict={x_data: X_train, y_target: np.transpose([y_train])}) train_accuracy.append(train_acc_temp) test_acc_temp = sess.run(accuracy, feed_dict={x_data: X_test, y_target: np.transpose([y_test])}) 
test_accuracy.append(test_acc_temp) if (i+1)%100==0: print('Step #' + str(i+1)) print('Loss = ' + str(temp_loss)) # Plot train/test accuracies plt.plot(train_accuracy, 'k-', label='Training Accuracy') plt.plot(test_accuracy, 'r--', label='Test Accuracy') plt.title('Train and Test Set Accuracies') plt.xlabel('Generation') plt.ylabel('Accuracy') plt.legend(loc='lower right') plt.show() # Plot loss over time plt.plot(loss_vec, 'k-') plt.title('Loss per Generation') plt.xlabel('Generation') plt.ylabel('Loss') plt.show() ```
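A note on the optimizer block in the cell above: `train_step` is built there by instantiating `tf.contrib.linear_optimizer.SDCAOptimizer` with no arguments, so it ends up as an optimizer object rather than a training op tied to the hinge loss that was defined, and the `tf.sign`-based predictions assume -1/+1 labels while the CSVs were labeled 0/1. A minimal working alternative for that part of the graph, assuming TensorFlow 1.x as used here, is sketched below; the 0.01 learning rate is an illustrative choice.

```
# Sketch of a training step wired to the hinge loss defined above (TF 1.x).
# The 0.01 learning rate is an illustrative choice, not from the original.
my_opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_step = my_opt.minimize(loss)

# For the hinge loss and the tf.sign predictions to agree, the 0/1 labels
# would also be remapped to -1/+1 before the training loop, e.g.:
# y_train = y_train * 2 - 1
# y_test = y_test * 2 - 1
```

For a conventional linear SVM, `A` would also typically have shape `[x_width, 1]` and `b` shape `[1, 1]`, so that `model_output` is a single score per sample.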
github_jupyter
import pandas as pd import numpy as np import matplotlib.pyplot as plt import sklearn as sk import tensorflow as tf from sklearn import datasets from sklearn.model_selection import train_test_split from tensorflow.contrib import learn from sklearn.decomposition import PCA # Read from CSV where every column = [FFT freq bin] # every row = [epoch 1 (2s): electrode 1, 2, 3, 4] + [epoch 2: electrode 1, 2, 3, 4] + ... relax = pd.read_csv("../Muse Data/josh_relax_apr03_night.csv", header=0, index_col=False) focus = pd.read_csv("../Muse Data/josh_corvo_task_apr03_night.csv", header=0, index_col=False) # Chop off irrelevant frequencies relax = relax.iloc[:,0:54] focus = focus.iloc[:,0:54] # Add labels relax['label'] = 0 focus['label'] = 1 # Combine focus and relax dataframes into a numeric values and complementary labels dataframe # rows = [relax data] + [focus data] values = pd.concat([relax.iloc[:,1:3], relax.iloc[:,4:54], focus.iloc[:,1:3], focus.iloc[:,2:54]]).reset_index(drop=True) labels = pd.concat([pd.DataFrame(relax['label']), pd.DataFrame(focus['label'])]).reset_index(drop=True) # Convert labels from a dataframe to a 1D matrix #c, r = labels.shape #labels = labels.as_matrix().reshape(c,) # Split values and labels arrays into random train and test subsets (20% set aside for testing) X_train, X_test, y_train, y_test = train_test_split(values,labels,test_size=0.2) X_train = X_train.as_matrix() # Convert labels from a dataframe to a 1D matrix c, r = y_train.shape y_train = y_train.as_matrix().reshape(c,) d, s = y_test.shape y_test = y_test.as_matrix().reshape(d,) # Create graph sess = tf.Session() # Declare batch size, get some sizes to use batch_size = len(X_train) x_length, x_width = X_train.shape y_length = len(y_train) # Initialize placeholders x_data = tf.placeholder(shape=[None, x_width], dtype=tf.float32) y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32) # Create variables for linear regression A = tf.Variable(tf.random_normal(shape=[x_width, x_length])) b = tf.Variable(tf.random_normal(shape=[1, y_length])) # Declare model operations model_output = tf.subtract(tf.matmul(x_data, A), b) # Declare vector L2 'norm' function squared l2_norm = tf.reduce_sum(tf.square(A)) # Declare loss function # Loss = max(0, 1-pred*actual) + alpha * L2_norm(A)^2 # L2 regularization parameter, alpha alpha = tf.constant([0.01]) # Margin term in loss classification_term = tf.reduce_mean(tf.maximum(0., tf.subtract(1., tf.multiply(model_output, y_target)))) # Put terms together loss = tf.add(classification_term, tf.multiply(alpha, l2_norm)) # Declare prediction function prediction = tf.sign(model_output) accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, y_target), tf.float32)) # Declare optimizer my_opt = tf.contrib.linear_optimizer.SDCAOptimizer train_step = my_opt() # Initialize variables init = tf.global_variables_initializer() sess.run(init) # Training loop loss_vec = [] train_accuracy = [] test_accuracy = [] for i in range(500): rand_index = np.random.choice(len(X_train), size=batch_size) rand_x = X_train[rand_index] rand_y = np.transpose([y_train[rand_index]]) sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y}) temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y}) loss_vec.append(temp_loss) train_acc_temp = sess.run(accuracy, feed_dict={x_data: X_train, y_target: np.transpose([y_train])}) train_accuracy.append(train_acc_temp) test_acc_temp = sess.run(accuracy, feed_dict={x_data: X_test, y_target: np.transpose([y_test])}) 
test_accuracy.append(test_acc_temp) if (i+1)%100==0: print('Step #' + str(i+1)) print('Loss = ' + str(temp_loss)) # Plot train/test accuracies plt.plot(train_accuracy, 'k-', label='Training Accuracy') plt.plot(test_accuracy, 'r--', label='Test Accuracy') plt.title('Train and Test Set Accuracies') plt.xlabel('Generation') plt.ylabel('Accuracy') plt.legend(loc='lower right') plt.show() # Plot loss over time plt.plot(loss_vec, 'k-') plt.title('Loss per Generation') plt.xlabel('Generation') plt.ylabel('Loss') plt.show()
0.772187
0.776284
``` import pandas as pd import numpy as np import folium import seaborn as sns import matplotlib.pyplot as plt import requests india_json_data = requests.get('https://raw.githubusercontent.com/ammishra08/COVID-19/master/covid_19_datasets/covid19_india/india_statewise.json').json() india_json_data df_india=pd.io.json.json_normalize(india_json_data['data']['statewise']) df_india df_india.set_index('state',inplace=True) sns.set_style('darkgrid') plt.figure(figsize=(15,10)) sns.barplot(x= df_india.index,y=df_india['confirmed'],palette='viridis') plt.xticks(rotation=90) plt.show() ``` # Bargraph using Plotly ``` import plotly.express as px figure=px.bar(df_india,x=df_india.index,height=800,width=900,y='confirmed',color='confirmed') figure.show() location = { "Sikkim": [27.5330,88.5122], "Maharashtra" : [19.7515,75.7139], "West Bengal": [22.9868,87.8550], "Chandigarh":[30.7333,76.7794], "Karnataka": [15.3173,75.7139], "Telangana": [18.1124,79.0193], "Uttar Pradesh": [26.8467,80.9462], "Gujarat":[22.2587,71.1924], "Odisha":[20.9517,85.0985], "Delhi" : [28.7041,77.1025], "Tamil Nadu": [11.1271,78.6569], "Haryana": [29.0588,76.0856], "Madhya Pradesh":[22.9734,78.6569], "Kerala" : [10.8505,76.2711], "Rajasthan": [27.0238,74.2179], "Jammu and Kashmir":[33.7782,76.5762], "Ladakh": [34.1526,77.5770], "Andhra Pradesh":[15.9129,79.7400], "Bihar": [25.0961,85.3131], "Chhattisgarh":[21.2787,81.8661], "Uttarakhand":[30.0668,79.0193], "Himachal Pradesh":[31.1048,77.1734], "Goa": [15.2993,74.1240], "Tripura":[23.9408,91.9882], "Andaman and Nicobar Islands": [11.7401,92.6586], "Puducherry":[11.9416,79.8083], "Manipur":[24.6637,93.9063], "Mizoram":[23.1645,92.9376], "Assam":[26.2006,92.9376], "Meghalaya":[25.4670,91.3662], "Arunachal Pradesh":[28.2180,94.7278], "Jharkhand" : [23.6102,85.2799], "Nagaland": [26.1584,94.5624], "Punjab":[31.1471,75.3412], "Dadra and Nagar Haveli":[20.1809,73.0169], "Lakshadweep":[10.5667,72.6417], "Daman and Diu":[20.4283,20.4283] } df_india["Lat"]= '' df_india['Long']= '' for index in df_india.index: df.india.loc[df_india.index==index, "Lat"]=location[index][0] df.india.loc[df_india.index==index, "Long"]=location[index][1] df_india india_map = folium.Map(location= [10,80], zoom_start=4, max_zoom=8, height = 1000, width='100%', tiles = 'CartoDB dark_matter') for i in range(0, len(df_india)): folium.Circle(location=[df_india.iloc[i]['Lat'], df_india.iloc[i]['Long']], radius=(int(np.log2(df_india.iloc[i]['confirmed']+1.0001)))*12000, tooltip= "<h5 style='text-align:center;font-weight: bold'>"+ df_india.iloc[i].name +"</h5>"+ "<li>Confirmed "+str(df_india.iloc[i]['confirmed'])+"</li>"+ "<li>Deaths "+str(df_india.iloc[i]['deaths'])+"</li>"+ "<li>Active "+str(df_india.iloc[i]['active'])+"</li>"+ "</ul>", fill=True).add_to(india_map), india_map ```
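The loop that fills in the coordinates refers to `df.india`, which is not a defined name; a compact way to attach the latitudes and longitudes from the `location` dictionary defined above is sketched below. It assumes every state in `df_india.index` has an entry in `location`.

```
# Sketch: attach coordinates from the `location` dict defined above.
# Assumes every state name in df_india.index appears as a key in `location`.
df_india["Lat"] = [location[state][0] for state in df_india.index]
df_india["Long"] = [location[state][1] for state in df_india.index]
df_india.head()
```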
# Linear Elasticity in 2D for 3 Phases

## Introduction

This example provides a demonstration of using PyMKS to compute the linear strain field for a three-phase composite material. It demonstrates how to generate data for delta microstructures and then use this data to calibrate the first order MKS influence coefficients. The calibrated influence coefficients are used to predict the strain response for a random microstructure and the results are compared with those from finite element analysis. Finally, the influence coefficients are scaled up and the MKS results are again compared with the finite element data for a large problem.

PyMKS uses the finite element tool [SfePy](http://sfepy.org) to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.

### Elastostatics Equations and Boundary Conditions

The governing equations for linear elasticity and the boundary conditions used in this example are the same as those provided in the [Linear Elastic in 2D example](./elasticity.ipynb).

## Modeling with MKS

### Calibration Data and Delta Microstructures

The first order MKS influence coefficients are all that is needed to compute a strain field of a random microstructure as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. If this condition is met we can expect a mean absolute error of 2% or less when comparing the MKS results with those computed using finite element methods [[1]](#References).

Because we are using distinct phases and the contrast is low enough to only need the first-order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first-order influence coefficients [[2]](#References).

The `generate_delta` function can be used to create the two delta microstructures needed to calibrate the first-order influence coefficients for a two phase microstructure. This function uses the Python module [SfePy](http://sfepy.org/doc-devel/index.html) to compute the strain fields using finite element methods.

```
%matplotlib inline
%load_ext autoreload
%autoreload 2

import dask.array as da
import numpy as np
from sklearn.pipeline import Pipeline

from pymks import (
    generate_delta,
    plot_microstructures,
    solve_fe,
    PrimitiveTransformer,
    LocalizationRegressor,
    coeff_to_real,
)

x_delta = generate_delta(n_phases=3, shape=(21, 21)).persist()
plot_microstructures(*x_delta[:3], titles=['[0]', '[1]', '[2]'], cmap='gray')
```

Using delta microstructures for the calibration of the first-order influence coefficients is essentially the same as using a unit [impulse response](http://en.wikipedia.org/wiki/Impulse_response) to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases, with the center cell having an alternative phase from the remainder of the domain. The number of delta microstructures needed to calibrate the first-order coefficients is $N(N-1)$, where $N$ is the number of phases, therefore in this example 6 delta microstructures are required.

### Generating Calibration Data

In this example, the microstructures have three phases with elastic moduli values of 80, 100 and 120 and Poisson's ratio values all equal to 0.3. The macroscopic imposed strain is 0.02. A helper function `strain_xx` is created to solve the finite element problem and return the $\varepsilon_{xx}$ component of the strain. The length of the values for the `elastic_modulus` and `poissons_ratio` parameters indicates the number of phases.
```
strain_xx = lambda x: solve_fe(
    x,
    elastic_modulus=(80, 100, 120),
    poissons_ratio=(0.3, 0.3, 0.3),
    macro_strain=0.02
)['strain'][..., 0]

y_delta = strain_xx(x_delta).persist()
```

Observe the strain field.

```
plot_microstructures(y_delta[0], titles=[r'$\mathbf{\varepsilon_{xx}}$'])
```

### Calibrating First-Order Influence Coefficients

Calibrate the influence coefficients by creating a model pipeline using the `PrimitiveTransformer` and the `LocalizationRegressor`.

```
model = Pipeline(steps=[
    ('discretize', PrimitiveTransformer(n_state=3, min_=0.0, max_=2.0)),
    ('regressor', LocalizationRegressor())
])
```

Now, pass the delta microstructures and their strain fields into the `fit` method to calibrate the first-order influence coefficients.

```
model.fit(x_delta, y_delta);
```

Observe the influence coefficients.

```
to_real = lambda x: coeff_to_real(x.steps[1][1].coeff).real

coeff = to_real(model)
plot_microstructures(
    coeff[..., 0],
    coeff[..., 1],
    coeff[..., 2],
    titles=['Influence coeff [0]', 'Influence coeff [1]', 'Influence coeff [2]']
)
```

### Predicting the Strain Field for a Random Microstructure

Use the calibrated `model` to compute the strain field for a random two phase microstructure and compare it with the results from a finite element simulation. The `strain_xx` helper function is used to generate the strain field.

```
da.random.seed(99)

x_data = da.random.randint(2, size=(1,) + x_delta.shape[1:]).persist()
y_data = strain_xx(x_data).persist()

plot_microstructures(
    y_data[0],
    titles=[r'FE - $\mathbf{\varepsilon_{xx}}$']
)
```

**Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.**

Now, to get the strain field from the model, pass the same microstructure to the `predict` method.

```
y_predict = model.predict(x_data)
```

Finally, compare the results from the finite element simulation and the MKS model.

```
plot_microstructures(
    y_data[0],
    y_predict[0],
    titles=[
        r'$\mathbf{\varepsilon_{xx}}$ - FE',
        r'$\mathbf{\varepsilon_{xx}}$ - MKS'
    ]
)
```

Plot the difference between the two strain fields.

```
plot_microstructures(
    (y_data - y_predict)[0],
    titles=['FE - MKS']
)
```

The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.

## Resizing the Coefficients to Use on Larger Microstructures

The influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [[3]](#ref3), but the accuracy of the MKS model drops slightly. To demonstrate how this is done, generate a new larger random microstructure and its strain field.

```
new_shape = tuple(np.array(x_delta.shape[1:]) * 3)

x_large = da.random.randint(2, size=(1,) + new_shape).persist()
y_large = strain_xx(x_large).persist()

plot_microstructures(y_large[0], titles=[r'$\mathbf{\varepsilon_{xx}}$ - FE (large)'])
```

The influence coefficients that have already been calibrated need to be resized to match the shape of the new larger microstructure that we want to compute the strain field for. This can be done by passing the shape of the new larger microstructure into the `coeff_resize` method.

```
model.steps[1][1].coeff_resize(x_large[0].shape);
```

Observe the resized influence coefficients.
``` coeff_large = to_real(model) plot_microstructures( coeff_large[..., 0], coeff_large[..., 1], coeff_large[..., 2], titles=['Influence coeff [0]', 'Influence coeff [1]', 'Influence coeff [2]'] ) ``` The resized coefficients will only work with the large microstructure now. ``` y_large_predict = model.predict(x_large).persist() plot_microstructures( y_large[0], y_large_predict[0], titles=[ r'$\mathbf{\varepsilon_{xx}}$ - FE (large)', r'$\mathbf{\varepsilon_{xx}}$ - MKS (large)' ] ) ``` Plot the difference between the two strain fields. ``` plot_microstructures( (y_large - y_large_predict)[0], titles=['FE - MKS'] ) ``` The results from the strain field computed with the resized influence coefficients are not as accurate as they were before they were resized. This decrease in accuracy is expected when using spectral interpolation [[4]](#References). ## References <a id="ref1"></a> [1] Binci M., Fullwood D., Kalidindi S.R., A new spectral framework for establishing localization relationships for elastic behavior of composites and their calibration to finite-element models. Acta Materialia, 2008. 56 (10) p. 2272-2282 [doi:10.1016/j.actamat.2008.01.017](http://dx.doi.org/10.1016/j.actamat.2008.01.017). <a id="ref2"></a> [2] Landi, G., S.R. Niezgoda, S.R. Kalidindi, Multi-scale modeling of elastic response of three-dimensional voxel-based microstructure datasets using novel DFT-based knowledge systems. Acta Materialia, 2009. 58 (7): p. 2716-2725 [doi:10.1016/j.actamat.2010.01.007](http://dx.doi.org/10.1016/j.actamat.2010.01.007). <a id="ref3"></a> [3] Marko, K., Kalidindi S.R., Fullwood D., Computationally efficient database and spectral interpolation for fully plastic Taylor-type crystal plasticity calculations of face-centered cubic polycrystals. International Journal of Plasticity 24 (2008) 1264–1276 [doi:10.1016/j.ijplas.2007.12.002](http://dx.doi.org/10.1016/j.ijplas.2007.12.002). <a id="ref4"></a> [4] Marko, K. Al-Harbi H. F. , Kalidindi S.R., Crystal plasticity simulations using discrete Fourier transforms. Acta Materialia 57 (2009) 1777–1784 [doi:10.1016/j.actamat.2008.12.017](http://dx.doi.org/10.1016/j.actamat.2008.12.017).
### 1. Data Setup ``` import os import sys import pylab import caffe import numpy import matplotlib ``` + Import related modules. ``` caffe_root='/Users/Vayne-Lover/Desktop/CS/Caffe/caffe' sys.path.insert(0,caffe_root+'python') os.chdir(caffe_root) !data/mnist/get_mnist.sh !examples/mnist/create_mnist.sh os.chdir('examples/') ``` + Download data and change data to lmdb. ### 2. Net Setup ``` from caffe import layers as L from caffe import params as P def lenet(lmdb, batch_size): n = caffe.NetSpec() n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb, transform_param=dict(scale=1./255), ntop=2) n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier')) n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX) n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier')) n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX) n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier')) n.relu1 = L.ReLU(n.fc1, in_place=True) n.score = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier')) n.loss = L.SoftmaxWithLoss(n.score, n.label) return n.to_proto() with open('mnist/lenet_auto_train.prototxt', 'w') as f: f.write(str(lenet('mnist/mnist_train_lmdb', 64))) with open('mnist/lenet_auto_test.prototxt', 'w') as f: f.write(str(lenet('mnist/mnist_test_lmdb', 100))) ``` + Set up net with 2 convolution layers,2 pooling layers,2 inner product layers,a relu layer,a loss layer. ### 3. Running Setup ``` caffe.set_mode_cpu() solver=None solver=caffe.SGDSolver('mnist/lenet_auto_solver.prototxt') ``` + Here i choose Stochastic Gradient Descent for solver. #### 3.1 Details of Net + The details of net can be seen below. ``` [(k, v.data.shape) for k, v in solver.net.blobs.items()] ``` #### 3.2 Details of Layers ``` [(k, v[0].data.shape) for k, v in solver.net.params.items()] ``` ### 4. Train Model ``` solver.net.forward() solver.test_nets[0].forward() ``` #### 4.1 Details of Data + The details of data can be seen below. ``` from pylab import * %matplotlib inline imshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray'); axis('off') print 'train labels:', solver.net.blobs['label'].data[:8] ``` #### 4.2 Start Training ``` import numpy as np import time start=time.clock() niter = 500 test_interval = niter / 10 train_loss = zeros(niter) test_acc = zeros(int(np.ceil(niter / test_interval))) output = zeros((niter, 10, 10)) for it in range(niter): solver.step(1) train_loss[it] = solver.net.blobs['loss'].data solver.test_nets[0].forward(start='conv1') output[it] = solver.test_nets[0].blobs['score'].data[:10] if it % test_interval == 0: print 'Iteration', it, 'testing...' correct = 0 for test_it in range(100): solver.test_nets[0].forward() correct += sum(solver.test_nets[0].blobs['score'].data.argmax(1) == solver.test_nets[0].blobs['label'].data) test_acc[it // test_interval] = correct / 1e4 end=time.clock() print "LeNet Model cost: %f s" %(end-start) ``` ### 5. Test Model #### 5.1 The accuracy of model ``` _, ax1 = subplots() ax2 = ax1.twinx() ax1.plot(arange(niter), train_loss) ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r') ax1.set_xlabel('iteration') ax1.set_ylabel('train loss') ax2.set_ylabel('test accuracy') ax2.set_title('LeNet Test Accuracy: {:.2f}'.format(test_acc[-1])) ``` + We can see that the accuracy is 0.97. 
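The hard-coded `1e4` denominator in the test loop above assumes 100 test batches of 100 images each. As an illustrative sanity check (not part of the original notebook), the denominator can be derived from the test net itself rather than assumed:

```
# Sanity check: the 1e4 denominator equals (number of test batches) x (test batch size)
test_iters = 100  # matches the inner test loop above
batch_size = solver.test_nets[0].blobs['label'].data.shape[0]
print(test_iters * batch_size)  # expected: 10000
```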
#### 5.2 Visual Predictions

```
for i in range(4):
    figure(figsize=(2, 2))
    imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
    figure(figsize=(10, 2))
    imshow(exp(output[:50, i].T) / exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')
    xlabel('iteration')
    ylabel('label')
```

### 6. Optimize Model

+ Here I will choose other parameters to optimize the model.

### 6.1 Adam

#### 6.1.1 Define Adam Model

```
train_net_path = 'mnist/custom_auto_train.prototxt'
test_net_path = 'mnist/custom_auto_test.prototxt'
solver_config_path = 'mnist/custom_auto_solver.prototxt'

def adam_net(lmdb, batch_size):
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1./255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.fc1, in_place=True)
    n.score = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

with open(train_net_path, 'w') as f:
    f.write(str(adam_net('mnist/mnist_train_lmdb', 64)))
with open(test_net_path, 'w') as f:
    f.write(str(adam_net('mnist/mnist_test_lmdb', 100)))

from caffe.proto import caffe_pb2
s = caffe_pb2.SolverParameter()

s.random_seed = 0xCAFFE
s.train_net = train_net_path
s.test_net.append(test_net_path)
s.test_interval = 500
s.test_iter.append(100)
s.max_iter = 10000
s.type = "Adam"
s.base_lr = 0.01
s.momentum = 0.9
s.weight_decay = 5e-4
s.lr_policy = 'inv'
s.gamma = 0.0001
s.power = 0.75
s.display = 1000
s.snapshot = 5000
s.snapshot_prefix = 'mnist/custom_net'
# Train on the CPU
s.solver_mode = caffe_pb2.SolverParameter.CPU

with open(solver_config_path, 'w') as f:
    f.write(str(s))

solver = None
solver = caffe.get_solver(solver_config_path)
```

#### 6.1.2 Train Adam Model

```
import time
start = time.clock()

niter = 500
test_interval = niter / 10
train_loss = zeros(niter)
test_acc = zeros(int(np.ceil(niter / test_interval)))

for it in range(niter):
    solver.step(1)
    train_loss[it] = solver.net.blobs['loss'].data
    if it % test_interval == 0:
        print 'Iteration', it, 'testing...'
        correct = 0
        for test_it in range(100):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['score'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / 1e4

end = time.clock()
print "Adam Model cost: %f s" % (end - start)
```

#### 6.1.3 Test Adam Model

```
_, ax1 = subplots()
ax2 = ax1.twinx()
ax1.plot(arange(niter), train_loss)
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
ax2.set_title('Adam Test Accuracy: {:.2f}'.format(test_acc[-1]))
```

+ We can see that the accuracy is 0.93.
### 6.2 Nesterov

#### 6.2.1 Define Nesterov Model

```
train_net_path = 'mnist/custom_auto_train.prototxt'
test_net_path = 'mnist/custom_auto_test.prototxt'
solver_config_path = 'mnist/custom_auto_solver.prototxt'

def nesterov_net(lmdb, batch_size):
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1./255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.fc1, in_place=True)
    n.score = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

with open(train_net_path, 'w') as f:
    f.write(str(nesterov_net('mnist/mnist_train_lmdb', 64)))
with open(test_net_path, 'w') as f:
    f.write(str(nesterov_net('mnist/mnist_test_lmdb', 100)))

from caffe.proto import caffe_pb2
s = caffe_pb2.SolverParameter()

s.random_seed = 0xCAFFE
s.train_net = train_net_path
s.test_net.append(test_net_path)
s.test_interval = 500
s.test_iter.append(100)
s.max_iter = 10000
s.type = "Nesterov"
s.base_lr = 0.01
s.momentum = 0.9
s.weight_decay = 5e-4
s.lr_policy = 'inv'
s.gamma = 0.0001
s.power = 0.75
s.display = 1000
s.snapshot = 5000
s.snapshot_prefix = 'mnist/custom_net'
s.solver_mode = caffe_pb2.SolverParameter.CPU

with open(solver_config_path, 'w') as f:
    f.write(str(s))

solver = None
solver = caffe.get_solver(solver_config_path)
```

#### 6.2.2 Train Nesterov Model

```
import time
start = time.clock()

niter = 500
test_interval = niter / 10
train_loss = zeros(niter)
test_acc = zeros(int(np.ceil(niter / test_interval)))

for it in range(niter):
    solver.step(1)
    train_loss[it] = solver.net.blobs['loss'].data
    if it % test_interval == 0:
        print 'Iteration', it, 'testing...'
        correct = 0
        for test_it in range(100):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['score'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / 1e4

end = time.clock()
print "Nesterov Model cost: %f s" % (end - start)
```

#### 6.2.3 Test Nesterov Model

```
_, ax1 = subplots()
ax2 = ax1.twinx()
ax1.plot(arange(niter), train_loss)
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
ax2.set_title('Nesterov Test Accuracy: {:.2f}'.format(test_acc[-1]))
```

+ We can see that the accuracy is 0.97.

### 7. Comparison

+ After testing all the models, I will now choose the best one.

| Model | Time | Accuracy |
| :-----------: |:-------------:| :----:|
| LeNet | 143.23 s | 97% |
| Adam | 128.56 s | 93% |
| Nesterov | 103.88 s | 97% |

+ In my opinion, although the Nesterov model and LeNet both reach 97% accuracy, LeNet costs more time. Therefore I will choose Nesterov as my final model.
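Having settled on Nesterov, one practical follow-up (not shown in the original notebook) is to persist the trained weights so the model can be reloaded later without retraining. A sketch, assuming the solver from section 6.2 is still in scope; the file name is illustrative:

```
# Save the weights of the currently trained net to a .caffemodel file (path is an example)
solver.net.save('mnist/nesterov_final.caffemodel')
```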
<img src="../../../img/logo-bdc.png" align="right" width="64"/> # <span style="color:#336699"> Web Land Trajectory Service (WLTS)</span> <hr style="border:2px solid #0077b9;"> **W**eb **L**and **T**rajectory **S**ervice (WLTS) is a service that aims to facilitate the access to various land use and cover data collections through a tailored API. WLTS brings the concept of Land Use and Cover Trajectories as a high level abstraction. Given a location and a time interval you can retrieve the trajectory of the these various data collections. WLTS is based on three operations: - ``list_collections``: returns the list of collections available in the service. - ``describe_collection``: returns the metadata of a given data collection. - ``trajectory``: returns the land use and cover trajectory from the collections given a location in space. The property result contains the feature identifier information, class, time, and the collection associated to the data item. This Jupyter Notebook shows how to use R Client Library for Web Land Trajectory Service. # 1. R Client API <hr style="border:1px solid #0077b9;"> For running the examples in this Jupyter Notebook you will need to install the [WLTS client for R](https://github.com/brazil-data-cube/rwlts). The command below installs `rwlts` direct from the code repository of the Brazil Data Cube project using [devtools](https://www.r-project.org/nosvn/pandoc/devtools.html). > [devtools](https://www.r-project.org/nosvn/pandoc/devtools.html) devtools is an R package that provides features that facilitate day-to-day development activities using the R environment. If you do not have the package installed, use the command below to install it. ```r # only run this if you don't have devtools installed install.packages("devtools") ``` ``` devtools::install_github("brazil-data-cube/rwlts") ``` With the package installed, to make use of its features, you need to load the package. In the R language, the package can be loaded as shown below: ``` library(rwlts) ``` WLTS is a client-server service. On the server-side, the data is stored, which is accessible through each of the API operations, described earlier. On the client-side (what this tutorial covers), you can use the operations and consume the data. In this tutorial, we will use the R client to access the data. We need to define the URL where the WLTS server is operating. The code below defines the URL of the WLTS server ``` wlts_service <- "https://brazildatacube.dpi.inpe.br/wlts/" ``` # 2. Listing the Available Collections <hr style="border:1px solid #0077b9;"> In WLTS, datasets that aggregate features from different classification systems, which various projects can generate, are represented through collections. Thus, the first operation presented is `list_collections`. This operation returns the list of all data collections that are available in the WLTS. In the rwlts client, this operation is used via the `list_collections` function. The function takes as a parameter the address of the server to be queried. ``` list_collections(wlts_service) ``` The names returned can be used in subsequent operations. # 3. Retrieving the Metadata of a collection <hr style="border:1px solid #0077b9;"> Each collection is associated with a set of metadata that describes it. In WLTS a, there is the describe_collection operation, which allows the retrieval of this information. In rwlts, this operation is used through the `describe_collection`. 
> The example below retrieves the metadata from the collection named `mapbiomas5_amazonia`

```
describe_collection(wlts_service, 'mapbiomas5_amazonia')
```

# 4. Retrieving the Trajectory
<hr style="border:1px solid #0077b9;">

<br>

**Single collections**

In WLTS, since a collection is associated with a dataset with time variation, it is possible to retrieve the land use and land cover trajectory of a given point. The figure below illustrates this process.

<br>

<div align="center">
    <img src="../../../img/wlts/traj1.png" width="700px">
    <div align="center">
        <b>Figure 1</b> - WLTS trajectory extraction
    </div>
</div>

The operation in WLTS that allows the retrieval of the trajectory as shown in the figure is `trajectory`. This operation is used in rwlts through the `get_trajectory` function.

> The `get_trajectory` function receives the service URL to be queried, the location, and the collection name. The example below shows how to use `get_trajectory` to retrieve the trajectory of the point at `latitude -12.0` and `longitude -54.0`.

> The CRS of the requests is EPSG:4326

```
tj = get_trajectory(wlts_service, latitude=-12.0, longitude=-54.0, collections='mapbiomas5_amazonia')
tj
```

More than one point can be passed. To do this, pass pairs of values to the `latitude` and `longitude` arguments.

> In the example below the points `(-12.0, -54.0)` and `(-12.59, -54.5)` are being retrieved.

> **Note**: the returned `id` column is inserted to identify each of the points passed to the function. Thus, the point with `id` 1 represents the first latitude/longitude pair given, `id` 2 represents the second pair, and so on.

```
tj = get_trajectory(wlts_service, latitude=c(-12.0, -12.59), longitude=c(-54.0, -54.5), collections='mapbiomas5_amazonia')
tj
```

<br>

**Multiple collections**

So far, all our queries have been made considering only one data collection. WLTS allows more than one collection to be accessed at the same time for the same point. By doing this, a trajectory for each project will be extracted. This mode of operation is illustrated by the figure below.

<br>

<div align="center">
    <img src="../../../img/wlts/traj2.png" width="800px">
    <div align="center">
        <b>Figure 2</b> - WLTS trajectory extraction using multiple collections
    </div>
</div>

To retrieve multiple collections, insert each collection name that needs to be queried into the `get_trajectory` function. The names are entered in the `collections` parameter and must be separated by a comma. As an example, the code below retrieves the trajectories considering the collections `mapbiomas5_amazonia` and `terraclass_amazonia`.

```
tj = get_trajectory(wlts_service, latitude=-12.0, longitude=-54.0, collections='mapbiomas5_amazonia,terraclass_amazonia')
tj
```

# 5. Visualizing the Trajectory with Tibble
<hr style="border:1px solid #0077b9;">

When data is retrieved from the server, it is inserted in a [tibble](https://tibble.tidyverse.org/), allowing easy manipulation. To use it, access the information present in the `result` key of the value returned by the `get_trajectory` function.

```
tj = get_trajectory(wlts_service, latitude=-12.0, longitude=-54.0, collections='mapbiomas5_amazonia')
head(tj$result, 5)

tj = get_trajectory(wlts_service, latitude=-4.090, longitude=-63.353, collections='mapbiomas5_amazonia')
head(tj$result, 5)

tj = get_trajectory(wlts_service, latitude=c(-12.0, -12.59), longitude=c(-54.0, -54.5), collections='mapbiomas5_amazonia')
head(tj$result, 5)
```
# 6. References
<hr style="border:1px solid #0077b9;">

To learn more about the WLTS ecosystem, see the Brazil Data Cube project repositories on GitHub:

- [WLTS Server](https://github.com/brazil-data-cube/wlts)
- [WLTS OpenAPI 3 Specification](https://github.com/brazil-data-cube/wlts-spec)
- [R Client Library for Web Land Trajectory Service - GitHub Repository](https://github.com/brazil-data-cube/rwlts)
# Spanish NIF Numbers

## Introduction

The function `clean_es_nif()` cleans a column containing Spanish NIF number strings, and standardizes them in a given format. The function `validate_es_nif()` validates either a single NIF string, a column of NIF strings or a DataFrame of NIF strings, returning `True` if the value is valid, and `False` otherwise.

NIF strings can be converted to the following formats via the `output_format` parameter:

* `compact`: only number strings without any separators or whitespace, like "B58378431"
* `standard`: NIF strings with proper whitespace in the proper places. Note that in the case of NIF, the compact format is the same as the standard one.

Invalid parsing is handled with the `errors` parameter:

* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception

The following sections demonstrate the functionality of `clean_es_nif()` and `validate_es_nif()`.

### An example dataset containing NIF strings

```
import pandas as pd
import numpy as np
df = pd.DataFrame(
    {
        "nif": [
            'ES B-58378431',
            'B64717839',
            'BE 428759497',
            'BE431150351',
            "002 724 334",
            "hello",
            np.nan,
            "NULL",
        ],
        "address": [
            "123 Pine Ave.",
            "main st",
            "1234 west main heights 57033",
            "apt 1 789 s maple rd manhattan",
            "robie house, 789 north main street",
            "1111 S Figueroa St, Los Angeles, CA 90015",
            "(staples center) 1111 S Figueroa St, Los Angeles",
            "hello",
        ]
    }
)
df
```

## 1. Default `clean_es_nif`

By default, `clean_es_nif` will clean NIF strings and output them in the standard format with proper separators.

```
from dataprep.clean import clean_es_nif
clean_es_nif(df, column = "nif")
```

## 2. Output formats

This section demonstrates the `output_format` parameter.

### `standard` (default)

```
clean_es_nif(df, column = "nif", output_format="standard")
```

### `compact`

```
clean_es_nif(df, column = "nif", output_format="compact")
```

## 3. `inplace` parameter

This deletes the given column from the returned DataFrame. A new column containing cleaned NIF strings is added with a title in the format `"{original title}_clean"`.

```
clean_es_nif(df, column="nif", inplace=True)
```

## 4. `errors` parameter

### `coerce` (default)

```
clean_es_nif(df, "nif", errors="coerce")
```

### `ignore`

```
clean_es_nif(df, "nif", errors="ignore")
```

## 5. `validate_es_nif()`

`validate_es_nif()` returns `True` when the input is a valid NIF. Otherwise it returns `False`.

The input of `validate_es_nif()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame or a Dask DataFrame.

When the input is a string, a pandas Series or a Dask Series, there is no need to specify a column name to be validated.

When the input is a pandas DataFrame or a Dask DataFrame, the user may or may not specify a column name to be validated. If the column name is specified, `validate_es_nif()` only returns the validation result for that column. If it is not specified, `validate_es_nif()` returns the validation result for the whole DataFrame.
``` from dataprep.clean import validate_es_nif print(validate_es_nif('ES B-58378431')) print(validate_es_nif('B64717839')) print(validate_es_nif('BE 428759497')) print(validate_es_nif('BE431150351')) print(validate_es_nif("004085616")) print(validate_es_nif("hello")) print(validate_es_nif(np.nan)) print(validate_es_nif("NULL")) ``` ### Series ``` validate_es_nif(df["nif"]) ``` ### DataFrame + Specify Column ``` validate_es_nif(df, column="nif") ``` ### Only DataFrame ``` validate_es_nif(df) ```
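Since `validate_es_nif()` returns a boolean Series when given a Series, it can also be used to filter the example DataFrame down to rows whose NIF is valid. A small sketch of that usage:

```
# Keep only the rows of df whose "nif" value passes validation
valid_rows = df[validate_es_nif(df["nif"])]
valid_rows
```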
``` import numpy as np import pandas as pd from scipy.optimize import minimize from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras import layers from scipy.optimize import minimize import nevergrad as ng ``` ## Data ``` df = pd.DataFrame(columns=['Pressure','Temprerature','Speed','ProductionYield']) df['Pressure'] = np.random.randint(low= 2, high=10, size=2000) df['Temprerature'] = np.random.randint(10, 30, size=2000) df['Speed'] = np.random.weibull(2, size=2000) df['ProductionYield'] = (df['Pressure'])**2 + df['Temprerature'] * df['Speed'] + 10 df['ProductionYield']= df['ProductionYield'].clip(0, 100) print(df.head()) df.describe() from pandas.plotting import scatter_matrix #now plot using pandas scatter_matrix(df, alpha=0.2, figsize=(12, 12), diagonal='kde'); ``` ## Prediction algorithm ``` x_train, x_test, y_train, y_test = train_test_split(df[['Pressure','Temprerature','Speed']].values, df['ProductionYield'].values, test_size=0.33, random_state=42) def build_model(): # create model model = tf.keras.Sequential() model.add(layers.Dense(64, input_dim=3, kernel_initializer='normal', activation='relu')) model.add(layers.Dense(128, kernel_initializer='normal', activation='relu')) model.add(layers.Dense(1, kernel_initializer='normal')) # Compile model model.compile(loss='mean_squared_error', optimizer='adam') return model def plot_loss(history): plt.plot(history.history['loss'], label='loss') plt.plot(history.history['val_loss'], label='val_loss') plt.ylim([0, 15]) plt.xlabel('Epoch') plt.ylabel('Error [MPG]') plt.legend() plt.grid(True) model = build_model() model.summary() history = model.fit(x_train, y_train, validation_split=0.2, verbose=0, epochs=1000) plot_loss(history) from sklearn.metrics import r2_score r2_score(y_test, model.predict(x_test)) tp_indx = 4 print(f"real value: {y_test[tp_indx]}, model predicted: {model.predict(x_test[tp_indx].reshape(1,-1))}") ``` ## Multi-dimensional optimization ### 'Nelder-Mead' ``` def wrapper(x, *args) -> float: network_input = x.reshape(1,-1) network_output = model.predict(network_input, *args) scipy_output = float(network_output) if (scipy_output > 0) & (scipy_output < 110): return -scipy_output else: return 1000000 x0 = x_train[-1].reshape(1,-1) res = minimize(wrapper, x0, method='Nelder-Mead', tol=1e-6) res model.predict(res.x.reshape(1,-1)) ``` ### Bound 'L-BFGS-B' ``` x0 = x_train[-5].reshape(1,-1) bnds = ((2, 10), (10, 40), (0, 4)) res = minimize(wrapper, x0, method='L-BFGS-B',bounds=bnds, tol=1e-6) res model.predict(res.x.reshape(1,-1)) ```
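`nevergrad` is imported at the top but not used anywhere in the notebook. As a hedged sketch of how it could complement the SciPy runs above, the snippet below applies a gradient-free nevergrad optimizer to the same `wrapper` objective under the same bounds; the optimizer choice (`NGOpt`) and the budget are illustrative assumptions, not tuned settings:

```
# Gradient-free search over the surrogate model with nevergrad
# (sketch: optimizer and budget are arbitrary choices, not tuned values)
param = ng.p.Array(shape=(3,)).set_bounds(lower=np.array([2, 10, 0]), upper=np.array([10, 40, 4]))
optimizer = ng.optimizers.NGOpt(parametrization=param, budget=200)
recommendation = optimizer.minimize(wrapper)

best_x = np.asarray(recommendation.value)
print(best_x, model.predict(best_x.reshape(1, -1)))
```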
# Graph Visualization We've learned from the [Workflow](./basic_workflow.ipynb) tutorial that every Nipype workflow is a directed acyclic graph. Some workflow structures are easy to understand directly from the script and some others are too complex for that. Luckily, there is the ``write_graph`` method! ## ``write_graph`` **``write_graph``** allows us to visualize any workflow in five different ways: - **``orig``** - creates a top-level graph without expanding internal workflow nodes - **``flat``** - expands workflow nodes recursively - **``hierarchical``** - expands workflow nodes recursively with a notion on the hierarchy - **``colored``** - expands workflow nodes recursively with a notion on hierarchy in color - **``exec``** - expands workflows to depict iterables Which graph visualization should be used is chosen by the **``graph2use``** parameter. Additionally, we can also choose the format of the output file (png or svg) with the **``format``** parameter. A third parameter, called **``simple_form``** can be used to specify if the node names used in the graph should be of the form ***``nodename (package)``*** or ***``nodename.Class.package``***. ## Preparation Instead of creating a new workflow from scratch, let's just import one from the Nipype workflow library. ``` # Import the function to create an spm fmri preprocessing workflow from nipype.workflows.fmri.spm import create_spm_preproc # Create the workflow object spmflow = create_spm_preproc() ``` For a reason that will become clearer under the ``exec`` visualization, let's add an iternode at the beginning of the ``spmflow`` and connect them together under a new workflow, called ``metaflow``. The iternode will cause the workflow to be executed three times, once with the ``fwhm`` value set to 4, once set to 6 and once set to 8. For more about this see the [Iteration](./basic_iteration.ipynb) tutorial. ``` # Import relevant modules from nipype import IdentityInterface, Node, Workflow # Create an iternode that iterates over three different fwhm values inputNode = Node(IdentityInterface(fields=['fwhm']), name='iternode') inputNode.iterables = ('fwhm', [4, 6, 8]) # Connect inputNode and spmflow in a workflow metaflow = Workflow(name='metaflow') metaflow.connect(inputNode, "fwhm", spmflow, "inputspec.fwhm") ``` # ``orig`` graph This visualization gives us a basic overview of all the nodes and internal workflows in a workflow and shows in a simple way the dependencies between them. ``` # Write graph of type orig spmflow.write_graph(graph2use='orig', dotfilename='./graph_orig.dot') # Visualize graph from IPython.display import Image Image(filename="graph_orig.png") ``` # ``flat`` graph This visualization gives us already more information about the internal structure of the ``spmflow`` workflow. As we can, the internal workflow ``getmask`` from the ``orig`` visualization above was replaced by the individual nodes contained in this internal workflow. ``` # Write graph of type flat spmflow.write_graph(graph2use='flat', dotfilename='./graph_flat.dot') # Visualize graph from IPython.display import Image Image(filename="graph_flat.png") ``` # ``hierarchical`` graph To better appreciate this visualization, let's look at the ``metaflow`` workflow that has one hierarchical level more than the ``spmflow``. As you can see, this visualization makes it much clearer which elements of a workflow are nodes and which ones are internal workflows. 
Also, each connection is shown as an individual arrow, and not just represented by one single arrow between two nodes. Additionally, iternodes and mapnodes are visualized differently than normal nodes to make them pop out more. ``` # Write graph of type hierarchical metaflow.write_graph(graph2use='hierarchical', dotfilename='./graph_hierarchical.dot') # Visualize graph from IPython.display import Image Image(filename="graph_hierarchical.png") ``` # ``colored`` graph This visualization is almost the same as the ``hierarchical`` above. The only difference is that individual nodes and different hierarchy levels are colored coded differently. ``` # Write graph of type colored metaflow.write_graph(graph2use='colored', dotfilename='./graph_colored.dot') # Visualize graph from IPython.display import Image Image(filename="graph_colored.png") ``` # ``exec`` graph This visualization is the most different from the rest. Like the ``flat`` visualization, it depicts all individual nodes. But additionally, it drops the ``utility`` nodes from the workflow and expands workflows to depict iterables (can be seen in the ``detailed_graph`` visualization further down below). ``` # Write graph of type exec metaflow.write_graph(graph2use='exec', dotfilename='./graph_exec.dot') # Visualize graph from IPython.display import Image Image(filename="graph_exec.png") ``` # Detailed graphs The ``orig``, ``flat`` and ``exec`` visualization also create a **detailed graph** whenever ``write_graph`` is executed. A detailed graph shows a node with not just the node name, but also with all its input and output parameters. ## detailed ``flat`` graph For example, the detailed graph of the ``flat`` graph looks as follows: ``` from IPython.display import Image Image(filename="graph_flat_detailed.png") ``` Such a visualization might be more complicated to read, but it gives you a complete overview of a workflow and all its components. ## detailed ``exec`` graph Now, if we look at the detailed graph of the ``exec`` visualization, we can see where the iteration takes place: ``` from IPython.display import Image Image(filename="graph_exec_detailed.png") ``` In the middle left of the figure, we have three ``preproc.smooth`` nodes of the ``spm`` interface with the names "a0", "a1" and "a2". Those represent the three smoothing nodes with the ``fwhm`` parameter set to 4, 6 and 8. Now if those nodes would be connected to another workflow, this would mean that the workflow that follows would be depicted three times, each time for another input coming from the ``preproc.smooth`` node. Therefore, the **detailed ``exec``** visualization makes all individual execution elements very clear and allows it to see which elements can be executed in parallel. # ``simple_form`` Last but not least is the third ``write_graph`` argument, ``simple_form``. If this parameter is set to ``False``, this means that the node names in the visualization will be written in the form of ***``nodename.Class.package``***, instead of ***``nodename (package)``***. For example, let's look at the ``orig``visualization with ``simple_form`` set to ``False``. ``` # Write graph of type orig spmflow.write_graph(graph2use='orig', dotfilename='./graph_orig_notSimple.dot', simple_form=False) # Visualize graph from IPython.display import Image Image(filename="graph_orig_notSimple.png") ```
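The `format` parameter mentioned at the top has not appeared in the examples above. A small sketch that writes the colored graph as an SVG instead of a PNG; the output file name is an assumption based on the naming pattern of the PNG outputs above (dot file base name plus the chosen extension):

```
from IPython.display import SVG

# Write the colored graph in SVG format (output file name assumed to follow the PNG naming pattern)
metaflow.write_graph(graph2use='colored', format='svg', dotfilename='./graph_colored_svg.dot')
SVG(filename='graph_colored_svg.svg')
```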
# House Price Prediction for King County (USA) using Linear Regression Techniques

This notebook is organized in three parts:

Part 1: Exploratory Data Analysis

Part 2: The thought process of predicting house prices using a machine learning model. The Linear Regression algorithm has been used for price prediction.

Part 3: Conclusion

Dataset Source : Kaggle [https://www.kaggle.com/harlfoxem/housesalesprediction] - Seattle is located in King County

### Import Libraries

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.show()
%matplotlib inline
```

## Part 1 : Exploratory Data Analysis

```
# Import csv file into dataframe
df = pd.read_csv('kc_house_data.csv')
df.head()
```

**All columns contain numeric data, so there is no need to further change the data values. Let's check if this dataset contains any missing data**

```
df.info()
```

**Data is clean, no missing data. Let's get a data summary**

```
df.describe()
```

## Correlation

```
df.corr()[1:2].transpose().sort_values('price')
```

## Top five columns correlated with House Price

**sqft_living = 0.702035, grade = 0.667434, sqft_above = 0.605567, sqft_living15 = 0.585379, bathrooms = 0.525138**

```
sns.heatmap(df[['sqft_living', 'grade', 'sqft_above', 'sqft_living15', 'bathrooms', 'price']].corr(), cmap='coolwarm')
df.columns

import warnings
warnings.filterwarnings("ignore")
```

**Let's use seaborn to create a jointplot to compare the sqft_living and price columns. Does this correlation make sense?**

```
sns.jointplot(x='sqft_living', y='price', data=df)
```

**Let's plot grade and price**

```
sns.jointplot(x='grade', y='price', data=df)
sns.pairplot(df[['price', 'bedrooms', 'bathrooms', 'floors', 'waterfront', 'view']])
sns.pairplot(df[['price', 'sqft_above', 'sqft_basement', 'yr_renovated', 'lat', 'sqft_living15']])
sns.set_style('whitegrid')
```

**The sqft_living column is very strongly correlated with the price column**

```
sns.regplot(df.sqft_living, df.price, order=1, ci=None, scatter_kws={'color':'r', 's':9})
plt.xlim(0, 13540)
plt.ylim(ymin=0);
```

## Training a Linear Regression Model

Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the price column.

### X and y arrays

```
# Using all features to train the Linear Regression model
X = df[['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'waterfront',
        'view', 'condition', 'grade', 'sqft_above', 'sqft_basement', 'yr_built',
        'yr_renovated', 'zipcode', 'lat', 'long', 'sqft_living15', 'sqft_lot15']]
y = df['price']
```

## Train Test Split

Now let's split the data into a training set and a testing set. We will train our model on the training set and then use the test set to evaluate the model.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101)
```

## Creating and Training the Model

```
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,y_train)
predictions = lm.predict(X_test)
plt.scatter(y_test,predictions)
```

**Residual Histogram**

```
sns.distplot((y_test-predictions),bins=50);

print('Intercept:',lm.intercept_)
coeff_df = pd.DataFrame(lm.coef_,X.columns,columns=['Coefficient'])
coeff_df
```

Interpreting the coefficients (examples):

- Holding all other features fixed, a 1-unit increase in **Bathrooms** is associated with an **increase of 36,276 dollars in house price**.
- Holding all other features fixed, a 1-unit increase in **Sqft_Living** is associated with an **increase of 109 dollars in house price**.
- Holding all other features fixed, a 1-unit increase in **Grade** is associated with an **increase of 96,102 dollars in house price**.
- Holding all other features fixed, a 1-unit increase in **Sqft_Living15** is associated with an **increase of 24 dollars in house price**.
- Holding all other features fixed, a 1-unit increase in **Sqft_Above** is associated with an **increase of 70 dollars in house price**.

## Regression Evaluation Metrics

Comparing these metrics:

- **MAE** is the easiest to understand, because it is the average absolute error.
- **MSE** is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.
- **RMSE** is even more popular than MSE, because RMSE is interpretable in the "y" units (dollars here).

All of these are **loss functions**, because we want to minimize them. (A short sketch that recomputes them from their definitions appears after the conclusion below.)

```
from sklearn import metrics

print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
```

## R2 Score

```
from sklearn.metrics import r2_score
print('R2 Score : ',r2_score(y_test, predictions))
```

## Conclusion

## Results: The R2 score is about 70.9%, meaning the Linear Regression model explains roughly 71% of the variance in house prices with the given features.

Note: this score can vary with the train/test split and the data sample.
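To make the metric definitions above concrete, here is a minimal sketch that recomputes MAE, RMSE, and the R2 score directly from their formulas; it assumes the `y_test` and `predictions` arrays produced in the cells above and should agree with the `sklearn.metrics` output.

```
import numpy as np

# Recompute the evaluation metrics from their definitions
# (assumes y_test and predictions from the cells above).
y_true = np.asarray(y_test, dtype=float)
y_pred = np.asarray(predictions, dtype=float)
errors = y_true - y_pred

mae = np.mean(np.abs(errors))          # average absolute error, in dollars
rmse = np.sqrt(np.mean(errors ** 2))   # squared error brought back to dollar units
r2 = 1 - np.sum(errors ** 2) / np.sum((y_true - y_true.mean()) ** 2)

print('MAE: ', mae)
print('RMSE:', rmse)
print('R2:  ', r2)
```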
**Completed: Jan 2019**

# Update: Improving the R2 Score (Jun 2020)

```
coeff_df.sort_values('Coefficient')
```

### Filtering out features with little or no impact on house prices, based on the coefficients shown above

```
from sklearn.ensemble import RandomForestRegressor

X = df[['bedrooms', 'bathrooms', 'sqft_living', 'floors', 'waterfront', 'view',
        'condition', 'grade', 'yr_built', 'lat', 'long']]
y = df['price']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)

model = RandomForestRegressor(n_jobs=-1, n_estimators=35)
model.fit(X_train,y_train)
print('Train R2 Score :', model.score(X_train,y_train))

predictions = model.predict(X_test)

print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
print('R2 Score : ',r2_score(y_test, predictions))

plt.scatter(y_test,predictions);
sns.distplot((y_test-predictions),bins=50);

pd.DataFrame({'R2 Score': ['70%', '87%']}, index=['Linear Regressor', 'RandomForest Regressor'])
```

### Conclusion: The R2 score improved by about 16 percentage points (from roughly 70% to 87%) using a Random Forest Regressor and feature selection.

### RMSE dropped by about 68k and the mean absolute error by about 51k (in dollars), which means the new house price predictions have improved significantly.

**Update: I revisited this project to see whether I could improve it with what I have learned since; with time and practice our understanding of the concepts advances, and we can see better ways to do things.**

```
# Last edited : Jun 30, 2020 By Monika Bagyal
```
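As a possible follow-up to the coefficient-based feature filtering above, the fitted random forest exposes its own measure of which features matter. A minimal sketch, assuming the `model` and `X` defined in the update cell above:

```
import pandas as pd

# Rank features by the random forest's impurity-based importances
# (assumes `model` and `X` from the update cell above).
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```

These impurity-based importances are not directly comparable to the linear coefficients (coefficients depend on feature scale, importances do not), but they give a quick second opinion on which features are worth keeping.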